I am working on Lasso regression. I have 155 rows and 6 input columns in the dataset, so my previous models (decision tree regressor, SVR, random forest regressor, ..) suffered from overfitting. I tried Lasso regression with k-fold cross-validation and obtained the results below. Can I evaluate the Lasso model as sufficient when ..
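Not from the original post, but a minimal sketch of the setup it describes, assuming scikit-learn's `LassoCV` and a synthetic stand-in for the 155 × 6 dataset:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# synthetic stand-in for the 155-row, 6-column dataset described above
X = rng.normal(size=(155, 6))
coef = np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0])  # sparse true coefficients
y = X @ coef + rng.normal(scale=0.5, size=155)

# LassoCV chooses the regularization strength alpha by k-fold CV internally
model = LassoCV(cv=5, random_state=0).fit(X, y)
print("chosen alpha:", model.alpha_)

# independent k-fold R^2 scores to judge over/underfitting
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean CV R^2:", scores.mean())
```

A small gap between the training R² and the cross-validated R² is the usual sign that the overfitting seen with the tree-based models has been brought under control.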

#### Category : regression

I’m trying to forecast a dataset composed of two attributes, which are also the targets. I’ve followed this example:

```python
from statsmodels.tsa.ar_model import AutoReg
from random import random

# contrived dataset
data = [x + random() for x in range(1, 100)]
# fit model
model = AutoReg(data, lags=1)
model_fit = model.fit()
# make prediction
yhat ..
```

I am trying to understand GPR, and I am testing it to predict some values. The response is the first component of a PCA, so it is relatively clean data without outliers. The predictors also come from a PCA(n=2), and both predictor columns have been standardized with StandardScaler().fit_transform, as I saw it performed better in ..
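A minimal sketch of that setup, assuming scikit-learn's `GaussianProcessRegressor`; the raw data and the response function here are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# placeholder raw predictors; in the question these come from a PCA(n=2)
raw = rng.normal(size=(60, 5))
X = PCA(n_components=2).fit_transform(raw)
X = StandardScaler().fit_transform(X)  # standardize both predictor columns

# placeholder smooth response (stand-in for the first PCA component)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# small alpha = near-noiseless observations
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(X, y)

# with near-zero noise the GP should fit the training points very closely
print("train R^2:", gpr.score(X, y))
```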

I want to fit a Theil-Sen regression (using scikit-learn) on a time series. I tried two things: fitting the regressor directly on the years (X = {2002:2019}), and fitting it on the years minus the minimum year (X = {0:18}). I would have expected the results to be the same, but they are different. If ..
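A small reproduction sketch of the two fits, assuming scikit-learn's `TheilSenRegressor` and noiseless stand-in data. On noiseless data the slopes agree exactly; with noisy data, one plausible explanation for a difference is that scikit-learn's Theil–Sen takes a spatial median over joint (intercept, slope) solutions, which is not invariant to translating X:

```python
import numpy as np
from sklearn.linear_model import TheilSenRegressor

years = np.arange(2002, 2020)   # X = 2002..2019
y = 2.0 * years + 1.0           # noiseless line with slope 2

X_raw = years.reshape(-1, 1)
X_shift = (years - years.min()).reshape(-1, 1)  # X = 0..18

m1 = TheilSenRegressor(random_state=0).fit(X_raw, y)
m2 = TheilSenRegressor(random_state=0).fit(X_shift, y)

# both recover the true slope on noiseless data
print(m1.coef_[0], m2.coef_[0])
```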

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

def line(x):
    return 2*x + 4

X = np.arange(0, 20)
y = [k for k in line(X)]

a = tf.Variable(1.0)
b = tf.Variable(0.2)

y_in = a*X + b
loss = tf.reduce_mean(tf.square(y_in - y))

# this is my old code
# optimizer = tf.train.GradientDescentOptimizer(0.2)
# train = optimizer.minimize(loss)

# new code ..
```
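In TF2 the removed `tf.train.GradientDescentOptimizer` is typically replaced by `tf.keras.optimizers.SGD` together with `tf.GradientTape`. The update that loop performs can be sketched in plain NumPy (my own stand-in, not the poster's code; note the learning rate must be much smaller than 0.2 here because `mean(X**2)` is large):

```python
import numpy as np

X = np.arange(0, 20, dtype=float)
y = 2*X + 4            # target line

a, b = 1.0, 0.2        # initial parameter guesses, as in the snippet above
lr = 0.005             # 0.2 diverges on this data

for _ in range(20000):
    y_in = a*X + b
    err = y_in - y
    # gradients of loss = mean((a*X + b - y)**2)
    grad_a = 2*np.mean(err * X)
    grad_b = 2*np.mean(err)
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # converges toward a=2, b=4
```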

I’m working on a problem where I’m using 3 features (3 columns) to predict a price using Elastic Net regression. Without normalization or scaling, and even with only 20 rows of training data, I’m getting okay results, but I’ve read that by normalizing or scaling you can get better results. I’ve seen so many ways of ..
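One common pattern is to put the scaler and the regressor in a single pipeline, so the scaling is learned only from training data and applied consistently at prediction time. A minimal sketch assuming scikit-learn; the data here is a placeholder for the 3-feature price dataset:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# placeholder: 20 rows, 3 features on very different scales, price-like target
X = rng.normal(loc=[100.0, 5.0, 0.1], scale=[20.0, 2.0, 0.05], size=(20, 3))
price = 3.0*X[:, 0] + 50.0*X[:, 1] - 200.0*X[:, 2] + rng.normal(scale=5.0, size=20)

# scaling happens inside the pipeline, so cross-validation stays leak-free
model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1, l1_ratio=0.5))
model.fit(X, price)
print("train R^2:", model.score(X, price))
```

Scaling matters for Elastic Net in particular because the L1/L2 penalties are applied to the coefficients directly, so features on larger scales are penalized differently from smaller ones.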

I’m using pyro.contrib.gp to train my data but I have encountered a weird issue. Say we define 2 GPR models as follows:

```python
kernel_init = gp.kernels.RBF(input_dim=dimension, variance=torch.tensor(1.),
                             lengthscale=length_scale_init)
gpr_init = gp.models.GPRegression(train_x, train_y, kernel_init,
                                  noise=torch.tensor(0.), jitter=jitter)

kernel = gp.kernels.RBF(input_dim=dimension, variance=torch.tensor(1.),
                        lengthscale=length_scale_init)
gpr_opt = gp.models.GPRegression(train_x, train_y, kernel,
                                 noise=torch.tensor(0.), jitter=jitter)
```

Then after one ..

I started to learn machine learning, and I have a dataset for which I want to predict global sales. I use regression to do that. I built models for each region’s sales, like America, Europe, etc. But when I did the same for Japanese sales I got lower predictions, so I decided to use ..

I am training a multi-output CNN regression model which predicts x and y coordinate values given a single image input. Both the image data and the associated x and y target labels have been normalized to the range 0 to 1. However, when using the model to predict the x and y values ..
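A common gotcha with 0–1 normalized targets is forgetting to map predictions back to pixel coordinates. A minimal round-trip sketch, assuming the labels were scaled with scikit-learn's `MinMaxScaler` (the coordinates here are illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# illustrative (x, y) coordinate labels in pixel space
coords = np.array([[10.0, 240.0], [320.0, 15.0], [128.0, 128.0], [300.0, 200.0]])

scaler = MinMaxScaler()
coords_01 = scaler.fit_transform(coords)   # 0-1 targets the CNN is trained on

# pretend these are network outputs in [0, 1]
pred_01 = coords_01.copy()

# map predictions back to pixel coordinates with the SAME fitted scaler
pred_pixels = scaler.inverse_transform(pred_01)
print(np.allclose(pred_pixels, coords))  # round trip recovers the originals
```

The key design point is to keep the fitted scaler from training and reuse it at inference time, rather than fitting a new one on the predictions.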
