Category: regression

I’m trying to forecast a dataset composed of two attributes, which are also the two targets. I’ve followed this example:

from statsmodels.tsa.ar_model import AutoReg
from random import random
# contrived dataset
data = [x + random() for x in range(1, 100)]
# fit model
model = AutoReg(data, lags=1)
model_fit = model.fit()
# make prediction
yhat ..

Read more

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

def line(x):
    return 2*x + 4

X = np.arange(0, 20)
y = [k for k in line(X)]
a = tf.Variable(1.0)
b = tf.Variable(0.2)
y_in = a*X + b
loss = tf.reduce_mean(tf.square(y_in - y))
# this is my old code
# optimizer = tf.train.GradientDescentOptimizer(0.2)
# train = optimizer.minimize(loss)
# new code ..

Read more
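The commented-out lines in the excerpt use the TF1-era `tf.train.GradientDescentOptimizer`, which no longer exists in TensorFlow 2. A minimal sketch of the same line fit in TF2 style, using `tf.GradientTape` and `tf.keras.optimizers.SGD` (the learning rate and step count here are assumptions chosen so the unscaled `X = arange(0, 20)` converges, not values from the excerpt):

```python
import numpy as np
import tensorflow as tf

def line(x):
    return 2 * x + 4

X = np.arange(0, 20, dtype=np.float32)
y = line(X)

a = tf.Variable(1.0)
b = tf.Variable(0.2)
opt = tf.keras.optimizers.SGD(learning_rate=0.005)

for _ in range(2000):
    with tf.GradientTape() as tape:
        y_in = a * X + b
        loss = tf.reduce_mean(tf.square(y_in - y))
    # compute d(loss)/d(a), d(loss)/d(b) and take one SGD step
    grads = tape.gradient(loss, [a, b])
    opt.apply_gradients(zip(grads, [a, b]))
```

After training, `a` and `b` should sit close to the true slope 2 and intercept 4.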

I’m working on a problem where I’m using 3 features (3 columns) to predict a price using Elastic Net Regression. Without normalization or scaling, and even with only 20 rows of training data, I’m getting okay results, but I’ve read that by normalizing or scaling you can get better results. I’ve seen so many ways of ..

Read more
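One common pattern for this (a sketch on synthetic stand-in data, not the author's actual features) is to put the scaler and the Elastic Net inside a single scikit-learn `Pipeline`, so the mean/std used for scaling are always computed from the training rows alone:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# toy stand-in for the 3-feature price data: 20 rows, as in the excerpt,
# with deliberately mismatched feature scales
X = rng.normal(size=(20, 3)) * [1.0, 100.0, 0.01]
y = X @ [3.0, 0.05, 200.0] + rng.normal(scale=0.1, size=20)

# StandardScaler runs before ElasticNet on every fit; in cross-validation
# each split re-fits the scaler on its own training fold only
model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1))
model.fit(X, y)
print(model.score(X, y))  # R^2 on the training data
```

Keeping the scaler in the pipeline (rather than scaling the whole dataset up front) avoids leaking test-set statistics into training, which matters especially with only 20 rows.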

I’m using pyro.contrib.gp to train my data but I have encountered a weird issue. Say we define 2 GPR models as follows:

kernel_init = gp.kernels.RBF(input_dim=dimension, variance=torch.tensor(1.), lengthscale=length_scale_init)
gpr_init = gp.models.GPRegression(train_x, train_y, kernel_init, noise=torch.tensor(0.), jitter=jitter)
kernel = gp.kernels.RBF(input_dim=dimension, variance=torch.tensor(1.), lengthscale=length_scale_init)
gpr_opt = gp.models.GPRegression(train_x, train_y, kernel, noise=torch.tensor(0.), jitter=jitter)

Then after one ..

Read more