This post is about the same issue, but no proper answer has been given. And since this problem seems to be widespread, I’ll keep my code behind the scenes. Following this source, I’ve written a network that does well when I give it a training example with a target vector. Using gradient descent I minimize ..
I am a beginner in artificial neural networks and I am still trying to understand a few concepts. I am trying to change an online-learning multi-layer perceptron (with feedforward and backpropagation) code to batch learning, but I am confused about how to do it. I am going to train the data, and test the data ..
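A minimal sketch of the difference the question is asking about, using a made-up one-layer linear model and squared-error loss (the data, learning rate, and model here are all assumptions, not the asker's code): online learning updates the weights after every example, while batch learning averages the gradients over the whole batch and updates once.

```python
import numpy as np

# Toy data: 8 examples, 3 features, generated from a known linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

lr = 0.1

def grad(w, x_i, y_i):
    # Gradient of the squared error 0.5 * (x·w - y)^2 for a single example.
    return (x_i @ w - y_i) * x_i

# Online (stochastic) learning: one weight update per training example.
w_online = np.zeros(3)
for x_i, y_i in zip(X, y):
    w_online -= lr * grad(w_online, x_i, y_i)

# Batch learning: average all per-example gradients, then update once per pass.
w_batch = np.zeros(3)
g = np.mean([grad(w_batch, x_i, y_i) for x_i, y_i in zip(X, y)], axis=0)
w_batch -= lr * g
```

The backpropagation part of an MLP is unchanged; only the point at which accumulated gradients are applied to the weights moves from "per example" to "per batch".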
I’m working with this dataset: https://drive.google.com/file/d/1j7Gdeq5IiZlmRB7aMrJheLu1cFt9G5d7/view?usp=sharing I’m trying to train a neural network that predicts fraud, based on a variable called ‘fraud’ (a dummy variable: 1 for fraud, 0 for not fraud). What happens is that my model properly predicts the not-fraud cases (8429), but it can’t predict the frauds (68), and when I print my ..
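With roughly 8429 negatives to 68 positives, this looks like a class-imbalance problem: a model can reach high accuracy by always predicting "not fraud". A common remedy is to weight each class inversely to its frequency; a sketch of computing such "balanced" weights (the counts are taken from the question, the weighting formula is the usual n_samples / (n_classes * count) convention):

```python
import numpy as np

# Labels with the class counts mentioned in the question.
y = np.array([0] * 8429 + [1] * 68)

classes, counts = np.unique(y, return_counts=True)
# "Balanced" weighting: n_samples / (n_classes * count_per_class),
# so the rare class gets a proportionally larger weight.
weights = len(y) / (len(classes) * counts)
class_weight = dict(zip(classes.tolist(), weights.tolist()))
```

In Keras, for example, a dict like this can be passed as `model.fit(..., class_weight=class_weight)` so misclassified frauds contribute much more to the loss than misclassified non-frauds.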
I’m creating my own version of the tic-tac-toe game in order to practice my Python skills, and I thought it might be fun if, instead of using X’s and O’s, I could use the players’ faces retrieved from a picture. I would like to know if there’s any free, already-trained NN that I could use ..
I’m trying to define 2 neural network models in TensorFlow 1.4 which are equal in architecture but differ in the names of their variables. For example: class myModel(object): def __init__(self, k): with tf.variable_scope("var_name"+str(k)): … with tf.variable_scope("scope_name"+str(k)): … with tf.variable_scope("decoder_name"+str(k)): … model_1 = myModel(k=1) model_2 = myModel(k=2) I want to pretrain both networks on dataset_x, and then ..
I want to ask whether it works to add an additional attention layer on top of the BERT model to perform a text classification task, since my model looks like the code below and I got unexpectedly low accuracy this way. class my_model(nn.Module): def __init__(self): super(my_model, self).__init__() self.bert = transformers.BertModel.from_pretrained(BERT_PATH) self.self_attn = ..
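For reference, the layer being stacked on BERT here is ordinary scaled dot-product self-attention over the per-token encoder outputs. A hedged numpy sketch of just that operation (the sequence length, hidden size, and random stand-in for BERT's `last_hidden_state` are all assumptions, and the projections are left as identity for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden = 6, 8
H = rng.normal(size=(seq_len, hidden))  # stand-in for BERT's token outputs

def self_attention(H):
    # Scaled dot-product self-attention with identity Q/K/V projections.
    scores = H @ H.T / np.sqrt(H.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # each row sums to 1
    return A @ H                                  # attention-weighted token mix

# Mean-pool the attended tokens into one vector for a classification head.
pooled = self_attention(H).mean(axis=0)
```

Low accuracy with such a stack is often not the attention math itself but training details (pooling choice, learning rate for the new layer vs. the pretrained encoder, or freezing), which the truncated question may go on to show.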
user1:~/BCNet/code$ python infer.py --inputs . --save-root ../recs Traceback (most recent call last): File "infer.py", line 117, in net.load_state_dict(tor, True) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1052, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for ImageReconstructModel: size mismatch for garPcapsLayers.1.mean: copying a param with shape torch.Size() from checkpoint, the shape in current model is torch.Size([8454]). size ..
I’m facing a lot of problems with my code. I have made a deep neural network for a dataset. I was running the code below: def valgenerator(): while 1: batch_size=32 x_time_data = np.zeros((batch_size, time_steps//subsample, 32)) yy = [] for i in range(batch_size): random_index = np.random.randint(0, len(xval)-time_steps) x_time_data[i] = xval[random_index:random_index+time_steps:subsample] yy.append(yval[random_index + time_steps]) yy = np.asarray(yy) ..
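A self-contained version of the generator pattern from this excerpt, with dummy stand-ins for `xval`, `yval`, `time_steps`, and `subsample` (their real values are not shown in the question), showing how one batch would be drawn from it:

```python
import numpy as np

# Dummy data standing in for the question's xval/yval and hyperparameters.
time_steps, subsample = 100, 2
xval = np.random.rand(1000, 32)   # 1000 timesteps, 32 features
yval = np.random.rand(1000)

def val_generator(batch_size=32):
    # Infinite generator: each batch is `batch_size` random subsampled windows
    # of length time_steps//subsample, each paired with the value that follows
    # its window, matching the logic in the question's valgenerator().
    while True:
        x_time_data = np.zeros((batch_size, time_steps // subsample, 32))
        yy = []
        for i in range(batch_size):
            start = np.random.randint(0, len(xval) - time_steps)
            x_time_data[i] = xval[start:start + time_steps:subsample]
            yy.append(yval[start + time_steps])
        yield x_time_data, np.asarray(yy)

xb, yb = next(val_generator())
```

Yielding `(x, y)` tuples like this is the shape Keras-style `fit(..., validation_data=generator)` expects, whereas a generator that never yields (or appends to a non-list, as in the garbled `yy =` line) fails before the first batch.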
I am quite new to using tensorflow and I would really appreciate some help on this. I am training an autoencoder and I am trying to load the data input with the tensorflow.data pipeline. However, after doing this, I’ve been having problems with the input shape etc. Does anyone know how to fix this? Thank ..
Trying to make a raw neural network with Python, I reached this question: does ReLU disturb Softmax by creating unnecessary zeros? import numpy as np from nnfs.datasets import spiral_data class Activation_ReLu: def forward(self,inputs): self.output=np.maximum(0,inputs) class Activation_Softmax: def forward(self,inputs): exp_values=np.exp(inputs-np.max(inputs,axis=1,keepdims=True)) porcentage=exp_values/np.sum(exp_values,axis=1,keepdims=True) self.output=porcentage Here are both functions; for creating the layer, use this code: class ..
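A quick numerical check of the question's worry, using a made-up logits row rather than the spiral data: the zeros ReLU produces do not break softmax, since exp(0) = 1, so every zeroed entry just maps to the same small but non-zero probability.

```python
import numpy as np

# One made-up row of pre-activations; ReLU zeroes the negatives.
logits = np.array([[-2.0, 0.0, 3.0, -1.0]])
relu_out = np.maximum(0, logits)          # -> [[0., 0., 3., 0.]]

# Softmax with the same max-subtraction stabilization as the question's code.
shifted = relu_out - relu_out.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
```

The three ReLU zeros all receive identical probability mass, so information that distinguished them (here -2, 0, and -1) is lost; that is why softmax is normally applied to raw logits rather than to ReLU outputs, but the result is still a valid distribution.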