So I have this code:

with torch.no_grad():
    X_test = mnist_test.test_data.view(-1, 28 * 28).float().to(device)
    Y_test = mnist_test.test_labels.to(device)
    prediction = linear(X_test)
    correct_prediction = torch.argmax(prediction, 1) == Y_test
    accuracy = correct_prediction.float().mean()
    print('Accuracy:', accuracy.item())

    r = 1504207845
    X_single_data = mnist_test.test_data[r:r + 1].view(-1, 28 * 28).float().to(device)
    Y_single_data = mnist_test.test_labels[r:r + 1].to(device)
    print('Label: ', Y_single_data.item())
    single_prediction = linear(X_single_data)
    print('Prediction: ', torch.argmax(single_prediction, ..
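As a self-contained reference, the evaluation pattern in that snippet (argmax accuracy under `torch.no_grad()`) can be sketched with synthetic data; the shapes, the `linear` model, and the batch size below are assumptions, not the asker's setup. Note also that an index like `r = 1504207845` is far beyond the 10,000 samples in the MNIST test set, so any single-sample index should come from `range(len(mnist_test))`.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the MNIST test split: 100 flattened 28x28 "images".
X_test = torch.randn(100, 28 * 28)
Y_test = torch.randint(0, 10, (100,))

linear = nn.Linear(28 * 28, 10)  # assumed single-layer classifier

with torch.no_grad():  # no gradients needed for evaluation
    prediction = linear(X_test)                          # (100, 10) logits
    correct = torch.argmax(prediction, 1) == Y_test      # boolean per sample
    accuracy = correct.float().mean()

print('Accuracy:', accuracy.item())
```

With an untrained model this prints an accuracy near chance (about 0.1); the structure is what matters.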
I’ve been trying to run this code from a previous project and I keep getting this error. ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead. When I run the main file in python3 I get the above error. I have tried several solutions already but none have been helpful. If someone could help me ..
I want to add two torch tensor 'lists'. For example, let

a = tensor([[1., 1., 2.],
            [1., 1., 2.],
            [1., 1., 2.],
            [1., 1., 2.],
            [1., 1., 2.],
            [1., 1., 2.]])
b = tensor([[4., 5., 6., 7., 8., 9.],
            [4., 5., 6., 7., 8., 9.],
            [4., 5., 6., 7., 8., 9.],
            [4., 5., 6., 7., ..
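Since `a` is (6, 3) and `b` is (6, 6), elementwise addition is impossible; if "add" here means joining the rows side by side (an assumption, given the shapes), `torch.cat` along `dim=1` does it:

```python
import torch

a = torch.ones(6, 3)                      # (6, 3), like the tensor of 1s above
b = torch.arange(4., 10.).repeat(6, 1)    # (6, 6), each row [4., 5., 6., 7., 8., 9.]

c = torch.cat([a, b], dim=1)              # column-wise concatenation
print(c.shape)  # torch.Size([6, 9])
```

`dim=0` would instead stack rows, which requires matching column counts.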
I am trying to build a convolutional conditional variational autoencoder (CVAE) from scratch in PyTorch to work on the MNIST dataset. However, I am missing two parts of my network. First, I must add a fully connected layer to incorporate the additional y label as an addition to my encoder. Second, I must also account ..
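One common way to condition an encoder on the label (a sketch of the general technique, not the asker's architecture; the feature and latent sizes below are assumptions) is to one-hot-encode `y` and concatenate it with the flattened convolutional features before the fully connected layers that produce the latent parameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10
feature_dim = 64   # assumed size of the flattened conv-encoder output
latent_dim = 16    # assumed latent dimensionality

# Encoder head: conv features concatenated with the one-hot label.
fc_mu = nn.Linear(feature_dim + num_classes, latent_dim)
fc_logvar = nn.Linear(feature_dim + num_classes, latent_dim)

features = torch.randn(8, feature_dim)           # pretend conv output, batch of 8
y = torch.randint(0, num_classes, (8,))          # MNIST digit labels
y_onehot = F.one_hot(y, num_classes).float()     # (8, 10)

h = torch.cat([features, y_onehot], dim=1)       # (8, 74)
mu, logvar = fc_mu(h), fc_logvar(h)
print(mu.shape)  # torch.Size([8, 16])
```

The decoder is conditioned the same way: concatenate the one-hot label with the sampled latent vector before the first decoder layer.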
I am trying to train several classification models with BERT, each with a different number of categories, to later use them in a tree-type decision model or classification pipeline. To do so I built a training function, mainly based on https://colab.research.google.com/drive/1pTuQhug6Dhl9XalKB0zUGf4FIdYFlpcX. The idea is to pass the prepared dataframes (df) one by one ..
I have a 2D matrix with shape [batch_size, seq_len] whose values are in the range [0, num_classes - 1]. I also have an embedding matrix with shape [num_classes, embedding_size]. How can I transform the 2D matrix into a 3D one with shape [batch_size, seq_len, embedding_size] by replacing each class label (a scalar) with its embedding vector? I ..
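This lookup is just integer indexing: indexing the embedding matrix with the 2D label tensor broadcasts over both dimensions. A minimal sketch with small assumed sizes:

```python
import torch

batch_size, seq_len = 2, 4
num_classes, embedding_size = 5, 3

labels = torch.randint(0, num_classes, (batch_size, seq_len))  # (2, 4)
embedding_matrix = torch.randn(num_classes, embedding_size)    # (5, 3)

# Each scalar label is replaced by its embedding row.
out = embedding_matrix[labels]                                 # (2, 4, 3)
print(out.shape)  # torch.Size([2, 4, 3])
```

`torch.nn.functional.embedding(labels, embedding_matrix)` computes the same thing, and `nn.Embedding.from_pretrained(embedding_matrix)` wraps it as a layer if the matrix should be (optionally) trainable.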
I am new to Torch and am using a code template for a masked-CNN model. To be prepared in case training is interrupted, I have used torch.save and torch.load in my code, but I suspect these alone are not enough to resume a training session? I start training with: model = train_mask_net(64) This calls the ..
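The suspicion is right: saving only the model weights loses the optimizer state (e.g. Adam's moment estimates) and the epoch counter, so resuming from weights alone restarts the optimizer cold. A minimal checkpointing sketch (the model, optimizer, and file name are assumptions, not the asker's `train_mask_net` code):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                        # stand-in for the masked-CNN
optimizer = torch.optim.Adam(model.parameters())
epoch = 5

# Save everything needed to resume, not just the weights.
torch.save({'epoch': epoch,
            'model_state': model.state_dict(),
            'optimizer_state': optimizer.state_dict()},
           'checkpoint.pt')

# ...later, after an interruption, restore all of it:
ckpt = torch.load('checkpoint.pt')
model.load_state_dict(ckpt['model_state'])
optimizer.load_state_dict(ckpt['optimizer_state'])
start_epoch = ckpt['epoch'] + 1
print(start_epoch)  # 6
```

If a learning-rate scheduler is used, its `state_dict()` belongs in the checkpoint as well.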
I am looking for a nice way of overriding the backward operation in nn.Module, for example:

class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super(LayerWithCustomGrad, self).__init__()
        self.weights = nn.Parameter(torch.randn(200))

    def forward(self, x):
        return x * self.weights

    def backward(self, grad_of_c):  # This gets called during loss.backward()
        # grad_of_c comes from the gradient of b*23
        grad_of_a = some_operation(grad_of_c)  # perform extra computation
        # ..
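Autograd never calls a `backward` method defined on an `nn.Module`; the supported mechanism is subclassing `torch.autograd.Function`. A sketch of the layer above with a custom gradient (here `some_operation` is replaced by the plain chain rule, since the source doesn't say what it does):

```python
import torch
import torch.nn as nn

class ScaleFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weights):
        ctx.save_for_backward(x, weights)
        return x * weights

    @staticmethod
    def backward(ctx, grad_out):
        x, weights = ctx.saved_tensors
        # Custom gradients w.r.t. x and weights; any extra
        # computation on grad_out would go here.
        return grad_out * weights, grad_out * x

class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(200))

    def forward(self, x):
        return ScaleFn.apply(x, self.weights)

layer = LayerWithCustomGrad()
x = torch.randn(200, requires_grad=True)
layer(x).sum().backward()       # runs ScaleFn.backward under the hood
print(x.grad.shape)  # torch.Size([200])
```

For simply inspecting or rescaling gradients without a new Function, `Tensor.register_hook` on an intermediate tensor is a lighter-weight alternative.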
I am trying to run code from GitHub in Windows 10. The link to the code is as follows: https://github.com/kashyap7x/QGN I am getting the following errors when I run the train.py file using PyCharm.

Traceback (most recent call last):
  File "C:/Users/ashah29/Desktop/QGN-master/QGN-master/train.py", line 13, in <module>
    from dataset import TrainDataset
  File "C:\Users\ashah29\Desktop\QGN-master\QGN-master\dataset.py", line 4, in <module>
    import ..
My goal is to execute code via an Azure DevOps pipeline. Why does my code exit? My theory: it exits because it fails to download `torch-xla==1.8`, and I now suspect that failure is due to a log limit. How do you increase a log limit on ..