I have FP32 tensor inputs whose shape is [1, 4, 1024, 256]. I need to quantize the tensor to INT8, but naive quantization has triggered a problem in my NLP model: it actually drops the EOS token. So I have to do calibration rather than go with the absolute min/max values. And I have a ..
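One common calibration approach for a case like this is percentile clipping: pick the clip range from a high percentile of the absolute values instead of the absolute max, so rare outliers don't stretch the INT8 range and crush small values. A minimal sketch, assuming symmetric quantization; the helper name `calibrate_scale` and the 99.9 percentile are illustrative choices, not from any particular library:

```python
import torch

def calibrate_scale(t: torch.Tensor, percentile: float = 99.9) -> torch.Tensor:
    # Clip at the given percentile of |t| instead of the absolute max,
    # so a handful of outliers doesn't dominate the INT8 range.
    amax = torch.quantile(t.abs().flatten().float(), percentile / 100.0)
    return amax / 127.0  # symmetric scale for int8

x = torch.randn(1, 4, 1024, 256)          # stand-in for the FP32 input
scale = calibrate_scale(x)
q = torch.clamp((x / scale).round(), -128, 127).to(torch.int8)
deq = q.float() * scale                    # dequantized approximation of x
```

Values beyond the percentile are saturated, which trades a little clipping error on outliers for much finer resolution on the bulk of the distribution.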
I have a simple NN:

import torch
import torch.nn as nn
import torch.optim as optim

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(1, 5)
        self.fc2 = nn.Linear(5, 10)
        self.fc3 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Model()
opt = ..
I am a beginner in NLP and have undertaken a challenge. I am trying to train and evaluate a hate-detection model using the HuggingFace Transformers library and this dataset. Model performance is secondary; I am just trying to get it going. My code so far:

import pandas as pd
import numpy as np
from numpy.random import ..
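Given the pandas/numpy imports in the preview, a first step before any Transformers fine-tuning is usually a reproducible train/validation split. A minimal sketch; the column names `text` and `label` and the 80/20 ratio are assumptions standing in for the actual dataset:

```python
import pandas as pd
from numpy.random import default_rng

# Tiny stand-in for the hate-speech dataset (columns are illustrative)
df = pd.DataFrame({
    "text": ["you are great", "awful words here", "nice day", "terrible slur"],
    "label": [0, 1, 0, 1],
})

rng = default_rng(42)                # fixed seed makes the split reproducible
mask = rng.random(len(df)) < 0.8     # ~80/20 train/validation split
train_df, val_df = df[mask], df[~mask]
```

The two frames can then be tokenized and wrapped into whatever dataset class the fine-tuning code expects.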
I am running AlexNet on the CIFAR10 dataset using PyTorch Lightning; here is my model:

class SelfSupervisedModel(pl.LightningModule):
    def __init__(self, hparams=None, num_classes=10, batch_size=128):
        super(SelfSupervisedModel, self).__init__()
        self.batch_size = batch_size
        self.loss_fn = nn.CrossEntropyLoss()
        self.hparams["lr"] = ModelHelper.Hyperparam.Learning_rate
        self.model = torchvision.models.alexnet(pretrained=False)

    def forward(self, x):
        return self.model(x)

    def training_step(self, train_batch, batch_idx):
        inputs, targets = train_batch
        predictions = self(inputs)
        loss = self.loss_fn(predictions, targets)
        ..
I am trying to use Catalyst to train a custom PyTorch neural network that I have created; however, when I run the code for the first time in Jupyter, it always gives me an AssertionError with no clear explanation. When I run the cell a second time, it seems to work normally. How can ..
I have multiple torch tensors with the following shapes: x1 = torch.Size([1, 512, 177]), x2 = torch.Size([1, 512, 250]), x3 = torch.Size([1, 512, 313]). How can I pad all these tensors with 0 over the last dimension, to have a unique shape like ([1, 512, 350])? What I tried to do is to convert them ..
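One direct way to do this is `torch.nn.functional.pad`, whose pad tuple is read from the last dimension backwards, so `(0, k)` appends `k` zeros to the last dimension only. A minimal sketch using random tensors with the shapes from the question:

```python
import torch
import torch.nn.functional as F

# Stand-ins for x1, x2, x3 with the shapes from the question
xs = [torch.randn(1, 512, n) for n in (177, 250, 313)]

target = 350
# (0, k) pads only the right side of the last dimension with zeros
padded = [F.pad(x, (0, target - x.shape[-1])) for x in xs]

# With a common shape the tensors can be concatenated or stacked
stacked = torch.cat(padded, dim=0)  # shape [3, 512, 350]
```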
I am new to PyTorch, and what I would like to do will probably be easy, but I have not found anything online about actually increasing the number of observations without adding them to the image folder (in my case). I don't want to add images to the folder because I want to play around ..
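A common way to get more observations without touching the folder is to wrap the same underlying dataset in on-the-fly transforms and concatenate the views. A minimal sketch with plain torch; `AugmentedView` and `ToyImages` are illustrative names (the toy dataset stands in for an `ImageFolder`), and the horizontal flip is just one example transform:

```python
import torch
from torch.utils.data import Dataset, ConcatDataset

class AugmentedView(Dataset):
    """Same underlying samples, seen through a transform."""
    def __init__(self, base, transform):
        self.base, self.transform = base, transform
    def __len__(self):
        return len(self.base)
    def __getitem__(self, i):
        x, y = self.base[i]
        return self.transform(x), y

class ToyImages(Dataset):
    """Stand-in for an ImageFolder-style dataset."""
    def __init__(self, n=10):
        self.data = [(torch.randn(3, 8, 8), i % 2) for i in range(n)]
    def __len__(self):
        return len(self.data)
    def __getitem__(self, i):
        return self.data[i]

base = ToyImages()
flipped = AugmentedView(base, lambda x: torch.flip(x, dims=[-1]))  # horizontal flip
doubled = ConcatDataset([base, flipped])  # twice the observations, no new files
```

The folder on disk is untouched; each "extra" observation is generated when the DataLoader asks for it.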
I am trying to train a neural network with PyTorch, but I get the error in the title. I followed this tutorial and just applied some small changes to meet my needs. Here's the network:

class ChordClassificationNetwork(nn.Module):
    def __init__(self, train_model=False):
        super(ChordClassificationNetwork, self).__init__()
        self.train_model = train_model
        self.flatten = nn.Flatten()
        self.firstConv = nn.Conv2d(3, 64, (3, 3))
        self.secondConv ..
I have to compute score = dot(a, LeakyReLU(x_i + y_j)) for each i, j in [N], where a, x_i, and y_j are D-dimensional vectors and dot() is the dot product that outputs a scalar value. So finally, I have to get an NxN score matrix. In Keras, I implemented it as: #given X (N x D), Y(N x D), A (D ..
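In PyTorch this pairwise score can be computed without loops by broadcasting X and Y to an (N, N, D) tensor and contracting with a at the end. A minimal sketch with small random inputs; the shapes follow the question, N and D are arbitrary:

```python
import torch
import torch.nn.functional as F

N, D = 4, 8
X = torch.randn(N, D)   # rows are x_i
Y = torch.randn(N, D)   # rows are y_j
a = torch.randn(D)

# Broadcast to (N, N, D): entry [i, j, :] is x_i + y_j
pairwise = X.unsqueeze(1) + Y.unsqueeze(0)

# Apply LeakyReLU elementwise, then dot each row with a -> (N, N) scores
scores = F.leaky_relu(pairwise) @ a
```

This materializes an N x N x D intermediate, so for very large N it may be worth chunking over i, but for moderate sizes it is the idiomatic vectorized form.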
The PyTorch previously installed on the remote Linux system is problematic (version 1.8.0). It is in the system folders, and I don't have the privilege to uninstall or upgrade it because I am not a superuser. As a result, I installed another PyTorch in my user space using the command pip3 install --user --ignore-installed torch. There ..
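For a `--user` install like this, the key question is usually whether Python resolves the user copy before the system one. A small diagnostic sketch using only the standard library; it just reports where user-installed packages go and whether that directory is on `sys.path`:

```python
import site
import sys

# Packages installed with `pip3 install --user` land in the user site directory
user_site = site.getusersitepackages()
print("user site-packages:", user_site)

# When user site-packages is enabled, it precedes the system site-packages on
# sys.path, so `import torch` resolves to the user-space install first.
print("user site on sys.path:", user_site in sys.path)
```

Printing `torch.__file__` after importing is the usual way to confirm which installation actually won.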