I was trying to build a gradient descent function in Python. I have used binary cross-entropy as the loss function and sigmoid as the activation function.

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def binary_crossentropy(y_pred, y):
    epsilon = 1e-15
    y_pred_new = np.array([max(i, epsilon) for i in y_pred])
    y_pred_new = np.array([min(i, 1 - epsilon) for i in y_pred_new])
    return -np.mean(y * np.log(y_pred_new) + (1 - y) * np.log(1 - y_pred_new))

def gradient_descent(X, ..
Category : deep-learning
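The excerpt above cuts off at the gradient_descent definition. A minimal sketch of how such a function might continue, assuming a single-layer logistic-regression setup; the names w, b, learning_rate and epochs are hypothetical and do not appear in the original:

import numpy as np

def gradient_descent(X, y, epochs=1000, learning_rate=0.1):
    # Hypothetical single-layer setup: one weight per feature plus a bias.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        y_pred = sigmoid(np.dot(X, w) + b)       # forward pass
        loss = binary_crossentropy(y_pred, y)    # track the loss for monitoring
        # For sigmoid + binary cross-entropy, the gradient w.r.t. the linear output is (y_pred - y).
        dw = np.dot(X.T, (y_pred - y)) / len(y)
        db = np.mean(y_pred - y)
        w -= learning_rate * dw                  # parameter updates
        b -= learning_rate * db
    return w, b, loss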
I am currently trying to add a pretrained GloVe embedding to a model and can't seem to load the GloVe text file to parse its data. I always get a file-not-found error. My code is as follows:

outname = 'glove.6B.100d.txt'
outdir = './Downloads'
if not os.path.exists(outdir):
    os.mkdir(outdir)
fullname = os.path.join(outdir, outname)

def ..
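A common cause of this error is that the GloVe file sits somewhere else (for example the user's home Downloads folder) while the script looks in ./Downloads relative to the current working directory, which the code above creates empty if it is missing. A minimal sketch of checking the path and parsing the file, assuming the standard GloVe line format; os.path.expanduser and the embeddings_index name are illustrative, not from the original:

import os
import numpy as np

fullname = os.path.join(os.path.expanduser('~/Downloads'), 'glove.6B.100d.txt')
print(os.getcwd(), os.path.exists(fullname))  # verify where the script is looking before opening

embeddings_index = {}
with open(fullname, encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]                                   # first token is the word
        vector = np.asarray(values[1:], dtype='float32')   # remaining tokens are the vector
        embeddings_index[word] = vector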
I loaded a custom PyTorch model from storage and I want to find out its input shape. Something like this: model.input_shape So, is it possible to get its input shape? Source: Python..
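PyTorch modules do not record an input_shape attribute the way Keras models do, so there is no built-in model.input_shape. A minimal sketch of one common workaround, reading what the first weighted layer expects; the layer types checked and the helper name are illustrative assumptions:

import torch.nn as nn

def guess_input_features(model):
    # Walk the modules in definition order and report the first weighted layer's expectation.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            return ('Linear layer, in_features =', module.in_features)
        if isinstance(module, nn.Conv2d):
            return ('Conv2d layer, in_channels =', module.in_channels)
    return None  # no obvious first layer found; the full input shape is not stored anywhere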
I'm facing a BrokenPipeError when I try to run sentiment analysis with Hugging Face. It returns [Errno 32] Broken pipe. The code is:

def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        reviews=df.content.to_numpy(),
        targets=df.sentiment.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )
    return DataLoader(
        ds,
        batch_size=batch_size,
        num_workers=4
    )

followed by the code below:

BATCH_SIZE = 16
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE) ..
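This error usually comes from the DataLoader's worker subprocesses rather than from the model; on Windows and inside notebooks the usual workarounds are to drop num_workers to 0 or to guard the entry point. A minimal sketch reusing the names from the excerpt (GPReviewDataset, df_train, tokenizer and MAX_LEN are assumed to be defined as in the original code):

from torch.utils.data import DataLoader

def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        reviews=df.content.to_numpy(),
        targets=df.sentiment.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )
    # num_workers=0 keeps data loading in the main process and avoids the broken pipe.
    return DataLoader(ds, batch_size=batch_size, num_workers=0)

if __name__ == '__main__':  # required guard when spawning worker processes on Windows
    BATCH_SIZE = 16
    train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)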
I am stuck on one project: I am using a TensorFlow Hub speech-to-text model to convert audio to text, but TensorFlow Hub has no built-in functionality to give me the WER (word error rate). How can I calculate the WER from the audio-to-transcript text (also, I don't have true labels)? I ..
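WER by definition compares a hypothesis transcript against a reference transcript, so some ground-truth text is needed; without true labels it cannot be computed directly. A minimal sketch of the usual computation, assuming a reference transcript is available and using the third-party jiwer package (not part of TensorFlow Hub):

import jiwer

reference = "the quick brown fox"        # ground-truth transcript (assumed available)
hypothesis = "the quick brown box"       # transcript produced by the model
# WER = (substitutions + insertions + deletions) / number of words in the reference
print(jiwer.wer(reference, hypothesis))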
Are the simplicity of the language and its garbage collector the main reasons to use it for these two types of software programming? And if people are also using it because of the libraries it has, then my question is: why not write similar libraries for another language? I mean, you probably don't want ..
How do I concatenate two tensors in TensorFlow 2.4.1 with the shapes shown below? t1: [4, 3, 2] t2: [4, 10, 256] I want shape: t3: [4, 13, 258] Thanks in advance! Source: Python..
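tf.concat alone cannot produce that shape: concatenation along one axis requires every other dimension to match, and here both axis 1 (3 + 10 = 13) and axis 2 (2 + 256 = 258) grow. One possible reading, sketched below, is to zero-pad each tensor along the last dimension and then concatenate along axis 1; whether filling the missing entries with zeros is acceptable is an assumption, not something stated in the question:

import tensorflow as tf

t1 = tf.zeros([4, 3, 2])
t2 = tf.ones([4, 10, 256])

# Pad the last dimension of each tensor out to 2 + 256 = 258,
# then stack them along axis 1 (3 + 10 = 13 rows).
t1_padded = tf.pad(t1, [[0, 0], [0, 0], [0, 256]])  # -> [4, 3, 258]
t2_padded = tf.pad(t2, [[0, 0], [0, 0], [2, 0]])     # -> [4, 10, 258]
t3 = tf.concat([t1_padded, t2_padded], axis=1)       # -> [4, 13, 258]
print(t3.shape)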
Focal Loss is a loss aimed at addressing class imbalance in a classification task. Here is my attempt:

class FocalLoss(nn.Module):
    def __init__(self, weight=None, gamma=2., reduction='none'):
        nn.Module.__init__(self)
        self.weight = weight
        self.gamma = gamma
        self.reduction = reduction

    def forward(self, input_tensor, target_tensor):
        log_prob = F.log_softmax(input_tensor, dim=-1)
        prob = torch.exp(log_prob)
        return F.nll_loss(((1 - prob) ** self.gamma) ..
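The excerpt is cut off inside the F.nll_loss call. A sketch of how this pattern is commonly completed (the standard focal-loss formulation, not necessarily what the original author wrote) multiplies the log-probabilities by (1 - prob)^gamma before handing them to nll_loss:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, weight=None, gamma=2., reduction='none'):
        super().__init__()
        self.weight = weight
        self.gamma = gamma
        self.reduction = reduction

    def forward(self, input_tensor, target_tensor):
        log_prob = F.log_softmax(input_tensor, dim=-1)
        prob = torch.exp(log_prob)
        # Down-weight well-classified examples by (1 - p)^gamma,
        # then apply the usual NLL loss on the re-weighted log-probabilities.
        return F.nll_loss(
            ((1 - prob) ** self.gamma) * log_prob,
            target_tensor,
            weight=self.weight,
            reduction=self.reduction
        )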
I'm trying to do an image classification task, and for that I'm using the VGG model. Right now I'm using 3 epochs, as I don't want the training to take a lot of time, but from the start my model is giving really bad accuracy. Can anyone tell me how I can make this model ..
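The excerpt does not show the training code, but a frequent cause of poor accuracy with VGG on a small budget of epochs is training the whole network from scratch instead of reusing pretrained weights. A minimal transfer-learning sketch, assuming a Keras workflow; num_classes, the input size, and the head layers are hypothetical choices, not taken from the question:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

num_classes = 10  # hypothetical number of classes

# Reuse ImageNet weights and freeze them so only the new classification head is trained.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])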

I have code here using the tensorflow.keras.datasets.imdb data. After splitting my data into train and validation sets (lines 45-50), my training data and training labels are ndarrays of shape (15000,). While training my model I can see that the model is not iterating through the entire dataset, and I'm getting an accuracy of 49.8%. I ..
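An accuracy around 50% on this binary task usually means the model is effectively guessing. One common cause with this dataset is feeding the raw (15000,) object arrays of word-index lists straight into a Dense network instead of vectorizing them first. A minimal vectorization sketch, assuming the standard num_words=10000 setup; the original lines 45-50 are not shown, so the exact split is an assumption:

import numpy as np
from tensorflow.keras.datasets import imdb

(train_data, train_labels), _ = imdb.load_data(num_words=10000)

def multi_hot(sequences, dimension=10000):
    # Turn each review (a variable-length list of word indices) into a fixed-length 0/1 vector.
    out = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0
    return out

x_train = multi_hot(train_data)                       # shape (25000, 10000) instead of (25000,)
y_train = np.asarray(train_labels, dtype='float32')   # labels as a float vector for binary cross-entropy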