I have a multidimensional input of shape (None, 8, 105). I need to access the value `i[-1:][0][-1:][0][:1]` and make comparisons between y_actual, y_predicted and input_tensor. This is more or less what I have, but the function doesn't work: def custom_loss(self, input_tensor): def loss(y_actual, y_predicted): i = input_tensor[0][-1:][0][:1] mse = K.mean(K.sum(K.square(y_actual - y_predicted))) return K.switch((K.greater(i, y_predicted) & ..
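A working version of this pattern might look like the sketch below: a loss factory that closes over the model's input tensor, slices one feature out of the last timestep, and branches with `K.switch`. Since the original condition is truncated, the slice chosen, the `K.all` reduction, and the doubled penalty are illustrative assumptions, not the asker's intended logic.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def custom_loss(input_tensor):
    def loss(y_actual, y_predicted):
        # First feature of the last timestep of each sample -> shape (batch, 1)
        i = input_tensor[:, -1, :1]
        mse = K.mean(K.square(y_actual - y_predicted))
        # Illustrative branch: double the penalty when every i exceeds
        # its prediction; the real condition in the question is truncated.
        return K.switch(K.all(K.greater(i, y_predicted)), 2.0 * mse, mse)
    return loss
```

Pass `custom_loss(model.input)` to `model.compile(loss=...)` so the closure captures the live input tensor.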

#### Category: loss-function

In my neural network (an RNN), I am defining the loss function such that the output of the network is used to find a (binary) index, and that index is then used to extract the required element from an array, which in turn is used to calculate the MSE loss. However, the program gives parameter().grad = ..
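A common reason gradients come back empty in this setup is that hard index selection (e.g. via `argmax`) is piecewise constant, so no gradient flows back through the index. The sketch below contrasts that with a softmax-weighted "soft" selection, one possible differentiable workaround; the array and shapes are stand-ins, not the asker's data.

```python
import torch

arr = torch.tensor([1.0, 2.0, 3.0])
logits = torch.randn(3, requires_grad=True)

# Hard selection: argmax is not differentiable, so the selected value
# carries no gradient back to `logits`.
hard = arr[torch.argmax(logits)]

# Soft selection: a softmax-weighted sum over the array is
# differentiable, so backward() populates logits.grad.
soft = (torch.softmax(logits, dim=0) * arr).sum()
loss = (soft - 2.0) ** 2
loss.backward()
assert logits.grad is not None
```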

I am new to image processing and optimization. Let y be a noisy image described by the relationship y = x + n, where x is a noise-free image and n is the noise. The goal is to recover x from y: min_x ||y - x||_2^2 + lambda ||x||_2^2. It should be solved by the gradient method for ..
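For this particular objective the gradient method is easy to sketch: the gradient is 2(x - y) + 2*lambda*x, and the iteration converges to the closed-form minimizer x* = y / (1 + lambda), which gives a handy correctness check. The step size and iteration count below are illustrative choices.

```python
import numpy as np

def denoise(y, lam=0.5, step=0.1, iters=500):
    """Gradient descent on min_x ||y - x||_2^2 + lam * ||x||_2^2."""
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = 2.0 * (x - y) + 2.0 * lam * x  # gradient of the objective
        x = x - step * grad
    return x

y = np.array([1.0, 2.0, 3.0])
x = denoise(y, lam=0.5)
# converges toward the closed-form solution y / (1 + lam) = y / 1.5
```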

After studying autograd, I tried to write a loss function myself. Here is my loss: def myCEE(outputs, targets): exp = torch.exp(outputs) A = torch.log(torch.sum(exp, dim=1)) hadamard = F.one_hot(targets, num_classes=10).float() * outputs B = torch.sum(hadamard, dim=1) return torch.sum(A - B) I compared it with torch.nn.CrossEntropyLoss; here are the results: for i, j in train_dl: inputs = i targets = j break outputs = model(inputs) myCEE(outputs, targets) : tensor(147.5397, grad_fn=<SumBackward0>) loss_func = nn.CrossEntropyLoss(reduction='sum') : tensor(147.5397, grad_fn=<NllLossBackward>) The values were the same. I ..
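The equivalence can be reproduced in a self-contained way: cross-entropy per row is the log-sum-exp of the logits minus the logit of the true class. The sketch below uses random stand-ins for the model outputs and checks the hand-written version against `nn.CrossEntropyLoss(reduction='sum')`.

```python
import torch
import torch.nn.functional as F

def myCEE(outputs, targets):
    A = torch.log(torch.sum(torch.exp(outputs), dim=1))  # log-sum-exp per row
    hadamard = F.one_hot(targets, num_classes=10).float() * outputs
    B = torch.sum(hadamard, dim=1)                       # logit of the true class
    return torch.sum(A - B)

torch.manual_seed(0)
outputs = torch.randn(32, 10, requires_grad=True)  # stand-in for model(inputs)
targets = torch.randint(0, 10, (32,))

mine = myCEE(outputs, targets)
ref = torch.nn.CrossEntropyLoss(reduction='sum')(outputs, targets)
assert torch.allclose(mine, ref, atol=1e-5)
```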

I have a weird behavior which I cannot explain. During the training of my network, after I am done with the calculation of a training batch, I immediately take a batch from the validation set and test my model on it. So my validation is not done on a ..

I received TypeError: Expected bool, got 0.0 of type 'float' instead in the first line of the following custom loss function: @tf.function def reduce_fp(y_true, y_pred): mask_0 = tf.cast(y_true == 0.0, float) mask_1 = tf.cast(y_true == 1.0, float) dist_0 = y_pred * mask_0 dist_1 = y_pred * mask_1 discounted_0 = tf.reduce_mean(dist_0) discounted_1 = 1.0 - tf.reduce_max(dist_1) ..
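A usual fix for this error is to replace the Python `==` comparison with `tf.equal`, which produces a boolean tensor that `tf.cast` accepts under `@tf.function` tracing. The sketch below mirrors the question's `reduce_fp`; since the original return statement is truncated, the final combination of the two terms is an assumption.

```python
import tensorflow as tf

@tf.function
def reduce_fp(y_true, y_pred):
    # tf.equal builds an elementwise boolean tensor (unlike Python `==`,
    # which can confuse tracing here).
    mask_0 = tf.cast(tf.equal(y_true, 0.0), tf.float32)
    mask_1 = tf.cast(tf.equal(y_true, 1.0), tf.float32)
    dist_0 = y_pred * mask_0
    dist_1 = y_pred * mask_1
    discounted_0 = tf.reduce_mean(dist_0)
    discounted_1 = 1.0 - tf.reduce_max(dist_1)
    return discounted_0 + discounted_1  # combining terms is an assumption
```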

I am training a sparse multi-label text classification model using Hugging Face models, as part of a SMART REPLY system. The task is as follows: I take Customer Utterances as input to the model and classify which Agent Response clusters they belong to. I have 60 clusters and ..

I read the publication entitled "Improved Trainable Calibration Method for Neural Networks on Medical Imaging Classification", available at https://arxiv.org/pdf/2009.04057.pdf. In this study, they propose a custom loss function that incorporates calibration into the model training process. They add a calibration component to the categorical cross-entropy loss to create this custom function. I have ..
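One way such a loss can be sketched is categorical cross-entropy plus a penalty on the gap between mean confidence and accuracy over the batch. This is written in the spirit of that paper, but the weight `beta` and the exact penalty form here are assumptions, not the paper's verbatim definition.

```python
import tensorflow as tf

def calibrated_cce(beta=1.0):
    """Cross-entropy plus a confidence/accuracy-gap penalty (sketch)."""
    def loss(y_true, y_pred):
        cce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        confidence = tf.reduce_max(y_pred, axis=-1)          # top softmax prob
        correct = tf.cast(
            tf.equal(tf.argmax(y_true, -1), tf.argmax(y_pred, -1)), tf.float32)
        # Penalize the batch-level gap between mean confidence and accuracy.
        gap = tf.abs(tf.reduce_mean(confidence) - tf.reduce_mean(correct))
        return tf.reduce_mean(cce) + beta * gap
    return loss
```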

I am using a custom Cauchy-Schwarz divergence-based loss function, available at https://gist.github.com/Jarino/cb6d9b39abcf773a1fb0e9a90ee67db9, to train a DL model for a multi-class classification task in TensorFlow 2.4 with Keras 2.4.0. The y_true labels are one-hot encoded. The loss function is given below: from math import sqrt from math import log from scipy.stats import gaussian_kde from scipy import ..

As a PyTorch newbie (coming from TensorFlow), I am unsure how to implement early stopping. My research has led me to discover that PyTorch does not have a native way to do this. I have also discovered torchsample, but I am unable to install it in my conda environment for whatever reason. Is there a simple ..
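A simple early-stopping helper takes only a few lines and needs no extra package. The sketch below uses the conventional names `EarlyStopping`, `patience`, and `min_delta`; these are not part of any official torch API, just a common pattern.

```python
class EarlyStopping:
    """Stop training when validation loss stops improving (minimal sketch)."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.counter = 0
        self.should_stop = False

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # improvement: remember it, reset the counter
            self.counter = 0
        else:
            self.counter += 1      # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop
```

In the training loop, call `stopper.step(val_loss)` once per epoch and `break` when it returns True.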
