Category: loss-function

I have a multidimensional input (None, 8, 105). I need to access the value i[-1:][0][-1:][0][:1] and make comparisons between y_actual, y_predicted, and input_tensor. This is more or less what I got, but the function doesn't work:

def custom_loss(self, input_tensor):
    def loss(y_actual, y_predicted):
        i = input_tensor[0][-1:][0][:1]
        mse = K.mean(K.sum(K.square(y_actual - y_predicted)))
        return K.switch((K.greater(i, y_predicted) & ..
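For reference, the closure pattern this excerpt is reaching for — an outer function that captures the extra input, an inner function that is the actual loss, and a switch-style conditional penalty — can be sketched in plain NumPy, with np.where standing in for K.switch. The penalty_factor knob and the example tensors below are illustrative, not from the question:

```python
import numpy as np

def custom_loss(input_tensor, penalty_factor=2.0):
    """Outer function captures the extra input; the inner function is the
    actual loss. penalty_factor is an illustrative knob, not from the question."""
    def loss(y_actual, y_predicted):
        # Slice a single reference value out of the (1, 8, 105) extra input,
        # mirroring the indexing in the excerpt.
        i = input_tensor[0][-1:][0][:1]
        sq_err = np.square(y_actual - y_predicted)
        # np.where plays the role of K.switch: predictions below the
        # reference value are penalized more heavily.
        weighted = np.where(y_predicted < i, penalty_factor * sq_err, sq_err)
        return np.mean(weighted)
    return loss

# Usage: bake the extra tensor in, then call the returned loss.
extra = np.arange(8 * 105, dtype=float).reshape(1, 8, 105)
loss_fn = custom_loss(extra)
y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 1.5])
```

The same two-level structure carries over to Keras directly: the outer function receives the input tensor, and the inner function keeps the (y_actual, y_predicted) signature Keras expects.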

Read more

After studying autograd, I tried to write a loss function myself. Here is my loss:

def myCEE(outputs, targets):
    exp = torch.exp(outputs)
    A = torch.log(torch.sum(exp, dim=1))
    hadamard = F.one_hot(targets, num_classes=10).float() * outputs
    B = torch.sum(hadamard, dim=1)
    return torch.sum(A - B)

and I compared it with torch.nn.CrossEntropyLoss. Here are the results:

for i, j in train_dl:
    inputs = i
    targets = j
    break

outputs = model(inputs)
myCEE(outputs, targets) : tensor(147.5397, grad_fn=&lt;SumBackward0&gt;)
loss_func = nn.CrossEntropyLoss(reduction='sum') : tensor(147.5397, grad_fn=&lt;NllLossBackward&gt;)

The values were the same. I ..
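The identity behind myCEE — per sample, the log-sum-exp of the logits minus the target logit equals the negative log-softmax probability of the target — can be checked in plain NumPy, independent of autograd. A small sketch (the logits and labels below are made up):

```python
import numpy as np

def my_cee(outputs, targets, num_classes=10):
    # A: log-sum-exp of the logits per sample (as in the excerpt)
    A = np.log(np.sum(np.exp(outputs), axis=1))
    # B: the logit of the target class, picked out via a one-hot mask
    one_hot = np.eye(num_classes)[targets]
    B = np.sum(one_hot * outputs, axis=1)
    return np.sum(A - B)

def reference_cee(outputs, targets):
    # Reference: negative log-softmax probability of the target, summed
    z = outputs - outputs.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.sum(np.exp(z), axis=1, keepdims=True))
    return -np.sum(log_probs[np.arange(len(targets)), targets])

logits = np.random.default_rng(0).normal(size=(4, 10))
labels = np.array([3, 1, 7, 0])
```

The two agree for any logits, which is why the excerpt's tensor values match; note the reference subtracts the row maximum first, the stability trick the raw exp/log form lacks for large logits.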

Read more

I received TypeError: Expected bool, got 0.0 of type 'float' instead in the first line of the following custom loss function:

@tf.function
def reduce_fp(y_true, y_pred):
    mask_0 = tf.cast(y_true == 0.0, float)
    mask_1 = tf.cast(y_true == 1.0, float)
    dist_0 = y_pred * mask_0
    dist_1 = y_pred * mask_1
    discounted_0 = tf.reduce_mean(dist_0)
    discounted_1 = 1.0 - tf.reduce_max(dist_1)
    ..
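A commonly suggested fix, assuming the error comes from the Python == comparison inside the traced function, is to use tf.equal together with an explicit tf.float32 dtype in tf.cast. A sketch (the final return line is a placeholder of mine, since the excerpt is truncated before it):

```python
import tensorflow as tf

@tf.function
def reduce_fp(y_true, y_pred):
    # tf.equal + an explicit tf.float32 dtype instead of `== 0.0` and `float`
    mask_0 = tf.cast(tf.equal(y_true, 0.0), tf.float32)
    mask_1 = tf.cast(tf.equal(y_true, 1.0), tf.float32)
    dist_0 = y_pred * mask_0
    dist_1 = y_pred * mask_1
    discounted_0 = tf.reduce_mean(dist_0)
    discounted_1 = 1.0 - tf.reduce_max(dist_1)
    # The excerpt is cut off here; summing the two terms is a placeholder.
    return discounted_0 + discounted_1

y_true = tf.constant([0.0, 1.0, 0.0, 1.0])
y_pred = tf.constant([0.1, 0.9, 0.2, 0.8])
```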

Read more

I am training a sparse multi-label text classification model using Hugging Face models; it is one part of a SMART REPLY system. The task is as follows: I take customer utterances as input to the model and classify which agent-response clusters they belong to. I have 60 clusters and ..
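For a multi-label head like this, the standard training loss is binary cross-entropy over raw logits, one sigmoid per cluster. The NumPy sketch below shows the numerically stable form of that quantity; only the 60-cluster count comes from the excerpt, everything else is illustrative:

```python
import numpy as np

NUM_CLUSTERS = 60  # from the question

def multilabel_bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy over raw logits, averaged over
    every (example, cluster) entry -- the same quantity computed by
    torch.nn.BCEWithLogitsLoss or tf.nn.sigmoid_cross_entropy_with_logits."""
    # max(x, 0) - x*z + log(1 + exp(-|x|)) is the stable sigmoid-BCE form
    losses = (np.maximum(logits, 0) - logits * targets
              + np.log1p(np.exp(-np.abs(logits))))
    return losses.mean()

# One utterance scored against all clusters; two clusters are "on".
logits = np.zeros((1, NUM_CLUSTERS))
targets = np.zeros((1, NUM_CLUSTERS))
targets[0, [3, 17]] = 1.0
```

Because each cluster gets its own independent sigmoid, an utterance can legitimately belong to several response clusters at once, which is what distinguishes this setup from a softmax multi-class head.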

Read more

I read the publication entitled "IMPROVED TRAINABLE CALIBRATION METHOD FOR NEURAL NETWORKS ON MEDICAL IMAGING CLASSIFICATION", available at https://arxiv.org/pdf/2009.04057.pdf. In this study, the authors propose a custom loss function that incorporates calibration into the model training process. They added the calibration component to the categorical cross-entropy loss to create this custom function. I have ..
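As I read the paper, the calibration component penalizes the batch-level gap between mean predicted confidence and accuracy, weighted by a coefficient beta. Under that reading, a minimal NumPy sketch of cross-entropy plus the calibration term might look like this (not the authors' code; the example logits are made up):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_plus_calibration(logits, labels, beta=1.0):
    """Cross-entropy plus a calibration penalty on the batch-level gap
    between mean confidence and accuracy (my reading of the paper's idea;
    beta is the weighting coefficient)."""
    probs = softmax(logits)
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels]))
    confidence = probs.max(axis=1).mean()             # mean top-class probability
    accuracy = (probs.argmax(axis=1) == labels).mean()
    return ce + beta * np.abs(confidence - accuracy)

logits = np.array([[2.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 1])
```

With beta = 0 this reduces to plain cross-entropy; increasing beta pushes the model toward predictions whose confidence matches its actual accuracy.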

Read more

I am using a custom Cauchy-Schwarz divergence-based loss function, available at https://gist.github.com/Jarino/cb6d9b39abcf773a1fb0e9a90ee67db9, to train a DL model for a multi-class classification task in TensorFlow 2.4 with Keras 2.4.0. The y_true labels are one-hot encoded. The loss function is given below:

from math import sqrt
from math import log
from scipy.stats import gaussian_kde
from scipy import ..
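One pitfall with that gist is that scipy's gaussian_kde runs outside the TensorFlow graph, so no gradients flow through it; a trainable loss has to be expressed in backend ops. The divergence itself has a simple discrete form that could be. A NumPy sketch of the quantity (one common form of the Cauchy-Schwarz divergence, not the gist's KDE-based implementation):

```python
import numpy as np

def cs_divergence(p, q, eps=1e-12):
    """Discrete Cauchy-Schwarz divergence between two probability vectors:
    -log( <p, q> / (||p|| * ||q||) ). Non-negative by the Cauchy-Schwarz
    inequality, and zero exactly when p and q are proportional. A sketch of
    the math only, not the gist's KDE-based implementation."""
    num = np.sum(p * q)
    den = np.sqrt(np.sum(p * p) * np.sum(q * q))
    return -np.log(num / (den + eps) + eps)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.7])
```

Since every operation here is a sum, product, sqrt, or log, the same expression written with tf.reduce_sum, tf.sqrt, and tf.math.log would stay differentiable end to end.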

Read more