#### Category: loss-function

I have a multidimensional input of shape (None, 8, 105). I need to access the value `i[-1:][-1:][:1]` and make comparisons between `y_actual`, `y_predicted`, and `input_tensor`. This is more or less what I have, but the function doesn't work:

```python
def custom_loss(self, input_tensor):
    def loss(y_actual, y_predicted):
        i = input_tensor[-1:][:1]
        mse = K.mean(K.sum(K.square(y_actual - y_predicted)))
        return K.switch((K.greater(i, y_predicted) & ..
```
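A minimal sketch of the closure pattern this question is reaching for, assuming TF2/Keras. The slicing (last timestep, first feature) and the switch condition are placeholders, since the original snippet is truncated; note that `input_tensor[-1:]` indexes the *batch* axis, whereas `input_tensor[:, -1:, :1]` is likely what was intended for a `(None, 8, 105)` input.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def custom_loss(input_tensor):
    # Closure: `loss` keeps a reference to input_tensor, so Keras can
    # still call it with the usual (y_actual, y_predicted) signature.
    def loss(y_actual, y_predicted):
        # Last timestep, first feature: (batch, 8, 105) -> (batch, 1, 1)
        i = input_tensor[:, -1:, :1]
        mse = K.mean(K.square(y_actual - y_predicted))
        # K.switch expects a boolean condition; reduce the elementwise
        # comparison to a single scalar with K.all.
        cond = K.all(K.greater(i, y_predicted))
        # Placeholder branches (assumption): penalize twice as hard
        # when the prediction is not below the reference input value.
        return K.switch(cond, mse, 2.0 * mse)
    return loss
```

Wiring it up is then `model.compile(loss=custom_loss(input_layer), ...)`, where `input_layer` is the symbolic model input.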

In my neural network (an RNN), I define the loss function so that the output of the network is used to find a (binary) index, and that index is then used to extract the required element from an array, which in turn is used to calculate the MSE loss. However, the program gives parameter().grad = ..
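This symptom (a parameter's `.grad` staying empty) is typically because hard indexing via `argmax` is not differentiable, so the graph from the loss back to the index-producing parameters is cut. A small sketch of the usual workaround, a softmax-weighted "soft" selection; the tensors here are illustrative, not from the original question:

```python
import torch

torch.manual_seed(0)
values = torch.tensor([0.2, 0.9, 0.4])       # array to extract from
logits = torch.randn(3, requires_grad=True)  # network output producing the index

# Hard indexing: argmax is a non-differentiable, piecewise-constant op,
# so no gradient reaches `logits` through the chosen index.
hard = values[logits.argmax()]

# Soft selection: a softmax-weighted sum keeps the graph connected,
# approaching the hard pick as the logits grow more confident.
weights = torch.softmax(logits, dim=0)
soft = (weights * values).sum()

target = torch.tensor(0.9)
loss = (soft - target) ** 2
loss.backward()  # logits.grad is now populated
```

With the hard version, `loss.backward()` would leave `logits.grad` as `None`, which matches the truncated error in the question.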

I am new to image processing and optimization. Let y be a noisy image described by the relationship y = x + n, where x is the noise-free image and n is the noise. The goal is to recover x from y: min_x ||y - x||_2^2 + lambda * ||x||_2^2. It should be solved by the gradient method for ..
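This is Tikhonov-regularized denoising. The objective f(x) = ||y - x||² + λ||x||² has gradient ∇f(x) = 2(x - y) + 2λx and the closed-form minimizer x* = y / (1 + λ), which gives a handy correctness check for the iteration. A NumPy sketch (the step size, λ, and iteration count are illustrative choices, not from the question):

```python
import numpy as np

def denoise_gd(y, lam=0.1, step=0.1, iters=500):
    """Gradient descent on f(x) = ||y - x||^2 + lam * ||x||^2.

    Gradient: grad f(x) = 2*(x - y) + 2*lam*x.
    Converges for step < 1 / (1 + lam); the iterates approach the
    closed-form minimizer x* = y / (1 + lam).
    """
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = 2.0 * (x - y) + 2.0 * lam * x
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
y = rng.standard_normal((8, 8))   # stand-in for a noisy image
x = denoise_gd(y, lam=0.1)        # close to y / 1.1
```

Each step contracts the error by |1 - 2·step·(1 + λ)|, so with step = 0.1 and λ = 0.1 the iteration is well within the convergent regime.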

After studying autograd, I tried to write a loss function myself. Here is my loss:

```python
def myCEE(outputs, targets):
    exp = torch.exp(outputs)
    A = torch.log(torch.sum(exp, dim=1))
    hadamard = F.one_hot(targets, num_classes=10).float() * outputs
    B = torch.sum(hadamard, dim=1)
    return torch.sum(A - B)
```

I compared it with torch.nn.CrossEntropyLoss. Here are the results:

```python
for i, j in train_dl:
    inputs = i
    targets = j
    break

outputs = model(inputs)
myCEE(outputs, targets)          # tensor(147.5397, grad_fn=<SumBackward0>)
loss_func = nn.CrossEntropyLoss(reduction='sum')
loss_func(outputs, targets)      # tensor(147.5397, grad_fn=<NllLossBackward>)
```

The values were the same. I ..

I have a weird performance issue which I cannot explain. During the training of my network, after I am done with the calculation on my training batch, I go directly over the validation set, take a batch, and test my model on it. So my validation is not done on a ..
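The equivalence in the first question can be checked on synthetic data. One caveat worth noting: `torch.log(torch.sum(torch.exp(...)))` overflows for large logits, while `torch.logsumexp` (which `nn.CrossEntropyLoss` effectively uses) is stable. A sketch with random tensors standing in for `model(inputs)`:

```python
import torch
import torch.nn.functional as F

def myCEE(outputs, targets):
    # Stable log-sum-exp over the class dimension.
    A = torch.logsumexp(outputs, dim=1)
    # The logit of the true class for each row; equivalent to the
    # one-hot Hadamard product + sum in the question, without
    # materializing the one-hot matrix.
    B = outputs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return torch.sum(A - B)

torch.manual_seed(0)
outputs = torch.randn(4, 10, requires_grad=True)  # stand-in for model(inputs)
targets = torch.randint(0, 10, (4,))

mine = myCEE(outputs, targets)
ref = F.cross_entropy(outputs, targets, reduction="sum")
# `mine` and `ref` agree to floating-point precision.
```

For the second (fused) question: when validating mid-training this way, remember to wrap the validation batch in `model.eval()` and `torch.no_grad()`, and switch back to `model.train()` afterwards, otherwise dropout/batch-norm statistics and gradient tracking will distort the comparison.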

I received `TypeError: Expected bool, got 0.0 of type 'float' instead.` in the first line of the following custom loss function:

```python
@tf.function
def reduce_fp(y_true, y_pred):
    mask_0 = tf.cast(y_true == 0.0, float)
    mask_1 = tf.cast(y_true == 1.0, float)
    dist_0 = y_pred * mask_0
    dist_1 = y_pred * mask_1
    discounted_0 = tf.reduce_mean(dist_0)
    discounted_1 = 1.0 - tf.reduce_max(dist_1)
    ..
```
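The usual fix for this error is to replace the bare `==` comparison with `tf.equal`, which reliably returns a boolean tensor that `tf.cast` accepts in graph mode. A sketch of the corrected function; the final `return` line is an assumption, since the original is truncated:

```python
import tensorflow as tf

@tf.function
def reduce_fp(y_true, y_pred):
    # tf.equal returns a bool tensor; casting to y_pred's dtype also
    # avoids mixing Python `float` with tensor dtypes.
    mask_0 = tf.cast(tf.equal(y_true, 0.0), y_pred.dtype)
    mask_1 = tf.cast(tf.equal(y_true, 1.0), y_pred.dtype)
    dist_0 = y_pred * mask_0               # predictions on negatives
    dist_1 = y_pred * mask_1               # predictions on positives
    discounted_0 = tf.reduce_mean(dist_0)  # push negatives toward 0
    discounted_1 = 1.0 - tf.reduce_max(dist_1)
    # Assumed combination of the two terms (original snippet ends here).
    return discounted_0 + discounted_1
```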

I am training a sparse multi-label text classification model using Hugging Face models; it is one part of a SMART REPLY system. The task I am doing is as follows: I take customer utterances as input to the model and classify which Agent Response clusters they belong to. I have 60 clusters and ..
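For multi-label classification over 60 clusters, the standard loss is `BCEWithLogitsLoss` over multi-hot targets, i.e. an independent sigmoid per cluster rather than a softmax; this is also what Hugging Face's sequence-classification heads use when `problem_type="multi_label_classification"` is set. A minimal sketch with random logits standing in for the model head's output (the batch size and cluster indices are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_clusters = 60
logits = torch.randn(4, n_clusters)   # stand-in for the classification head output

# Multi-hot targets: each utterance may belong to several response clusters,
# and sparse rows (few or no positive clusters) are fine.
targets = torch.zeros(4, n_clusters)
targets[0, [3, 17]] = 1.0
targets[1, 42] = 1.0

loss_fn = nn.BCEWithLogitsLoss()      # sigmoid + BCE, one decision per cluster
loss = loss_fn(logits, targets)       # scalar loss averaged over all entries
```

With heavily sparse labels, the `pos_weight` argument of `BCEWithLogitsLoss` is the usual lever for re-balancing the rare positive clusters.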