I have a 3D numpy array (I call it a tensor) shaped (5, 8, 15000). Because of the calculations that filled it, there are some NaNs inside the tensor. The last axis is the simulation index: the process was repeated 15,000 times with slightly changed dynamics. I want to go through ..
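The excerpt breaks off, but a minimal sketch of the likely first step, finding which of the 15,000 simulations contain a NaN, could look like this (the array contents and the injected NaN are made up for illustration):

    import numpy as np

    tensor = np.random.rand(5, 8, 15000)   # hypothetical data; axis 2 = simulation index
    tensor[0, 0, 42] = np.nan              # injected NaN for demonstration

    # True for every simulation that contains at least one NaN
    has_nan = np.isnan(tensor).any(axis=(0, 1))   # shape (15000,)
    print(np.where(has_nan)[0])                   # -> [42]

    clean = tensor[:, :, ~has_nan]                # keep only the NaN-free runs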
I have a 3D numpy array which stores results from 15,000 simulations. The 3D array is composed of 2D result arrays, one for each simulation, so the 3rd dimension indexes the simulation number and is literally given by np.arange(15000). For each simulation, the ..
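Since the question is truncated, only a hedged sketch of how such a stack of per-simulation results is usually addressed; the array and statistic below are illustrative:

    import numpy as np

    results = np.random.rand(5, 8, 15000)     # hypothetical stack of 2D results
    sim_42 = results[:, :, 42]                # the 2D result of simulation 42, shape (5, 8)
    per_sim_mean = results.mean(axis=(0, 1))  # one summary number per simulation, shape (15000,)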
I am currently working on code that feeds two tensors into a smooth_l1_loss function with elementwise_mean set to false for the loss calculation. I tried to find articles about how the function works behind the scenes but couldn't find any material related to my question. Basically, the contents of both tensors are lists, making them ..
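For reference, a small sketch of what the function computes: elementwise_mean comes from an older PyTorch API, and in current releases the unreduced variant is requested with reduction='none'. With the default beta of 1, each element follows the Huber-style rule below (pred and target are made-up inputs):

    import torch
    import torch.nn.functional as F

    pred = torch.tensor([0.5, 2.0, -1.0])
    target = torch.zeros(3)

    loss = F.smooth_l1_loss(pred, target, reduction='none')

    # what smooth L1 does per element (beta = 1):
    d = (pred - target).abs()
    manual = torch.where(d < 1, 0.5 * d ** 2, d - 0.5)
    print(loss)    # tensor([0.1250, 1.5000, 0.5000])
    print(manual)  # same values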
I have an input array created from:

    initial_window = 'I have a bad feeling about this'
    seq_tokens = t.texts_to_sequences(initial_window)
    # seq_tokens = [[4], [], [], [5], [590], [], [], [5], [], [998], [5], [], [], [], [], [], [], [4], [], [], [], [5], [998], [591], [], [], [], [], [], [4], []]

..
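That per-character output is the classic symptom of passing a raw string to texts_to_sequences: Keras iterates over it character by character. Assuming t is a fitted Keras Tokenizer, wrapping the string in a list tokenizes by words instead (the indices below are illustrative):

    from tensorflow.keras.preprocessing.text import Tokenizer

    t = Tokenizer()
    t.fit_on_texts(['I have a bad feeling about this'])

    print(t.texts_to_sequences('I have a bad feeling about this'))
    # per-character lists, mostly empty: [[...], [], [], ...]
    print(t.texts_to_sequences(['I have a bad feeling about this']))
    # one word-level list, e.g. [[1, 2, 3, 4, 5, 6, 7]]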
I have a variable losses_all = [] that I want to convert to an np.array. I tried to do so using the code below and got the following error:

    # convert to numpy array
    losses = np.array(losses_all)
    # ERROR MESSAGE
    # RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

I also tried this ..
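The error means the list holds tensors that are still attached to the autograd graph. A minimal sketch of the fix the message suggests, assuming losses_all collects scalar loss tensors:

    import numpy as np
    import torch

    losses_all = [torch.tensor(1.0, requires_grad=True) * i for i in range(3)]

    # detach each loss from the graph (and move it to CPU) before NumPy sees it
    losses = np.array([l.detach().cpu().numpy() for l in losses_all])

    # or, if every entry has the same shape:
    losses = torch.stack(losses_all).detach().cpu().numpy()
    print(losses)  # [0. 1. 2.]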
    # A: 14287 * 768 array, B: 863394 * 768 array
    def cosine_similarity(A, B):
        A = torch.tensor(A).to('cpu')
        B = torch.tensor(B).to('cpu')
        num = torch.mm(A, B.T)
        p1 = torch.sqrt(torch.sum(A**2, axis=1))[:, None]
        p2 = torch.sqrt(torch.sum(B**2, axis=1))[None, :]
        return (num / (p1 * p2)).T

The process gets killed when I do cosine similarity for the two matrices. I get the following logs on ..
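The full 14287 x 863394 float32 result alone needs roughly 49 GB, which is the most likely reason the OOM killer steps in. One hedged workaround sketch, computing the similarities in chunks and keeping only the top-k matches per row of B rather than the full matrix (the chunk size and k are arbitrary):

    import torch
    import torch.nn.functional as F

    def top_k_cosine(A, B, k=5, chunk=4096):
        # normalize rows once so cosine similarity becomes a plain matrix product
        A = F.normalize(torch.as_tensor(A, dtype=torch.float32), dim=1)
        B = torch.as_tensor(B, dtype=torch.float32)
        scores, indices = [], []
        for i in range(0, B.shape[0], chunk):
            sims = F.normalize(B[i:i + chunk], dim=1) @ A.T  # (chunk, 14287)
            s, j = sims.topk(k, dim=1)                       # keep only the best k per row
            scores.append(s)
            indices.append(j)
        return torch.cat(scores), torch.cat(indices)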
I would like to make a tensor, multiply it by 100, and then overwrite it. This is what I want to do:

    a = torch.arange(3)
    >>> a
    tensor([0, 1, 2])
    a = torch.mul(a, 100)
    >>> a
    tensor([0, 100, 200])

However, the tensor does not let me overwrite it. How can I preserve the result of torch.mul(a, 100)? ..
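Both variants below keep the scaled values; plain reassignment works, and the trailing underscore marks PyTorch's in-place operations:

    import torch

    a = torch.arange(3)
    a.mul_(100)             # in-place: mutates a directly
    print(a)                # tensor([  0, 100, 200])

    a = torch.arange(3)
    a = torch.mul(a, 100)   # out-of-place: rebind the name to the result
    print(a)                # tensor([  0, 100, 200])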
How could the following be written as a tensor operation without a loop?

    a = torch.Tensor([[0, 1, 1], [2, 1, 0], [1, 0, 1]]).long()
    trace = torch.zeros(4, dtype=torch.long)
    start_idx = 0
    trace[0] = start_idx
    for i in range(a.shape[0]):
        trace[i + 1] = a[i, trace[i]]
    print(trace)
    # tensor([0, 0, 2, 1])

The code basically traces a ..
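Each step reads the value produced by the previous one, so the recurrence cannot be replaced by a single elementwise op. One sketch (not from the original thread) treats every row of a as a lookup table and builds all prefix compositions with a parallel scan of torch.gather calls, shrinking the Python loop from n steps to about log2(n):

    import torch

    a = torch.tensor([[0, 1, 1], [2, 1, 0], [1, 0, 1]])

    # P[i][x] will become "where x lands after applying rows 0..i"
    P = a.clone()
    d = 1
    while d < P.shape[0]:
        # compose lookup tables: P_new[i][x] = P[i][P[i-d][x]]
        P[d:] = torch.gather(P[d:], 1, P[:-d])
        d *= 2

    start_idx = 0
    trace = torch.cat([torch.tensor([start_idx]), P[:, start_idx]])
    print(trace)  # tensor([0, 0, 2, 1])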
Follow-up question to PyTorch: Dynamic Programming as Tensor Operation. Could the following be written as a tensor operation instead of a loop?

    a = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
    print(a.shape)  # (3, 4)
    for i in range(1, a.shape[0]):
        a[i] = a[i-1].max(dim=0)[0] + a[i]
    print(a)
    # tensor([[ 1, 2, ..
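In this recurrence only the running maximum is carried forward, and adding a constant to a row shifts its max by the same constant. That allows a loop-free sketch built from a cumulative sum of the per-row maxima (a reformulation, not from the original thread):

    import torch

    a = torch.tensor([[1., 2., 3., 4.], [5., 6., 7., 8.], [9., 10., 11., 12.]])

    row_max = a.max(dim=1).values                 # max of each original row
    offsets = row_max.cumsum(0)                   # running offset m_i = m_{i-1} + max(row_i)
    offsets = torch.cat([offsets.new_zeros(1), offsets[:-1]])  # row i receives m_{i-1}
    out = a + offsets.unsqueeze(1)
    print(out)
    # tensor([[ 1.,  2.,  3.,  4.],
    #         [ 9., 10., 11., 12.],
    #         [21., 22., 23., 24.]])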
Is it possible to get the following loop with a tensor operation?

    a = torch.Tensor([1, 0, 0, 0])
    b = torch.Tensor([1, 2, 3, 4])
    for i in range(1, a.shape[0]):
        a[i] = b[i] + a[i-1]
    print(a)
    # [1, 3, 6, 10]

The operation depends on the previous values in a and the values that are computed ..
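Because a[0] equals b[0] here, the loop is exactly an inclusive prefix sum, which PyTorch provides directly:

    import torch

    b = torch.tensor([1., 2., 3., 4.])
    a = torch.cumsum(b, dim=0)
    print(a)  # tensor([ 1.,  3.,  6., 10.])
    # if a[0] differed from b[0], add (a[0] - b[0]) to every element afterwards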