I am calculating the autocorrelation function of a signal with NumPy's FFT, by first calculating the power spectrum. I believe that autocorrelation functions are positive definite. However, when I test whether my simulated autocorrelation function is positive definite, the test often fails. Here's some example code: import numpy as np def is_pos_def(x): return np.all(np.linalg.eigvals(x) > 0) ..
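The failing check is likely a semidefiniteness issue rather than a bug: autocorrelation matrices are positive *semi*definite, and floating-point roundoff pushes the mathematically-zero eigenvalues slightly negative, so a strict `> 0` test fails. A minimal sketch, where the FFT route and the tolerance are assumptions about the elided code:

```python
import numpy as np

def acf_via_fft(x):
    """Biased autocorrelation of a 1-D signal via the power spectrum
    (Wiener-Khinchin), zero-padded to avoid circular wrap-around."""
    n = len(x)
    x = x - x.mean()
    f = np.fft.fft(x, n=2 * n)
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    return acf / acf[0]                # normalise so acf[0] == 1

def is_pos_semi_def(x, tol=1e-10):
    # eigvalsh suits the symmetric ACF matrix; the tolerance absorbs the
    # roundoff that pushes mathematically-zero eigenvalues slightly negative
    return bool(np.all(np.linalg.eigvalsh(x) > -tol))

rng = np.random.default_rng(0)
acf = acf_via_fft(rng.standard_normal(256))
m = 32                                 # Toeplitz ACF matrix for the first m lags
T = np.array([[acf[abs(i - j)] for j in range(m)] for i in range(m)])
print(is_pos_semi_def(T))
```

The biased (divide-by-n) estimator is the one that guarantees a positive-semidefinite Toeplitz matrix; the unbiased variant can genuinely fail the test.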

#### Category: linear-algebra

Could you please help me with the idea of implementing an Excel-style Solver (with an objective function, constraints, and variables)? I have read the answer to question 58002755, as well as the question itself, but haven't caught the whole idea. I have invoices and their amounts (a). I need to assign them to calculated sums (b). So I have ..
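What Excel's Solver does here is essentially an assignment problem: choose, for each invoice, which target sum it belongs to, minimising the mismatch. A brute-force sketch on hypothetical numbers (fine for small inputs; a MILP library such as PuLP scales better):

```python
from itertools import product

invoices = [120.0, 80.0, 50.0, 30.0]   # hypothetical invoice amounts (a)
targets = [150.0, 130.0]               # hypothetical calculated sums (b)

best, best_err = None, float("inf")
for assign in product(range(len(targets)), repeat=len(invoices)):
    # assign[i] is the target that invoice i is allocated to
    sums = [0.0] * len(targets)
    for amount, t in zip(invoices, assign):
        sums[t] += amount
    err = sum((s - b) ** 2 for s, b in zip(sums, targets))
    if err < best_err:
        best, best_err = assign, err

print(best, best_err)    # (0, 1, 1, 0) 0.0 -> 120+30=150, 80+50=130
```

The squared-error objective plays the role of the Solver target cell; the `assign` tuple plays the role of the variable cells.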

I have two shapes, a rectangle and a parallelogram, that signify two gantry systems. One gantry system has a camera on it and can detect the position of the other gantry system as it sits above it. I cannot, via a series of transforms (translate, rotate, shear x, shear y, translate), get it even remotely ..
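Rather than composing translate/rotate/shear-x/shear-y/translate by hand (where getting the order wrong ruins the result), it may be easier to fit the whole 2-D affine map at once from detected point correspondences; a least-squares sketch with hypothetical corner points:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~= src @ A.T + t, from >= 3 point pairs.
    Returns the 2x3 matrix [A | t] from a single lstsq solve."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])             # (n, 3) homogeneous sources
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return coeffs.T                                   # (2, 3): [A | t]

# hypothetical corner correspondences: rectangle -> sheared parallelogram
src = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
M = np.array([[1.0, 0.5, 3.0],     # shear in x plus a translation, as an example
              [0.0, 1.0, 4.0]])
dst = src @ M[:, :2].T + M[:, 2]
print(fit_affine(src, dst))        # recovers M
```

A 2-D affine map has six degrees of freedom, so three non-collinear correspondences determine it exactly and extra points average out detection noise.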

I was trying to make an identity matrix in order to find an inverse, but I am stuck due to a bug in the code. A fresh pair of eyes would help a lot. identity=[] null=[] for i in range(3): null.append(i*0) for j in range(3): identity.append(null) for k in range(3): identity[k][k]=1 print(identity) The result I got ..
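The bug is list aliasing: `identity.append(null)` appends three references to the *same* list, so `identity[k][k] = 1` writes a 1 into every row. Appending a fresh row on each iteration fixes it:

```python
# The original stored one shared `null` list three times; build a new row
# object per iteration instead so the rows are independent.
identity = []
for i in range(3):
    row = [0] * 3          # fresh list every time through the loop
    identity.append(row)
for k in range(3):
    identity[k][k] = 1
print(identity)            # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The one-liner `[[int(i == j) for j in range(3)] for i in range(3)]` builds the same matrix and avoids the trap entirely.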

Below is my code: import numpy as np M = np.array([[0.94957099, 0.08870858 + 0.30074196j], [0.08870858 - 0.30074196j, -0.94957099]]) vals, vecs = np.linalg.eigh(M) eta = np.diag([vals[0], vals[1]]) print(vecs @ eta @ vecs.T) M is obviously a Hermitian matrix, but when I tried to use the result of the eigendecomposition to regain M, I got a different matrix ..
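Since the excerpt is truncated, the likely culprit: for a complex Hermitian matrix the eigenvector matrix from `eigh` is unitary, so its inverse is the *conjugate* transpose, and plain `vecs.T` reconstructs the wrong matrix. A sketch with the question's data:

```python
import numpy as np

M = np.array([[0.94957099, 0.08870858 + 0.30074196j],
              [0.08870858 - 0.30074196j, -0.94957099]])
vals, vecs = np.linalg.eigh(M)
eta = np.diag(vals)
# vecs is unitary, so invert it with the conjugate transpose, not vecs.T
M_rec = vecs @ eta @ vecs.conj().T
print(np.allclose(M_rec, M))    # True
```

For a real symmetric matrix `vecs.T` happens to work because the conjugate is a no-op, which is why this bug only surfaces with complex entries.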

I need to solve for a few eigenvalues of a large matrix, specified by their indices. The indices refer to the whole eigenspectrum sorted in algebraically (not absolute-value) ascending order. I notice this is made available by the subset_by_index option in scipy.linalg.eigh, but not in, e.g., its sparse counterpart eigsh. Is this possible at ..
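`eigsh` has no `subset_by_index` because its Lanczos iteration only targets spectrum extremes (or a shift), so global indices have to be recovered by computing enough extremal eigenvalues and slicing. A dense sketch of the index-selection itself; the sparse workaround mentioned in the docstring is an assumption that the wanted indices sit near the bottom of the spectrum:

```python
import numpy as np

def eigvals_by_index(A, lo, hi):
    """Eigenvalues with global indices lo..hi (inclusive), spectrum sorted in
    algebraically ascending order -- a dense analogue of eigh's subset_by_index.
    For a sparse matrix, one workaround (when lo..hi are near the low end) is
    scipy.sparse.linalg.eigsh(A, k=hi + 1, which='SA') followed by this slice.
    """
    w = np.linalg.eigvalsh(A)          # eigvalsh already returns ascending order
    return w[lo:hi + 1]

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                      # symmetrise the test matrix
print(eigvals_by_index(A, 1, 3))       # 2nd through 4th smallest
```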

I'm trying to solve 5 equations with 5 unknowns. from sympy import * w1, w2, w3, h2, h3 = symbols('w1 w2 w3 h2 h3') tan = 8.89/19 a1 = 17.73*w1+(w1*w1*tan)/2 a2 = h2*w2+(w2*w2*tan)/2 a3 = h3*w3+(w3*w3*tan)/2 a4 = w2*(17.73+w1*tan-h2)+w3*(17.73+(w1+w2)*tan-h3) at = (17.73*19)+(8.89*19)/2 eq1 = a1 - a2 eq2 = a1 - a3 eq3 = a1 ..
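The full system is truncated, but the sympy pattern looks like this; a minimal sketch on a hypothetical two-equation reduction of the same shape (equal areas plus a width constraint):

```python
from sympy import symbols, Eq, solve

w1, w2 = symbols('w1 w2')
# hypothetical stand-in equations mimicking the truncated system's shape
eq1 = Eq(2*w1 + w1**2, 2*w2 + w2**2 + 3)   # "area(w1) = area(w2)" style
eq2 = Eq(w1 + w2, 5)                        # widths sum to a known total
sol = solve([eq1, eq2], [w1, w2], dict=True)
print(sol)    # one exact solution: w1 = 19/7, w2 = 16/7
```

If `solve` stalls on the real five-variable nonlinear system, `nsolve` with a numeric starting guess is the usual fallback.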

I'm training a model whose components are represented by square matrices. The closer the determinant of those matrices is to 1, the better, so I implemented a loss function as follows: @tf.function def det_loss(slice): out = tf.linalg.det(slice) return out (I left out subtracting the absolute or squared value from 1; this is enough ..
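If the truncated part subtracts from 1, note that the raw determinant can be a numerically awkward training target for larger matrices (it under/overflows and its gradient scales with the determinant itself). A common alternative is to penalise the log-determinant; a NumPy sketch of the idea, with `tf.linalg.slogdet` as the TensorFlow counterpart:

```python
import numpy as np

def logdet_loss(mat):
    """loss = (log|det(mat)|)^2: zero exactly when |det| == 1, and it grows
    symmetrically whether the determinant shrinks toward 0 or blows up."""
    sign, logabsdet = np.linalg.slogdet(mat)   # stable even when det overflows
    return logabsdet ** 2

print(logdet_loss(np.eye(4)))          # identity has det 1 -> loss 0
print(logdet_loss(2.0 * np.eye(2)))    # det 4 -> (log 4)^2
```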

Apologies for double-posting; I haven't found a solution yet. Former post: Conditional Numpy shuffling Problem. I have a randomly shuffled 2D array of dtype=int and would like to maximize the distance between identical integers. This is an optimization problem. E.g. I have: np.array([[1, 2, 2, 2], [3, 2, 1, 1], [1, 3, 3, 1], [1, ..
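Whatever search is used on top (pairwise swaps, random restarts, simulated annealing), it needs an objective to maximise. A minimal sketch scoring a grid by the smallest distance between any two identical integers; the last row below is hypothetical padding, since the excerpt is truncated:

```python
import numpy as np
from itertools import combinations

def min_same_value_distance(grid):
    """Smallest Euclidean distance between any two equal entries; maximising
    this value spreads identical integers as far apart as possible."""
    best = float("inf")
    for v in np.unique(grid):
        pos = np.argwhere(grid == v)           # coordinates of every copy of v
        for p, q in combinations(pos, 2):
            best = min(best, float(np.hypot(*(p - q))))
    return best

grid = np.array([[1, 2, 2, 2],
                 [3, 2, 1, 1],
                 [1, 3, 3, 1],
                 [1, 0, 0, 2]])   # hypothetical last row (excerpt truncated)
print(min_same_value_distance(grid))  # adjacent equal values give 1.0
```

A simple improver then accepts any swap of two cells that raises this score and stops when no swap helps.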

I am trying to speed up a multi-camera system that relies on calculating fundamental matrices between each camera pair. Please note the following is pseudocode: @ means matrix multiplication, | means concatenation. I have code to calculate F for each pair, calculate_f(camera_matrix1_3x4, camera_matrix2_3x4), and the naive solution is for c1 in cameras: for c2 ..
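One easy saving, independent of how `calculate_f` itself works: since x2ᵀ F x1 = 0 implies x1ᵀ Fᵀ x2 = 0, the fundamental matrix for the reversed pair is just the transpose, so only unordered pairs need computing. A sketch, where `fake_f` is a stand-in for the question's routine:

```python
from itertools import combinations
import numpy as np

def all_fundamental_matrices(cameras, calculate_f):
    """Compute F once per unordered pair; the reversed pair's matrix is the
    transpose, halving the number of calculate_f calls."""
    F = {}
    for i, j in combinations(range(len(cameras)), 2):
        F[(i, j)] = calculate_f(cameras[i], cameras[j])
        F[(j, i)] = F[(i, j)].T
    return F

# demo with a stand-in for the real calculate_f(camera_matrix_3x4, ...)
def fake_f(p1, p2):
    return np.outer(p1[:3], p2[:3])

cams = [np.arange(4.0) + i for i in range(3)]
F = all_fundamental_matrices(cams, fake_f)
print(sorted(F))    # all 6 ordered pairs, from only 3 calls
```

For n cameras this is n(n-1)/2 calls instead of n(n-1), before any vectorisation inside `calculate_f` itself.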
