I have a point in 3D, p = [0, 1, 0], and a list of line segments defined by their start and end coordinates: line_starts = [[1,1,1], [2,2,2], [3,3,3]], line_ends = [[5,1,3], [3,2,1], [3,1,1]]. I tried adapting the first two algorithms detailed in this post: Find the shortest distance between a point and ..
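A common vectorized approach to this problem is to project the point onto each segment's supporting line, clamp the projection parameter to [0, 1], and measure the distance to the resulting closest point. A minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def point_segment_distances(p, starts, ends):
    """Shortest distance from point p to each segment, vectorized over segments."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(starts, dtype=float)
    b = np.asarray(ends, dtype=float)
    ab = b - a                                   # segment direction vectors
    ap = p - a
    # Projection parameter of p onto each infinite line, clamped to the segment
    denom = np.einsum("ij,ij->i", ab, ab)        # squared segment lengths
    t = np.clip(np.einsum("ij,ij->i", ap, ab) / denom, 0.0, 1.0)
    closest = a + t[:, None] * ab                # nearest point on each segment
    return np.linalg.norm(p - closest, axis=1)

dists = point_segment_distances([0, 1, 0],
                                [[1, 1, 1], [2, 2, 2], [3, 3, 3]],
                                [[5, 1, 3], [3, 2, 1], [3, 1, 1]])
```

Clamping `t` is what distinguishes segment distance from line distance: when the projection falls outside the segment, the nearest point is an endpoint.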

#### Category : performance

I just wrote a simple benchmark comparing Numba and Julia, together with some discussion. I’m wondering whether my Numba code could be fixed somehow, or if what I’m trying to do is indeed not supported by Numba. The idea is to evaluate this function using a JIT-compiled quadrature rule. g(p) = integrate exp(p*x) with respect ..
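One Numba-friendly way to evaluate such an integral is to precompute the quadrature nodes and weights as plain NumPy arrays and keep the JIT-compiled function a simple scalar loop over them. A sketch under those assumptions (the fallback keeps it runnable even without Numba installed):

```python
import numpy as np

try:
    from numba import njit               # JIT-compile the quadrature loop
except ImportError:                      # fall back to plain Python if Numba is absent
    njit = lambda f: f

@njit
def quad_exp(p, nodes, weights):
    # Fixed-rule quadrature: g(p) = sum_i w_i * exp(p * x_i).
    # Scalars and contiguous arrays keep Numba in fast nopython mode.
    acc = 0.0
    for i in range(nodes.shape[0]):
        acc += weights[i] * np.exp(p * nodes[i])
    return acc

# 10-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
x, w = np.polynomial.legendre.leggauss(10)
nodes, weights = 0.5 * (x + 1.0), 0.5 * w

g = quad_exp(2.0, nodes, weights)        # integral of exp(2x) over [0, 1]
```

Passing the rule in as arrays avoids object-mode pitfalls that arise when Numba is asked to build or close over Python objects inside the jitted function.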

I have written code that detects the contours of specific objects and draws contours around them, using an ultrawide webcam. import cv2 from time import sleep import numpy as np while True: success1, frame = capture.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ret, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_OTSU) sleep(0.05) if success1: #mask = ..

I’m trying to run a HyperparameterTuner on an Estimator for an LDA model in a SageMaker notebook using mxnet but am running into errors related to the feature_dim hyperparameter in my code. I believe this is related to the differing dimensions of the train and test datasets but I’m not 100% certain if this is ..

I’m trying to learn Cython by trying to outperform NumPy at the dot product operation np.dot(a, b), but my implementation is about 4x slower. This is my Cython implementation in hello.pyx: cimport numpy as cnp cnp.import_array() cpdef double dot_product(double[::1] vect1, double[::1] vect2): cdef int size = vect1.shape[0] cdef double result = 0 cdef int i = ..
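The usual explanation for that gap is that `np.dot` delegates to an optimized BLAS routine (SIMD, loop unrolling, cache blocking), which a straight scalar loop rarely matches even when compiled. A plain-Python analogue of the loop, shown here only to make the comparison concrete:

```python
import numpy as np

def dot_loop(v1, v2):
    # Naive scalar loop, structurally the same as the Cython version:
    # one multiply-add per iteration, no SIMD, no blocking
    result = 0.0
    for i in range(v1.shape[0]):
        result += v1[i] * v2[i]
    return result

rng = np.random.default_rng(0)
a = rng.random(1000)
b = rng.random(1000)
# Both compute the same value; np.dot just gets there via BLAS
same = np.isclose(dot_loop(a, b), np.dot(a, b))
```

So the Cython code can be correct and still lose to `np.dot`: closing the gap typically requires the same vectorization tricks BLAS already applies.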

I’m trying to replace duplicates in my data, and I’m looking for an efficient way to do that. I have a df with 2 columns, idA and idB, like this: idA idB 22 5 22 590 5 6000 This is a df with similarities. I want to create a dictionary in which the key is ..
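Since the similarities chain (22 ~ 5 and 5 ~ 6000 put all three in one group), one common approach is union-find: map every id to a single canonical representative and build the dictionary from that. A minimal sketch, assuming chained duplicates should merge:

```python
import pandas as pd

df = pd.DataFrame({"idA": [22, 22, 5], "idB": [5, 590, 6000]})

# Union-find: map every id to one canonical representative so that
# chained similarities collapse into a single group
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]    # path halving keeps trees shallow
        x = parent[x]
    return x

for a, b in zip(df["idA"], df["idB"]):
    parent[find(a)] = find(b)

# id -> canonical representative, usable to replace duplicates in bulk
mapping = {x: find(x) for x in parent}
```

This runs in near-linear time in the number of pairs, unlike repeated pairwise replacement passes over the DataFrame.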

I’m trying to stack an image and I can’t wrap my head around how to make it more efficient. The following function is correct but not fast enough computationally: def func(table, arr): img_sum = np.zeros((1024, 256)) for i in range(1024): for j in range(256): for k in range(3): img_sum[i, j] += arr[int(table[i, j, k]), i, ..
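Triple loops like that usually vectorize with fancy indexing: gather all the looked-up values in one indexing expression and sum over the last axis. A sketch, assuming the truncated loop body was `arr[int(table[i, j, k]), i, j]`:

```python
import numpy as np

def func_vectorized(table, arr):
    # Fancy indexing replaces the three nested Python loops: for every
    # (i, j) it gathers arr[table[i, j, k], i, j] for each k, then sums over k
    H, W, _ = table.shape
    i = np.arange(H)[:, None, None]
    j = np.arange(W)[None, :, None]
    return arr[table.astype(int), i, j].sum(axis=2)

# Check against the original triple loop on small random data
rng = np.random.default_rng(0)
table = rng.integers(0, 5, size=(4, 3, 3)).astype(float)
arr = rng.random((5, 4, 3))
ref = np.zeros((4, 3))
for i in range(4):
    for j in range(3):
        for k in range(3):
            ref[i, j] += arr[int(table[i, j, k]), i, j]
matches = np.allclose(func_vectorized(table, arr), ref)
```

The three index arrays broadcast to `table`'s shape, so the whole gather happens in C rather than in roughly 800k Python-level iterations for the 1024x256x3 case.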

In simple terms, here is what the script does and what I need your support for. The algorithm runs, but it takes more than 10 minutes to create all the files. Because of this, the queue consumes the message multiple times, which starts the algorithm again and creates multiple files; the script ends up taking too long, sometimes 10-15 minutes ..
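When a long-running consumer causes redelivery, the two usual fixes are extending the queue's visibility/ack timeout and making the handler idempotent so duplicate deliveries are harmless. A minimal idempotency sketch (the marker-file scheme and function name are illustrative; a real system might use a database row instead):

```python
import os
import tempfile

def process_message(msg_id, out_dir):
    """Skip the work if this message id was already processed."""
    marker = os.path.join(out_dir, f"{msg_id}.done")
    if os.path.exists(marker):
        return False             # duplicate delivery: do nothing
    # ... the long-running file generation would happen here ...
    with open(marker, "w") as f:
        f.write("done")          # record completion for future deliveries
    return True
```

With a guard like this, redelivery still costs a queue round-trip but no longer produces duplicate files or duplicate 10-minute runs.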

I’ve finally got a working genetic algorithm to solve the "8 Queens" puzzle, however I’m aiming to get the run time of this program down to under 30 seconds consistently, and I really don’t know how to optimize my code to do this. Is this likely to be an issue with how my fitness score ..
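Fitness evaluation is often the hot spot in GA solutions to this puzzle, so it is worth making it cheap: with one queen per column, only row and diagonal clashes need counting. A common formulation (a sketch, not necessarily the poster's encoding) scores non-attacking pairs, with 28 meaning solved:

```python
import itertools

def fitness(board):
    # board[i] = row of the queen in column i; count non-attacking pairs.
    # 8 queens give C(8, 2) = 28 pairs, so 28 means a full solution.
    clashes = 0
    for (c1, r1), (c2, r2) in itertools.combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            clashes += 1
    return 28 - clashes

best = fitness([0, 4, 7, 5, 2, 6, 1, 3])   # a known solution
```

Beyond the fitness function itself, run time usually hinges on population size, mutation rate, and stopping as soon as fitness hits 28 rather than running a fixed number of generations.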

I’m building a web app with Flask-SQLAlchemy that calls stock data from the iexcloud API. I’ve created a form where the user can enter a ticker (stock symbol) they are interested in. Iexcloud offers a list of viable tickers (with a bunch of other info) that I am accessing using Python requests and parsing the ..
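For validating form input against such a list, it helps to fetch and parse the symbols once, then check membership in a set rather than re-scanning (or re-downloading) the list per request. A sketch on hypothetical parsed data; the field names below are assumptions, not the exact iexcloud schema:

```python
# Stand-in for the JSON list the question fetches with requests;
# real entries carry many more fields
symbols = [
    {"symbol": "AAPL", "name": "Apple Inc."},
    {"symbol": "MSFT", "name": "Microsoft Corporation"},
]

def is_valid_ticker(ticker, symbols):
    # Build a set once so each form submission is an O(1) lookup
    valid = {s["symbol"] for s in symbols}
    return ticker.upper() in valid

ok = is_valid_ticker("aapl", symbols)
bad = is_valid_ticker("ZZZZ", symbols)
```

In a Flask app the set could be built at startup or cached with a TTL, since the reference list changes rarely compared to request volume.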
