I need to estimate a parameter in an ODE system where I have a dataframe of data as an 'objective function' to infer the parameter from. The dataframe df contains weekly incidence data, and I want to use it in conjunction with the ODE system so that if beta is unknown, I can infer it using scipy.optimize.minimize. E.g., ..
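A minimal sketch of this kind of fit, assuming a hypothetical SIR model with unknown transmission rate beta (the model, gamma, and the synthetic "weekly data" standing in for df are all assumptions, since the original snippet is truncated):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical SIR model; beta is the unknown transmission rate.
def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

gamma = 0.5
y0 = [0.99, 0.01, 0.0]
weeks = np.arange(0, 10)  # weekly observation times

# Synthetic weekly observations generated with a known beta,
# standing in for the real df column.
true_beta = 1.5
obs = solve_ivp(sir, (0, 9), y0, t_eval=weeks, args=(true_beta, gamma)).y[1]

def objective(params):
    beta = params[0]
    sol = solve_ivp(sir, (0, 9), y0, t_eval=weeks, args=(beta, gamma))
    return np.sum((sol.y[1] - obs) ** 2)  # sum of squared residuals

res = minimize(objective, x0=[1.0], method="Nelder-Mead")
print(res.x[0])  # estimate should land close to true_beta
```

The key idea is that the ODE solver runs inside the objective, so each candidate beta is scored against the weekly data.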
I was trying to solve a problem posted on Math StackExchange, but when I tried to code it in Python I ended up with a program that has surely killed any hope of efficiency. Code: import numpy as np def p(k): if k == 1: return np.array([1, 1]) else: return add(p(k-1), np.append(np.zeros(k, ..
I'm trying to optimize some Python code that uses the pandas library to process about 1 GB of CSV data. I noticed that the apply method in pandas seems to be much slower than native Python functions; specifically, code using DataFrame.apply runs about 20 times slower. Here is some reproducible code ..
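The usual remedy is to replace row-wise apply with a vectorized column operation. A sketch on a toy frame (the column names and sizes are made up, since the original CSV isn't shown):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the 1 GB CSV (hypothetical columns).
df = pd.DataFrame({"a": np.arange(100_000), "b": np.arange(100_000)})

# Row-wise apply: calls a Python lambda once per row (slow).
slow = df.apply(lambda row: row["a"] + row["b"], axis=1)

# Vectorized equivalent: one NumPy operation over whole columns (fast).
fast = df["a"] + df["b"]

assert (slow == fast).all()
```

DataFrame.apply with axis=1 runs the Python interpreter per row, which is why it is often an order of magnitude slower than a single vectorized expression over the columns.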
I am a medical physics student trying to simulate photon detection. I succeeded (below), but I want to make it better by speeding it up: it currently takes 50 seconds to run, and I want it to run in some fraction of that time. I assume someone more knowledgeable in Python could optimize it ..
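Since the simulation code itself is truncated, here is only a generic sketch of the most common speed-up for this kind of Monte Carlo loop: replacing a per-photon Python loop with a single NumPy array operation (the detection criterion and the exponential energy distribution are placeholders, not the student's actual physics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Loop version: examine each photon one at a time (slow in pure Python).
def detected_loop(energies, threshold):
    count = 0
    for e in energies:
        if e > threshold:
            count += 1
    return count

# Vectorized version: one boolean comparison over the whole array.
def detected_vec(energies, threshold):
    return int(np.count_nonzero(energies > threshold))

# Placeholder photon energies; the real simulation would supply these.
energies = rng.exponential(scale=1.0, size=100_000)
assert detected_loop(energies, 0.5) == detected_vec(energies, 0.5)
```

Moving the per-photon work into array operations like this is typically what turns a 50-second pure-Python run into a sub-second one.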
I have a dataset with thousands of cancer patients, with columns giving each patient's probability of survival under a specific treatment. Please see the table below. I want to create an optimization in pyomo that returns the best option for each patient, maximizing the overall survival probability. So, I need something ..
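Worth noting: if there are no capacity constraints coupling the patients, the optimum decomposes per patient and no pyomo model is needed; each patient simply gets the treatment with the highest survival probability. A sketch on a made-up table (the treatment names and probabilities are hypothetical; pyomo would only become necessary with constraints such as limited treatment slots):

```python
import pandas as pd

# Hypothetical table: rows are patients, columns are treatments,
# entries are survival probabilities (made-up numbers).
df = pd.DataFrame(
    {"treatment_A": [0.7, 0.4, 0.9],
     "treatment_B": [0.6, 0.8, 0.5]},
    index=["patient_1", "patient_2", "patient_3"],
)

# With no coupling constraints, pick the row-wise maximum per patient.
best = df.idxmax(axis=1)
print(best.tolist())  # ['treatment_A', 'treatment_B', 'treatment_A']
```

The overall objective (sum of chosen probabilities) is then maximized automatically, because each row's contribution is maximized independently.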
I'm implementing a Bayesian optimization scheme from scratch, and when I try to minimize the acquisition function using scipy.optimize.minimize, I get the following error: for x_start in (np.random.random((self.batch_size, self.x_init.reshape(2,-1).shape)) * self.scale): response = minimize(fun=self.acquisition_function, x0=x_start, method='L-BFGS-B') ValueError: `f0` passed has more than 1 dimension Here is the code: class BayesianOptimizer: def __init__(self, target_func, ..
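That ValueError typically means x0 was not 1-D: scipy.optimize.minimize requires a flat starting point, and the code above builds a multi-dimensional x_start. A minimal sketch of the fix, with a stand-in quadratic playing the role of the acquisition function:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in acquisition function; must return a scalar.
def acquisition(x):
    x = np.atleast_1d(x)
    return float(np.sum(x ** 2))

# A 2-D start point, shaped like the failing loop produces.
x_start = np.random.random((1, 2))

# Flattening x0 to 1-D is the fix; minimize reshapes internally if needed.
res = minimize(fun=acquisition, x0=x_start.ravel(), method="L-BFGS-B")
print(res.x.shape)  # (2,)
```

Inside the real class, `x0=x_start.ravel()` (or restructuring the loop so each x_start is already 1-D) resolves the error.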
I am doing a constrained nonlinear (and partly mixed-integer) optimization for a system operation. The problem can be summarized as follows: there are n devices, (D1, D2, …, Dn), that support the system's operation. During operation, the devices degrade with time, i.e. Di, i=1, 2, …, n, may stop working. ..
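Since the full formulation is truncated, here is only a toy mixed-integer sketch of the "which devices to use" flavor of the problem, using scipy.optimize.milp (available in SciPy 1.9+); the costs, capacities, and the linear objective are all invented placeholders, not the asker's actual model:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy stand-in: pick a subset of devices (binary x_i) whose combined
# capacity meets the requirement, at minimum total cost.
cost = np.array([3.0, 2.0, 4.0])      # hypothetical device costs
capacity = np.array([5.0, 3.0, 6.0])  # hypothetical device capacities
required = 8.0

res = milp(
    c=cost,                                              # minimize total cost
    constraints=LinearConstraint(capacity[np.newaxis], lb=required),
    integrality=np.ones(3),                              # all variables integer
    bounds=Bounds(0, 1),                                 # binary: 0 or 1
)
print(res.x, res.fun)  # optimal selection and its cost
```

Nonlinear degradation effects would need a solver beyond milp (e.g. a MINLP framework), but a linearized core like this is often the starting point.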
Can someone help me optimize the code below? power = [2,3,2,1] noElement = len(power) result = 0 for i in range(noElement): sum = 0 minEleme = float('inf') for j in range(i, noElement): minEleme = min(minEleme, power[j]) sum += power[j] result += (minEleme*sum)%1000000007 print("min : ", minEleme, "sum : ", sum, "result : ", result) print(result) Source: Python..
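Before optimizing, it helps to see what the snippet computes: for every subarray of power, it adds (minimum of subarray) × (sum of subarray) to the result. A cleaned-up version (assuming the modular reduction is meant to apply to the running total, not to each term individually as the original does):

```python
MOD = 1_000_000_007

# For every subarray power[i:j+1], accumulate
# (min of subarray) * (sum of subarray), modulo MOD.
def subarray_min_times_sum(power):
    result = 0
    n = len(power)
    for i in range(n):
        running_sum = 0
        running_min = float("inf")
        for j in range(i, n):
            running_min = min(running_min, power[j])
            running_sum += power[j]
            result = (result + running_min * running_sum) % MOD
    return result

print(subarray_min_times_sum([2, 3, 2, 1]))
```

This is still O(n²); the known O(n) approach uses a monotonic stack to track, for each element, the subarrays in which it is the minimum, but the quadratic version above is the correct baseline to optimize from.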
If my output is a tensor of values: torch.tensor([0.0, 1.2, 0.1, 0.01, 2.3, 99.2, -21.2]) I'm trying to create a differentiable loss function that will minimize the number of values that are not 0. That is, the actual values don't matter; I just need fewer values that are not 0. How can I ..
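The exact count of nonzeros (the L0 "norm") has zero gradient almost everywhere, so it cannot be minimized directly by backprop. A common differentiable surrogate is the L1 norm, which pushes values toward exactly 0 — a minimal sketch using the tensor from the question:

```python
import torch

x = torch.tensor([0.0, 1.2, 0.1, 0.01, 2.3, 99.2, -21.2],
                 requires_grad=True)

# L1 norm as a differentiable stand-in for "number of nonzeros":
# its gradient is sign(x), so every nonzero entry is pulled toward 0.
loss = x.abs().sum()
loss.backward()
print(x.grad)  # sign(x); zero entries receive zero gradient
```

Other surrogates exist (e.g. sums of smooth bump functions that saturate for large values), but L1 is the standard first choice for sparsity-inducing losses.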