Proper use of fmin_l_bfgs_b to set model parameters

I have some experimental data (for y, x, t_exp, m_exp) and I want to find the "optimal" model parameters (A, B, C, D, E) for this data using the bounded limited-memory BFGS method (L-BFGS-B). Parameter E must be greater than 0; the rest are unconstrained.

 def func(x, A, B, C, D, E, *args):
     return A * (x ** E) * numpy.cos(t_exp) * (1 - numpy.exp((-2 * B * x) / numpy.cos(t_exp))) \
         + numpy.exp((-2 * B * x) / numpy.cos(t_exp)) * C + (D * m_exp)

 initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
 mybounds = [(None, None), (None, None), (None, None), (None, None), (0, None)]
 x, f, d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(m_exp, t_exp), bounds=mybounds)

A few questions:

  • Should my func model statement include my independent variable x, or should x come from the experimental data x_exp and be passed in as part of *args?
  • When I run the above code, I get the error func() takes at least 6 arguments (3 given), which I assume are x and my two args. How should I define func?

EDIT: Thanks to @zephyr's answer, I now understand that the goal is to minimize the sum of squared residuals, not the model function itself. I got the following working code:

 def func(params, *args):
     l_exp = args[0]
     s_exp = args[1]
     m_exp = args[2]
     t_exp = args[3]
     A, B, C, D, E = params
     s_model = A * (l_exp ** E) * numpy.cos(t_exp) * (1 - numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp))) \
         + numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)) * C + (D * m_exp)
     residual = s_exp - s_model
     return numpy.sum(residual ** 2)

 initial_values = numpy.array([-10, 2, -20, 0.3, 0.25])
 mybounds = [(None, None), (None, None), (None, None), (None, None), (0, None)]
 x, f, d = scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(l_exp, s_exp, m_exp, t_exp), bounds=mybounds, approx_grad=True)
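The arrays l_exp, s_exp, m_exp, t_exp come from the experiment and are not shown here. For anyone reproducing this, a quick way to check that the objective is wired up correctly is to call it with made-up arrays (hypothetical stand-ins, not the real data) and confirm it returns a single finite scalar:

```python
import numpy

def func(params, *args):
    l_exp, s_exp, m_exp, t_exp = args
    A, B, C, D, E = params
    s_model = (A * (l_exp ** E) * numpy.cos(t_exp)
               * (1 - numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)))
               + numpy.exp((-2 * B * l_exp) / numpy.cos(t_exp)) * C
               + (D * m_exp))
    residual = s_exp - s_model
    return numpy.sum(residual ** 2)

# Made-up positive arrays standing in for the experimental data.
rng = numpy.random.default_rng(0)
l_exp = rng.uniform(0.1, 2.0, size=50)  # positive so l_exp ** E is real
s_exp = rng.uniform(0.1, 2.0, size=50)
m_exp = rng.uniform(0.1, 2.0, size=50)
t_exp = rng.uniform(0.0, 1.0, size=50)  # keep cos(t_exp) > 0

val = func([-10, 2, -20, 0.3, 0.25], l_exp, s_exp, m_exp, t_exp)
print(numpy.isfinite(val))  # should be True: one finite scalar
```

If this prints True, the objective has the shape fmin_l_bfgs_b expects (a scalar, not a residual vector).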

I am not sure the bounds are working correctly. When I specify (0, None) for E, I get an exit flag of 2 (abnormal termination). If I set it to (1e-6, None), it runs fine, but selects 1e-6 as E. Am I defining the bounds correctly?
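As a sanity check (not part of the original question): a semi-open bound like (0, None) does work in fmin_l_bfgs_b, even when the optimum sits exactly on the boundary, so the abnormal termination is more likely coming from the model or the finite-difference gradient than from the bound itself. A minimal sketch:

```python
import numpy
import scipy.optimize

def quadratic(params):
    # Unconstrained minimum of (p + 1)^2 is at p = -1,
    # which lies outside the feasible region p >= 0.
    p = params[0]
    return (p + 1.0) ** 2

# The (0, None) bound clips the solution to the boundary p = 0.
x, f, d = scipy.optimize.fmin_l_bfgs_b(
    quadratic, x0=numpy.array([1.0]), bounds=[(0, None)], approx_grad=True)
print(x, d['warnflag'])  # x near [0.], warnflag 0 means normal convergence
```

Here the optimizer converges cleanly onto the boundary, so a (0, None) bound by itself does not cause warnflag 2.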

1 answer

I did not want to try to figure out what model you are using, so here is a simple example fitting a straight line:

 import numpy
 import scipy.optimize

 x_true = numpy.arange(0, 10, 0.1)
 m_true = 2.5
 b_true = 1.0
 y_true = m_true * x_true + b_true

 def func(params, *args):
     x = args[0]
     y = args[1]
     m, b = params
     y_model = m * x + b
     error = y - y_model
     return numpy.sum(error ** 2)

 initial_values = numpy.array([1.0, 0.0])
 mybounds = [(None, 2), (None, None)]

 scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(x_true, y_true), approx_grad=True)
 scipy.optimize.fmin_l_bfgs_b(func, x0=initial_values, args=(x_true, y_true), bounds=mybounds, approx_grad=True)

The first optimization is unconstrained and gives the right answer; the second is constrained by bounds that prevent it from reaching the correct parameters, since m_true = 2.5 lies outside the (None, 2) bound on m.

The important thing you got wrong is that in almost all optimization routines, "x" and "x0" refer to the parameters being optimized; everything else is passed in via args. It also matters that your cost function returns the right type: here we want a single scalar value, whereas some routines expect a vector of errors. You also need approx_grad=True unless you compute the gradient analytically and provide it.
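To illustrate that last point, here is the same straight-line fit with the gradient supplied analytically through the fprime argument instead of approx_grad (a sketch; the derivatives are just the chain rule applied to the squared error):

```python
import numpy
import scipy.optimize

x_true = numpy.arange(0, 10, 0.1)
y_true = 2.5 * x_true + 1.0

def func(params, *args):
    x, y = args
    m, b = params
    error = y - (m * x + b)
    return numpy.sum(error ** 2)

def grad(params, *args):
    # d/dm sum(e^2) = -2 * sum(e * x);  d/db sum(e^2) = -2 * sum(e)
    x, y = args
    m, b = params
    error = y - (m * x + b)
    return numpy.array([-2.0 * numpy.sum(error * x), -2.0 * numpy.sum(error)])

params, f, d = scipy.optimize.fmin_l_bfgs_b(
    func, x0=numpy.array([1.0, 0.0]), fprime=grad, args=(x_true, y_true))
print(params)  # close to [2.5, 1.0]
```

With an exact gradient the routine typically converges in fewer function evaluations than with finite differences, and you avoid the step-size issues that finite differencing can have near a bound.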


Source: https://habr.com/ru/post/1388500/

