Optimization in R with equality and inequality constraints

I am trying to find the local minimum of a function whose parameters must sum to a fixed amount. For instance,

Fx = 10 - 5x1 + 2x2 - x3

and the conditions are as follows:

x1 + x2 + x3 = 15

x1, x2, x3 >= 0

That is, the sum of x1, x2 and x3 is a known value and all of them are nonnegative. In R, it looks something like this:

    Fx = function(x) {10 - 5*x[1] + 2*x[2] - x[3]}
    opt = optim(c(1,1,1), Fx, method = "L-BFGS-B",
                lower = c(0,0,0), upper = c(15,15,15))

I also tried using inequalities with constrOptim to enforce the sum constraint, but I couldn't get it to work. I still think this should be a feasible task. This is a simplified version of a real problem; any help would be greatly appreciated.

2 answers

In this case optim will not work, because you have an equality constraint. constrOptim will not work either, for the same reason (I tried converting the equality into two inequalities, i.e. greater than and less than 15, but that does not work with constrOptim).
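For reference, here is a sketch of what that failed attempt looks like (an assumption about what was tried, not the original code). constrOptim() requires the starting value to satisfy ui %*% theta - ci > 0 strictly, and an equality encoded as a >= / <= pair leaves no strictly feasible interior point, so the call fails immediately:

    # Encode x1 + x2 + x3 = 15 as two opposing inequalities, plus x >= 0.
    Fx <- function(x) 10 - 5*x[1] + 2*x[2] - x[3]
    ui <- rbind(c( 1,  1,  1),   # x1 + x2 + x3 >= 15
                c(-1, -1, -1),   # x1 + x2 + x3 <= 15
                diag(3))         # x1, x2, x3   >= 0
    ci <- c(15, -15, 0, 0, 0)
    # Fails: the start value c(5,5,5) sits ON the equality, not strictly
    # inside the feasible region, so constrOptim() rejects it.
    try(constrOptim(c(5, 5, 5), Fx, grad = NULL, ui = ui, ci = ci))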

However, there is a package dedicated to exactly this kind of problem: Rsolnp.

You use it as follows:

    library(Rsolnp)

    # specify your function
    opt_func <- function(x) {
      10 - 5*x[1] + 2*x[2] - x[3]
    }

    # specify the equality function. The number 15 (to which the function is equal)
    # is specified as an additional argument
    equal <- function(x) {
      x[1] + x[2] + x[3]
    }

    # the optimiser - minimises by default
    solnp(c(5,5,5),            # starting values (random - obviously need to be positive and sum to 15)
          opt_func,            # function to optimise
          eqfun = equal,       # equality function
          eqB = 15,            # the equality constraint
          LB = c(0,0,0),       # lower bound for parameters, i.e. greater than zero
          UB = c(100,100,100)) # upper bound for parameters (I just chose 100 randomly)

Output:

    > solnp(c(5,5,5),
    +       opt_func,
    +       eqfun = equal,
    +       eqB = 15,
    +       LB = c(0,0,0),
    +       UB = c(100,100,100))

    Iter: 1 fn: -65.0000   Pars:  14.99999993134  0.00000002235  0.00000004632
    Iter: 2 fn: -65.0000   Pars:  14.999999973563  0.000000005745  0.000000020692
    solnp--> Completed in 2 iterations
    $pars
    [1] 1.500000e+01 5.745236e-09 2.069192e-08

    $convergence
    [1] 0

    $values
    [1] -10 -65 -65

    $lagrange
         [,1]
    [1,]   -5

    $hessian
              [,1]      [,2]      [,3]
    [1,] 121313076 121313076 121313076
    [2,] 121313076 121313076 121313076
    [3,] 121313076 121313076 121313076

    $ineqx0
    NULL

    $nfuneval
    [1] 126

    $outer.iter
    [1] 2

    $elapsed
    Time difference of 0.1770101 secs

    $vscale
    [1] 6.5e+01 1.0e-08 1.0e+00 1.0e+00 1.0e+00

Thus, the resulting optimal values are:

    $pars
    [1] 1.500000e+01 5.745236e-09 2.069192e-08

which means that the first parameter is 15 and the other two are (numerically) zero. This is indeed the global minimum of your function subject to the constraints: x2 has a positive coefficient, so increasing it only raises the objective, and the -5*x1 term lowers the objective far more per unit than -x3, so all 15 units should go to x1. Therefore (15, 0, 0) is the solution and the global minimum of the function under the constraints.
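As a quick sanity check, plugging the solution back into the objective reproduces the reported minimum (10 - 5*15 + 0 - 0 = -65, matching the fn: -65.0000 shown in the solver trace):

    opt_func <- function(x) 10 - 5*x[1] + 2*x[2] - x[3]
    opt_func(c(15, 0, 0))
    # [1] -65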

The function worked great!


This is actually a linear programming problem, so a natural approach is to use a dedicated LP solver such as the lpSolve package. You provide the objective coefficients and a constraint matrix, and the solver does the rest:

    library(lpSolve)
    mod <- lp("min", c(-5, 2, -1), matrix(c(1, 1, 1), nrow = 1), "=", 15)

Then you can access the optimal solution and the objective value (adding back the constant term 10, which was not passed to the solver):

    mod$solution
    # [1] 15  0  0
    mod$objval + 10
    # [1] -65

An LP solver should be much faster than a general-purpose nonlinear optimizer, and it has no trouble returning the exact optimal solution (rather than a nearby point subject to rounding error).
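A side note on the lp() call above: lpSolve treats all decision variables as nonnegative by default, so the x >= 0 conditions come for free, and further inequalities are added simply by stacking rows onto the constraint matrix. A minimal sketch (the extra bound x1 <= 10 is made up purely for illustration):

    library(lpSolve)
    # Same objective, plus an illustrative upper bound x1 <= 10 as a second row.
    const_mat <- rbind(c(1, 1, 1),   # x1 + x2 + x3
                       c(1, 0, 0))   # x1
    mod2 <- lp("min", c(-5, 2, -1), const_mat,
               c("=", "<="), c(15, 10))
    mod2$solution
    # [1] 10  0  5

With x1 capped at 10, the remaining 5 units shift to x3 (the next-best coefficient), illustrating how the solver reallocates under extra constraints.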


Source: https://habr.com/ru/post/988051/

