Is normalization useful/necessary in optimization?

I am trying to optimize the design of a device using the Matlab Optimization Toolbox (more precisely, the fmincon function). To get straight to the point, I have a small set of variables {l_m, r_m, l_c, r_c} whose starting values are {4 mm, 2 mm, 1 mm, 0.5 mm}.

Although the Matlab documentation does not specifically recommend normalizing the input variables, my professor advised me to normalize each variable by its maximum value. Thus, the variables now take values from 0 to 1 (instead of, say, 3 mm to 4.5 mm in the case of l_m). Of course, I have to modify my objective function to convert the normalized values back to physical ones before doing the calculations.
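
In code, the wrapping looks roughly like this (a minimal sketch; deviceObjective is a placeholder for the actual device model, and all bounds except l_m's are illustrative assumptions):

    % Hypothetical device model working in physical units (mm);
    % deviceObjective is a placeholder, not an actual function from this post.
    x0_phys = [4; 2; 1; 0.5];      % starting values of l_m, r_m, l_c, r_c
    scale   = x0_phys;             % normalize each variable by a reference value

    % fmincon works on normalized variables; the wrapper rescales them
    % back to mm before evaluating the real objective.
    objNorm = @(x) deviceObjective(x .* scale);

    % Illustrative bounds: 3-4.5 mm for l_m (as above), the rest assumed.
    lb = [3; 1; 0.5; 0.25]  ./ scale;
    ub = [4.5; 3; 1.5; 0.75] ./ scale;

    x0 = x0_phys ./ scale;          % all ones at the start
    xNorm = fmincon(objNorm, x0, [], [], [], [], lb, ub);
    xPhys = xNorm .* scale          % convert the optimum back to mm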

My question is: do optimization functions like fmincon perform better if the input variables are normalized? Is it reasonable to expect a change in performance due to normalization? Keep in mind how the optimizer varies a variable such as l_m: in one case it changes it from 4 mm to 4.1 mm, and in the other case from 0.75 to 0.76.

1 answer

It is generally much easier to optimize when the input is normalized. You can expect an improvement in both convergence speed and output accuracy.

For example, as you can see in these lecture notes ( http://www-personal.umich.edu/~mepelman/teaching/IOE511/Handouts/511notes07-7.pdf ), the convergence rate of gradient descent has a tighter bound when the ratio of the largest to the smallest eigenvalue of the Hessian is small. Typically, when your data is normalized, this ratio is close to 1 (the optimum).
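
As a concrete illustration (a sketch on a toy problem, not taken from the linked notes), consider a quadratic objective whose two variables live on very different scales; rescaling the variables drives the Hessian's eigenvalue ratio down to 1:

    % Toy quadratic f(x) = x' * H * x with badly scaled variables;
    % cond(H) is the eigenvalue ratio that governs the convergence bound.
    H = diag([1, 1e6]);             % badly scaled problem
    disp(cond(H))                   % 1e6: slow gradient descent

    % Normalizing the variables x = S*y rescales the Hessian to S*H*S.
    S = diag(1 ./ sqrt(diag(H)));   % scale factors from the diagonal
    disp(cond(S * H * S))           % 1: the optimal ratio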


Source: https://habr.com/ru/post/1433414/

