Polynomial evaluation - floating point problem or optimization?

From Numerical Recipes:

We assume that you know enough to never evaluate a polynomial in this way: p = c[0] + c[1]*x + c[2]*x*x + c[3]*x*x*x + c[4]*x*x*x*x; ...

Obviously, I don’t know enough ...

Is it just an optimization problem or is it also a floating point arithmetic problem and why?

1 answer

You can calculate p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x; as:

 p=c[0]+(c[1]+(c[2]+(c[3]+c[4]*x)*x)*x)*x; 

The second form uses far fewer multiplications. It is known as Horner's method (or Horner's scheme).
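In general, for coefficients c[0..n] (with c[n] the highest-order term), Horner's method is a single loop of one multiply and one add per coefficient. A minimal C sketch (the function name and signature are mine, not from the book):

    /* Evaluate c[0] + c[1]*x + ... + c[n]*x^n with Horner's method. */
    double horner(const double c[], int n, double x)
    {
        double p = c[n];
        for (int i = n - 1; i >= 0; i--)
            p = p * x + c[i];   /* one multiply and one add per step */
        return p;
    }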

Is it just an optimization problem or is it also a floating point arithmetic problem and why?

It is mostly an optimization problem. However, some modern processors have a fused multiply-add instruction that performs the multiplication and the addition in a single operation, without intermediate rounding of the product. Although most programmers see its advantage mainly as an optimization, it also means the result is more accurate.

Horner's form is very well suited to evaluation with this fused multiply-add instruction.
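For example, with the standard C99 fma function from <math.h> (which computes a*b + c with a single rounding), each Horner step becomes one fused operation. This is a sketch under that assumption; horner_fma is my own name:

    #include <math.h>

    /* Horner's method where each step p*x + c[i] is evaluated by a
       fused multiply-add, i.e. with no intermediate rounding. */
    double horner_fma(const double c[], int n, double x)
    {
        double p = c[n];
        for (int i = n - 1; i >= 0; i--)
            p = fma(p, x, c[i]);
        return p;
    }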


Finally, for completeness, I should point out that modern processors can be even faster if the polynomial is handed to them in a form with more parallelism. See, for example, Estrin's scheme or this blog post for an explanation. Your book does not imply that you are required to know about Estrin's scheme; it only hints that you are required to know about Horner's scheme.
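For illustration only, here is a degree-4 version written in Estrin's style (my sketch, not from the answer): the two bracketed halves are independent, so a superscalar or pipelined CPU can work on them at the same time.

    /* Estrin's scheme for c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3 + c[4]*x^4.
       (c[0] + c[1]*x) and (c[2] + c[3]*x) do not depend on each other,
       so they can be computed in parallel. */
    double estrin4(const double c[5], double x)
    {
        double x2 = x * x;
        double x4 = x2 * x2;
        return (c[0] + c[1] * x) + (c[2] + c[3] * x) * x2 + c[4] * x4;
    }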


Source: https://habr.com/ru/post/970502/

