Evaluating a derivative at a point in Java

I am currently writing a calculator application and am trying to add a derivative estimate to it. The formula below is an easy way to do this. On paper, the smaller the h value, the more accurate the estimate. The problem is that doubles cannot handle adding really small numbers to relatively huge ones. For example, 4 + 1E-200 will simply result in 4.0. Even if h is only 1E-16, 4 + 1E-16 still just evaluates to 4.0, because anything past the 16th significant digit is lost and rounding cannot happen correctly. I have heard that the general rule of thumb for doubles is h = 1E-8 or 1E-7. The problem with a fixed h is that large numbers then do not work: 2E231 + 1E-8 will be just 2E231, since the 1E-8 is lost to the same size problem.

f'(x) = (f(x + h) - f(x)) / h as h approaches 0
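To make the absorption concrete, here is a quick snippet (drop it into any main method); the printed results show exactly the behavior described above:

    System.out.println(4.0 + 1e-200);           // 4.0: the tiny term vanishes entirely
    System.out.println(4.0 + 1e-16 == 4.0);     // true: 1e-16 is below half an ulp of 4.0
    System.out.println(2e231 + 1e-8 == 2e231);  // true: the same problem at huge magnitudes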

When I check f(x) = x^2 at the point 4, i.e. f'(4), it should be exactly 8. I understand that I will probably never get exactly 8, but the most accurate results seem to come around h = 1E-7 or 1E-8. The funny thing is that 1E-9 through 1E-11 all give the same answer. Here is a list of h values and results for f(x) = x^2 at x = 4:

    h        result
    1E-7     8.000000129015916
    1E-8     7.999999951380232
    1E-9     8.000000661922968
    1E-10    8.000000661922968
    1E-11    8.000000661922968
    1E-12    8.000711204658728
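For reference, this table can be reproduced with a short loop (the exact digits may differ slightly between JVMs and platforms):

    public class HTable {
        public static void main(String[] args) {
            double x = 4.0;
            for (int e = 7; e <= 12; e++) {
                double h = Math.pow(10, -e);
                double estimate = ((x + h) * (x + h) - x * x) / h; // f(x) = x^2
                System.out.println("1E-" + e + "  " + estimate);
            }
        }
    }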

Here are my questions:

  • What is the best way to choose h? Obviously 1E-8 or 1E-7 makes sense, but how can I choose h based on x so that it works at any scale, even if x is 3.14E203 or 2E-231?
  • How many decimal places of precision should I aim for?
  • How do Texas Instruments TI-83, TI-84, and TI-Nspire calculators numerically compute derivatives to 12 decimal places of precision and almost always get them right, when their numbers only carry 12 digits and these calculators are not CAS, so they are not doing anything symbolically?
  • Logically, there should be some number between 1E-7 and 1E-8 that gives a more accurate result. Is there a way to find that number, or at least get close to it?

ANSWERED

Thanks a lot, BobG. The application is currently planned to come in two forms: a PC command-line application and an Android app. You will be specifically credited in the About page. I want it to be open source, but I will not post links to the project site until I fix some very big bugs. At the moment I call it Mathulator, but the name is likely to change, because it is already copyrighted, and it sounds silly anyway. I have no clue when a release candidate will launch; at the moment I do not know when it will be stable. But it will be very powerful if I can implement everything I want. Thanks again. Happy programming.

+1
6 answers

There is a book that answers this question (and others like it):

Numerical Recipes in C, 2nd Edition, by Press, Vetterling, Teukolsky, and Flannery. The book also comes in C++, Fortran, and BASIC versions; unfortunately, no Java version exists. I also believe the book is out of print, but used copies can be bought online (at least through bn.com).

Section 5.7, "Numerical Derivatives," p. 186, explains exactly the problem you are seeing with numerical derivatives, the math behind why it happens, and gives a function for computing a proper numerical derivative (in C, but it is easy to translate to Java). Here is a summary of their simple approximation:

1) Numerically, you are better off computing the symmetric version:

f'(x) = (f(x + h) - f(x - h)) / 2h

2) h should be approximately (sigma_f)^(1/3) * x_c

where

sigma_f ≈ the fractional accuracy with which f(x) is computed (roughly machine accuracy for simple functions)

x_c ≈ x, provided x is not equal to zero.

However, this does not yield optimal derivatives, since the error is ~(sigma_f)^(2/3). A better solution is Ridders' algorithm, which is presented as a C program in the book (reference: Ridders, C.J.F. 1982, Advances in Engineering Software, vol. 4, no. 2, pp. 75-76).
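Here is a rough Java sketch of the recipe above; this is my own translation, not the book's code, and it assumes sigma_f is the double machine epsilon (Math.ulp(1.0)) and falls back to x_c = 1 at x = 0:

    import java.util.function.DoubleUnaryOperator;

    public class SymmetricDerivative {
        static double derivative(DoubleUnaryOperator f, double x) {
            double sigmaF = Math.ulp(1.0);               // ~2.2e-16: accuracy of f for simple functions
            double xc = (x != 0.0) ? Math.abs(x) : 1.0;  // characteristic scale of x
            double h = Math.cbrt(sigmaF) * xc;           // h ~ sigma_f^(1/3) * x_c, about 6e-6 * |x|
            double temp = x + h;
            h = temp - x;                                // make h exactly representable
            return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
        }

        public static void main(String[] args) {
            System.out.println(derivative(t -> t * t, 4.0)); // ~8.0, good to about 10 digits
        }
    }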

+3

Read the paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (Google for it). Then you will see that most floating-point values are represented only approximately in computer hardware.

To perform calculations without this drawback, use symbolic computation. But it is not as efficient as floating point.

For consistent floating-point results, round to the nearest power of 10, such as 0.1, 0.01, and so on. To know when to stop refining an approximation, watch a threshold across the approximation steps: for example, if the next step changes the already computed value by only 0.001%, there is no point in continuing.
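A minimal sketch of that stopping rule (the starting h, the shrink factor, and the 0.001% threshold are my own choices for illustration):

    import java.util.function.DoubleUnaryOperator;

    public class ThresholdDerivative {
        static double derivative(DoubleUnaryOperator f, double x) {
            double h = 0.1;
            double prev = (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
            for (int i = 0; i < 40; i++) {
                h /= 10;
                double next = (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
                if (Math.abs(next - prev) <= 1e-5 * Math.abs(next)) {
                    return next;  // changed by less than 0.001%: stop here
                }
                prev = next;
            }
            return prev;          // never converged: cancellation took over first
        }

        public static void main(String[] args) {
            System.out.println(derivative(t -> t * t, 4.0)); // ~8.000001
        }
    }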

Update: it has been a long time since my numerical computation classes, but I vaguely remember that subtracting nearly equal numbers is very bad, because the most reliable digits cancel out and you are left with the unreliable ones. This is exactly what happens as you decrease h. In such situations it is suggested to replace the subtraction with some other operation, for example switching to a series expansion of your f(x).

I do not quite understand your second question; the answer depends on your requirements: "as many as you like."

By the way, you might have better luck finding answers to your questions on math.stackexchange.com.

Also, see the link provided by thrashgod: Numerical Differentiation

+2

1. The precision of floating-point numbers (floats and doubles) depends on the absolute value of the number. Doubles have ~15 digits of precision, so you can add 1 + 1e-15, but 10 + 1e-15 will most likely still be 10; you would need 10 + 1e-14. To get a meaningful result, I recommend multiplying that 1e-8 by the absolute value of the original number, which gives you about 7 correct digits in the derivative. Something like:

    double h = x * 1e-8;
    double derivative = (f(x + h) - f(x)) / h;

In any case, this is an approximation: if you try to compute the derivative of sin(x) at x = 1e9, you get h = 10, and the result will be wrong. But for "regular" functions whose "interesting" part is around zero, this will work well.

2. The smaller h is, the more precisely you localize the point at which you take the derivative, but the fewer correct digits of the derivative you get. I cannot prove it, but my feeling is that with h = x * 1e-8 you get about 7 = 15 - 8 correct digits, where 15 is the precision of a double.

In addition, it is better to use the "more symmetric" formula; it gives an exactly correct answer for second-order polynomials:

    double derivative = (f(x + h) - f(x - h)) / (2 * h);
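To see why it is exact for second-order polynomials, take f(x) = x^2:

    (f(x + h) - f(x - h)) / (2h) = ((x + h)^2 - (x - h)^2) / (2h) = 4xh / (2h) = 2x

so the truncation error vanishes entirely, and only floating-point rounding error remains.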
+2

My question is: what is the most suitable h, and how can it be scaled to work at any size?

As noted in Numerical Differentiation, an appropriate choice for h is sqrt(ɛ) * x, where ɛ is the machine epsilon.
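In Java terms this might look as follows (a fragment, assuming f and x are already defined; Math.ulp(1.0) is the machine epsilon for double):

    double eps = Math.ulp(1.0);               // ~2.22e-16
    double h = Math.sqrt(eps) * Math.abs(x);  // ~1.49e-8 * |x|, so h scales with x
    double derivative = (f(x + h) - f(x)) / h;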

+1

Although this does not directly answer your questions, I would use the BigDecimal class for this kind of calculation; it will really improve the precision of your arithmetic.
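As an illustration of the idea (my own sketch, not from the answer): with 50 significant digits, an h far below anything a double could absorb still works:

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class BigDecimalDerivative {
        public static void main(String[] args) {
            MathContext mc = new MathContext(50);    // 50 significant digits vs. ~15-16 for double
            BigDecimal x = new BigDecimal("4");
            BigDecimal h = new BigDecimal("1E-20");  // far too small for plain doubles
            BigDecimal xh = x.add(h);
            BigDecimal fx = x.multiply(x, mc);       // f(x)   = x^2
            BigDecimal fxh = xh.multiply(xh, mc);    // f(x+h) = (x+h)^2
            System.out.println(fxh.subtract(fx, mc).divide(h, mc)); // 8.00000000000000000001
        }
    }

The catch is that BigDecimal only helps for functions you can evaluate in BigDecimal arithmetic; there is no built-in BigDecimal sin or exp.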

0

According to the Javadoc, 11 bits represent the exponent and 52 bits represent the significand. Setting the exponent aside, you have 52 bits to play with. So if you choose h = x * 2^-40, you have spent 40 of those bits, and the accuracy you can expect is about 2^-12. Adjust this ratio as you wish.
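As a fragment (my own sketch, assuming f and a nonzero x are defined), Math.scalb multiplies by a power of two exactly, so h stays representable next to x at any magnitude:

    double h = Math.scalb(x, -40);  // h = x * 2^-40: exact, it only shifts the exponent
    double derivative = (f(x + h) - f(x)) / h;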

0

Source: https://habr.com/ru/post/921042/

