Machine Accuracy Estimates

Some say that the machine epsilon for double-precision floating-point numbers is 2^-53, while others (more often) say it is 2^-52. I experimented with estimating machine precision using numbers other than 1, approaching them from above and below (in MATLAB), and got both values as results. Why can both values be observed in practice? I thought it should always produce an epsilon of around 2^-52.
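For concreteness, here is one way both values can show up, as a minimal sketch (in Python rather than MATLAB; the halving loop is an assumption about the kind of experiment described, and IEEE-754 double arithmetic is assumed):

    eps = 1.0
    while 1.0 + eps > 1.0:   # keep halving until adding eps no longer changes 1.0
        eps /= 2.0

    print(eps)        # 2**-53: first value for which 1.0 + eps == 1.0
    print(eps * 2.0)  # 2**-52: last value for which 1.0 + eps > 1.0

Depending on whether you report the value at which the loop stops or the last value that still made a difference, you get one answer or the other.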

+4
3 answers

There is an inherent ambiguity in the term “machine epsilon”, so to resolve it, it is usually defined as the difference between 1 and the next larger representable number. (That number is, in fact and not by coincidence, obtained by literally incrementing the binary representation by one.)

An IEEE 754 64-bit float has 52 explicit mantissa bits, or 53 including the implicit leading 1. The two consecutive numbers are thus:

    1.0000.....0000
    1.0000.....0001
      \-- 52 digits --/

Thus, the difference between them is 2^-52.
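The bit-increment claim can be checked directly; a minimal Python sketch, assuming IEEE-754 doubles (struct is used to reinterpret the bit pattern):

    import struct

    # Reinterpret 1.0 as its 64-bit integer pattern, add one, reinterpret back.
    bits = struct.unpack('<Q', struct.pack('<d', 1.0))[0]
    next_up = struct.unpack('<d', struct.pack('<Q', bits + 1))[0]

    print(next_up - 1.0 == 2.0 ** -52)  # True: the gap above 1.0 is 2^-52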

+7

It depends on which way you round.

1 + 2^-53 is exactly halfway between 1 and 1 + 2^-52, which are consecutive double-precision floating-point numbers. So if you round it up, the result is different from 1; if you round it down, the result is 1.
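A small sketch of that tie-breaking behavior in Python, assuming the default IEEE-754 round-to-nearest-even mode (math.nextafter needs Python 3.9+):

    import math

    half = 2.0 ** -53                  # exactly halfway between 1.0 and 1.0 + 2**-52

    print(1.0 + half == 1.0)           # True: the tie rounds to even, i.e. down to 1.0
    above = math.nextafter(half, 1.0)  # smallest double just past the midpoint
    print(1.0 + above > 1.0)           # True: above the midpoint rounds up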

+2

In fact, there are two definitions of “machine precision” that sound exactly the same at first glance, but are not, because they yield different values for “machine epsilon”:

  • Machine epsilon is the smallest floating-point number eps1 such that 1.0 + eps1 > 1.0 .
  • Machine epsilon is the difference eps2 = x - 1.0 , where x is the smallest representable floating-point number with x > 1.0 .

Strictly mathematically speaking, the two definitions are equivalent, i.e. eps1 == eps2. But we are dealing with floating-point numbers, not real numbers, which means implicit rounding, and that leads to roughly eps2 == 2 * eps1 (at least on the most common architectures, which use IEEE-754 floats).

In more detail: if we let some x grow from 0.0 up to the point where 1.0 + x > 1.0 , that point is reached at x == eps1 (by definition 1). However, due to rounding, the result of 1.0 + eps1 is not literally 1.0 + eps1 , but the next representable floating-point value above 1.0 , i.e. 1.0 + eps2 (by definition 2). So essentially

 eps2 == (1.0 + eps1) - 1.0 

(Mathematicians will cringe at this.) And because of the rounding behavior, this means that

 eps2 == eps1 * 2 (approximately) 

And so there are two definitions of “machine epsilon,” both legitimate and correct.
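Both definitions can be checked numerically; a minimal Python sketch, assuming IEEE-754 doubles and Python 3.9+ for math.nextafter:

    import math
    import sys

    # eps2: the gap between 1.0 and the next representable double.
    eps2 = math.nextafter(1.0, math.inf) - 1.0
    print(eps2 == 2.0 ** -52 == sys.float_info.epsilon)  # True

    # eps1: 2**-53 itself does not satisfy 1.0 + x > 1.0 (the tie rounds
    # back down to 1.0), but the very next double above it does.
    print(1.0 + 2.0 ** -53 == 1.0)          # True
    eps1 = math.nextafter(2.0 ** -53, 1.0)
    print(1.0 + eps1 > 1.0)                 # True
    print(eps2 / eps1)                      # approximately 2.0, i.e. eps2 == 2 * eps1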

Personally, I find eps2 the more “reliable” definition, because it does not depend on the actual rounding behavior, only on the representation; but I would not say it is more correct than the other. As always, it all depends on the context. Just be clear about which definition you mean when referring to “machine epsilon”, to avoid confusion and errors.

+1

Source: https://habr.com/ru/post/1379473/

