In fact, there are two definitions of "machine accuracy" that sound exactly the same at first glance but are not, because they give different meanings to "machine epsilon":
- Machine epsilon eps1 is the smallest floating-point number such that 1.0 + eps1 > 1.0 .
- Machine epsilon eps2 is the difference eps2 = x - 1.0 , where x is the smallest representable floating-point number with x > 1.0 .
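To make the two definitions concrete, here is a minimal Python sketch (assuming CPython floats, i.e. IEEE-754 binary64 with round-to-nearest-even, and Python 3.9+ for math.nextafter); the names eps1 and eps2 simply spell out the definitions above:

```python
import math
import sys

# Definition 2: the gap between 1.0 and the next representable double above it.
eps2 = math.nextafter(1.0, math.inf) - 1.0
print(eps2)                             # 2.220446049250313e-16, i.e. 2**-52
print(eps2 == sys.float_info.epsilon)   # True: this is what the standard library reports

# Definition 1: the smallest double x such that 1.0 + x > 1.0.
# 2**-53 alone is not enough: that sum is an exact tie and rounds back down to 1.0,
# so the smallest qualifying value is the next double above 2**-53.
print(1.0 + 2**-53 > 1.0)               # False (round-half-to-even)
eps1 = math.nextafter(2**-53, math.inf)
print(1.0 + eps1 > 1.0)                 # True
print(eps1)                             # ~1.11e-16, roughly eps2 / 2
```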
Strictly mathematically speaking, the two definitions are equivalent, i.e. eps1 == eps2 . But we are not dealing with real numbers here, we are dealing with floating-point numbers, and that means implicit rounding, which in turn means roughly eps2 == 2 * eps1 (at least on the most common architectures, which use IEEE-754 floats).
In more detail: if we let some x grow from 0.0 , the point where 1.0 + x > 1.0 is first reached at x == eps1 (by definition 1). However, due to rounding, the result of 1.0 + eps1 is not the exact sum 1.0 + eps1 , but the next representable floating-point value greater than 1.0 , i.e. 1.0 + eps2 (by definition 2). So essentially
eps2 == (1.0 + eps1) - 1.0
(Mathematicians will wince at this point.) And because of the rounding behavior, this means that
eps2 == eps1 * 2 (approximately)
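A quick way to check both relations on a real machine (again assuming IEEE-754 doubles and Python 3.9+ for math.nextafter):

```python
import math

eps1 = math.nextafter(2**-53, math.inf)     # smallest double with 1.0 + eps1 > 1.0
eps2 = math.nextafter(1.0, math.inf) - 1.0  # gap above 1.0, i.e. 2**-52

# 1.0 + eps1 rounds up to the next double above 1.0, so subtracting 1.0
# recovers eps2 exactly (a floating-point identity, not real-number algebra).
print((1.0 + eps1) - 1.0 == eps2)   # True
print(eps2 / eps1)                  # just under 2.0, so eps2 is approximately 2 * eps1
```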
And so there are two definitions of “machine epsilon,” both legitimate and correct.
Personally, I find eps2 the more “reliable” definition, because it does not depend on the actual rounding behavior, only on the representation, but I would not say that it is more correct than the other. As always, it depends on the context. Just make clear which definition you mean when referring to “machine epsilon,” to prevent confusion and errors.