Is this a design feature, a mathematical artifact, or some kind of optimization performed by compilers and runtimes?
This is a feature of real numbers. A theorem from modern algebra (modern algebra, not secondary-school algebra; math majors take a course in modern algebra after their calculus and linear-algebra courses) says that for any integer base b greater than 1, any positive real number r can be expressed as r = a * b^p, where a is in [1, b) and p is some integer. For example, in base 10, 1024 = 1.024 * 10^3. It is this theorem that justifies our use of scientific notation.
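The base-2 form of that decomposition is the one that matters on hardware. As an illustration (my own sketch, not part of the original answer), the C library function frexp splits a double into a * 2^p; the snippet below renormalizes frexp's mantissa from [0.5, 1) into [1, 2) to match the statement above:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        int p;
        /* frexp returns a in [0.5, 1) with 1024 = a * 2^p; shift it to [1, 2). */
        double a = frexp(1024.0, &p);
        printf("1024 = %g * 2^%d\n", 2.0 * a, p - 1);  /* prints: 1024 = 1 * 2^10 */
        return 0;
    }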
The number a can be classified as terminating (for example, 1.0), repeating (1/3 = 0.333...), or non-repeating (for example, the representation of pi). There is a slight wrinkle with terminating numbers: any terminating representation can also be written as a repeating one. For example, 0.999... and 1 are the same number. This ambiguity can be resolved by stipulating that numbers which have a terminating representation are written that way.
What you discovered is a consequence of the fact that all integers have a terminating representation in any base.
There is a problem with the way reals are represented on a computer. Just as int and long long int do not represent all integers, float and double do not represent all reals. The scheme used on most computers represents a real number r as r = a * 2^p, but with the mantissa (or significand) a truncated to a fixed number of bits and the exponent p limited to a finite range. This means that some integers cannot be represented exactly. For example, although a googol (10^100) is an integer, its floating-point representation is not exact: the binary representation of a googol is a 333-bit number, and that 333-bit mantissa is truncated to the 52+1 bits of a double.
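You can see this directly (a minimal C sketch of my own, assuming an IEEE 754 double): the literal 1e100 is the double closest to a googol, not a googol itself, and printing its exact integer value shows the digits diverging from 1 followed by 100 zeros after roughly the first 17 significant digits.

    #include <stdio.h>

    int main(void) {
        /* 1e100 is the double nearest to a googol.  %.0f prints the exact
           integer value stored in the double; only the leading ~17
           significant digits match 1 followed by 100 zeros. */
        printf("%.0f\n", 1e100);
        return 0;
    }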
The consequence of this is that double-precision arithmetic is no longer exact, even for integers, once the integers exceed 2^53. Try the experiment with unsigned long long int values between 2^53 and 2^64: you will find that double-precision arithmetic is no longer exact for these large integers.
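A quick way to run that experiment (a sketch in C; the variable names are mine) is to round-trip values near 2^53 through a double and compare:

    #include <stdio.h>

    int main(void) {
        unsigned long long exact  = (1ULL << 53) - 1;  /* 2^53 - 1, still representable */
        unsigned long long beyond = (1ULL << 53) + 1;  /* 2^53 + 1, falls in a gap      */

        /* Convert to double and back; any value that changes was rounded. */
        printf("%llu -> %llu\n", exact,  (unsigned long long)(double)exact);
        printf("%llu -> %llu\n", beyond, (unsigned long long)(double)beyond);
        return 0;
    }

With IEEE 754 doubles, the first value round-trips unchanged while the second comes back as 2^53, because odd integers above 2^53 fall between adjacent representable doubles.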