Well, for the sake of argument, let's assume we have a processor that represents a floating-point number with 7 significant decimal digits in the mantissa and, say, two decimal digits in the exponent. The number 1e8 will then be stored as
1.000 000 e 08
(where the "." and "e" do not actually need to be stored.)
So now you want to calculate "1e8 - 1". 1 is represented as
1.000 000 e 00
Now, to perform the subtraction, we first compute it with infinite precision, then normalize the result so that the single digit before the "." ranges from 1 to 9, and finally round to the nearest representable value (breaking ties to even, say). The infinitely precise result of "1e8 - 1" is
0.999 999 99 e 08
or, normalized,
9.999 999 9 e 07
As you can see, the infinitely precise result requires one more significant digit than our architecture provides; we therefore have to round (and renormalize) it to 7 significant digits, which gives
1.000 000 e 08
Therefore, you get "1e8 - 1 == 1e8" and your loop never terminates.
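You can reproduce this exact walkthrough with Python's standard `decimal` module, which performs base-10 arithmetic just like the hypothetical processor above; the only assumption here is setting the context precision to 7 significant digits:

```python
from decimal import Decimal, getcontext

# Emulate the toy machine: 7 significant decimal digits,
# round-half-even (the context's default rounding mode).
getcontext().prec = 7

a = Decimal("1e8")
b = a - Decimal(1)   # exact result is 99 999 999, which needs 8 digits...
print(b)             # 1.000000E+8  ...so it rounds back up
print(b == a)        # True: the subtraction had no effect
```

A loop such as `while a > 0: a -= 1` run under this context would spin forever, since `a` never changes.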
Now, in reality you are using IEEE 754 binary floats, which differ in the details, but the principle is the same.
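For the binary case, the same effect appears in IEEE 754 single precision (float32), which carries roughly 7 decimal digits. A sketch using only the stdlib `struct` module to round each intermediate result to float32 (the helper name `f32` is my own):

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE 754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

a = f32(1e8)        # 1e8 happens to be exactly representable in float32
b = f32(a - 1.0)    # exact difference is 99 999 999, but the float32
                    # spacing near 1e8 is 8, so it rounds back up to 1e8
print(b == a)       # True: a float32 counter cannot count down from 1e8
```

In a double (Python's native `float`), with its 53-bit significand, `1e8 - 1` is still exact; the trap only springs once the magnitude outgrows the significand, e.g. at `1e16`.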