An interesting observation on Python 2.7.6 running on macOS:
You have a very small number, for example:
0.000000000000000000001
You can compute it as:
>>> 0.1 / (10 ** 20)
1.0000000000000001e-21
But you can see the floating-point error at the end. What we really have is something like this:
0.0000000000000000000010000000000000001
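(As a side note, you can inspect the exact binary value the float actually stores, rather than the rounded `repr`, by converting it to `Decimal` — a small sketch of what I mean:)

```python
from decimal import Decimal

x = 0.1 / (10 ** 20)
# Decimal(float) expands the exact base-2 value stored in the float,
# which is close to, but not exactly, 1e-21
print(Decimal(x))
```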
So there is an error in the number, but that is normal. The problem is this:
As expected:
>>> 0.1 / (10 ** 20) == 0
False
But wait, what is this?
>>> 0.1 / (10 ** 20) + 1 == 1
True
>>> repr(0.1 / (10 ** 20) + 1)
'1.0'
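(I suspect this is related to machine epsilon — the gap between 1.0 and the next representable float. A quick check with `sys.float_info`, assuming that is the right concept here:)

```python
import sys

eps = sys.float_info.epsilon  # gap between 1.0 and the next float
print(eps)                    # 2.220446049250313e-16

# anything far smaller than eps relative to 1.0 is rounded away on addition
print(1.0 + 1e-21 == 1.0)     # True
print(1.0 + eps == 1.0)       # False
```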
It seems that Python is using a different precision to represent my number, since this only happens from about the 16th significant digit onward. Why did Python decide to turn my number into 0 when it is added to 1? Should I use the `decimal` module to handle a number with a very small fractional part and avoid the floating-point error?
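(For what it's worth, here is what I have tried with `decimal` so far — the precision value of 50 is just a guess on my part:)

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits; an arbitrary choice for this test

# build the value from strings/ints so no binary rounding sneaks in
tiny = Decimal('0.1') / Decimal(10) ** 20
print(tiny)             # 1E-21
print(tiny + 1 == 1)    # False
```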