I may be missing something fundamental, but consider this interpreter session¹:
>>> -0.0 is 0.0
False
>>> 0.0 is 0.0
True
>>> -0.0
-0.0
>>>
You might think that the Python interpreter would understand that -0.0 and 0.0 are the same number. In fact, it does compare them as equal:
>>> -0.0 == 0.0
True
>>>
So why does Python differentiate between them and generate a whole new object for -0.0? It does not do this with integers:
>>> -0 is 0
True
>>> -0
0
>>>
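For what it's worth, a sketch of why the integer case collapses to a single object (this relies on CPython implementation details, namely constant folding and the small-integer cache, so other interpreters may differ):

```python
# CPython folds the unary minus at compile time, and ints have no
# signed zero, so -0 is literally the int 0. Small integers are also
# cached, so every occurrence of 0 refers to the same object.
x = -0
y = 0
print(x is y)  # True in CPython
print(x)       # 0
```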
Now, I understand that floating-point numbers are a huge source of problems on computers, but those problems are always related to their precision. For instance:
>>> 1.3 + 0.1
1.4000000000000001
>>>
But this is not a precision problem, is it? I mean, we are talking about the sign of the number here, not about its decimal places.
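One way to see that this really is a matter of the sign bit rather than of precision is to inspect the raw IEEE 754 representation of the two zeros (a sketch using the standard `struct` and `math` modules):

```python
import math
import struct

# -0.0 and 0.0 compare equal, yet they are distinct IEEE 754 bit
# patterns: only the sign bit differs.
print(struct.pack('>d', 0.0).hex())   # 0000000000000000
print(struct.pack('>d', -0.0).hex())  # 8000000000000000

# math.copysign exposes the sign bit even though -0.0 == 0.0.
print(math.copysign(1.0, -0.0))  # -1.0
print(math.copysign(1.0, 0.0))   # 1.0
```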
¹ I can reproduce this behavior in both Python 2.7 and Python 3.4, so this is not a version-related issue.