Why is -0.0 not the same as 0.0?

I could have missed something fundamental, but consider this interpreter session 1:

>>> -0.0 is 0.0
False
>>> 0.0 is 0.0
True
>>> -0.0  # The sign is even retained in the output. Why?
-0.0

You might think that the Python interpreter would understand that -0.0 and 0.0 are the same number. And in fact, it does compare them as equal:

>>> -0.0 == 0.0
True

So why does Python differentiate between them and create a whole separate object for -0.0? It does not do this with integers:

>>> -0 is 0
True
>>> -0  # Sign is not retained
0

Now, I understand that floating-point numbers are a notorious source of problems on computers, but those problems are always related to precision. For instance:

>>> 1.3 + 0.1
1.4000000000000001

But this is not a precision problem, is it? I mean, we are talking about the sign of the number here, not about its decimal places.


1 I can reproduce this behavior in both Python 2.7 and Python 3.4, so this is not a version-specific issue.

+5
3 answers

In IEEE 754, the format of floating-point numbers, the sign is a separate bit. Thus, -0.0 and 0.0 differ only in that bit. Integers use two's complement to represent negative numbers, so there is only one 0.
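To make this concrete, here is a small sketch using the standard struct module (works on both Python 2.7 and 3.x) that reinterprets a float's eight bytes as a 64-bit integer; the bits() helper is just an illustrative name, not a built-in:

>>> import struct
>>> def bits(x):
...     # Reinterpret the IEEE 754 double x as a 64-bit unsigned integer.
...     return '0x%016x' % struct.unpack('>Q', struct.pack('>d', x))[0]
...
>>> bits(0.0)    # all 64 bits clear
'0x0000000000000000'
>>> bits(-0.0)   # only the sign bit (bit 63) is set
'0x8000000000000000'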

Use is only when you really want to compare object identity. Otherwise, and especially for numbers, use ==:

>>> 1999 + 1 is 2000
False
>>> 0.0 == -0.0
True
+11

The IEEE standard for floating-point arithmetic (IEEE 754) defines signed zeros. In theory, they allow you to distinguish between a negative number that has underflowed to zero and a positive one.
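A quick sketch of where such a signed zero can actually show up: a negative value whose magnitude underflows to zero keeps its sign, and math.copysign (available since Python 2.6) can still recover it even though == cannot. The constant below is just one arbitrary value small enough to underflow:

>>> import math
>>> x = -1e-320 / 1e10      # magnitude underflows to zero, sign survives
>>> x
-0.0
>>> x == 0.0                # equality cannot tell the two zeros apart
True
>>> math.copysign(1.0, x)   # but the sign bit is still there
-1.0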

As for the Python specifics: use == rather than is to compare numbers.

+8

Because the binary representations of these two numbers are different. Python floats are 64-bit IEEE 754 doubles, and the sign bit (the most significant of the 64 bits) is 0 for 0.0 and 1 for -0.0.
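One easy way to see that sign bit without unpacking any bytes is float.hex() (available since Python 2.6), which prints the sign explicitly:

>>> (0.0).hex()
'0x0.0p+0'
>>> (-0.0).hex()
'-0x0.0p+0'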

+4

Source: https://habr.com/ru/post/1206277/
