Understanding integer promotions with 2^31 and -2^31

    #include <stdio.h>

    int main() {
        printf("sizeof(int): %zu\n", sizeof(int));
        printf("%d\n", 2147483648u > -2147483648);
        printf("%d\n", ((unsigned int)2147483648u) > ((int)-2147483648));
        printf("%d\n", 2147483648u != -2147483648);
        printf("%d\n", ((unsigned int)2147483648u) != ((int)-2147483648));
        return 0;
    }

The output of this code, in both C and C++, on Cygwin64 and on a RHEL 6.4 machine with gcc 5.2.0:

    sizeof(int): 4
    1
    0
    1
    0

According to "Integer promotions", 2147483648u should be of type unsigned int (even without the u suffix) and -2147483648 of type int (as usual). Why are the results different with explicit casts?

According to the "usual arithmetic conversions", this paragraph should apply:

"Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type."

This means that the correct result looks like this:

    2147483648u > 2147483648u
    2147483648u != 2147483648u

because in 32 bits, signed -2^31 and unsigned 2^31 have the same representation. In other words, the result with the casts is the correct one. What's happening?

I have a feeling that without the casts a promotion to some larger integer type is being applied, so that I end up with, say, 64-bit signed operands on both sides, but why?

Both executables are compiled as 64-bit; could that play a role?

1 answer

There are no negative integer constants. There are only positive constants with the unary minus operator applied to them.

Since 2147483648 > INT_MAX, the constant 2147483648 (because you did not add u) gets the next larger signed integer type, here long, before the unary - is applied. The comparison 2147483648u > -2147483648 therefore pits an unsigned int against a long; since a 64-bit long can represent every unsigned int value, both operands are converted to long, and 2147483648 > -2147483648 is simply 1.


By the way, that's why INT_MIN is usually defined as (-INT_MAX - 1) in <limits.h>. ;-)


Source: https://habr.com/ru/post/1241997/
