ISO C states what the differences are.
The int data type is signed and has a minimum range of -32767 through 32767 inclusive. The actual values are given in limits.h as INT_MIN and INT_MAX respectively.

An unsigned int has a minimum range of 0 through 65535 inclusive, with the actual maximum value being UINT_MAX from that same header file.
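If you want to see the actual values on your implementation, a quick check like the following will do (just a sketch; the numbers printed depend on your platform):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* These macros come from limits.h; the standard only
           guarantees the minimum magnitudes given above. */
        printf("INT_MIN  = %d\n", INT_MIN);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }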
Beyond that, the standard does not mandate two's complement notation for encoding values; that is just one of the possibilities. The three allowed representations encode 5 and -5 as follows (using 16-bit data types):
       two's complement   |  ones' complement   |   sign/magnitude
    +---------------------+---------------------+---------------------+
  5 | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101 |
 -5 | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101 |
    +---------------------+---------------------+---------------------+
- In two's complement, you get the negative of a number by inverting all its bits and then adding 1 (see the sketch after this list).
- In ones' complement, you get the negative of a number by inverting all its bits.
- In sign/magnitude, the top bit is the sign, so you just invert that to get the negative.
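As a small demonstration, here is a sketch that dumps the stored bits of 5 and -5. It assumes a 16-bit short; the second line of output matches the two's complement column on virtually every machine you will meet today, and would match the other columns on a ones' complement or sign/magnitude machine:

    #include <stdio.h>
    #include <string.h>

    /* Print 16 bits, most significant first, grouped into nibbles. */
    static void print_bits16(unsigned short v)
    {
        for (int i = 15; i >= 0; i--) {
            putchar(((v >> i) & 1) ? '1' : '0');
            if (i % 4 == 0 && i > 0)
                putchar(' ');
        }
        putchar('\n');
    }

    int main(void)
    {
        short pos = 5;
        short neg = -5;
        unsigned short bits;

        /* Copy the object representation so we see the stored bits,
           not the result of a value-preserving conversion. */
        memcpy(&bits, &pos, sizeof bits);
        print_bits16(bits);            /* 0000 0000 0000 0101 */

        memcpy(&bits, &neg, sizeof bits);
        print_bits16(bits);            /* 1111 1111 1111 1011 on a
                                          two's complement machine */
        return 0;
    }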
Note that positive values have the same encoding in all three representations; only the negative values differ.
Note further that, for unsigned values, you do not need to use one of the bits for the sign. That means you get more range on the positive side (at the cost of no negative encodings, of course).
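A quick way to convince yourself of both points (a sketch assuming the common case of no padding bits): UINT_MAX sits one extra value bit above INT_MAX, and converting a negative value to unsigned wraps around rather than producing a negative encoding:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* With no padding bits (the usual case), the sign bit becomes
           a value bit in the unsigned type, doubling the range. */
        if (UINT_MAX == 2u * INT_MAX + 1)
            printf("unsigned int has one extra value bit\n");

        /* There are no negative encodings: converting -1 to unsigned
           is defined to wrap to the largest unsigned value. */
        printf("%d\n", (unsigned int)-1 == UINT_MAX);  /* prints 1 */
        return 0;
    }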
And no, 5 and -5 cannot have the same encoding no matter what representation you use. Otherwise, there would be no way to tell the difference.
paxdiablo Sep 28 '10 at 11:16