What is the difference between unsigned int and signed int in C?

Consider the following definitions:

int x=5; int y=-5; unsigned int z=5; 

How are they stored in memory? Can anyone explain the bit representation of these values in memory?

Can int x=5 and int y=-5 have the same bit representation in memory?

+27
c
Sep 28 '10 at 10:59
5 answers

ISO C states what the differences are.

The int data type is signed and has a minimum range of -32767 to 32767 inclusive. The actual values are given in limits.h as INT_MIN and INT_MAX respectively.

An unsigned int has a minimum range of 0 to 65535 inclusive, with the actual maximum value being UINT_MAX from that same header file.

Beyond that, the standard does not mandate two's complement notation for encoding values; it is only one of the possibilities. The three allowed representations give the following encodings for 5 and -5 (using 16-bit data types):

       |  two's complement   |  ones' complement   |   sign/magnitude    |
       +---------------------+---------------------+---------------------+
     5 | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101 |
    -5 | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101 |
       +---------------------+---------------------+---------------------+
  • In two's complement, you get a negative number by inverting all bits and adding 1.
  • In ones' complement, you get a negative number by inverting all bits.
  • In sign/magnitude, the top bit is the sign, so you just flip that bit to get the negative.

Please note that positive values have the same encoding in all representations; only negative values differ.

Note that for unsigned values you do not need to give up one of the bits for the sign. That means you get a larger range on the positive side (at the cost, of course, of having no negative encodings).

And no, 5 and -5 cannot have the same encoding no matter which representation you use. Otherwise, there would be no way to tell them apart.
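If you want to see what your own implementation does, here is a minimal sketch (assuming an ordinary hosted C compiler). The casts from int to unsigned int are always well defined, and on the near-universal two's complement machines they also preserve the stored bit pattern, so the output should match the two's complement column above:

    #include <limits.h>
    #include <stdio.h>

    /* Print the bits of an unsigned int, most significant bit first. */
    static void print_bits(unsigned int v)
    {
        for (int i = (int)(sizeof v * CHAR_BIT) - 1; i >= 0; i--)
            putchar(((v >> i) & 1u) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        int x = 5;
        int y = -5;
        unsigned int z = 5;

        /* Conversion to unsigned is modulo UINT_MAX + 1; on two's
           complement machines it keeps the same bits as the signed value. */
        print_bits((unsigned int)x);
        print_bits((unsigned int)y);
        print_bits(z);
        return 0;
    }

With a typical 32-bit int this prints 32-bit patterns rather than the 16-bit ones shown above, but the encodings are the same idea, just with more leading bits.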

+36
Sep 28 '10 at 11:16

The C standard specifies that unsigned numbers are stored in pure binary (possibly with extra padding bits). Signed numbers can be stored in one of three formats: sign and magnitude, two's complement, or ones' complement. Interestingly, this rules out some other representations, such as excess-n or base -2.

However, on most machines and compilers, signed numbers are stored in two's complement.

int is usually 16 or 32 bits. The standard says that int should be the size that is most efficient for the target processor; anything that is >= short and <= long is allowed by the standard.

On some machines and operating systems, int has, for historical reasons, ended up at a size that is not the best fit for the current generation of hardware.
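A small sketch that reports what your compiler actually chose; the numbers printed are implementation-defined, and only the minimum ranges mentioned above are guaranteed:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("int is %zu bytes\n", sizeof(int));
        printf("INT_MIN  = %d\n", INT_MIN);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }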

+4
Sep 28 '10 at 11:11

Here is a very nice link that explains the storage of signed and unsigned ints in C -

http://answers.yahoo.com/question/index?qid=20090516032239AAzcX1O

Taken from this article -

", called two additions, is used to convert positive numbers to negative numbers. A side effect of this is that the most significant bit is used to tell the computer whether this number is positive or negative. If the most significant bit is 1, then the number is negative. If it is 0, the number is positive. "

+3
Sep 28 '10 at 11:03

Since this is all about memory, in the end all numeric values are stored in binary.

An unsigned 32-bit integer can hold values from all binary 0s to all binary 1s.

When it comes to a signed 32-bit integer, one of its bits (the most significant) is a flag that marks the value as positive or negative.
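Here is a minimal sketch of reading that flag bit, assuming the exact-width uint32_t type is available. The cast to an unsigned type is always well defined, and on two's complement hardware it preserves the stored bits:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int v = -5;
        uint32_t bits = (uint32_t)v;            /* same bit pattern on two's complement */
        unsigned sign = (unsigned)(bits >> 31); /* the most significant bit */

        printf("sign bit of %d is %u\n", v, sign); /* prints 1 for negative values */
        return 0;
    }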

+2
Apr 18 '17 at 5:18

Assuming int is a 16-bit integer (its width depends on the C implementation; most are 32 bits nowadays), the bit patterns differ as follows:

   5 = 0000000000000101
  -5 = 1111111111111011

If the bit pattern 1111111111111011 is interpreted as an unsigned int, it is the decimal number 65531.
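A sketch of that reinterpretation using the fixed-width 16-bit types (an assumption; plain int may be wider on your machine). Conversion to an unsigned type is defined as taking the value modulo 2^16, so -5 becomes 65536 - 5 = 65531, matching the bit pattern above:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t y = -5;
        uint16_t u = (uint16_t)y;        /* 65536 - 5 = 65531 */

        printf("%u\n", (unsigned int)u); /* prints 65531 */
        return 0;
    }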

0
Sep 28 '10 at 11:05


