int and unsigned int are two different integer types. (int can also be called signed int or just signed; unsigned int can also be called unsigned.)
As the names suggest, int is a signed integer type and unsigned int is an unsigned integer type. This means that int can represent negative values, while unsigned int can represent only non-negative values.
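As a quick illustration (a tiny sketch of my own, not from the original answer), all of these spellings name just two distinct types:

    #include <stdio.h>

    int main(void)
    {
        signed int a = -5;    /* "signed int", "signed", and "int" are the same type */
        unsigned   b = 5;     /* "unsigned" is the same type as "unsigned int" */

        printf("%d %u\n", a, b);   /* prints: -5 5 */
        return 0;
    }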
The C language sets some requirements for the ranges of these types. The range of int must cover at least -32767 .. +32767, and the range of unsigned int must cover at least 0 .. 65535. This means that both types must be at least 16 bits wide. They are 32 bits on many systems, or even 64 bits on some. int typically has one extra negative value (e.g. -32768 for a 16-bit int) because of the two's-complement representation used by most modern systems.
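You can inspect the actual bounds on a given implementation via <limits.h> (a small sketch; the exact values printed depend on the platform):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* On a typical 32-bit two's-complement system this prints
           -2147483648, 2147483647, and 4294967295. */
        printf("INT_MIN  = %d\n", INT_MIN);
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }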
Perhaps the most important difference is the behavior of signed versus unsigned arithmetic. Overflow of a signed int has undefined behavior. There is no overflow for unsigned int: any operation that yields a value outside the range of the type wraps around, so that, for example, UINT_MAX + 1 == 0.
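To see the wraparound in action (a minimal sketch; the signed-overflow case is deliberately not executed, because it is undefined):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;
        u = u + 1;                /* well defined: wraps around to 0 */
        printf("%u\n", u);        /* prints 0 */

        unsigned int z = 0;
        z = z - 1;                /* also well defined: wraps to UINT_MAX */
        printf("%u\n", z);

        /* By contrast, computing INT_MAX + 1 on a signed int is undefined
           behavior; a compiler is allowed to assume it never happens. */
        return 0;
    }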
Any integer type, whether signed or unsigned, models a subrange of the infinite set of mathematical integers. As long as you work with values within the type's range, everything works. When you approach the lower or upper bound of the type, you hit a discontinuity and may get surprising results. For signed integer types, the trouble spots are only at values of very large magnitude, below INT_MIN or above INT_MAX. For unsigned integer types, the trouble spots are at very large positive values and at zero. This can be a source of bugs. For example, this is an infinite loop:
    for (unsigned int i = 10; i >= 0; i--) {
        printf("%u\n", i);
    }
because i is always greater than or equal to zero; that's the nature of unsigned types. (Inside the loop, when i is zero, i-- sets its value to UINT_MAX.)
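One common way to write such a countdown correctly with an unsigned counter (a sketch of my own, not part of the original answer) is to decrement inside the condition, so the test happens while i is still in range:

    /* Prints 10 down to 0, then stops: i-- yields the old value of i,
       so the test fails exactly when i has reached 0, and the body
       always sees the already-decremented value. */
    for (unsigned int i = 11; i-- > 0; ) {
        printf("%u\n", i);
    }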
Keith Thompson Jun 30 '15 at 15:14