The main problem is that the underlying hardware, the CPU, only has instructions for comparing two signed values or comparing two unsigned values. If you pass an unsigned comparison instruction a signed negative value, it will be interpreted as a large positive number. Thus -1, the bit pattern with all bits set (two's complement), becomes the maximum unsigned value for the same number of bits.
8 bits: signed -1 corresponds to unsigned 255
16 bits: signed -1 corresponds to unsigned 65535
and so on.
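A minimal sketch that demonstrates this reinterpretation (the variable names are just for illustration):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  s8  = -1;
    int16_t s16 = -1;

    /* Viewing the same bit patterns as unsigned types yields the
     * maximum value representable in that width. */
    printf("%u\n", (unsigned)(uint8_t)s8);    /* prints 255   */
    printf("%u\n", (unsigned)(uint16_t)s16);  /* prints 65535 */
    return 0;
}
```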
So, if you have the following code:
```c
int fd;
fd = open( .... );
int cnt;
SomeType buf;

cnt = read( fd, &buf, sizeof(buf) );
if( cnt < sizeof(buf) ) {
    perror("read error");
}
```
you will find that if the read(2) call fails because the file descriptor has become invalid (or because of some other error), cnt will be set to -1. When it is compared against sizeof(buf), an unsigned value, the if() condition will be false, because 0xffffffff is not less than sizeof() of any reasonable data structure (one not contrived to have the maximum possible size).
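A quick way to see this conversion in action (a standalone sketch, not the original code):

```c
#include <stdio.h>

int main(void)
{
    int cnt = -1;

    /* cnt is converted to the unsigned type size_t before the
     * comparison, so -1 becomes a huge positive value and the
     * test is false. */
    if (cnt < sizeof(int))
        printf("less\n");
    else
        printf("not less\n");   /* this branch is taken */
    return 0;
}
```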
Thus, you should write the above if like this, in order to fix the bug and remove the signed/unsigned warning:
```c
if( cnt < 0 || (size_t)cnt < sizeof(buf) ) {
    perror("read error");
}
```
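An alternative sketch of the same idea (read_full is a hypothetical helper name, not from the original code): since read(2) is declared to return the signed type ssize_t, you can keep the count in that type and cast the unsigned side instead, so the comparison stays entirely signed.

```c
#include <unistd.h>   /* read(), ssize_t */
#include <stdio.h>    /* perror()        */
#include <stddef.h>   /* size_t          */

/* Keep the count in ssize_t, the type read(2) actually returns, and
 * cast len to ssize_t (this assumes len fits in ssize_t, which is
 * true for any reasonable buffer). */
static int read_full(int fd, void *buf, size_t len)
{
    ssize_t cnt = read(fd, buf, len);

    if (cnt < (ssize_t)len) {   /* catches both errors (-1) and short reads */
        perror("read error");
        return -1;
    }
    return 0;
}
```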
That alone speaks loudly about the problems:
1. size_t and the other unsigned data types were crafted to mostly work, rather than engineered, with accompanying language changes, to be explicitly robust and foolproof (see the sketch below).
2. Overall, C/C++ data types should simply have been signed, as Java correctly implemented.
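One classic illustration of how size_t is not foolproof (a contrived sketch): counting down over an array with an unsigned index.

```c
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    double a[4] = { 1.0, 2.0, 3.0, 4.0 };
    size_t n = sizeof(a) / sizeof(a[0]);

    /* BROKEN: i >= 0 is always true for an unsigned type, so this
     * loop never terminates (it wraps i around past 0):
     *
     * for (size_t i = n - 1; i >= 0; --i)
     *     printf("%f\n", a[i]);
     */

    /* Working version: test before decrementing. */
    for (size_t i = n; i-- > 0; )
        printf("%f\n", a[i]);

    return 0;
}
```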
If your values are so large that no signed type can hold them, you are either using too small a processor or too large a value for your language. If, as with money, every digit matters, most languages offer arbitrary-precision arithmetic that gives you unlimited accuracy. C/C++ just doesn't do this very well natively, and you have to be very explicit about types, as mentioned in many other answers here.
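For completeness, a minimal sketch of arbitrary-precision arithmetic in C using the GMP library (assuming GMP is an acceptable dependency here; link with -lgmp):

```c
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t balance, deposit;

    /* Values far beyond any native integer type. */
    mpz_init_set_str(balance, "123456789012345678901234567890", 10);
    mpz_init_set_str(deposit, "987654321098765432109876543210", 10);

    mpz_add(balance, balance, deposit);   /* balance += deposit */
    gmp_printf("new balance: %Zd\n", balance);

    mpz_clear(balance);
    mpz_clear(deposit);
    return 0;
}
```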