A sign is a concept that sits on top of bit patterns. Bitwise NOT (~) operates only on the bit pattern, not on the sign of the value: NOTing a signed value and NOTing an unsigned value with the same bit pattern produce identical results.
Having said that, let's look at the C standard: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf (a draft version is available for free). Section 6.3.1.1, page 51:
If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. (58) All other types are unchanged by the integer promotions.
I believe this means that char and short types are promoted to int (or unsigned int, depending on size) whenever we actually operate on them. This makes sense, because we want operations to run as fast as possible, so they are carried out at the machine's native word size.
Given this, we can see what is really happening. The machine performs all the operations at int size, since the operands of both == and ~ fit in an int, which I assume is 32 bits on your machine.
Now, the first thing to pay attention to is the value of 'a'. We take 0, NOT it, and get 0xFFFFFFFF. We assign this to a uint16_t, which truncates it to 0xFFFF. When we are ready to perform the comparison, we load that 0xFFFF, see that it is unsigned, and zero-extend it to 0x0000FFFF. The value of 'b' works the same way, except that when we load its 0xFFFF for the comparison we sign-extend it, giving 0xFFFFFFFF. Now for your cases:
- NOTing zero gives 0xFFFFFFFF, and compared against 0x0000FFFF it fails.
- We take our 0xFFFFFFFF, truncate it to 0xFFFF, and then zero-extend it to 0x0000FFFF, yielding the same value as 'a'.
And so on.