While it is commonly called the "sign bit", the binary representations we usually use do not have a true sign bit.
Most computers use two's-complement arithmetic. A negative number is formed by taking the one's complement (flipping all the bits) and adding one:
5 (decimal) -> 00000101 (binary)
one's complement: 11111010
add 1: 11111011, which is 0xFB in hex
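To make this concrete, here is a minimal C sketch (not part of the original answer) that performs the flip-and-add-one step on a byte:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 5;
    uint8_t neg = (uint8_t)(~x + 1);  /* flip all the bits, then add one */
    printf("0x%02X\n", neg);          /* prints 0xFB */
    printf("%d\n", (int8_t)neg);      /* prints -5 on two's-complement machines */
    return 0;
}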
This is why a signed byte holds values from -128 to +127 instead of -127 to +127:
1 0 0 0 0 0 0 0 = -128
1 0 0 0 0 0 0 1 = -127
- - -
1 1 1 1 1 1 1 0 = -2
1 1 1 1 1 1 1 1 = -1
0 0 0 0 0 0 0 0 = 0
0 0 0 0 0 0 0 1 = 1
0 0 0 0 0 0 1 0 = 2
- - -
0 1 1 1 1 1 1 0 = 126
0 1 1 1 1 1 1 1 = 127
(adding 1 to 127 gives:)
1 0 0 0 0 0 0 0, which, as we see at the top of this diagram, is -128.
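You can watch that wraparound happen in C. Note that converting an out-of-range value to a signed type is implementation-defined in standard C (before C23), but on two's-complement hardware it wraps exactly as the diagram shows:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t max = 127;                    /* 0111 1111 */
    int8_t wrapped = (int8_t)(max + 1);  /* 1000 0000 */
    printf("127 + 1 as int8_t: %d\n", wrapped);  /* prints -128 */
    return 0;
}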
If we had a true sign bit, the range of values would be the same in each direction (for example, -127 to +127), because one bit is reserved for the sign. If the most significant bit were a sign bit, we would have:
5 (decimal) -> 00000101 (binary)
-5 (decimal) -> 10000101 (binary)
The interesting cases here are zero and negative zero:
0 (decimal) -> 00000000 (binary)
-0 (decimal) -> 10000000 (binary)
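As a sketch (the helper name is mine, not a standard function), decoding a byte as sign-magnitude in C makes both zeros visible:

#include <stdint.h>
#include <stdio.h>

/* Decode a byte as sign-magnitude: the top bit is the sign,
   the low 7 bits are the magnitude. */
static int from_sign_magnitude(uint8_t b) {
    int magnitude = b & 0x7F;
    return (b & 0x80) ? -magnitude : magnitude;
}

int main(void) {
    printf("0x05 -> %d\n", from_sign_magnitude(0x05));  /*  5 */
    printf("0x85 -> %d\n", from_sign_magnitude(0x85));  /* -5 */
    printf("0x00 -> %d\n", from_sign_magnitude(0x00));  /*  0 */
    printf("0x80 -> %d\n", from_sign_magnitude(0x80));  /* 0: the -0 pattern collapses to plain 0 in a C int */
    return 0;
}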
We do not have a -0 in two's complement; the pattern that would be -0 is instead -128 (or more generally, one more than the largest positive value). However, we do have one in one's complement: all 1 bits is negative zero.
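A short C sketch contrasting the two, using unsigned bytes to stay within well-defined behavior:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t zero = 0x00;
    /* One's complement: negation is a plain bit flip, so
       negating zero yields the all-ones "negative zero". */
    uint8_t ones_neg = (uint8_t)~zero;        /* 0xFF */
    /* Two's complement: flip and add one, so negating
       zero wraps right back around to zero itself. */
    uint8_t twos_neg = (uint8_t)(~zero + 1);  /* 0x00 */
    printf("one's-complement -0: 0x%02X\n", ones_neg);
    printf("two's-complement -0: 0x%02X\n", twos_neg);
    return 0;
}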
Mathematically, -0 is 0. I vaguely remember a computer where -0 < 0, but I cannot find a reference to it now.