How does Color.FromArgb accept Int32 as a parameter?

The Color.FromArgb method accepts an Int32 as a parameter. The value of Color.White is #FFFFFFFF as ARGB, which is equal to 4,294,967,295 as a decimal number (way above int.MaxValue). What I don't understand is this: how can the method accept an int as a parameter if valid ARGB values exceed the maximum int value?
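For reference, a minimal sketch of the failure I am describing (the exact compiler message may vary by version):

 // 0xFFFFFFFF is a uint literal, and there is no FromArgb overload taking uint,
 // so this line does not compile:
 // var white = Color.FromArgb(0xFFFFFFFF);
 // error CS1503: Argument 1: cannot convert from 'uint' to 'int'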

+7
8 answers

Unfortunately, since Color.FromArgb accepts int instead of uint, you will need to use the unchecked keyword for colors whose ARGB values are larger than int.MaxValue.

 var white = Color.FromArgb(unchecked((int)0xFFFFFFFF)); 
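Equivalently, an unchecked block works; a minimal sketch:

 int argb;
 unchecked
 {
     // The uint constant 0xFFFFFFFF reinterprets as the int value -1.
     argb = (int)0xFFFFFFFF;
 }
 var white = Color.FromArgb(argb);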
+10

Your confusion lies in the sign. Int32.MaxValue is only 2,147,483,647 because Int32 is signed.

If you look at UInt32.MaxValue, that type is not signed, and as you can see, its maximum value is 4,294,967,295.

In binary, signed integers use the left-most bit to indicate whether the number is positive or negative. Unsigned integers have no sign bit and use that bit for magnitude instead, which doubles the range of positive values they can store.
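A small sketch that makes the difference visible:

 using System;

 // The same 32 bits read two ways: as an Int32 the top bit is the sign,
 // as a UInt32 it is just another magnitude bit.
 uint bits = 0xFFFFFFFF;
 Console.WriteLine(int.MaxValue);         // 2147483647
 Console.WriteLine(uint.MaxValue);        // 4294967295
 Console.WriteLine(bits);                 // 4294967295
 Console.WriteLine(unchecked((int)bits)); // -1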

I think the reason the Color class uses Int32 instead of an unsigned type is that unsigned ints are not CLS-compliant, as stated in this SO question.

+3

The practical problem is that you want to write an eight-digit hexadecimal number, but since the single-parameter version takes an int rather than a uint, it is awkward to express colors with an alpha value above 0x7F. This is because int uses one bit to represent the sign.

The simplest solution is to use the four-parameter version:

 var whiteColor = Color.FromArgb(0xFF, 0xFF, 0xFF, 0xFF); 
+1

The byte ordering of the 32-bit ARGB value is AARRGGBB. The most significant byte (MSB), represented by AA, is the alpha component of the value. The second, third, and fourth bytes, represented by RR, GG, and BB respectively, are the red, green, and blue color components.

http://msdn.microsoft.com/en-us/library/2zys7833(v=vs.110).aspx

It looks like the method simply treats the Int32 as 32 raw bits and interprets them as AARRGGBB: two nibbles (one byte) for each of the A, R, G, and B components.

This works because each digit of FFFFFFFF in hexadecimal corresponds to one nibble, and each nibble is 4 bits. The eight nibbles therefore map directly onto 32 bits, which can be stored in a single Int32.

In a little more detail:

The maximum value of a hexadecimal digit is F (15 in decimal).

The maximum value of 4 bits (one nibble) is 8 + 4 + 2 + 1 = 15.

So FFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111, which can be stored as an Int32.

As @icemanind pointed out, the first bit is reserved for the sign (+ or -), which limits the numeric value of an Int32 to 2,147,483,647.

It is not the numeric value but the bit pattern that matters to this method.
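A sketch of the packing described above; the helper names PackArgb and UnpackArgb are made up for illustration, and the real Color implementation may differ:

 // Hypothetical helpers that pack and unpack the AARRGGBB layout.
 static int PackArgb(byte a, byte r, byte g, byte b) =>
     (a << 24) | (r << 16) | (g << 8) | b;

 static void UnpackArgb(int argb, out byte a, out byte r, out byte g, out byte b)
 {
     a = (byte)(argb >> 24); // top byte: alpha
     r = (byte)(argb >> 16);
     g = (byte)(argb >> 8);
     b = (byte)argb;         // bottom byte: blue
 }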

0

According to the MSDN page for the Color.FromArgb Method (Int32), you are not passing the color's decimal value but its ARGB byte pattern. For example, to get a partially transparent red you would call Color.FromArgb(0x78FF0000). For white the pattern is 0xFFFFFFFF, which is a uint literal in C#, so you still need the cast shown above: Color.FromArgb(unchecked((int)0xFFFFFFFF)).

0

A Color consists of four important fields: A (alpha), R (red), G (green), and B (blue). Each of them is eight bits, and four eight-bit values fit exactly into an Int32. Although the MSB normally acts as the sign bit, that is ignored here.

0xFFFFFFFF may be a negative number when interpreted as an int, but it is white as a color.
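For example, a minimal sketch:

 // -1 has the same 32-bit pattern as 0xFFFFFFFF, so this is white.
 var white = Color.FromArgb(-1);
 Console.WriteLine(white.A);        // 255
 Console.WriteLine(white.ToArgb()); // -1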

0

You can write 0x00FFFFFF - 0x01000000 and the compiler will handle it correctly.
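In other words, a minimal sketch:

 // Constant int arithmetic: 0x00FFFFFF - 0x01000000 evaluates to -1,
 // whose bit pattern is 0xFFFFFFFF, so no cast or unchecked is needed.
 var white = Color.FromArgb(0x00FFFFFF - 0x01000000);
 Console.WriteLine(white.ToArgb()); // -1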

0

It does not matter.

#FFFFFFFF is 1111 1111 1111 1111 1111 1111 1111 1111 in binary.

In decimal, that is 4,294,967,295 if you are using unsigned ints. If you use signed ints, it is interpreted as -1.

But the actual decimal value does not matter; what matters is the individual bits.

A signed int can still store 4,294,967,296 distinct values; it is just that half of them are negative. The bits are the same.
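To see that the bits are the same, a small sketch:

 using System;

 int signed = -1;
 uint unsigned = unchecked((uint)signed);
 Console.WriteLine(Convert.ToString(signed, 2)); // 11111111111111111111111111111111
 Console.WriteLine(unsigned);                    // 4294967295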

-1

Source: https://habr.com/ru/post/1202352/

