Why do we sometimes write integers in hexadecimal instead of decimal?

I read the explanation about

int a = 0x1; //hexadecimal format 

But I still can't see why a programmer would write 0x1 or 0x2 instead of a simple integer 1 or 2 ...

Can someone explain this?

Thanks.

3 answers

One reason I've used it myself is that it helps organizationally and visually (i.e., for the people reading and writing the code) when it comes to flags.

i.e.

 int a = 0x1;
 int b = 0x2;
 int c = 0x4;
 int d = 0x8;
 int e = 0x10;

etc. They can then be bitwise-OR'd together cleanly, since each flag occupies its own bit.

For example, all of the above flags OR'd together give 0x1F, which is 11111 in binary: five distinct one-bit fields.

Then, if I want to remove a flag, I bitwise-XOR it out.

i.e.

 0x1F XOR 0x8 = 0x17 (binary 10111)
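
As a minimal Java sketch of the whole pattern (the FLAG_* names are mine, not from the answer):

 public class Flags {
     // Each flag occupies its own bit, so any combination is unambiguous.
     static final int FLAG_A = 0x1;  // 00001
     static final int FLAG_B = 0x2;  // 00010
     static final int FLAG_C = 0x4;  // 00100
     static final int FLAG_D = 0x8;  // 01000
     static final int FLAG_E = 0x10; // 10000

     public static void main(String[] args) {
         int all = FLAG_A | FLAG_B | FLAG_C | FLAG_D | FLAG_E; // 0x1F = 11111
         int withoutD = all ^ FLAG_D;                          // 0x17 = 10111
         boolean hasC = (withoutD & FLAG_C) != 0;              // test a flag
         System.out.println(Integer.toBinaryString(all));      // 11111
         System.out.println(Integer.toBinaryString(withoutD)); // 10111
         System.out.println(hasC);                             // true
     }
 }

One caveat: XOR only clears a flag that is known to be set (XOR-ing again would set it back); all & ~FLAG_D clears it unconditionally.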


There are several reasons why you might prefer a hexadecimal representation to a plain decimal value. The most common in computing is bit fields. Several people have already mentioned color codes, for example:

 red   = 0xFF0000 // 16711680 in decimal
 green = 0x00FF00 // 65280 in decimal
 blue  = 0x0000FF // 255 in decimal

Note that this color representation is not only more intuitive than trying to figure out what color a random integer such as 213545 might be, but it also takes less space than a 3-tuple such as (125, 255, 0) representing (R, G, B). The hex representation is an easy way to express the same idea as the 3-tuple, with much less overhead.
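
As a rough illustration (my own sketch, not part of the original answer), the shifts and masks below recover the (R, G, B) tuple from a packed hex color:

 public class ColorUnpack {
     public static void main(String[] args) {
         int color = 0xFF8800;         // packed as 0xRRGGBB
         int r = (color >> 16) & 0xFF; // 255
         int g = (color >> 8) & 0xFF;  // 136
         int b = color & 0xFF;         // 0
         System.out.println(r + ", " + g + ", " + b); // prints: 255, 136, 0
     }
 }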

Bit fields have many other applications as well; consider a spacetime value packed into a single 32-bit word:

 Represents x coordinate
 |  Represents y coordinate
 |  |  Represents z coordinate
 |  |  |  Represents t
 |  |  |  |
 1A 2B 3C 4D
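
A hypothetical sketch (mine, with the field layout taken from the diagram above) of packing four one-byte coordinates into one int:

 public class SpaceTime {
     // Pack four 8-bit coordinates into a 32-bit value laid out as 0xXXYYZZTT.
     static int pack(int x, int y, int z, int t) {
         return (x << 24) | (y << 16) | (z << 8) | t;
     }

     public static void main(String[] args) {
         int p = pack(0x1A, 0x2B, 0x3C, 0x4D);
         System.out.printf("0x%08X%n", p); // prints: 0x1A2B3C4D
     }
 }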

Another reason someone might use hexadecimal values is that it is sometimes easier to remember (and write) a byte as two hex characters rather than up to three decimal digits. Consider the x86 instruction set: I know that 0xC3 is ret. It is easier for me to remember opcodes as hex numbers in 00-FF than as decimals in 0-255 (I looked it up, and ret turns out to be 195), but your mileage may vary. For example, this is code from a project I was working on:

 public class x64OpcodeMapping {
     public static final Object[][] map = new Object[][] {
         { "ret",   0xC3 },
         { "iret",  0xCF },
         { "iretd", 0xCF },
         { "iretq", 0xCF },
         { "nop",   0x90 },
         { "inc",   0xFF },
     };
 }

There are clear advantages to using hexadecimal notation here (not to mention consistency). Finally, as Obicer mentions, hexadecimal codes are often used as error codes, sometimes grouped by their first hex digit. For instance:

 0x0X = fatal errors
 0x1X = user errors
 0x2X = transaction errors
 // ...
 // X is a wildcard

Under such a scheme, a minimal error list might look like this:

 0x00 = reserved
 0x01 = hash mismatch
 0x02 = broken pipe
 0x10 = user not found
 0x11 = user password invalid
 0x20 = payment method invalid
 // ...

Note that this also leaves room to add new errors under 0x0X if necessary. This answer turned out much longer than I expected, but I hope it sheds some light.
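
A small sketch (mine, assuming the grouping above) of testing an error's category by masking its high hex digit:

 public class ErrorCodes {
     static final int CATEGORY_MASK = 0xF0; // first hex digit = category
     static final int USER_ERROR    = 0x10;

     static boolean isUserError(int code) {
         return (code & CATEGORY_MASK) == USER_ERROR;
     }

     public static void main(String[] args) {
         System.out.println(isUserError(0x11)); // user password invalid -> true
         System.out.println(isUserError(0x02)); // broken pipe -> false
     }
 }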


There is actually no difference in the value itself; I think it comes down to consistency. For example, when you want to specify several colors, 0xffffff, 0xab32, 0x13, and 0x1 are consistent with one another and easy to read.

