Hexadecimal and decimal literals in C#

Some C# code I downloaded includes calculations written with hexadecimal literals.

e.g.:

 int length = ((((byte) hexstr[0x10]) & 0x80) > 0) ? 0x10 : 8; 

When I change this code to use ordinary decimal literals, like this:

 int length = ((((byte) hexstr[16]) & 128) > 0) ? 16 : 8; 

This gives the same result without any errors and still works correctly.
So I would like to know why most code uses hexadecimal literals, which seem harder to read than ordinary decimal numbers.

If anyone knows the reason, please let me know.

1 answer

It shows the bit pattern more clearly. 0x80 is clearly a value with the top nibble set to 8 and the bottom nibble set to 0, which is not at all obvious from the decimal value 128.
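
To illustrate, here is a minimal self-contained sketch (the variable names and the test value 0x93 are invented for the example) showing how the hex literal lines up with the bit being tested:

 using System;

 class HexDemo
 {
     static void Main()
     {
         byte b = 0x93;                               // bit pattern 1001 0011 (hypothetical test value)
         bool highBitSet = (b & 0x80) > 0;            // 0x80 is 1000 0000, so this tests the top bit
         // Since C# 7.0 you could also write the mask as a binary literal: 0b1000_0000 == 0x80
         Console.WriteLine(Convert.ToString(b, 2));   // prints 10010011
         Console.WriteLine(highBitSet);               // prints True
     }
 }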

As another example, if I wanted to mask the second and third bytes of an integer, I could use:

 int masked = original & 0xffff00; 

I wrote that code without a calculator or anything like it. There is no way I could do the same with the decimal equivalent - I cannot multiply 65535 by 256 in my head with any chance of success, and the resulting code would not be nearly as clear anyway.
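
To make the equivalence concrete, here is a small sketch (the value of original is made up for the example) showing that the hex mask and its decimal equivalent produce the same result, while only the hex form makes the byte boundaries visible:

 using System;

 class MaskDemo
 {
     static void Main()
     {
         int original = 0x12345678;                  // arbitrary example value
         int masked = original & 0xffff00;           // keeps bytes 2 and 3, clears the rest
         int maskedDecimal = original & 16776960;    // the same mask written in decimal (65535 * 256)

         Console.WriteLine(masked.ToString("x"));    // prints 345600
         Console.WriteLine(masked == maskedDecimal); // prints True
     }
 }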


Source: https://habr.com/ru/post/1380133/

