How does this "bit set" work in C?

unsigned int error_bits = ( X && Y ) | ( A == TRUE) << 1 | ( B == TRUE) << 2 | ( C == TRUE && D == TRUE) << 4; 

I believe the general concept here is to set each of the 32 bits to true or false based on certain conditions, with each bit representing a particular error.

In the syntax above, I got a little confused about what is being set, what is being shifted, and where and why.

Any clarification is helpful.

Thanks.

3 answers

You're right. After that line executes, the bit layout is:

    Bits 31-5: 0
    Bit 4:     (C == TRUE && D == TRUE)
    Bit 3:     0
    Bit 2:     (B == TRUE)
    Bit 1:     (A == TRUE)
    Bit 0:     (X && Y)

The list runs from the most significant to the least significant bit. Perhaps something like this would be more readable (a matter of taste):

    unsigned int error_bits = 0;
    if( X && Y )                 error_bits |= 1;  /* bit 0 */
    if( A == TRUE )              error_bits |= 2;  /* bit 1 */
    if( B == TRUE )              error_bits |= 4;  /* bit 2 */
    if( C == TRUE && D == TRUE ) error_bits |= 16; /* bit 4 */
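To see that the two forms agree, here is a minimal, self-contained sketch. The values of X, Y, A, B, C, and D are made up for illustration, and TRUE is assumed to be defined as 1:

    #include <stdio.h>

    #define TRUE 1

    int main(void)
    {
        /* Made-up inputs, just for demonstration. */
        int X = 1, Y = 1, A = 0, B = TRUE, C = TRUE, D = TRUE;

        /* The one-liner from the question. << binds tighter than |,
           so each comparison result is shifted before being OR-ed in. */
        unsigned int one_liner = ( X && Y ) | ( A == TRUE ) << 1
                               | ( B == TRUE ) << 2
                               | ( C == TRUE && D == TRUE ) << 4;

        /* The equivalent if-chain. */
        unsigned int error_bits = 0;
        if( X && Y )                 error_bits |= 1;
        if( A == TRUE )              error_bits |= 2;
        if( B == TRUE )              error_bits |= 4;
        if( C == TRUE && D == TRUE ) error_bits |= 16;

        printf("one_liner  = %u\n", one_liner);  /* 21: bits 0, 2, 4 set */
        printf("error_bits = %u\n", error_bits); /* 21 as well */
        return 0;
    }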

A == TRUE will evaluate to 1 if A is TRUE. 1 << 1 is 2, an integer with only the second bit set (counting from the lowest bit). 1 << 4 is 16, an integer with only the fifth bit set.
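As a quick check, a small sketch that prints those shift results:

    #include <stdio.h>

    int main(void)
    {
        /* Shifting 1 left by n moves its single set bit to position n. */
        printf("1 << 1 = %d\n", 1 << 1); /* 2: only bit 1 set */
        printf("1 << 2 = %d\n", 1 << 2); /* 4: only bit 2 set */
        printf("1 << 4 = %d\n", 1 << 4); /* 16: only bit 4 set */
        return 0;
    }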


The value of error_bits is set as follows (see the sketch after this list for how the individual bits might be tested later):

  • The least significant bit (b0) is set when (X && Y) is true, that is, both X and Y are true.
  • b1 is set when A is true
  • b2 is set when B is true
  • b3 is always clear
  • b4 is set when both C and D are true
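Here is a minimal sketch of how such a value might be checked afterwards, assuming error_bits was built as in the question. The flag names are made up for illustration:

    #include <stdio.h>

    /* Hypothetical names for each flag, matching the layout above. */
    #define ERR_XY (1u << 0) /* b0: X && Y             */
    #define ERR_A  (1u << 1) /* b1: A was true         */
    #define ERR_B  (1u << 2) /* b2: B was true         */
    #define ERR_CD (1u << 4) /* b4: C and D both true  */

    int main(void)
    {
        unsigned int error_bits = ERR_XY | ERR_CD; /* example: bits 0 and 4 set */

        if (error_bits & ERR_A)
            printf("error A occurred\n");  /* does not print here */
        if (error_bits & ERR_CD)
            printf("error C+D occurred\n"); /* prints */
        return 0;
    }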
