Comparing characters with hexadecimal values

Over the last day I've been chasing a nasty bug in my code that, after some searching, seems to be related to comparing char values with hex constants. I'm using gcc 4.4.1 on Windows. I reproduced the problem with the simple code below:

 char c1 = 0xFF;
 char c2 = 0xFE;
 if (c1 == 0xFF && c2 == 0xFE) {
     // do something
 }

Surprisingly, the above code never enters the if branch. I have absolutely no idea why and would be very grateful for help. This is so absurd that the explanation is probably (as always) some huge mistake on my part that I'm completely overlooking.

If I replace the chars above with unsigned chars, it works, but only in some cases, and I'm struggling to figure out what's going on. Also, if I cast the hexadecimal values to char, then it correctly enters the branch, like this:

 if (c1 == (char)0xFF && c2 == (char)0xFE) {
     // do something
 }

What does this mean? Why does it happen? Is a raw hex literal interpreted as a char by default? For the curious: the spot in my code where I first noticed this compares the first 2 bytes of a stream against a given hex value and its reverse, to identify the byte order.
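
A stripped-down sketch of that check, with an illustrative function name and UTF-16-style BOM values (not my exact code):

 /* Taking the bytes as unsigned char keeps the values in 0..255,
  * so the comparisons behave as written. */
 int detect_byte_order(const unsigned char *buf)
 {
     if (buf[0] == 0xFF && buf[1] == 0xFE)
         return 0;  /* little-endian mark */
     if (buf[0] == 0xFE && buf[1] == 0xFF)
         return 1;  /* big-endian mark */
     return -1;     /* no recognizable byte-order mark */
 }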

Any help is appreciated.

+4
5 answers

A plain char can be either signed or unsigned. If it is unsigned, everything works as you expected. If it is signed, then assigning 0xFF to c1 stores the value -1, which is promoted to the int -1 when the comparison is performed; 0xFF, however, is a plain positive int, so the comparison -1 == 0xFF is false.

Note that char, signed char, and unsigned char are three distinct types, but two of them have the same representation (and char is one of those two).
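
A minimal sketch that makes the promotion visible (the printed values assume a platform where plain char is signed, like the asker's gcc on Windows):

 #include <stdio.h>

 int main(void)
 {
     char c1 = 0xFF;  /* on a signed-char platform this stores -1
                         (the conversion is implementation-defined) */
     printf("%d\n", c1);                        /* -1 */
     printf("%d\n", c1 == 0xFF);                /* 0: -1 == 255 is false */
     printf("%d\n", (unsigned char)c1 == 0xFF); /* 1: 255 == 255 */
     return 0;
 }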

+7

The literal 0xff is not a char, it is a (signed) int. When you assign this value to a char variable it fits, but whether your char type is signed or unsigned affects how it is promoted in expressions (see below).

In an expression like if (c1 == 0xff), the variable c1 is promoted to int so that it can be compared with the int 0xff. What it is promoted to depends on whether char is signed or not.

The bottom line is that you can do one of two things (a short sketch of both follows the list):

  • Make sure you use unsigned char so that it zero-extends to the correct int. By this I mean that an unsigned char holding 0xff will become (for a 4-byte int) 0x000000ff (so it is still 255), whereas a signed one will sign-extend to 0xffffffff (so it is -1).

  • Cast the literal to the same type as the variable, which you are already doing with (char)0xFF.
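
A short sketch exercising both options (the 0x000000ff notation above assumes a 4-byte int):

 #include <stdio.h>

 int main(void)
 {
     /* Option 1: an unsigned char zero-extends to 255, matching the
        int literal 0xFF. */
     unsigned char u = 0xFF;
     printf("%d\n", u == 0xFF);        /* 1 */

     /* Option 2: cast the literal so both sides go through the same
        char type, whether it is signed or unsigned. */
     char c = 0xFF;
     printf("%d\n", c == (char)0xFF);  /* 1 on either kind of platform */
     return 0;
 }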

+4

When comparing char with hex, you have to be careful:

Using the == operator to compare char with 0x80 always results in an error?

To be safe, I would recommend this hex escape syntax for character constants:

 if (c1 == '\xFF' && c2 == '\xFE') {
     // do something
 }

Avoid the cast; it is not needed and is not type-safe.

This tells the compiler that the constant is a char, not an int, which solves your problem.

The clang compiler will also warn you about this:

 comparison of constant 128 with expression of type 'char' is always false [-Werror,-Wtautological-constant-out-of-range-compare]
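
For reference, a minimal fragment of the kind that provokes that diagnostic, assuming a signed 8-bit char whose range of -128..127 can never hold 128:

 #include <stdio.h>

 int main(void)
 {
     char c = 0;
     /* With a signed 8-bit char this comparison can never be true,
        so clang flags it as tautologically false. */
     if (c == 0x80)
         printf("never reached on a signed-char platform\n");
     return 0;
 }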

+1

A char holding 0xFE is converted to a negative integer in the comparison, while the constants in the expression are positive integers, so the two sides never match.
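
A small illustration of this (the printed values assume plain char is signed):

 #include <stdio.h>

 int main(void)
 {
     char c2 = 0xFE;
     /* The char converts to -2 in the comparison, while the literal
        0xFE is the positive int 254, so they can never be equal. */
     printf("c2 = %d, 0xFE = %d\n", c2, 0xFE);  /* c2 = -2, 0xFE = 254 */
     return 0;
 }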

0

I solved a problem like this by casting my variables to UINT16 (which suited my compiler). In your case, casting c1 and c2 to unsigned char before the comparison yields the positive values the constants expect:

 char c1 = 0xFF;
 char c2 = 0xFE;
 if ((unsigned char)c1 == 0xFF && (unsigned char)c2 == 0xFE) {
     // do something
 }
0

Source: https://habr.com/ru/post/1390783/

