Decoding Byte Encoding

I have an embedded device that sends me a UTC date/time in this format (4 bytes):

 buffer.push_back((BYTE)(time_utc & 0x000000FF));
 buffer.push_back((BYTE)((time_utc & 0x0000FF00) >> 8));
 buffer.push_back((BYTE)((time_utc & 0x00FF0000) >> 16));
 buffer.push_back((BYTE)((time_utc & 0xFF000000) >> 24));

On the server, I receive the bytes and store them in socket_buf at indices 0-3, then decode them with the following logic:

 mypkt.dateTime = ((socket_buf[0] << 24) + (socket_buf[1] << 16) + (socket_buf[2] << 8) + (socket_buf[3] << 0));

But I'm not sure it decodes correctly, because the date I get is wrong. Can someone suggest the correct way to decode it? I check the date with the Linux date command (16711840 is the number I get after decoding):

 #date -d @16711840 
+4
3 answers

Have you cleared your socket_buf?

Are you sure your machine is big-endian?

In addition, I suggest you use the OR operator (|) instead of plus; with non-overlapping byte fields the result is the same, but | makes the intent clearer and cannot produce unexpected carries.

This can save you some debugging time.

 mypkt.dateTime = (long)((socket_buf[0] << 24) | (socket_buf[1] << 16) | (socket_buf[2] << 8) | (socket_buf[3] << 0));
-3

The writing code is little-endian: it sends the least significant byte first.

Your reading code expects big-endian: it takes byte zero and shifts it left by 24 bits.

Note that neither piece of code depends on the machine's native endianness; the two sides simply don't agree with each other.
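
To see the mismatch concretely, here is a minimal sketch; the sample timestamp value is illustrative, not from the question:

 #include <cstdint>
 #include <cstdio>
 #include <vector>

 int main() {
     uint32_t time_utc = 0x12345678;  // illustrative sample value
     std::vector<uint8_t> buffer;

     // Sender: least significant byte first, as in the question
     buffer.push_back((uint8_t)(time_utc & 0xFF));
     buffer.push_back((uint8_t)((time_utc >> 8) & 0xFF));
     buffer.push_back((uint8_t)((time_utc >> 16) & 0xFF));
     buffer.push_back((uint8_t)((time_utc >> 24) & 0xFF));

     // Receiver: treats byte 0 as the MOST significant byte
     uint32_t decoded = ((uint32_t)buffer[0] << 24) | ((uint32_t)buffer[1] << 16)
                      | ((uint32_t)buffer[2] << 8)  | ((uint32_t)buffer[3] << 0);

     printf("sent 0x%08X, decoded 0x%08X\n", (unsigned)time_utc, (unsigned)decoded);
     // prints: sent 0x12345678, decoded 0x78563412 (bytes reversed)
     return 0;
 }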

Try this instead:

 mypkt.dateTime = ((socket_buf[0] << 0) + (socket_buf[1] << 8) + (socket_buf[2] << 16) + ((uint32_t)socket_buf[3] << 24)); 

The cast is necessary (though only on the last shift), because byte values 0x80 through 0xFF would be promoted to signed int, and it is undefined behavior to shift bits into the sign bit (thanks @Lundin).

NB: 16711840 is not a recent date-time value as a Unix timestamp, whichever byte order you use to interpret it, so you may have other problems elsewhere.

+8

Since socket_buf is declared as unsigned char, socket_buf[0] << 24 is a bug.

socket_buf[0] is a small integer type and gets promoted to int before the shift. Whether int is 16 or 32 bits on this particular system hardly matters, because the program will misbehave either way. In both cases the operand ends up as a signed type: you left-shift the signed value so that data is shifted into the sign bit, and then you do addition on top of that.
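
A minimal sketch of the promotion problem, assuming 32-bit int; the variable names are illustrative:

 #include <cstdint>
 #include <cstdio>

 int main() {
     unsigned char b = 0x80;  // any byte value 0x80..0xFF triggers the problem

     // b is promoted to (signed) int before the shift, so this would shift
     // a 1 into the sign bit: undefined behavior with a 32-bit int.
     // int bad = b << 24;

     uint32_t good = (uint32_t)b << 24;   // cast first, then shift an unsigned value
     printf("0x%08X\n", (unsigned)good);  // prints 0x80000000
     return 0;
 }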

The correct way to write the decoding is:

 mypkt.dateTime = ( ((uint32_t)socket_buf[0] << 24) | ((uint32_t)socket_buf[1] << 16) | ((uint32_t)socket_buf[2] << 8) | ((uint32_t)socket_buf[3] << 0) ); 

Also, you seem to be swapping the byte order between encoding and decoding. I don't quite see how this is related to endianness; your code simply uses an inconsistent byte order between buffer and sock_buf.
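
For reference, a round-trip sketch with a consistent order on both sides (byte 0 least significant); BYTE is assumed to be unsigned char, and the helper names are hypothetical:

 #include <cassert>
 #include <cstdint>
 #include <vector>

 typedef unsigned char BYTE;  // assumption: matches the question's BYTE

 // Encode as the device does: least significant byte first
 std::vector<BYTE> encode(uint32_t time_utc) {
     std::vector<BYTE> buffer;
     buffer.push_back((BYTE)(time_utc & 0x000000FF));
     buffer.push_back((BYTE)((time_utc & 0x0000FF00) >> 8));
     buffer.push_back((BYTE)((time_utc & 0x00FF0000) >> 16));
     buffer.push_back((BYTE)((time_utc & 0xFF000000) >> 24));
     return buffer;
 }

 // Decode in the matching order: byte 0 is least significant
 uint32_t decode(const std::vector<BYTE>& socket_buf) {
     return ((uint32_t)socket_buf[0] << 0)  |
            ((uint32_t)socket_buf[1] << 8)  |
            ((uint32_t)socket_buf[2] << 16) |
            ((uint32_t)socket_buf[3] << 24);
 }

 int main() {
     uint32_t t = 1234567890u;  // arbitrary test timestamp
     assert(decode(encode(t)) == t);
     return 0;
 }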

0

Source: https://habr.com/ru/post/1483551/

