Interpreting uint16_t as int16_t

Is there a portable and safe way to interpret the bit pattern of a boost::uint16_t as a boost::int16_t ? I have a uint16_t which, as I know, actually holds a signed 16-bit integer encoded as little-endian. I need to do some signed arithmetic on this value, so how do I convince the compiler that it is already a signed value?

If I'm not mistaken, static_cast<int16_t> converts the value, possibly by changing its bit pattern.

0
6 answers

If you are looking for something other than a cast, just copy its memory representation into a boost::int16_t , since that representation is already the one you are starting with.

Edit: If you need this to work on a big-endian machine, simply copy the bytes in reverse order. Use std::copy and std::reverse .
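A minimal sketch of that approach, assuming the value arrives as two raw little-endian bytes; the function name and the host_is_big_endian flag are illustrative, not part of the original answer:

    #include <algorithm>
    #include <boost/cstdint.hpp>

    boost::int16_t decode_le_int16(const unsigned char le_bytes[2], bool host_is_big_endian)
    {
        unsigned char buf[2] = { le_bytes[0], le_bytes[1] };
        if (host_is_big_endian)
            std::reverse(buf, buf + 2);                       // swap to the host byte order
        boost::int16_t result;
        std::copy(buf, buf + 2,
                  reinterpret_cast<unsigned char*>(&result)); // copy the representation into a signed type
        return result;
    }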

+2

Just use a static_cast. If you are on a platform that represents the two types differently, the resulting change in bit pattern is exactly what you want.
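A minimal sketch of what this answer recommends; the example value is illustrative, and the printed result assumes a two's-complement platform:

    #include <boost/cstdint.hpp>
    #include <iostream>

    int main()
    {
        boost::uint16_t raw = 0xFFFFu;                            // bit pattern of -1 in two's complement
        boost::int16_t value = static_cast<boost::int16_t>(raw);
        std::cout << value << '\n';                               // prints -1 on two's-complement platforms
        return 0;
    }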

reinterpret_cast , or any equivalent pointer cast, is undefined (not implementation-defined). That means the compiler is free to do unpleasant things, such as caching the unconverted value in a register and missing the update. Also, if you were on a platform where the bit patterns differed, bypassing the conversion would leave you with garbage (just like pretending a float is an int and adding 1 to it).

For more details, see Signed to unsigned conversion in C - is it always safe? , but the summary is that C, in a roundabout way, defines the static cast (really just a plain C cast) as exactly what you get by treating the bits the same on x86 (which uses two's complement).

Do not play chicken with the compiler ("it has always worked on this compiler, so surely they will not break everyone's code by changing it"). History has proven that attitude wrong.

+1

Mask out the sign bit, store the remaining bits in a signed int, then apply the sign using the sign bit.
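A minimal sketch of that idea, assuming the source value is two's complement; the function name is illustrative:

    #include <boost/cstdint.hpp>

    boost::int16_t decode(boost::uint16_t u)
    {
        boost::int32_t value = u & 0x7FFF;   // everything except the sign bit
        if (u & 0x8000)                      // sign bit set?
            value -= 0x8000;                 // re-apply its two's-complement weight (-32768)
        return static_cast<boost::int16_t>(value);
    }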

0

I assume that *(boost::int16_t*)(&signedvalue) will work, as long as your system architecture is little-endian. Endianness changes the behaviour, since after the above operation the CPU treats signedvalue as a native boost::int16_t (which means that if your architecture is big-endian, this will go wrong).
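A minimal sketch of what this answer describes; the variable names are illustrative, it assumes a little-endian two's-complement host, and, as the next answer notes, this kind of cast is contentious:

    #include <boost/cstdint.hpp>

    boost::uint16_t signedvalue = 0xFFFFu;                           // bits of -1, read as little-endian
    boost::int16_t reinterpreted = *(boost::int16_t*)(&signedvalue); // -1 on a little-endian,
                                                                     // two's-complement machine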

0

Edit
To avoid the controversy over *(int16_t*)(&input_value) , I changed the last statement in the code block to use memcpy and mention *(int16_t*)(&input_value) as an aside. (It used to be the other way around.)

On a big-endian destination machine, you need to do a byte swap and then interpret the result as a signed integer:

    // big_endian() is whatever endianness check the surrounding code provides
    if (big_endian()) {
        input_value = (uint16_t)((input_value & 0xff00u) >> 8) |
                      (uint16_t)((input_value & 0x00ffu) << 8);    // swap the two bytes
    }
    int16_t signed_value;
    std::memcpy(&signed_value, &input_value, sizeof(int16_t));     // reinterpret the bits as signed

On most computers, you can change the memcpy call to signed_value = *(int16_t*)(&input_value); . This is, strictly speaking, undefined behavior. It is also an extremely common idiom, and almost all compilers do the "right thing" with this statement. But, as is always the case with language extensions, YMMV.

0

As a different approach, the best way to maximize (but not guarantee) portability is to store these signed 16-bit integers as signed 16-bit integers in network byte order, rather than as unsigned 16-bit integers in little-endian order. This puts the onus on the target machine to convert those network-order signed 16-bit integers into native-form signed 16-bit integers. Not every machine supports that capability, but most machines that can connect to a network do. After all, the file has to get to the target machine by some mechanism, so the odds are pretty good that it will understand network order.
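A minimal sketch of reading such a value on the receiving side, assuming a POSIX ntohs() is available; the function and buffer names are illustrative:

    #include <arpa/inet.h>   // ntohs
    #include <cstdint>
    #include <cstring>

    int16_t read_network_int16(const unsigned char* wire_bytes)
    {
        uint16_t net;
        std::memcpy(&net, wire_bytes, sizeof net);   // two bytes as they came off the wire
        uint16_t host = ntohs(net);                  // network (big-endian) -> host order
        int16_t value;
        std::memcpy(&value, &host, sizeof value);    // reinterpret the bits as signed
        return value;
    }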

On the other hand, if you are sending this binary data to some embedded computer over some kind of proprietary serial interface, the answer to the portability question is the same answer you get when you tell your doctor, "It hurts when I do this."

0

Source: https://habr.com/ru/post/1201938/

