Edit
To avoid controversy over *(int16_t*)(&input_value), I changed the last statement in the code block to use memcpy and moved the *(int16_t*)(&input_value) form into an addendum below. (Originally it was the other way around.)
On a big-endian target machine, you need to swap the bytes and then interpret the result as a signed integer:
    if (big_endian()) {
        input_value = (uint16_t)((input_value & 0xff00u) >> 8) |
                      (uint16_t)((input_value & 0x00ffu) << 8);
    }
    int16_t signed_value;
    std::memcpy(&signed_value, &input_value, sizeof(int16_t));
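For reference, here is a self-contained sketch of the same conversion. The big_endian() helper and the read_le_int16 wrapper are not part of the snippet above; they are just one way to fill in the missing pieces, assuming input_value holds the two bytes of a little-endian 16-bit value:

    #include <cstdint>
    #include <cstring>

    // Hypothetical runtime check; the snippet above assumes big_endian() exists.
    bool big_endian() {
        const std::uint16_t probe = 1;
        unsigned char first_byte;
        std::memcpy(&first_byte, &probe, 1);
        // On a big-endian machine the most significant byte is stored first,
        // so the first byte of the value 1 is 0.
        return first_byte == 0;
    }

    // Hypothetical wrapper around the conversion shown above.
    std::int16_t read_le_int16(std::uint16_t input_value) {
        if (big_endian()) {
            // Swap the two bytes so the little-endian value is in native order.
            input_value = (std::uint16_t)((input_value & 0xff00u) >> 8) |
                          (std::uint16_t)((input_value & 0x00ffu) << 8);
        }
        std::int16_t signed_value;
        std::memcpy(&signed_value, &input_value, sizeof(std::int16_t));
        return signed_value;
    }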
On most platforms, you can change the memcpy call to signed_value = *(int16_t*)(&input_value);. This is, strictly speaking, undefined behavior, but it is also an extremely common idiom, and almost all compilers do the "right thing" with this statement. But, as is always the case with language extensions, YMMV.
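If you want to see what that looks like in place, this is the last part of the block above with the memcpy replaced by the cast:

    // Same result as the memcpy version on typical compilers, but formally
    // undefined behavior because it violates the strict-aliasing rule.
    int16_t signed_value = *(int16_t*)(&input_value);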