I am trying to translate the following Python code into C++:
```python
import struct
import binascii

inputstring = ("0000003F" "0000803F" "AD10753F" "00000080")
num_vals = 4

for i in range(num_vals):
    rawhex = inputstring[i*8:(i*8)+8]
    # unhexlify the 8 hex chars into 4 bytes, then unpack as a little-endian float
    val = struct.unpack("<f", binascii.unhexlify(rawhex))[0]
    print(val)
```
That is, it reads 32-bit values from a hex-encoded string, turns each one into a byte array with unhexlify, and interprets it as a little-endian floating-point value.
The following almost works, but the code looks ugly (and it handles the last value, 00000080, incorrectly):
```cpp
#include <sstream>
```

(compiles on OS X 10.7 / gcc-4.2.1 with a simple `g++ blah.cpp`)
In particular, I would like to get rid of the `BIG_ENDIAN` macro, as I am sure there is a better way to do this, as discussed in this post.
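For what it's worth, here is a minimal sketch (not the code from the question; the helper name is invented) of one common way to do this without an endianness macro: parse each pair of hex digits as a byte, assemble the 32-bit pattern with shifts, and memcpy it into a float. It assumes IEEE-754 floats and a 32-bit unsigned int, which holds on the platform mentioned above.

```cpp
#include <cstdlib>
#include <cstring>
#include <string>
#include <iostream>

// Hypothetical helper: decode 8 hex characters that encode the 4 bytes of a
// little-endian IEEE-754 single-precision value (what struct.unpack("<f") does).
float hexLittleEndianToFloat(const std::string& rawhex)
{
    // Parse each pair of hex digits as one byte, in the order they appear.
    unsigned long bytes[4];
    for (int b = 0; b < 4; ++b)
        bytes[b] = std::strtoul(rawhex.substr(b * 2, 2).c_str(), NULL, 16);

    // The first byte is the least significant one, so build the 32-bit
    // pattern by value; no host-endianness check is needed.
    unsigned int bits = static_cast<unsigned int>(
        bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24));

    float value;
    std::memcpy(&value, &bits, sizeof value);   // reinterpret the bit pattern
    return value;
}

int main()
{
    const std::string inputstring = "0000003F" "0000803F" "AD10753F" "00000080";
    const std::string::size_type num_vals = inputstring.size() / 8;

    for (std::string::size_type i = 0; i < num_vals; ++i)
        std::cout << hexLittleEndianToFloat(inputstring.substr(i * 8, 8)) << "\n";
    // prints roughly: 0.5, 1, 0.957286, -0
    return 0;
}
```

Because the bit pattern is built arithmetically rather than by copying raw bytes into memory, the same code yields the correct value on both little-endian and big-endian hosts.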
A few other random facts: I can't use Boost (too heavy a dependency for the project). A string usually contains between 1536 (8^3 * 3) and 98304 (32^3 * 3) float values, at most 786432 (64^3 * 3).
(edit 2: added another value, 00000080 == -0.0)
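On that edited value: the string 00000080 is the byte sequence 00 00 00 80, which read little-endian is the bit pattern 0x80000000; only the sign bit is set, so it decodes to negative zero. A tiny check, under the same IEEE-754 / 32-bit unsigned int assumptions as above:

```cpp
#include <cstring>
#include <iostream>

int main()
{
    unsigned int bits = 0x80000000u;  // sign bit set, exponent and mantissa all zero
    float f;
    std::memcpy(&f, &bits, sizeof f);
    std::cout << f << "\n";           // prints -0
    return 0;
}
```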