Is the IEEE 754 floating-point format well defined across different platforms, in terms of both the bit representation and the behavior?
I want to add the following to my code (for the initial version):
#include <limits>
#include <climits>

static_assert(std::numeric_limits<float>::is_iec559, "Only support IEC 559 (IEEE 754) float");
static_assert(sizeof(float) * CHAR_BIT == 32, "Only support float => Single Precision IEC 559 (IEEE 754)");
static_assert(std::numeric_limits<double>::is_iec559, "Only support IEC 559 (IEEE 754) double");
static_assert(sizeof(double) * CHAR_BIT == 64, "Only support double => Double Precision IEC 559 (IEEE 754)");
static_assert(std::numeric_limits<long double>::is_iec559, "Only support IEC 559 (IEEE 754) long double");
static_assert(sizeof(long double) * CHAR_BIT == 128, "Only support long double => Extended Precision IEC 559 (IEEE 754)");
If I write my float / double / long double bits out in binary format, can they be moved between systems without further interpretation? i.e.:
void write(std::ostream& stream, double value)
{
    stream.write(reinterpret_cast<char const*>(&value), sizeof(value));
}

....

double read(std::istream& stream)
{
    double value;
    stream.read(reinterpret_cast<char*>(&value), sizeof(value));
    return value;
}
Or do I need to split the double into integer components for transport (as suggested by this answer)?
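For reference, the "split into integer components" approach usually means something like the following sketch built on `std::frexp` / `std::ldexp` (the helper names `pack_double` / `unpack_double` are my own, not from the linked answer). It trades compactness for independence from any in-memory float layout:

```cpp
#include <cstdint>
#include <cmath>

// Sketch: represent a double as (mantissa, exponent) integers.
// frexp yields a fraction in [0.5, 1); scaling it by 2^62 fits the
// <= 53 significant bits of a double exactly into an int64_t.
void pack_double(double value, std::int64_t& mantissa, std::int32_t& exponent)
{
    int exp = 0;
    double frac = std::frexp(value, &exp);                       // frac in [0.5, 1), or 0
    mantissa = static_cast<std::int64_t>(std::ldexp(frac, 62));  // exact scaling by 2^62
    exponent = exp;
}

double unpack_double(std::int64_t mantissa, std::int32_t exponent)
{
    // Undo the 2^62 scaling and reapply the exponent.
    return std::ldexp(static_cast<double>(mantissa), exponent - 62);
}
```

The two integers can then be sent through whatever portable integer serialization the protocol already has; the round trip is exact for finite values, though NaN and infinity would need separate handling.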
The difference is that I am willing to restrict myself to IEEE 754 representations. Does that restriction basically solve my floating-point binary-compatibility problem, or do I need to take further steps?
Note: for non-compliant platforms (if and when I find them), I am prepared to special-case the code so that they read/write IEEE 754 converted to/from their native representation. But I want to know whether fixing the bit format and endianness is enough to support cross-platform storage/transport.