Will different math processors get the same floating point results?

I am developing desktop software with unit tests that should pass on Linux, UNIX, and Windows.

Imagine this unit test, which asserts that the single-precision IEEE value 1.26743237e+015f converts to a particular string:

    void DataTypeConvertion_Test::TestToFloatWide()
    {
        CDataTypeConversion<wchar_t> dataTypeConvertion;
        float val = 1.26743237e+015f;
        wchar_t *valStr = (wchar_t*)dataTypeConvertion.ToFloat(val);
        std::wcout << valStr << std::endl;
        int result = wcscmp(L"1.26743E+015", valStr);
        CPPUNIT_ASSERT_EQUAL(0, result);
        delete [] valStr;
    }

My question is: will every OS and processor convert this float to the string "1.26743E+015", given that the float is IEEE? I ask because I know that math processors do not always return exact results, and I wonder whether they could give different results on different processors, since they may have different hardware implementations of IEEE floating-point operations inside the processor architecture.

1 answer

The answer, unfortunately, is most likely no. Converting a floating-point number to and from an arbitrary string is not guaranteed to produce identical results across platforms.

In practice, virtually all the processors you are likely to encounter comply with the IEEE 754 standard, and the standard is quite strict in how it defines floating-point arithmetic. You can add, subtract, multiply, and divide floating-point numbers with a reasonable expectation of getting bit-identical results on different platforms.

The standard also defines conversion to and from character representations. It is tight enough that conforming implementations are broadly compatible, but it leaves some room for manoeuvre: not every number is required to convert to exactly the same string.

You should also be aware that the default precision and format can vary between platforms (for example, the pre-2015 Microsoft C runtime printed three exponent digits, as in 1.26743E+015, while glibc prints the C99 minimum of two, 1.26743E+15).

Having said all that, you can achieve the results you want if (a) you control the width and precision explicitly rather than relying on the defaults, (b) you choose a precision within the maximum available for the given format, and (c) you avoid NaN, infinities, and the like.

The article here is very useful.


Source: https://habr.com/ru/post/970117/
