This will involve a bit of hand-waving; I was up too late last night watching the World Series, so don't hold me to strict accuracy.
The rules for evaluating floating point expressions are somewhat flexible, and compilers typically handle floating point expressions even more flexibly than the rules formally allow. This makes evaluating floating point expressions faster, at the cost of slightly less predictable results. Speed matters for floating point calculations. Java originally made the mistake of imposing exact requirements on floating point expressions, and the numerics community howled in pain. Java had to bow to the real world and relax those requirements.
double f();
double g();

double d = f() + g();      // 1
double dd1 = d;            // 2
double dd2 = f() + g();    // 3
On x86 hardware (i.e., on approximately every desktop system in existence), floating point calculations are actually done with 80 bits of precision (unless you set certain control flags that kill performance, as Java originally required), even though double and float are 64 and 32 bits respectively. So for arithmetic operations the operands are converted up to 80 bits and the results are converted back down to 64 or 32 bits. Those conversions are slow, so the generated code usually delays them as long as possible, doing all the intermediate calculations at 80-bit precision.
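If you want to see which evaluation mode your own compiler uses, C99 exposes it as the FLT_EVAL_METHOD macro in <float.h>. A minimal check (my own illustration, not part of the original example; the output depends entirely on your compiler and target):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* 0: evaluate in the declared type; 2: evaluate everything in long
       double, which is the 80-bit extended format on x87 builds. */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    printf("double mantissa: %d bits, long double mantissa: %d bits\n",
           DBL_MANT_DIG, LDBL_MANT_DIG);   /* typically 53 and 64 on x86 */
    return 0;
}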
But C and C++ require that when a value is stored in a floating point variable, that conversion must be performed. So, formally, at line // 1 the compiler must convert the sum back to 64 bits in order to store it in d. The value of dd1 at line // 2 must then be computed from the value that was stored in d, i.e. a 64-bit value, while the value of dd2 at line // 3 can be computed from f() + g(), i.e. the full 80-bit value. Those extra bits can matter, and the value of dd1 can differ from the value of dd2.
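Here is a self-contained sketch of the effect (my own example, with made-up f() and g(); whether you actually see a difference depends on your compiler, target, and flags):

#include <stdio.h>

/* Stand-in functions: any two values whose exact sum needs more than the
   53 mantissa bits of a double will do. */
double f(void) { return 1.0 / 3.0; }
double g(void) { return 2.0 / 7.0; }

int main(void)
{
    double d = f() + g();   /* formally, rounded to 64 bits when stored */

    /* The right-hand side may be evaluated at 80-bit precision and compared
       directly against the 64-bit value stored in d.  With 32-bit x87 code
       and the usual relaxed settings this can print "different"; with
       SSE-based double arithmetic (the usual x86-64 default) it prints
       "equal". */
    printf("%s\n", d == f() + g() ? "equal" : "different");
    return 0;
}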
And the compiler will often hang on to the 80-bit value of f() + g() and use that instead of the value stored in d when it computes dd1. This is a non-conforming optimization, but as far as I know every compiler does it by default. They all have command-line switches to get the strictly required behavior, so if you want slower code you can have it.
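For example (flag names to the best of my recollection; check your compiler's documentation): GCC has -ffloat-store and, for C, -fexcess-precision=standard, and MSVC has /fp:strict. Building the sketch above as 32-bit x87 code with the default relaxed settings is the easiest way to see the extra bits leak through; adding one of these switches asks for the strictly required, slower behavior.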
For serious number crunching, speed is critical, so this flexibility is welcome, and number-crunching code is written carefully so that it isn't sensitive to these subtle differences. People get PhDs in how to make floating point code fast and effective, so don't feel bad that the results you see don't seem to make sense. They don't, but they're close enough that, handled carefully, they give correct results without a speed penalty.