I was recently analyzing an old piece of code compiled with VS2005, because its numerical behavior differs between the debug build (no optimizations) and the release build (/O2 /Oi /Ot). The code (reduced) looks like this:
#include <math.h>
#include <stdio.h>

void f(double x1, double y1, double x2, double y2)
{
    double a1, a2, d;

    a1 = atan2(y1, x1);
    a2 = atan2(y2, x2);
    d = a1 - a2;

    if (d == 0.0) {
        printf("EQUAL!\n");
    }
}
One would expect the function f to print "EQUAL!" when called with identical value pairs (for example, f(1,2,1,2)), but in the release build this does not always happen. What actually happens is that the compiler optimizes the code as if it were written d = a1 - atan2(y2,x2), completely eliminating the assignment to the intermediate variable a2. Moreover, it takes advantage of the fact that the second atan2() result is already on the FPU stack, so it simply loads a1 onto the FPU stack and performs the subtraction there. The problem is that the x87 FPU works with extended precision (80 bits), while a1 is "only" a double (64 bits), so storing the first atan2() result to memory actually discards the extra precision. In the end, d contains the "conversion error" between extended and double precision, and the comparison with 0.0 fails.
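To illustrate the effect, here is a minimal sketch of a workaround (the volatile qualifiers and the name f_forced are my additions, not part of the original code): declaring the intermediates volatile forces the compiler to store each atan2() result to a 64-bit memory slot and reload it, so both operands of the subtraction go through the same rounding to double.

#include <math.h>
#include <stdio.h>

void f_forced(double x1, double y1, double x2, double y2)
{
    volatile double a1 = atan2(y1, x1);  /* spilled to memory: rounded to 64-bit double */
    volatile double a2 = atan2(y2, x2);  /* spilled to memory: rounded to 64-bit double */
    double d = a1 - a2;                  /* both operands reloaded as plain doubles */

    if (d == 0.0) {
        printf("EQUAL!\n");
    }
}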
So my question is: is it ever reliable to compare float/double values with the == operator? My assumption was that two results produced by exactly the same computation must compare equal, but here the "same" computation yields different values depending on whether one of them is kept at extended precision (that is, left on the FPU stack). Is exact comparison therefore always wrong for "float"/"double"? And can something similar happen with "int" (for example, after a conversion from a floating-point value)?
And, more generally, is this kind of optimization actually allowed by the C standard?
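For reference, this is the kind of tolerance-based comparison I would write instead, if exact equality really cannot be trusted (a minimal sketch; the helper name nearly_equal and the relative tolerance of 4*DBL_EPSILON are arbitrary choices of mine):

#include <math.h>
#include <float.h>

/* Hypothetical helper: relative comparison instead of exact equality. */
int nearly_equal(double a, double b)
{
    double diff  = fabs(a - b);
    double scale = fabs(a) > fabs(b) ? fabs(a) : fabs(b);  /* larger magnitude */
    return diff <= scale * (4.0 * DBL_EPSILON);             /* relative tolerance */
}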