.NET rounding errors with float at compile time and at runtime

I recently put together some data for a test case that checks rounding errors with the float data type, and came across some unexpected results. I expected cases t2 and t3 to give the same result as t1, but that is not the case on my machine. Can someone tell me why?

I suspect the reason for the difference is that t2 and t3 are evaluated at compile time, but I am surprised that the compiler completely ignores my attempts to force it to use the intermediate float data type during evaluation. Is there any part of the C# specification that requires constants to be evaluated with the largest available data type, regardless of what is specified?

This is on a 64-bit Windows 7 Intel machine running .NET 4.5.2.

float temp_t1 = 1/(3.0f);
double t1 = (double)temp_t1;

const float temp_t2 = 1/(3.0f);
double t2 = (double)temp_t2;

double t3 = (double)(float)(1/(3.0f));

System.Console.WriteLine( t1 ); // prints 0.333333343267441
System.Console.WriteLine( t2 ); // prints 0.333333333333333
System.Console.WriteLine( t3 ); // prints 0.333333333333333
+1
2 answers

People often have questions about the consistency of floating-point calculations. There are currently almost no guarantees provided by the .NET Framework. To quote Eric Lippert:

The C# compiler, the jitter and the runtime all have broad latitude to give you more accurate results than are required by the specification, at any time, at a whim; they need not choose to do so consistently, and in fact they do not.

In this particular case the answer is simple. Here is the raw IL for a Release build:

 IL_0000: ldc.r4 0.333333343
 IL_0005: conv.r8
 IL_0006: ldc.r8 0.33333333333333331
 IL_000f: stloc.0
 IL_0010: ldc.r8 0.33333333333333331
 IL_0019: stloc.1
 IL_001a: call void [mscorlib]System.Console::WriteLine(float64)
 IL_001f: ldloc.0
 IL_0020: call void [mscorlib]System.Console::WriteLine(float64)
 IL_0025: ldloc.1
 IL_0026: call void [mscorlib]System.Console::WriteLine(float64)
 IL_002b: ret

All of the arithmetic here is performed by the compiler. In the Roslyn compiler, the fact that temp_t1 is a variable causes the compiler to emit IL that loads a 4-byte float and then converts it to a double. I believe this is consistent with previous versions. In the other two cases, the compiler performs all of the arithmetic at double precision and stores those results. It is not surprising that the second and third cases do not differ from each other, since the compiler does not keep local constants in the IL.
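
To make the point about constant folding concrete (this sketch is mine, not part of the original answer): if you want the intermediate single-precision rounding to actually happen, route the value through something the compiler cannot fold into a double-precision constant, for example a non-const static field or a helper method that takes a float parameter. The field name, the helper, and the expected output are assumptions for illustration:

using System;
using System.Runtime.CompilerServices;

class Program
{
    // A static field (not a const) is read at runtime, so the division
    // below cannot be folded into a double-precision constant.
    static float Denominator = 3.0f;

    // Routing a value through a float parameter forces it to be carried
    // as a 4-byte float; NoInlining is only there to keep the JIT from
    // collapsing the call in a Release build.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static float AsSingle(float value) => value;

    static void Main()
    {
        double viaField  = (double)(float)(1 / Denominator);
        double viaHelper = (double)AsSingle(1 / 3.0f);

        Console.WriteLine(viaField);  // expected: 0.333333343267441
        Console.WriteLine(viaHelper); // expected: 0.333333343267441
    }
}

Both lines should match t1 from the question, because in each case the value is genuinely rounded to single precision before being widened to double.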

+2

C# floating-point behavior is based on the underlying processor implementing IEEE 754. If you really want to see what is happening, you need to look at the numbers in their binary format, i.e. at their underlying bytes. When you print them, they are converted from base 2 to base 10, and a lot of processing happens along the way.

Here is what I suspect: your first calculation (temp_t1) uses single precision, which has a 23-bit mantissa. I suspect, but have not confirmed, that temp_t2 and t2 were transformed by the compiler's optimizer, so temp_t2 was not computed at single precision but at double precision, and t2 took that value.
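
To actually look at the binary representations mentioned above (this sketch is mine, not from the original answer), you can dump the raw bits with BitConverter; it uses the same 1/3 values as the question:

using System;

class Program
{
    static void Main()
    {
        float f = 1 / 3.0f;          // single precision: 23-bit mantissa
        double widened = (double)f;  // the float value widened to double
        double d = 1 / 3.0;          // computed directly at double precision

        // Raw bit patterns: 32 bits for the float, 64 bits for the doubles.
        int fBits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        long widenedBits = BitConverter.DoubleToInt64Bits(widened);
        long dBits = BitConverter.DoubleToInt64Bits(d);

        Console.WriteLine(Convert.ToString(fBits, 2).PadLeft(32, '0'));
        Console.WriteLine(Convert.ToString(widenedBits, 2).PadLeft(64, '0'));
        Console.WriteLine(Convert.ToString(dBits, 2).PadLeft(64, '0'));

        // The widened float has only 23 significant mantissa bits followed
        // by zeros, while the true double result carries 52 mantissa bits,
        // which is why the printed decimal values differ.
    }
}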

Additional information on floating point behavior: https://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx

-1
