A compiled expression tree gives a different result than the equivalent code

The following code:

    double c1 = 182273d;
    double c2 = 0.888d;

    // Build the expression tree for Math.Sin(c1) + c2.
    Expression c1e = Expression.Constant(c1, typeof(double));
    Expression c2e = Expression.Constant(c2, typeof(double));
    Expression<Func<double, double>> sinee = a => Math.Sin(a);
    Expression sine = ((MethodCallExpression)sinee.Body).Update(null, new[] { c1e });
    Expression sum = Expression.Add(sine, c2e);

    // Compile and evaluate it, then compute the same thing directly.
    Func<double> f = Expression.Lambda<Func<double>>(sum).Compile();
    double r = f();
    double rr = Math.Sin(c1) + c2;

    Console.WriteLine(r.ToString("R"));
    Console.WriteLine(rr.ToString("R"));

It will display:

    0.082907514933846488
    0.082907514933846516

Why are r and rr different?

Update:

It has been established that this reproduces if you select the "x86" target platform, or check "Prefer 32-bit" with "Any CPU". In x64 mode the two values match.
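For anyone reproducing this, a quick way to confirm which mode the process actually runs in is Environment.Is64BitProcess (available since .NET 4.0). A minimal harness, as a sketch:

    // Prints the process bitness next to the directly computed value,
    // so you can see the mismatch appear only in a 32-bit Debug process.
    using System;

    static class Repro
    {
        static void Main()
        {
            Console.WriteLine(Environment.Is64BitProcess ? "x64" : "x86");
            Console.WriteLine((Math.Sin(182273d) + 0.888d).ToString("R"));
        }
    }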

2 answers

I am not an expert on such things, but I'll give my take on this.

First, the problem only occurs when compiling with the debug flag (it does not appear in Release mode), and only when running as x86.

If we decompile the method your expression is compiled into, we see this (identical in both Debug and Release):

    IL_0000: ldc.r8   182273                                      // push first value
    IL_0009: call     float64 [mscorlib]System.Math::Sin(float64) // call Math.Sin()
    IL_000e: ldc.r8   0.888                                       // push second value
    IL_0017: add                                                  // add
    IL_0018: ret

However, if we look at the IL of a similar method compiled in Debug mode, we see:

    .locals init (
        [0] float64 V_0
    )
    IL_0001: ldc.r8   182273
    IL_000a: call     float64 [mscorlib]System.Math::Sin(float64)
    IL_000f: ldc.r8   0.888
    IL_0018: add
    IL_0019: stloc.0       // save to local V_0
    IL_001a: br.s IL_001c  // basically a nop
    IL_001c: ldloc.0       // load from local back onto the stack
    IL_001d: ret           // return

You can see that the compiler added an (unneeded) store and re-load of the result through a local variable (probably for debugging purposes). Now, I'm not sure, but as far as I understand, on the x86 architecture double values can be held in 80-bit CPU registers (quote from here):

By default, in code for x86 architectures the compiler uses the coprocessor's 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by fewer than 80 bits, carrying this extra precision (80 bits minus the number of bits in the smaller floating-point type) through a lengthy calculation can produce inconsistent results.

So my assumption is that this store to a local and load back from it causes a round-trip from 80-bit to 64-bit precision and back (the local being a 64-bit double), which causes the behavior you observe.
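If that guess is right, forcing the intermediate through a 64-bit memory location by hand should reproduce the same rounding. A sketch of my own (not from the original post; it assumes x86 on the .NET Framework, and whether the JIT actually keeps the intermediate in a register is not guaranteed):

    // ECMA-335 requires array elements to hold exactly 64 bits, so the
    // store below narrows any extra-precision intermediate; the direct
    // version may keep Math.Sin's result on the 80-bit FPU stack.
    double c1 = 182273d, c2 = 0.888d;

    double direct = Math.Sin(c1) + c2;  // intermediate may stay in a register

    double[] spill = { Math.Sin(c1) };  // storing to memory rounds to 64 bits
    double narrowed = spill[0] + c2;

    Console.WriteLine(direct.ToString("R"));
    Console.WriteLine(narrowed.ToString("R"));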

Another explanation may be that the JIT behaves differently between Debug and Release modes (it may still be related to storing intermediate results in 80-bit registers).

I hope someone more knowledgeable can confirm whether I am right.

Update in response to a comment. One way to decompile an expression is to create a dynamic assembly, compile the expression into a method there, save it to disk, and then inspect it with any decompiler (I use JetBrains dotPeek). Example:

    var asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
        new AssemblyName("dynamic_asm"),
        AssemblyBuilderAccess.Save);

    var module = asm.DefineDynamicModule("dynamic_mod", "dynamic_asm.dll");

    var type = module.DefineType("DynamicType");

    var method = type.DefineMethod(
        "DynamicMethod", MethodAttributes.Public | MethodAttributes.Static);

    Expression.Lambda<Func<double>>(sum).CompileToMethod(method);

    type.CreateType();
    asm.Save("dynamic_asm.dll");
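Note that AssemblyBuilderAccess.Save and LambdaExpression.CompileToMethod exist only on the .NET Framework; .NET Core cannot save dynamic assemblies to disk this way, so this trick needs a Framework project (or a different decompilation route).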

As already mentioned, this is caused by the difference between Debug and Release modes on x86. It surfaced in your code in Debug mode because the compiled lambda expression is always JIT-compiled as in Release mode; in other words, f() returns the Release-style value even under a Debug build, while the inline Math.Sin(c1) + c2 is compiled as part of your Debug code.

The difference is not caused by the C# compiler. Consider the following version of your code:

    using System;
    using System.Runtime.CompilerServices;

    static class Program
    {
        static void Main() => Console.WriteLine(Compute().ToString("R"));

        [MethodImpl(MethodImplOptions.NoInlining)]
        static double Compute() => Math.Sin(182273d) + 0.888d;
    }

The output is 0.082907514933846516 in Debug mode and 0.082907514933846488 in Release mode, but the IL is the same for both:

    .class private abstract sealed auto ansi beforefieldinit Program
        extends [mscorlib]System.Object
    {
        .method private hidebysig static void Main() cil managed
        {
            .entrypoint
            .maxstack 2
            .locals init ([0] float64 V_0)

            IL_0000: call     float64 Program::Compute()
            IL_0005: stloc.0  // V_0
            IL_0006: ldloca.s V_0
            IL_0008: ldstr    "R"
            IL_000d: call     instance string [mscorlib]System.Double::ToString(string)
            IL_0012: call     void [mscorlib]System.Console::WriteLine(string)
            IL_0017: ret
        }

        .method private hidebysig static float64 Compute() cil managed noinlining
        {
            .maxstack 8

            IL_0000: ldc.r8   182273
            IL_0009: call     float64 [mscorlib]System.Math::Sin(float64)
            IL_000e: ldc.r8   0.888
            IL_0017: add
            IL_0018: ret
        }
    }

The difference lies in the generated machine code. Disassembling Compute in Debug mode gives:

    012E04B2  in          al,dx
    012E04B3  push        edi
    012E04B4  push        esi
    012E04B5  push        ebx
    012E04B6  sub         esp,34h
    012E04B9  xor         ebx,ebx
    012E04BB  mov         dword ptr [ebp-10h],ebx
    012E04BE  mov         dword ptr [ebp-1Ch],ebx
    012E04C1  cmp         dword ptr ds:[1284288h],0
    012E04C8  je          012E04CF
    012E04CA  call        71A96150
    012E04CF  fld         qword ptr ds:[12E04F8h]
    012E04D5  sub         esp,8
    012E04D8  fstp        qword ptr [esp]
    012E04DB  call        71C87C80
    012E04E0  fstp        qword ptr [ebp-40h]
    012E04E3  fld         qword ptr [ebp-40h]
    012E04E6  fadd        qword ptr ds:[12E0500h]
    012E04EC  lea         esp,[ebp-0Ch]
    012E04EF  pop         ebx
    012E04F0  pop         esi
    012E04F1  pop         edi
    012E04F2  pop         ebp
    012E04F3  ret

In Release mode:

    00C204A0  push        ebp
    00C204A1  mov         ebp,esp
    00C204A3  fld         dword ptr ds:[0C204B8h]
    00C204A9  fsin
    00C204AB  fadd        qword ptr ds:[0C204C0h]
    00C204B1  pop         ebp
    00C204B2  ret

Apart from Debug mode calling a function to compute sin instead of using fsin directly, which does not seem to make a difference, the main change is that Release mode keeps the result of sin in the floating-point register, while Debug mode writes it to memory and then reads it back (the instructions fstp qword ptr [ebp-40h] and fld qword ptr [ebp-40h]). What this does is round the result of sin from 80-bit precision to 64-bit precision, resulting in different values.
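This matches what the language spec allows: floating-point operations may be performed at higher precision than the result type, and an explicit cast is the documented way to force a value down to exactly 64 bits. A sketch of my own illustrating this (the cast makes the compiler emit conv.r8; how much it changes the printed result still depends on the JIT and platform):

    // The explicit (double) cast requests truncation of any extra-precision
    // intermediate, so on x86 even a Release build should round Math.Sin's
    // result to 64 bits before the addition.
    double loose  = Math.Sin(182273d) + 0.888d;
    double forced = (double)Math.Sin(182273d) + 0.888d;

    Console.WriteLine(loose.ToString("R"));
    Console.WriteLine(forced.ToString("R"));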

Curiously, the same code on .NET Core (x64) produces yet another value: 0.082907514933846627. The disassembly for that case shows that it uses SSE instructions rather than x87 (the .NET Framework on x64 does the same, so the difference must be in the sin function being called):

    00007FFD5C180B80  sub         rsp,28h
    00007FFD5C180B84  movsd       xmm0,mmword ptr [7FFD5C180BA0h]
    00007FFD5C180B8C  call        00007FFDBBEC1C30
    00007FFD5C180B91  addsd       xmm0,mmword ptr [7FFD5C180BA8h]
    00007FFD5C180B99  add         rsp,28h
    00007FFD5C180B9D  ret
