How can debugging change the result of a calculation?

Consider the following simple program:

 var dblMax = Double.MaxValue;
 var result = (dblMax * 1000) / 1800;
 Console.WriteLine(result);

When I build this in Debug mode and either run it (Ctrl + F5) or debug it (F5), it prints 9.987140856842E+307.

When I switch to Release mode and run it (Ctrl + F5), it prints ∞ (infinity).

I understand that this difference is due to some compiler optimizations that are performed in Release mode.

However, if I debug (F5) the same assembly in Release mode, it again prints 9.987140856842E+307!

How does the fact that I am debugging change the calculation result?

Edit:

I am not asking why Debug mode and Release mode give different results. I am asking why Release mode gives different results depending on whether I am debugging (F5) or not (Ctrl + F5).

3 answers

The JIT compiler (the jitter) behaves differently when a debugger is attached.

For one thing, local variables in many cases have their lifetimes extended so that they can be inspected. Consider hitting a breakpoint after a variable has been used in a calculation: if the jitter knew the variable would not be used after that expression, and did not extend its lifetime, you might not be able to inspect that variable, which is a core debugging feature.

The jitter knows exactly how long a variable needs to stay alive. If a register is available for that whole span, it can use that register to hold the variable.

With a debugger attached, however, it may use a memory location instead, because the lifetime has been extended so much that no register is free for that entire stretch of code.

The CPU's floating-point registers are more precise than the corresponding in-memory storage formats: the x87 registers are 80 bits wide, while a double in memory is 64 bits. This means that as soon as a value is spilled from a register into memory, or is kept in memory the whole time, you get lower precision. In this example, Double.MaxValue * 1000 overflows the 64-bit double range (its maximum is about 1.8E+308) and becomes infinity, whereas it fits comfortably in an 80-bit register, so the subsequent division by 1800 can still yield a finite result.
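
A minimal sketch of how to see the truncation from C# (an illustration, not part of the original answer; the extended-precision behavior assumes the 32-bit x86 JIT, which uses the x87 FPU, while the x64 JIT uses SSE and prints infinity in every mode). The C# specification says an explicit cast can be used to force a floating-point value to the exact precision of its type, so the cast below reproduces the overflow even where the intermediate would otherwise stay in a wider register:

 var dblMax = Double.MaxValue;

 // The cast forces the intermediate product to exactly 64 bits, so the
 // multiplication overflows to infinity in every build mode, with or
 // without a debugger attached:
 var truncated = (double)(dblMax * 1000) / 1800;
 Console.WriteLine(truncated); // Infinity

Comparing this against the original, un-cast expression under F5 in Release mode shows the register-versus-memory effect directly.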

Whether these things happen can be dictated by the difference between RELEASE and DEBUG builds, as well as by the presence of a debugger.

In addition, there can be differences between versions of the .NET runtime that affect this as well.


Writing floating-point code that depends on exact results requires intimate knowledge of what you are trying to do and of how the machine and platform will influence the calculation. I would try to avoid writing such code.


This is strictly related to floating-point precision. In Debug mode, the JIT keeps intermediate values at 80-bit precision; in Release mode, it truncates them to 64-bit results.

Whether this happens depends on several build settings, options, and environment variables. For example, you can disable optimizations in your Release configuration; that should give the same result as Debug mode.
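
As a per-method alternative to changing the project configuration (a sketch, not part of the original answer: MethodImplOptions.NoOptimization is a real flag, but whether it reproduces the finite result still depends on the JIT and the CPU), you can ask the JIT to skip optimizing just the method in question:

 using System;
 using System.Runtime.CompilerServices;

 class Program
 {
     // NoOptimization asks the JIT not to optimize this method, which tends
     // to mimic Debug/attached-debugger code generation even in a Release
     // build; NoInlining keeps the method from being folded into a caller
     // that is still optimized.
     [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
     static double Compute()
     {
         var dblMax = Double.MaxValue;
         return (dblMax * 1000) / 1800;
     }

     static void Main() => Console.WriteLine(Compute());
 }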

Take a look at this answer by Jon Skeet: fooobar.com/questions/1266432/...


Look at this:

 var result1 = (dblMax / 1800) * 1000; // 64-bit precision suffices
 var result2 = (dblMax * 1000) / 1800; // 80-bit precision is needed
 Console.WriteLine(result1);
 Console.WriteLine(result2);

This is just an example confirming the answers above: dividing first keeps every intermediate value inside the 64-bit double range, so result1 is finite in every mode, while result2 overflows at 64 bits and stays finite only when the intermediate product is held in a wider register.


Source: https://habr.com/ru/post/1266417/
