Floating-point results differ depending on whether the debugger is attached

I am getting different results from the same debug build on the same computer, depending on whether I run under the debugger or not. I use the excellent TestDriven.Net to run unit tests.

  • Running with TestDriven.Net or with an external NUnit runner produces one result.
  • "Run with debugger" in TestDriven.Net produces a different result.

Code

  • A complex iterative mesh-deformation routine doing significant computation at the limits of floating-point precision
  • C#, VS2012, targeting .NET 3.5
  • Single-threaded
  • Debug build only; the release build has not been tested
  • Same machine, with no overclocking/SpeedStep or anything else of that kind that I know of
  • Vanilla C#: no unsafe code, no unmanaged libraries, no P/Invoke, etc.
  • No checks for the debugger in the code, and no odd third-party libraries

I haven't tracked down the first point of difference (tricky without a debugger!), but given how iterative the code is and how sensitive it is to its input, the smallest difference will grow to significant proportions, given enough time.
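
As a toy illustration of that sensitivity (a hypothetical snippet, not the author's mesh code), a chaotic iteration such as the logistic map amplifies a difference of a few ulps into an order-one divergence within a few dozen steps:

    using System;

    class SensitivityDemo
    {
        static void Main()
        {
            double x1 = 0.1;
            double x2 = 0.1 + 1e-16;   // differs from x1 by only a few ulps

            for (int i = 0; i < 60; i++)
            {
                // The logistic map at r = 4 is chaotic: tiny differences
                // in the input grow exponentially from step to step.
                x1 = 4.0 * x1 * (1.0 - x1);
                x2 = 4.0 * x2 * (1.0 - x2);
            }

            Console.WriteLine(x1);   // the two trajectories end up
            Console.WriteLine(x2);   // completely different
        }
    }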

I know how fragile floating-point reproducibility is across compilers, platforms, and architectures, but I'm disappointed that the debugger is one of the factors that can break it.

Should I just accept this as a fact of life or is there any advice you can offer?

1 answer

Should I just accept this as a fact of life or is there any advice you can offer?

You should accept this as a fact of life. Floating-point code can be optimized differently in different situations. In particular, in some cases the JIT compiler may use a higher-precision representation (e.g., 80-bit floating point) for intermediate operations. Exactly when the JIT compiler does this depends on the architecture, optimization settings, and so on. Any number of subtleties about what you do with a variable (and whether it is a local variable or not) can affect this. Running under the debugger significantly changes the JIT's optimization behavior in general - not just for floating point - so I'm not at all surprised by this.
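
A minimal sketch of the mechanism described above (illustrative only; whether the two results actually differ depends on the JIT, the architecture, and whether a debugger is attached). On the x86 legacy JIT, an intermediate product may be kept in an 80-bit x87 register, while storing it to a field forces truncation to a 64-bit double:

    using System;

    class JitPrecisionSketch
    {
        static double _spill;   // a field store forces 64-bit truncation

        static void Main()
        {
            double a = 1.0 / 3.0;

            // The intermediate a * 3.0 may be evaluated and kept at
            // extended (80-bit) precision before the subtraction:
            double direct = a * 3.0 - 1.0;

            // Forcing the intermediate through memory rounds it to a
            // 64-bit double first:
            _spill = a * 3.0;
            double viaMemory = _spill - 1.0;

            // These can disagree under one JIT configuration and agree
            // under another; neither result violates the CLI spec.
            Console.WriteLine(direct == viaMemory);
        }
    }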

If you are performing floating-point comparisons with a tolerance, you should be fine - it is very rarely a good idea to do exact equality comparisons on floating-point types. Of course, it is possible that you are actually doing a non-equality comparison where the differences become significant, but I have rarely come across that as a problem.
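
A small helper of the kind this advice implies (the name and default tolerance are illustrative, not from the original post):

    using System;

    static class FloatCompare
    {
        // Absolute tolerance; for values far from 1.0, a relative or
        // ULP-based comparison may be more appropriate.
        public static bool NearlyEqual(double x, double y, double tolerance = 1e-9)
        {
            return Math.Abs(x - y) <= tolerance;
        }
    }

For example, FloatCompare.NearlyEqual(0.1 + 0.2, 0.3) returns true even though 0.1 + 0.2 == 0.3 is false in binary floating point.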


Source: https://habr.com/ru/post/1266432/

