Should I just accept this as a fact of life or is there any advice you can offer?
You pretty much have to accept it as a fact of life. Floating-point code can be optimized differently in different situations. In particular, in some cases the JIT compiler may use a higher-precision representation (for example, 80-bit extended-precision floating point) for intermediate operations. The situations in which the JIT compiler will do this depend on the architecture, the optimization settings, and so on. There can be any number of subtleties about what you do with the variable (and whether it is a local variable or not) that can affect this. Running under the debugger changes the JIT optimization settings quite dramatically in general - not just for floating point - so I'm not at all surprised by this.
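To make that concrete, here is a purely illustrative sketch (assuming a .NET/C# context, which the references to the JIT compiler and debugger suggest; the class and method names are hypothetical). It shows the kind of code whose result can depend on whether intermediates are kept at extended precision; the actual behaviour varies by runtime, architecture, and optimization settings, so neither outcome should be relied on:

```csharp
using System;

// Illustrative only: on runtimes/architectures where the JIT keeps
// intermediates in wider registers (e.g. 80-bit x87), the same mathematical
// expression can round differently depending on whether the value passes
// through a 64-bit local, a method return, a field, etc.
class PrecisionDemo
{
    // Returning through a method may (or may not) force a round-trip to
    // 64 bits, depending on the JIT and its optimization settings.
    static double Third() => 1.0 / 3.0;

    static void Main()
    {
        double stored = Third();
        double inline = (1.0 / 3.0) * 3.0;   // may be folded, or kept in wider registers

        // Don't depend on either result staying the same across debug/release,
        // with/without a debugger, or across architectures:
        Console.WriteLine(stored * 3.0 == 1.0);
        Console.WriteLine(inline == 1.0);
    }
}
```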
If you perform your floating-point comparisons with a certain tolerance, you should be fine - it is very rarely a good idea to test floating-point types for exact equality. Of course, it is possible that you are actually performing an inequality comparison (less-than or greater-than) where the differences become significant, but I have rarely come across that as a problem.
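As a minimal sketch of such a tolerance-based comparison (again assuming C#; the helper name NearlyEqual and the 1e-9 tolerances are illustrative choices, not anything from the question - pick values appropriate to the magnitudes in your own calculation):

```csharp
using System;

static class FloatCompare
{
    // Combines an absolute tolerance (useful for values near zero) with a
    // relative tolerance (useful for larger magnitudes). The 1e-9 defaults
    // are illustrative only.
    public static bool NearlyEqual(double a, double b,
                                   double absTol = 1e-9, double relTol = 1e-9)
    {
        double diff = Math.Abs(a - b);
        if (diff <= absTol)
            return true;
        return diff <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));
    }
}

// Usage:
//   bool ok = FloatCompare.NearlyEqual(computedValue, expectedValue);
```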