What LLVM passes are responsible for floating point optimization?

I am working on a Rust crate that changes the floating-point rounding mode (toward +inf, toward -inf, to nearest, or truncate).

The functions that change the rounding mode are written using inline assembly:

fn upward() {
    let mut cw: u32 = 0;
    unsafe {
        // Read MXCSR, set bit 14 (0x4000: round toward +infinity), write it back.
        std::arch::asm!(
            "stmxcsr [{p}]",
            "or dword ptr [{p}], 0x4000",
            "ldmxcsr [{p}]",
            p = in(reg) &mut cw,
        );
    }
}
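The same approach generalizes to all four modes. A sketch, assuming x86_64 and the MXCSR rounding-control field in bits 13–14; the constant names and the `set` helper are this sketch's own, not part of any library:

```rust
// Sketch (x86_64 only): selecting any of the four SSE rounding modes.
// Bits 13 and 14 of MXCSR form the rounding-control field; the constant
// names below are invented here for illustration.
#[cfg(target_arch = "x86_64")]
pub mod rounding {
    pub const NEAREST: u32 = 0x0000; // round to nearest (default)
    pub const DOWNWARD: u32 = 0x2000; // round toward -inf
    pub const UPWARD: u32 = 0x4000; // round toward +inf
    pub const TOWARD_ZERO: u32 = 0x6000; // truncate

    const RC_MASK: u32 = 0x6000;

    /// Installs `mode` in the MXCSR rounding-control field and
    /// returns the previous MXCSR value so it can be restored.
    pub fn set(mode: u32) -> u32 {
        let mut csr: u32 = 0;
        unsafe {
            std::arch::asm!("stmxcsr [{p}]", p = in(reg) &mut csr);
            let new = (csr & !RC_MASK) | (mode & RC_MASK);
            std::arch::asm!("ldmxcsr [{p}]", p = in(reg) &new);
        }
        csr
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        let old = rounding::set(rounding::UPWARD);
        // black_box keeps the division from being constant-folded.
        let third = std::hint::black_box(1.0_f64) / std::hint::black_box(3.0_f64);
        println!("1/3 rounded upward: {third}");
        rounding::set(old & 0x6000); // restore the previous mode
    }
}
```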

When I compile the code in debug mode, it works as intended: with rounding toward positive infinity I get 0.3333333333337 for one third. But when I compile in release mode, I get the same result no matter which rounding mode I set. I assume this behavior is related to the optimizations that the LLVM backend performs.

If I knew which LLVM passes are responsible for this optimization, I could disable them; at the moment I do not see any other workaround.

1 answer

In principle, you cannot do this: LLVM assumes that all floating-point operations use the default rounding mode and that the floating-point control register is never read or modified.
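A small sketch of why debug and release builds differ: a division of two literals is folded by LLVM at compile time under round-to-nearest, so no runtime instruction is left for the rounding mode to affect. Routing the operands through `std::hint::black_box` (the only assumption here beyond the standard library) keeps them opaque and forces a real runtime division:

```rust
use std::hint::black_box;

// Folded to a constant in release builds, using round-to-nearest.
fn third_folded() -> f64 {
    1.0 / 3.0
}

// black_box hides the operands from the optimizer, so an actual divsd
// runs at runtime and honours whatever mode MXCSR currently selects.
fn third_runtime() -> f64 {
    black_box(1.0_f64) / black_box(3.0_f64)
}

fn main() {
    // Under the default rounding mode both agree; after switching the
    // mode, only the runtime version would change its result.
    println!("{} {}", third_folded(), third_runtime());
}
```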

There was some recent discussion of this issue on the llvm-dev mailing list, if you're interested.

In the meantime, the only reliable workaround is to perform the floating-point operations themselves in inline assembly, e.g. asm!("addsd $0, $1", ...).
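In the current asm! syntax, such an operation might look like the sketch below (x86_64 only; the function name and the portable fallback are assumptions of this sketch). Because LLVM cannot see through the asm block, the addsd is guaranteed to execute at runtime under the current MXCSR rounding mode:

```rust
// Sketch: addition forced through inline assembly so the optimizer
// cannot fold or reorder it.
#[cfg(target_arch = "x86_64")]
fn add_f64(a: f64, b: f64) -> f64 {
    let mut out = a;
    unsafe {
        // addsd dst, src: dst = dst + src, rounded per MXCSR.
        std::arch::asm!("addsd {out}, {b}", out = inout(xmm_reg) out, b = in(xmm_reg) b);
    }
    out
}

// Fallback so the sketch compiles on other targets (no rounding-mode
// guarantee here).
#[cfg(not(target_arch = "x86_64"))]
fn add_f64(a: f64, b: f64) -> f64 {
    a + b
}

fn main() {
    println!("{}", add_f64(1.5, 2.25));
}
```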

The Rust standard library also assumes that you do not change the rounding mode (in particular, the code that converts between floating-point numbers and strings is sensitive to it).
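One way to stay safe around such std code is to restore MXCSR before calling it. A sketch (x86_64 only; `MxcsrGuard` is a hypothetical name, not a std or crate type) of an RAII guard that snapshots the control word and restores it on drop:

```rust
// Sketch: an RAII guard that saves MXCSR and restores it when dropped,
// so rounding-mode-sensitive std code (e.g. float formatting) runs
// under the mode it expects.
#[cfg(target_arch = "x86_64")]
struct MxcsrGuard(u32);

#[cfg(target_arch = "x86_64")]
impl MxcsrGuard {
    fn save() -> Self {
        let mut csr: u32 = 0;
        unsafe { std::arch::asm!("stmxcsr [{p}]", p = in(reg) &mut csr) };
        MxcsrGuard(csr)
    }
}

#[cfg(target_arch = "x86_64")]
impl Drop for MxcsrGuard {
    fn drop(&mut self) {
        // Restore the saved control/status word, including rounding bits.
        unsafe { std::arch::asm!("ldmxcsr [{p}]", p = in(reg) &self.0) };
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        let _guard = MxcsrGuard::save();
        // ... change the rounding mode and do directed-rounding work here ...
    } // guard dropped: the previous mode is back before any formatting
    println!("{}", (1.0_f64 / 3.0).to_string());
}
```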


Source: https://habr.com/ru/post/1247693/

