Consider for entertainment purposes only:
Only two floating-point values compare equal to 0f: positive zero and negative zero, and they differ in only one bit. So a circuit or software emulation that checks whether the 31 non-sign bits are all clear will do the job.
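A small sketch of that claim (class name and sample values are mine): comparing with 0f gives the same answer as masking off the sign bit and checking whether the remaining 31 bits are zero.

```java
public class ZeroBits {
    public static void main(String[] args) {
        // 0.0f and -0.0f compare equal, though their bit patterns differ in the sign bit.
        System.out.println(0.0f == -0.0f);                                        // true
        System.out.println(Integer.toHexString(Float.floatToRawIntBits(0.0f)));   // 0
        System.out.println(Integer.toHexString(Float.floatToRawIntBits(-0.0f)));  // 80000000

        // x == 0f is equivalent to: all 31 non-sign bits are zero.
        float[] samples = {0.0f, -0.0f, 1.0f, -1.0f, Float.MIN_VALUE, Float.NaN};
        for (float x : samples) {
            boolean viaCompare = (x == 0f);
            boolean viaBits = (Float.floatToRawIntBits(x) & 0x7FFFFFFF) == 0;
            System.out.println(x + ": " + (viaCompare == viaBits));
        }
    }
}
```

Note that `NaN == 0f` is false and NaN's non-sign bits are nonzero, so the equivalence covers NaN too.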
Comparison with > 0f is a bit more complicated: negative numbers and zero yield false, positive numbers yield true, but NaNs (with either sign bit) also yield false, so it takes a bit more than just checking the sign bit.
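To illustrate (class name and the quiet-NaN bit patterns are my choices): a NaN compares false with > 0f regardless of its sign bit, so testing the sign bit alone would misclassify a "positive" NaN.

```java
public class GtZero {
    public static void main(String[] args) {
        System.out.println(1.0f > 0f);    // true
        System.out.println(-1.0f > 0f);   // false
        System.out.println(0.0f > 0f);    // false
        System.out.println(-0.0f > 0f);   // false

        // NaN with either sign bit also yields false.
        float posNaN = Float.intBitsToFloat(0x7FC00000); // quiet NaN, sign bit 0
        float negNaN = Float.intBitsToFloat(0xFFC00000); // quiet NaN, sign bit 1
        System.out.println(posNaN > 0f);  // false
        System.out.println(negNaN > 0f);  // false
    }
}
```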
Depending on the floating-point mode, any operation may leave an extra-precise result in a floating-point register that is rounded to 32 bits before the comparison, so the same analysis holds there too.
If there were a difference at all, I would expect != to be the faster one, but I would not expect any difference, and I would not be very surprised to be proven wrong on some implementation.
I assume your proof that the value cannot be negative is not itself subject to floating-point errors. For example, calculations like 1/2.0 - 1/3.0 - 1/6.0 or 0.4 - 0.2 - 0.2 can come out positive or negative if rounding errors accumulate rather than cancel, so presumably nothing like that is going on. As for actually testing a float for equality with 0: that works if you assigned it a literal 0, or the result of some other calculation that is guaranteed to produce exactly 0 in a float, but that can be tricky.
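The two examples above behave differently in practice, which is worth seeing (class name is mine; results assume IEEE-754 double arithmetic): the first leaves a tiny positive residue, while in the second the errors happen to cancel exactly, because 0.4 is exactly twice 0.2 in binary.

```java
public class AccumulatedError {
    public static void main(String[] args) {
        // Mathematically 1/2 - 1/3 - 1/6 == 0, but the rounded doubles
        // leave a small positive residue.
        double a = 1/2.0 - 1/3.0 - 1/6.0;
        System.out.println(a == 0.0);  // false
        System.out.println(a > 0.0);   // true

        // Here the errors cancel: 0.4 and 0.2 share the same significand
        // (0.4 is exactly 2 * 0.2 in binary), so 0.4 - 0.2 is exactly 0.2
        // and the final subtraction yields exactly 0.0.
        double b = 0.4 - 0.2 - 0.2;
        System.out.println(b == 0.0);  // true
    }
}
```

Which way it falls depends entirely on the bit patterns involved, which is exactly why such a proof is fragile.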