Do you know whether fabs() and fabsf() on your system are implemented with bitwise operations, or with a comparison against the constant 0? If they are not bitwise operations, that is quite possibly because the compiler authors did not think it would be faster.
Portability issues with this code:
- float and int may not have the same size or the same bit layout, so the masks may be wrong.
- float may not use an IEEE 754 representation.
- You violate the strict aliasing rules. The compiler is allowed to assume that a pointer/reference to float and a pointer/reference to int cannot refer to the same memory location. So, for example, the standard does not guarantee that r is initialized to 1.0 before it is modified on the next line: the compiler may reorder the operations. This is not idle speculation, and unlike (1) and (2) it is undefined behaviour, not implementation-defined, so you cannot simply look up what your compiler does. With sufficient optimization, I have seen GCC skip the initialization of floating-point variables that are only ever referenced through type-punned pointers.
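As an illustration of the aliasing point (a sketch I am adding, not part of the original answer): copying the bits with memcpy instead of casting pointers is well-defined, though it still relies on assumptions (1) and (2) above, i.e. that float is a 32-bit IEEE 754 type:

```c
#include <stdint.h>
#include <string.h>

/* Reads the sign bit without violating strict aliasing.
 * Still assumes float is 32-bit IEEE 754 (points 1 and 2 above). */
static inline float sign_via_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* well-defined type punning */
    if ((bits & 0x7fffffffu) == 0)    /* +0.0 or -0.0 */
        return 0.0f;
    return (bits & 0x80000000u) ? -1.0f : 1.0f;
}
```

Mainstream compilers recognize this memcpy idiom and compile it to a plain register move, so it costs nothing over the pointer cast.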
First, I would do the obvious thing and look at the emitted code. Only if that looks dodgy is it worth considering anything else. I have no particular reason to think I know more about the bitwise representation of float than my compiler does ;-)
inline float fast_sign(float f) {
    if (f > 0) return 1;
    return (f == 0) ? 0 : -1;
}
[Edit: In fact, GCC emits something like this even at -O3. The emitted code is not necessarily slow, but it uses floating-point operations, so it is not obviously fast either. So the next step would be to benchmark: check whether the alternative is faster on any compiler you care about, and if it is, make it available to people porting your code via a #define or similar, according to the results of their own benchmarks.]