If you think that multiplying by two just means incrementing the exponent by one, think again. Here are the possible cases for IEEE 754 floating-point arithmetic:
Case 1: Infinity and NaN remain unchanged.
Case 2: Floating-point numbers with the largest finite exponent change to infinity: the exponent is incremented (to all ones) and the mantissa is set to zero, while the sign bit is left unchanged.
Case 3: Normalized floating-point numbers with an exponent below the maximum have their exponent increased by one. Yippee!!!
Case 4: Denormalized floating-point numbers with the highest mantissa bit set have their exponent increased from 0 to 1, turning them into normalized numbers; the former top mantissa bit becomes the implicit leading one.
Case 5: Denormalized floating-point numbers with the highest mantissa bit clear, including +0 and -0, have their mantissa shifted left by one bit position, leaving the exponent unchanged.
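To make the case analysis concrete, here is a sketch in C of doubling a double purely with integer operations on its bit pattern (the helper name `times_two` is mine, not from any standard library; this is an illustration of the cases, not production code):

```c
#include <assert.h>
#include <float.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Sketch: multiply a double by 2.0 using only integer operations
 * on its IEEE 754 bit pattern. */
static double times_two(double x) {
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);               /* reinterpret as integer */
    uint64_t sign = bits & 0x8000000000000000ULL; /* sign bit */
    uint64_t mag  = bits & 0x7FFFFFFFFFFFFFFFULL; /* exponent + mantissa */
    uint64_t exp  = mag >> 52;                    /* 11-bit exponent field */

    if (exp == 0x7FF) {
        /* Case 1: infinity or NaN stays unchanged. */
    } else if (exp == 0x7FE) {
        /* Case 2: largest finite exponent overflows to infinity:
         * exponent all ones, mantissa zero, sign preserved. */
        mag = 0x7FFULL << 52;
    } else if (exp >= 1) {
        /* Case 3: normalized number, just bump the exponent. */
        mag += 1ULL << 52;
    } else {
        /* Cases 4 and 5: denormal (or zero): shift the mantissa left.
         * If the top mantissa bit was set, the shift carries into the
         * exponent field and produces a correctly normalized number. */
        mag <<= 1;
    }
    bits = sign | mag;
    memcpy(&x, &bits, sizeof bits);
    return x;
}
```

Even this compact version needs a branch per case, which is exactly the overhead the answer is pointing at.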
I strongly doubt that a compiler producing integer code that handles all of these cases correctly would be anywhere near as fast as the built-in floating-point hardware. And it only works for multiplying by 2.0; multiplying by 4.0 or 0.5 needs a completely different set of rules. For the case of multiplication by 2.0, you can try replacing x * 2.0 with x + x, and many compilers do exactly that. They do it because a processor may, for example, be able to issue one addition and one multiplication per cycle, but not two operations of the same kind. So sometimes you prefer x * 2.0 and sometimes x + x, depending on which other operations have to execute at the same time.
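The x + x rewrite is safe precisely because IEEE 754 addition of a finite value to itself yields the same bit pattern as multiplication by 2.0 across all the cases above, including denormals, signed zeros, and overflow to infinity. A quick bitwise sanity check (a sketch; the helper names are mine):

```c
#include <assert.h>
#include <float.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Return nonzero if a and b have identical IEEE 754 bit patterns. */
static int same_bits(double a, double b) {
    uint64_t ba, bb;
    memcpy(&ba, &a, sizeof ba);
    memcpy(&bb, &b, sizeof bb);
    return ba == bb;
}

/* Check that x * 2.0 and x + x produce bit-identical results. */
static int doubling_matches(double x) {
    return same_bits(x * 2.0, x + x);
}
```

Edge cases worth checking include -0.0 (the sum of two like-signed zeros keeps the sign), the largest finite double (both forms overflow to +infinity), and denormals (both forms are exact).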
gnasher729 Mar 11 '15 at 17:12