Code to check if a double is a power of 2 without bit manipulation in C++

To check if a double is a power of 2, I found this code:

    unsigned long long int &p = *(unsigned long long int *) &x;
    unsigned int exp = (p >> 52) & 0x7FF;
    if (exp == 0 || exp == 0x7FF)
        return false;
    return (p & 0xFFFFFFFFFFFFFULL) == 0;

However, it fails basic tests on some architectures, I guess due to differing integer widths. So I tried to come up with a simple alternative that does not use bit manipulation:

    bool isPot(double a) {
        return a == 0. ? false : (1. / a) * a == 1.;
    }

The assumption is that dividing 1 by any number that is not a power of 2 produces infinitely many digits in the mantissa, so, since the result is rounded, multiplying it back by the original value will not give exactly 1.

It seems to work, but I can't be sure it's correct, because exhaustively testing it over all double values would take ~140 years.

Suggestions?

My tests:

    assert(isPot(2.0)); // first solution fails here
    assert(isPot(0.5));
    assert(!isPot(5.0));
    assert(!isPot(0.2));

By power of 2, I mean a value that is exactly a power of 2 as it is stored in RAM, i.e. a number with all mantissa bits equal to 0. This definition has an inherent imprecision, because a literal such as

 2.000000000000000000000000000000000000000000000000000000000000003 

will be converted to

 2.0 

and will therefore return true, because all its mantissa bits are 0, even though the original value was not exactly 2.
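As an aside, the portability problem with the snippet at the top is not only integer width: the pointer cast violates strict aliasing. Here is a sketch of the same bit test written with memcpy and <cstdint> fixed-width types (the name isPow2Bits is mine; it still assumes IEEE 754 binary64):

    #include <cstdint>
    #include <cstring>

    // Hypothetical name; same test as the snippet above, but with
    // well-defined type punning and fixed-width types.
    bool isPow2Bits(double x) {
        static_assert(sizeof(double) == sizeof(std::uint64_t), "assumes 64-bit double");
        std::uint64_t p;
        std::memcpy(&p, &x, sizeof p);
        std::uint64_t exp = (p >> 52) & 0x7FF;   // biased exponent field
        if (exp == 0 || exp == 0x7FF)            // zero/subnormal, or inf/NaN
            return false;
        return (p & 0xFFFFFFFFFFFFFULL) == 0;    // all 52 mantissa bits are zero
    }

Like the original snippet, it ignores the sign bit (so it accepts negative powers of two) and rejects subnormals.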

+5
4 answers

You can use frexp as a portable way to split a double into an exponent and a mantissa, and then check that the mantissa is exactly 0.5.

Example:

    #include <math.h>
    #include <vector>

    #define BOOST_TEST_MODULE pow2
    #include <boost/test/included/unit_test.hpp>

    bool isPow2(double x)
    {
        int exponent = 0;
        auto mantissa1 = frexp(x, &exponent); // in [0.5, 1) for finite nonzero x
        return mantissa1 == 0.5;
    }

    BOOST_AUTO_TEST_CASE(powers_of_2)
    {
        std::vector<double> yes { 1, 2, 4, 8, 16, 0.5, 0.25, 65536 };
        std::vector<double> no  { 0, 3, 7, 15, 0.51, 0.24, 65535 };
        for (size_t i = 0; i < yes.size(); ++i) {
            BOOST_CHECK_EQUAL(isPow2(yes[i]), true);
        }
        for (size_t i = 0; i < no.size(); ++i) {
            BOOST_CHECK_EQUAL(isPow2(no[i]), false);
        }
    }
+14

The code here, named after Dekker, computes the exact error of an IEEE 754 binary multiplication (with some restrictions on the arguments).

Barring overflow or underflow, the multiplication of nextafter(1.0, 0.0) by x is exact if and only if x is zero or a power of two. Thus, you can use the algorithm to compute the exact error of multiplying nextafter(1.0, 0.0) by x and compare it with 0.0.

A literal implementation of this idea:

    #include <cmath>

    bool isPow2(double x) {
        double y = std::nextafter(1.0, 0.0);
        double xy = y * x;
        return Dekker(x, y, xy) == 0.0; // linked above
    }
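For reference, here is a minimal sketch of the Dekker error computation referred to above (the signature Dekker(x, y, xy) is inferred from the call; the constant and formulas are the classic Veltkamp split and Dekker two-product, and they assume round-to-nearest IEEE 754 binary64 with no FMA contraction of intermediate products):

    // Veltkamp split: decompose a into hi + lo, each representable
    // with at most 26 significant bits.
    static void split(double a, double &hi, double &lo) {
        const double C = 134217729.0; // 2^27 + 1
        double p = a * C;
        hi = p - (p - a);
        lo = a - hi;
    }

    // Dekker two-product: the exact error of the rounded product xy = fl(x * y).
    double Dekker(double x, double y, double xy) {
        double xh, xl, yh, yl;
        split(x, xh, xl);
        split(y, yh, yl);
        return ((xh * yh - xy) + xh * yl + xl * yh) + xl * yl;
    }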

This idea can be simplified by keeping only the first few operations of Dekker's splitting, and by changing the constant C so that the split falls between the leading bit of the significand (1 by definition, for a normal number) and the remaining 52 bits (which are all zero iff x is a power of two):

    bool isPow2(double x) {
        double px, qx, hx;
        double const C = 0x10000000000001; // 2^52 + 1
        px = x * C;
        qx = x - px;
        hx = px + qx;   // high part of the split: just the leading bit of x
        return hx == x; // equal iff the low 52 mantissa bits of x are zero
    }

This assumes your compiler provides IEEE 754 binary64 semantics for floating point operations.

If you really want to use this approach and need a robust version that also handles zeros, negative numbers, subnormals, and values large enough that x * C would overflow, you can wrap it like this:

    bool robustIsPow2(double x) {
        if (x <= 0.0) return false;
        if (x <= 1.0) return isPow2(x * (double)0x4000000000000000); // scale up by 2^62
        return isPow2(x / (double)0x4000000000000000);               // scale down by 2^62
    }
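As a quick sanity check (not part of the original answer), a few assertions exercising the wrapper's edge cases, assuming IEEE 754 binary64:

    #include <cassert>
    #include <cmath>

    int main() {
        assert(robustIsPow2(2.0));
        assert(robustIsPow2(0.5));
        assert(robustIsPow2(std::ldexp(1.0, -1074))); // smallest positive subnormal
        assert(robustIsPow2(std::ldexp(1.0, 1023)));  // largest normal power of two
        assert(!robustIsPow2(5.0));
        assert(!robustIsPow2(0.2));
        assert(!robustIsPow2(0.0));
        assert(!robustIsPow2(-2.0));
    }

The scaling by 2^62 is exact (it only shifts the exponent), which is why the wrapper can normalize subnormals and pull huge values back into range without disturbing the test.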
+3

According to the Wikipedia page on floating point, double precision usually matches what you are assuming: 53 bits for the mantissa.

But you can also find the 80-bit extended format (64 bits of mantissa), 32-bit single precision (24 bits of mantissa), quadruple precision (112 bits of mantissa), and possibly other formats on less common architectures.

So, as Alan Stokes has already said, the only reliable and portable way is to use the frexp function.
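If you depend on the usual binary64 layout anyway, the assumption can at least be pinned down at compile time; a small sketch using <limits>:

    #include <limits>

    static_assert(std::numeric_limits<double>::is_iec559,
                  "double is not IEEE 754");
    static_assert(std::numeric_limits<double>::digits == 53,
                  "double does not have a 53-bit mantissa");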

+1

Your function returns true for -1, so it does not work. That is easy enough to fix, but your approach has other, fundamental problems.

In any rounding mode, 1./a evaluates to either infinity or the largest normal number when a is the smallest positive subnormal (which is a power of two, 2^-1074). The subsequent product is then infinity or something much smaller than 1, so the function wrongly returns false.

In round-to-nearest mode, it also returns true for the double just above 1 (and for many, many other doubles).

In round-upward mode, however, it works for positive normal floating-point numbers: if x is not a power of two, then 1/x cannot be represented exactly, so flup(1/x), the floating-point number to which 1/x rounds in round-upward mode, is strictly greater than 1/x. Then x * flup(1/x) is strictly greater than 1, so flup(x * flup(1/x)) is strictly greater than 1. For a positive normal power of two, on the other hand, both operations are exact. The same reasoning applies to the round-downward and round-toward-zero modes.
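You can observe both behaviors concretely by switching the rounding mode at run time with <cfenv>; a sketch (the volatile guards against compile-time constant folding, and it assumes the toolchain actually honors the dynamic rounding mode, i.e. no fast-math):

    #include <cfenv>
    #include <cmath>
    #include <cstdio>

    bool isPot(double a) { return a == 0. ? false : (1. / a) * a == 1.; }

    int main() {
        volatile double x = std::nextafter(1.0, 2.0); // the double just above 1
        std::fesetround(FE_TONEAREST);
        std::printf("to-nearest: %d\n", isPot(x));  // prints 1: false positive
        std::fesetround(FE_UPWARD);
        std::printf("upward:     %d\n", isPot(x));  // prints 0: correctly rejected
        std::fesetround(FE_TONEAREST);              // restore the default mode
    }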

The following code works in round-upward, round-downward, and round-toward-zero modes for IEEE 754 doubles:

    int p2(double d) {
        return d > 0 && 0x1.0p-51 / d * d == 0x1.0p-51;
    }

The constant 0x1.0p-51 was chosen so that the computation 0x1.0p-51 / d * d neither overflows for the smallest subnormal nor underflows for the largest normal power of two.

Using fma, one can compute exactly d times the rounding error of the division 0x1.0p-51 / d, which is on the order of 0x1.0p-104 and is zero iff d is a power of two. The following code works in all four rounding modes:

    #include <math.h>

    int p2(double d) {
        return d > 0 && fma(0x1.0p-51 / d, d, -0x1.0p-51) == 0;
    }

With all this in mind, you should use Alan Stokes' solution; frexp is designed for exactly this sort of thing, and it behaves sensibly in all the strange cases.

+1
source

Source: https://habr.com/ru/post/1209542/

