I came across this unusual behavior while working on some bitwise exercises. When the output of pow() was cast to unsigned int, the result of pow() called with a variable as the exponent became zero, while the result with a literal integer exponent was 0xFFFFFFFF (2^32 - 1). This only happens when the value is too large to fit, in this case 2^32. The type of the variable used as the exponent argument does not seem to affect the result. I also tried storing the output of both pow() calls in doubles first and then applying the cast when referencing the variables; the discrepancy persists.
#include <stdio.h>
#include <math.h>
int main (void) {
int thirtytwo = 32;
printf("Raw Doubles Equal: %s\n", pow(2, 32) == pow(2, thirtytwo) ? "true" : "false");
printf("Coerced to Unsigned Equal: %s\n", (unsigned) pow(2, 32) == (unsigned) pow(2, thirtytwo) ? "true": "false");
return 0;
}
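
For reference, this is roughly the other variant I mentioned above, where both results are stored in doubles before the cast (the intermediate variable names here are mine, just for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    int thirtytwo = 32;
    double from_literal = pow(2, 32);         /* exponent is a literal  */
    double from_variable = pow(2, thirtytwo); /* exponent is a variable */
    /* The raw doubles compare equal... */
    printf("Raw Doubles Equal: %s\n", from_literal == from_variable ? "true" : "false");
    /* ...but casting each stored double to unsigned still diverges under gcc. */
    printf("Coerced to Unsigned Equal: %s\n",
           (unsigned) from_literal == (unsigned) from_variable ? "true" : "false");
    return 0;
}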
Out of curiosity, I ran the same code through clang/llvm and got a different result: regardless of whether the exponent was a literal or a variable, casting the result to unsigned int gave zero (as expected).
Edit: The maximum 32-bit unsigned integer is 2^32 - 1, so neither cast output is actually correct; my error was overflowing the integer size limit. Why gcc essentially rounds to the maximum integer value is an interesting curiosity, but it doesn't really matter.
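
For anyone hitting the same thing: converting a double whose value cannot be represented in the target integer type is undefined behavior in C (C11 6.3.1.4), which is why gcc and clang are free to produce different results here. A guarded conversion sidesteps the issue entirely; this is just one possible sketch, and the helper name is mine:

#include <stdio.h>
#include <math.h>
#include <limits.h>

/* Convert a double to unsigned int only when it is in range;
 * saturate at the bounds so the behavior is always defined. */
static unsigned to_unsigned_saturating(double d) {
    if (d <= 0.0)
        return 0;
    if (d >= (double) UINT_MAX + 1.0) /* i.e. >= 2^32 for a 32-bit unsigned */
        return UINT_MAX;
    return (unsigned) d; /* in range, so the cast is well-defined */
}

int main(void) {
    int thirtytwo = 32;
    printf("%u\n", to_unsigned_saturating(pow(2, 32)));        /* 4294967295 */
    printf("%u\n", to_unsigned_saturating(pow(2, thirtytwo))); /* 4294967295 */
    return 0;
}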