Unsigned overflow with modulo in C

I ran into an error in some C code I wrote, and although it was relatively easy to fix, I want to better understand the problem underlying it. Essentially, what happened was that I had two unsigned integers (actually uint32_t) which, when I applied the modulo operation, gave the unsigned equivalent of a negative number, i.e. a number that had wrapped around and was therefore "large". Here is an example program for demonstration:

```c
#include <stdio.h>
#include <stdint.h>

int main(int argc, char* argv[])
{
    uint32_t foo = -1;
    uint32_t u = 2048;
    uint64_t ul = 2048;

    fprintf(stderr, "%d\n", foo);
    fprintf(stderr, "%u\n", foo);
    fprintf(stderr, "%lu\n", ((foo * 2600000000) % u));
    fprintf(stderr, "%ld\n", ((foo * 2600000000) % u));
    fprintf(stderr, "%lu\n", ((foo * 2600000000) % ul));
    fprintf(stderr, "%lu\n", foo % ul);

    return 0;
}
```

This outputs the following result on my x86_64 machine:

```
-1
4294967295
18446744073709551104
-512
1536
2047
```

1536 is the number I was expecting, but (uint32_t)(-512) is the number I was getting, which, as you might imagine, threw things off.

So my question is this: why does the modulo operation between two apparently unsigned numbers in this case produce a number greater than the divisor (i.e. a negative number)? Is there a reason this behavior is preferred?

2 answers

I think the reason is that the compiler interprets the literal 2600000000 as a signed 64-bit number (long), since it does not fit into a signed 32-bit int. The uint32_t operand is then converted to long as well, so the multiplication and the modulo are carried out in signed 64-bit arithmetic. If you replace the number with 2600000000U, you should get the expected result.


I haven't verified this, but I'm fairly sure that when you do that multiplication, the operands are promoted to int64_t, because the usual arithmetic conversions force both operands to a common signed integral type. Try 2600000000u instead of 2600000000.


Source: https://habr.com/ru/post/958079/
