I ran into an error in some C code that I wrote, and although it was relatively easy to fix, I want to better understand the problem underlying it. Essentially, I had two unsigned integers (specifically uint32_t) which, when I applied the modulo operation, gave the unsigned equivalent of a negative number, i.e. a number that had wrapped around and thus came out "large". Here is an example program for demonstration:
    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char* argv[])
    {
        uint32_t foo = -1;
        uint32_t u = 2048;
        uint64_t ul = 2048;

        fprintf(stderr, "%d\n", foo);
        fprintf(stderr, "%u\n", foo);
        fprintf(stderr, "%lu\n", ((foo * 2600000000) % u));
        fprintf(stderr, "%ld\n", ((foo * 2600000000) % u));
        fprintf(stderr, "%lu\n", ((foo * 2600000000) % ul));
        fprintf(stderr, "%lu\n", foo % ul);

        return 0;
    }
This outputs the following on my x86_64 machine:
    -1
    4294967295
    18446744073709551104
    -512
    1536
    2047
1536 is the number I was expecting, but what I got instead corresponds to -512 (printed via %lu as 18446744073709551104, which is 2^64 - 512), and that threw me off a bit.
Therefore, my question is this: why does the modulo operation between two unsigned numbers in this case produce a number greater than the divisor (i.e. a negative number)? Is there a reason this behavior is preferable?