Generating a random integer by scaling a double in the range [0..1) works well, provided the generated double is evenly distributed. However, most pseudo-random number generators, including the Mersenne Twister, natively produce 32-bit unsigned integers, so it would be nice to avoid the round trip through a double.
If N is a power of 2, we can generate a 32-bit random integer X and take X mod N; this is guaranteed to give a uniform result. But if N is not a power of 2, X mod N is biased: there are more 32-bit unsigned integers for which X mod 7 is 0 than for which it is 6, for example. If N is small the bias is tiny, and detecting it empirically would require a huge number of generated samples, but the distribution is still theoretically wrong.
To generate integers with a truly uniform distribution in the range [0..N) for N that is not a power of two, we can resort to rejection sampling: first, compute M, the smallest power of 2 greater than or equal to N (for N = 13, M = 16). Then generate a 32-bit integer X and compute Y = X mod M (a cheap bit mask, since M is a power of 2). If Y < N, we are done: Y is our number. If Y >= N, discard it and generate another until Y < N. If the underlying PRNG is good, the expected number of generated values never exceeds 2 (the exact figure depends on N), and although the loop is not guaranteed to terminate, it does so with overwhelming probability. In a practical implementation you should put a retry limit in the loop and fall back to another method, to protect against degenerate generators.
Which method is fastest? Only profiling will tell; it depends on the hardware, the language, and the quality of the code.
A more detailed discussion can be found in Knuth's TAOCP, Volume 2.