Simple. A double has 52 bits of mantissa, assuming IEEE 754. So create a 52-bit (or larger) unsigned random integer (for example, by reading bytes from /dev/urandom), convert it to a double, and divide it by 2^(the number of bits it was).
This gives a uniform distribution in the numerical sense (in which the probability that a value falls in a given range is proportional to the size of the range), up to the 52nd binary digit.
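A minimal C sketch of this simple method, assuming a POSIX system where /dev/urandom is available (the function name and error handling are my own, not part of the answer):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the "simple" method: take 52 random bits and divide by 2^52.
 * Assumes /dev/urandom exists; returns -1.0 if it cannot be read. */
double simple_random_double(void)
{
    uint64_t bits = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(&bits, sizeof bits, 1, f) != 1) {
        if (f) fclose(f);
        return -1.0;
    }
    fclose(f);
    bits >>= 12;  /* keep 52 of the 64 random bits */
    return (double)bits / (double)((uint64_t)1 << 52);  /* multiples of 2^-52 in [0, 1) */
}
```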
Complicated. However, there are many double values in the range [0,1] that cannot be produced by the above. In particular, half of the values in [0, 0.5) (those with their least significant bit set) cannot occur. Three quarters of the values in [0, 0.25) (those with either of their two least significant bits set) cannot occur, and so on. Only one positive value less than 2^-51 is possible, even though a double can represent a vast number of such values. So it cannot be said to be truly uniform over the specified range to full precision.
Of course, we do not want to choose one of those missing doubles with equal probability, because then the resulting number would on average be too small. We still need the probability of the result falling in a given range to be proportional to the size of the range, but at higher precision in those ranges where that is possible.
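To make the gap concrete, here is a small illustration (my own example, not from the answer): just above 0.125, consecutive doubles are spaced 2^-55 apart, but the simple method only produces multiples of 2^-52, so it skips seven out of every eight representable values there.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The representable double immediately above 0.125 is 2^-55 away,
     * but the simple method can only step in units of 2^-52. */
    double next = nextafter(0.125, 1.0);
    printf("spacing of doubles just above 0.125: %g\n", next - 0.125);    /* 2^-55 */
    printf("step of the simple method:           %g\n", ldexp(1.0, -52)); /* 2^-52 */
    return 0;
}
```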
I think the following works. I have not particularly studied or tested this algorithm (as you can probably tell from the absence of code), and personally I would not use it without finding proper references indicating that it is correct. But here goes (a rough sketch in C follows the list):
- Start the exponent at 52 and pick a 52-bit unsigned random integer (assuming 52 bits of mantissa).
- If the most significant bit of the integer is 0, increase the exponent by one, shift the integer left by one, and fill in the least significant bit with a new random bit.
- Repeat until either you hit a 1 in the most significant position, or the exponent gets too large for your double (1023, or possibly 1022).
- If you hit a 1, divide your value by 2^exponent. If you end up with all zeros, return 0 (I know this is not really a special case, but it emphasises how unlikely a return of 0 is [edit: it may actually be a special case, depending on whether you want to generate denormals. If not, then once you have enough 0s in a row, you drop whatever remains and return 0. In practice this is so improbable as to be negligible, unless the random source is not random]).
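As mentioned above, here is a rough, untested C sketch of that procedure, under assumptions of my own: random_bit() stands in for any unbiased source of random bits (stubbed with rand() purely so the sketch is self-contained), and the exact exponent cutoff is left as the open question the answer raises about denormals.

```c
#include <math.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for an unbiased random bit source (e.g. bits read from
 * /dev/urandom). rand() is used here only to keep the sketch runnable. */
static int random_bit(void)
{
    return rand() & 1;
}

static uint64_t random_52_bits(void)
{
    uint64_t v = 0;
    for (int i = 0; i < 52; i++)
        v = (v << 1) | (uint64_t)random_bit();
    return v;
}

/* Untested sketch of the algorithm described above: keep pulling in fresh
 * random bits until the top mantissa bit is 1, then scale by 2^-exponent.
 * The cutoff of 1074 is a generous guess that permits denormals; where
 * exactly to bail out depends on whether you want them (the answer
 * suggests 1023, or possibly 1022, in its own framing). */
double full_precision_random_double(void)
{
    int exponent = 52;
    uint64_t bits = random_52_bits();
    const uint64_t msb = (uint64_t)1 << 51;  /* most significant of the 52 bits */

    while ((bits & msb) == 0) {
        if (exponent >= 1074)
            return 0.0;  /* too many leading zeros: drop the rest and return 0 */
        exponent++;
        bits = (bits << 1) | (uint64_t)random_bit();
    }
    return ldexp((double)bits, -exponent);  /* i.e. bits / 2^exponent */
}
```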
I do not know whether there is actually a practical use for such a random double, mind you. Your definition of randomness should depend to some extent on what it is for. But if you can make use of all 52 of its significant bits being random, this might be useful.