The two methods can give different results, but you will only notice the difference in fairly extreme situations (with very wide ranges). For example, if you generate random numbers between 0 and 2/sys.float_info.epsilon (9007199254740992.0, i.e. 2**53, just over 9 quadrillion), the version using multiplication will never give you floats with fractional parts. If you increase the upper bound to 4/sys.float_info.epsilon, you will not get any odd integers, only even ones. This is because Python's 64-bit floating-point type does not have enough precision to represent every value at the top of these ranges, and the multiplication preserves a uniform distribution: it therefore never produces fractional values or odd integers anywhere, even though those can be represented in the lower parts of the range.
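A quick sketch of both effects (assuming CPython, whose random() returns multiples of 2**-53, so multiplying by an exact power of two introduces no extra rounding):

```python
import random
import sys

# 2 / epsilon == 2**53; random() returns k / 2**53 for some integer k,
# so random() * 2**53 is exactly k -- an integer-valued float, never a fraction.
high = 2 / sys.float_info.epsilon          # 9007199254740992.0 == 2.0**53
samples = [random.random() * high for _ in range(10_000)]
all_integral = all(x == int(x) for x in samples)

# 4 / epsilon == 2**54; here the product is exactly 2*k, so it is always
# an even integer -- odd values never appear, even small ones.
high2 = 4 / sys.float_info.epsilon         # 2.0**54
samples2 = [random.random() * high2 for _ in range(10_000)]
all_even = all(x % 2 == 0 for x in samples2)
```

Both flags come out True on CPython; the granularity of random() combined with the float spacing at the top of the range is what eliminates the fractions and the odd integers.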
The second version of the calculation gives extra precision for the smaller random numbers it generates. For example, if you generate numbers between 0 and 2/sys.float_info.epsilon and the randrange call returns 0, the full precision of the random() call is used to supply the fractional part of the result. On the other hand, if randrange returns the largest number in the range (2/sys.float_info.epsilon - 1), almost none of that fractional precision survives: the result is rounded to the nearest integer and the fraction is lost.
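A small demonstration of the two extremes (a sketch, assuming the second method is `randrange(high) + random()`):

```python
import random
import sys

high = int(2 / sys.float_info.epsilon)     # 2**53

frac = random.random()

# If randrange() had returned 0, the full 53-bit fraction survives intact:
low_result = 0 + frac
keeps_fraction = (low_result == frac)

# If randrange() had returned high - 1 == 2**53 - 1, the only representable
# floats nearby are 2**53 - 1 and 2**53, so the fraction is rounded away:
high_result = (high - 1) + frac
rounded = high_result in (float(high - 1), float(high))
```

Near the bottom of the range the fraction is kept exactly; near the top the sum collapses to one of two adjacent integers.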
Adding a fractional value also will not help with ranges so large that not every integer can be represented. If randrange effectively yields only even numbers, adding a fraction will usually not produce odd ones (it may in some parts of the range but not in others, and the distribution can be very uneven). Even for ranges where all integers can be represented, the probability of any particular floating-point number will not be perfectly uniform, since smaller numbers can be represented more precisely. Larger, less precisely representable numbers will therefore be more common than smaller, more precisely representable ones.
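To see why the fix works in some parts of a huge range but not others, compare an odd integer just above 2**52 with one just above 2**53:

```python
# Below 2**53 every integer is exactly representable as a float, so an odd
# randrange result survives conversion (even though any added fraction is
# rounded away at this magnitude):
odd_below = float(2**52 + 1)        # exact; still odd

# Above 2**53 the float spacing is 2, so the odd integer itself cannot be
# represented: it rounds to the neighbouring even value before any fraction
# is even added.
odd_above = float(2**53 + 1)        # rounds to 2**53, an even value
```

So the odd values reappear in the lower half of the range and vanish in the upper half, which is exactly the kind of uneven distribution described above.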