I realized that the problem itself is not that simple, but I found a simple algorithm that I think (though I have not yet proven it) generates a uniform distribution over all possible values. Since the algorithm is quite simple, it has probably been studied somewhere, but I have not tried to find an analysis of it. The algorithm is as follows:
- Iterate from 1 to n.
- At each step, compute the actual lower and upper bounds for the current value, assuming all remaining values take the maximum value (for the lower bound) or the minimum value (for the upper bound).
- Generate a uniformly distributed number between those bounds (if no number satisfies them, the constraints are unsatisfiable).
- Append the generated number to the array.
- At the end, shuffle the resulting array.
code:
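A minimal sketch of the steps above in Python (assuming, from the sample outputs below, that the target sum is 1 and the script takes `n alpha beta` on the command line):

```python
import random
import sys

def random_constrained(n, alpha, beta, total=1.0):
    """Generate n numbers, each in [alpha, beta], summing to `total`."""
    values = []
    s = 0.0  # running sum of the values generated so far
    for i in range(n):
        remaining = n - i - 1
        # Tightest bounds such that the remaining slots can still
        # bring the sum up to `total` (all at beta) or down (all at alpha).
        low = max(alpha, total - s - remaining * beta)
        high = min(beta, total - s - remaining * alpha)
        if low > high:
            raise ValueError("constraints are unsatisfiable")
        x = random.uniform(low, high)
        values.append(x)
        s += x
    random.shuffle(values)  # remove any positional bias
    return values

if __name__ == "__main__" and len(sys.argv) == 4:
    n, alpha, beta = int(sys.argv[1]), float(sys.argv[2]), float(sys.argv[3])
    print(random_constrained(n, alpha, beta))
```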
Sample outputs:
$ python random_constrained.py 4 0 0.5
[0.06852504971359885, 0.39391285249108765, 0.24215492185626314, 0.2954071759390503]
$ python random_constrained.py 4 0 0.5
[0.2519926400714304, 0.4138640296394964, 0.27906367876610466, 0.055079651522968565]
$ python random_constrained.py 4 0 0.5
[0.11505150404455633, 0.16665881845206237, 0.45371668123772924, 0.264572996265652]
$ python random_constrained.py 4 0 0.5
[0.31689744182294444, 0.11233051635974067, 0.3599600067081529, 0.21081203510916197]
$ python random_constrained.py 4 0 0.5
[0.16158825078700828, 0.18989326608974527, 0.1782112102703714, 0.470307272852875]
$ python random_constrained.py 5 0 0.2
[0.19999999999999998, 0.2, 0.19999999999999996, 0.19999999999999996, 0.20000000000000004]
$ python random_constrained.py 5 0 0.2
[0.2, 0.2, 0.19999999999999998, 0.2, 0.19999999999999996]
$ python random_constrained.py 5 0 0.2
[0.20000000000000004, 0.19999999999999998, 0.19999999999999996, 0.19999999999999998, 0.2]
$ python random_constrained.py 5 0 0.2
[0.2, 0.20000000000000004, 0.19999999999999996, 0.19999999999999996, 0.19999999999999996]
$ python random_constrained.py 2 0.4 1
[0.5254259945319483, 0.47457400546805173]
$ python random_constrained.py 2 0.4 1
[0.5071103628251259, 0.4928896371748741]
$ python random_constrained.py 2 0.4 1
[0.4595236988530377, 0.5404763011469623]
$ python random_constrained.py 2 0.4 1
[0.44218002983240046, 0.5578199701675995]
$ python random_constrained.py 2 0.4 1
[0.4330169754142243, 0.5669830245857757]
$ python random_constrained.py 2 0.4 1
[0.543183373724851, 0.45681662627514896]
I'm still not sure how to deal with floating-point accuracy errors.
I ran some tests to verify uniformity for the case n=2, generating 100,000 arrays (with alpha=0.4, beta=0.6), sorting the values into 10 equal-width buckets spanning alpha to beta, and counting the occurrences in each:
First number: [9998, 9966, 9938, 9952, 10038, 10161, 9899, 10007, 10054, 9987]
Second number: [9987, 10054, 10007, 9899, 10161, 10038, 9952, 9938, 9966, 9998]
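A bucket count like the one above can be reproduced along these lines (a self-contained sketch: `gen` is a compact re-implementation of the algorithm described earlier, and the exact counts will of course differ from run to run):

```python
import random

def gen(n, alpha, beta, total=1.0):
    # Compact version of the constrained-generation algorithm
    out, s = [], 0.0
    for i in range(n):
        r = n - i - 1
        low = max(alpha, total - s - r * beta)
        high = min(beta, total - s - r * alpha)
        x = random.uniform(low, high)
        out.append(x)
        s += x
    random.shuffle(out)
    return out

# Sort the first and second numbers of 100,000 arrays into 10
# equal-width buckets spanning [alpha, beta] and count occurrences.
alpha, beta = 0.4, 0.6
counts = [[0] * 10, [0] * 10]
for _ in range(100_000):
    for pos, v in enumerate(gen(2, alpha, beta)):
        bucket = min(int((v - alpha) / (beta - alpha) * 10), 9)
        counts[pos][bucket] += 1

print(counts[0])  # first-number bucket counts
print(counts[1])  # second-number bucket counts
```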
For n=4, alpha=0, beta=0.3, 10,000 attempts:
[0, 0, 0, 304, 430, 569, 730, 1135, 1874, 4958]
[0, 0, 0, 285, 492, 576, 805, 1113, 1775, 4954]
[0, 0, 0, 248, 465, 578, 769, 1077, 1839, 5024]
[0, 0, 0, 252, 474, 564, 800, 1100, 1808, 5002]
We can see that each position has more or less the same distribution, so there is no bias toward any position.