Listed in this post is a vectorized approach that gives us a number of such random selections for a number of iterations in a single pass, without looping over those iterations. The idea uses np.argpartition and is inspired by this post .
Here's the implementation -
import numpy as np

def get_items(coo, num_items=2, num_iter=10):
    # One random float per (iteration, element); argpartition brings the
    # indices of the num_items smallest floats in each row to the front
    idx = np.random.rand(num_iter, len(coo)).argpartition(num_items, axis=1)[:, :num_items]
    return np.asarray(coo)[idx]
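To see why each pick is unbiased, consider a single iteration. Here's a minimal sketch of the argpartition trick (the names r and pick are illustrative, not part of the implementation) -

# Draw one i.i.d. uniform float per element of coo. argpartition(2) moves
# the indices of the 2 smallest floats to the front, and since the floats
# are i.i.d., every 2-element subset of indices is equally likely -
# i.e. a random sample without replacement.
r = np.random.rand(7)          # 7 == len(coo)
pick = r.argpartition(2)[:2]   # two distinct random indices into coo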
Note that we are returning a 3D array whose first dimension is the number of iterations, the second dimension is the number of selections made at each iteration, and the last dimension is the length of each tuple.
A sample run should make things clearer -
In [55]: coo = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0)]

In [56]: get_items(coo, 2, 5)
Out[56]:
array([[[2, 0],
        [1, 1]],

       [[0, 0],
        [1, 1]],

       [[0, 2],
        [2, 0]],

       [[1, 1],
        [1, 0]],

       [[0, 2],
        [1, 1]]])
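Each block along the first axis holds the selections for one iteration. If plain tuples are needed back (to match random.sample's output format), a small conversion does it; a sketch, with names of my own choosing -

out = get_items(coo, 2, 5)                  # shape (5, 2, 2)
pairs = list(map(tuple, out[0].tolist()))   # iteration 0 as tuples; values vary per run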
A runtime test comparing against a loopy implementation with random.sample , as listed in @freakish's post -
In [52]: coo = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0)]

In [53]: %timeit [random.sample(coo, 2) for i in range(10000)]
10 loops, best of 3: 34.4 ms per loop

In [54]: %timeit get_items(coo, 2, 10000)
100 loops, best of 3: 2.81 ms per loop
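To recheck these numbers outside IPython (timings vary by machine and NumPy version), here is a small timeit sketch; it assumes get_items from above is already defined -

import random
import timeit

coo = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0)]

# Both variants produce 10000 pairs per call; number=10 mirrors the runs above
t_loop = timeit.timeit(lambda: [random.sample(coo, 2) for _ in range(10000)], number=10)
t_vec = timeit.timeit(lambda: get_items(coo, 2, 10000), number=10)
print('loopy: %.4f s, vectorized: %.4f s' % (t_loop, t_vec))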