How to get `random.sample()` from a deque in Python 3?

I have a `collections.deque` of tuples from which I want to draw random samples. In Python 2.7 I can use `batch = random.sample(my_deque, batch_size)`.

But in Python 3.4 this raises `TypeError: Population must be a sequence or set. For dicts, use list(d)`.

What is the best workaround, or the recommended way to sample efficiently from a deque in Python 3?

2 answers

The obvious way is to convert the deque to a list:

batch = random.sample(list(my_deque), batch_size)

But you can avoid creating an intermediate list by sampling indices instead:

idx_batch = set(random.sample(range(len(my_deque)), batch_size))
batch = [val for i, val in enumerate(my_deque) if i in idx_batch]
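As a runnable sketch of this index-set approach (the names `my_deque` and `batch_size` are illustrative, not from the question):

```python
import random
from collections import deque

# A deque of tuples, as in the question.
my_deque = deque((i, i * 10) for i in range(100))
batch_size = 5

# Sample distinct positions first, then walk the deque once,
# keeping only the elements whose index was drawn.
idx_batch = set(random.sample(range(len(my_deque)), batch_size))
batch = [val for i, val in enumerate(my_deque) if i in idx_batch]

print(len(batch))  # 5
```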

P.S. (edit)

Actually, `random.sample` should work fine with deques in Python >= 3.5, because the class has been updated to conform to the `Sequence` interface.

In [3]: deq = collections.deque(range(100))

In [4]: random.sample(deq, 10)
Out[4]: [12, 64, 84, 77, 99, 69, 1, 93, 82, 35]

Be warned, though: indexing into a deque is O(n), so drawing m random elements this way costs O(m * n).
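The `Sequence` registration can be verified directly (this check assumes Python >= 3.5, per the answer above):

```python
import collections
import collections.abc
import random

deq = collections.deque(range(100))

# Since Python 3.5, deque is registered as a Sequence,
# which is why random.sample accepts it directly.
print(isinstance(deq, collections.abc.Sequence))  # True

batch = random.sample(deq, 10)
print(len(batch))  # 10
```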


Although `sample()` works with a deque in Python >= 3.5, it is quite slow on large deques.

In Python 3.4 you can use this instead, which also happens to be faster:

sample_indices = sample(range(len(deq)), 50)
[deq[index] for index in sample_indices]

On my MacBook with Python 3.6.8, this variant is about 44 times faster than Eli Korvigo's enumerate-based solution. :)

I created a deque holding 1 million items and timed several ways of sampling 50 of them:

from random import sample
from collections import deque

deq = deque(maxlen=1000000)
for i in range(1000000):
    deq.append(i)

sample_indices = set(sample(range(len(deq)), 50))

%timeit [deq[i] for i in sample_indices]
1.68 ms ± 23.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit sample(deq, 50)
1.94 ms ± 60.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit sample(range(len(deq)), 50)
44.9 µs ± 549 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit [val for index, val in enumerate(deq) if index in sample_indices]
75.1 ms ± 410 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That said, I found a few other cases where a deque performed poorly, so if you need speed you may want to avoid deques entirely. Here is a simple list-based ring buffer that works well as a replay memory:

from random import sample

class ReplayMemory:
    def __init__(self, max_size):
        self.buffer = [None] * max_size
        self.max_size = max_size
        self.index = 0
        self.size = 0

    def append(self, obj):
        self.buffer[self.index] = obj
        self.size = min(self.size + 1, self.max_size)
        self.index = (self.index + 1) % self.max_size

    def sample(self, batch_size):
        indices = sample(range(self.size), batch_size)
        return [self.buffer[index] for index in indices]
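For completeness, here is a usage sketch of the ring buffer (the class is repeated so the snippet runs standalone; the sizes are illustrative):

```python
from random import sample

class ReplayMemory:
    """Fixed-size ring buffer, repeated from the answer above."""
    def __init__(self, max_size):
        self.buffer = [None] * max_size
        self.max_size = max_size
        self.index = 0
        self.size = 0

    def append(self, obj):
        self.buffer[self.index] = obj
        self.size = min(self.size + 1, self.max_size)
        self.index = (self.index + 1) % self.max_size

    def sample(self, batch_size):
        indices = sample(range(self.size), batch_size)
        return [self.buffer[index] for index in indices]

# Appending more items than max_size exercises the wrap-around:
# after 1500 appends, the buffer holds the most recent 1000 values.
mem = ReplayMemory(max_size=1000)
for i in range(1500):
    mem.append(i)

print(mem.size)    # 1000 -- the buffer is full
batch = mem.sample(50)
print(len(batch))  # 50
```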

Sampling 50 items from a buffer of 1 million entries:

%timeit mem.sample(50)
#58 µs ± 691 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Source: https://habr.com/ru/post/1658475/

