How do I build “vectorized” building blocks using the itertools module?

The itertools docs recipe section begins with this text:

The extended tools offer the same high performance as the underlying toolset. The superior memory performance is kept by processing elements one at a time rather than bringing the whole iterable into memory all at once. Code volume is kept small by linking the tools together in a functional style which helps eliminate temporary variables. High speed is retained by preferring "vectorized" building blocks over the use of for-loops and generators which incur interpreter overhead.

The question is: how should generators be built to avoid this overhead? Could you give some examples of the slow building blocks that incur it?

I decided to ask after answering this question, where I could not say for certain whether chain(sequence, [obj]) has more overhead than chain(sequence, repeat(obj, 1)) and whether I should prefer the latter.

1 answer

The quoted text is not about how to write generators so that they avoid overhead. It explains that correctly written itertools-based code, such as the code in the recipes, generally avoids explicit for-loops and generators altogether, leaving the iteration to the itertools themselves or to built-in consumers (e.g. list).

Take for example the tabulate example:

    def tabulate(function, start=0):
        "Return function(0), function(1), ..."
        return imap(function, count(start))

A non-vectorized way to write this:

    def tabulate(function, start=0):
        i = start
        while True:
            yield function(i)
            i += 1

This version incurs interpreter overhead because the loop and the function call dispatch are executed as Python bytecode, whereas in the vectorized version the iteration happens inside count and imap at C speed.
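As a rough illustration (a sketch; absolute timings vary by machine and interpreter), the two approaches can be compared with timeit. Python 3's built-in map plays the role of Python 2's imap here, and the helper names (tabulate_vec, tabulate_gen, drain) are mine:

```python
import timeit
from collections import deque
from itertools import count, islice

def tabulate_vec(function, start=0):
    "Vectorized: the iteration happens inside map and count, at C speed."
    return map(function, count(start))  # Python 3 map is lazy, like Py2 imap

def tabulate_gen(function, start=0):
    "Explicit generator: the loop and the call run as Python bytecode."
    i = start
    while True:
        yield function(i)
        i += 1

def drain(it, n=100_000):
    "Consume n items with a C-level consumer (deque with maxlen=0)."
    deque(islice(it, n), maxlen=0)

t_vec = timeit.timeit(lambda: drain(tabulate_vec(abs)), number=20)
t_gen = timeit.timeit(lambda: drain(tabulate_gen(abs)), number=20)
print(f"vectorized: {t_vec:.3f}s  generator: {t_gen:.3f}s")
```

On a typical CPython build the generator version comes out measurably slower, which is exactly the interpreter overhead the docs mention.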

Regarding chaining a single element, we can reasonably guess that chain(sequence, [obj]) will be (marginally) faster than chain(sequence, repeat(obj, 1)), because building a small fixed-length list is well optimized in Python with dedicated syntax and opcodes. In the same vein, chain(sequence, (obj,)) might be faster yet, because tuples are more lightweight than lists and cheaper to construct. As always with micro-optimizations, it is much better to measure with python -m timeit than to guess.
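A minimal benchmark sketch for the three single-element variants (results are equivalent; the timing differences are tiny, which supports the point that this choice hardly matters):

```python
import timeit
from itertools import chain, repeat

seq = list(range(1000))

# Three equivalent ways to append a single element to an iterable.
variants = {
    "[obj]":          lambda: list(chain(seq, [None])),
    "(obj,)":         lambda: list(chain(seq, (None,))),
    "repeat(obj, 1)": lambda: list(chain(seq, repeat(None, 1))),
}

results = {}
for name, fn in variants.items():
    # Best of 3 runs of 10,000 calls each.
    results[name] = min(timeit.repeat(fn, number=10_000, repeat=3))
    print(f"chain(seq, {name:<14s}) {results[name]:.4f}s")
```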

The documentation quote does not refer to differences in how an iterator is created, e.g. choosing between repeat(foo, 1) and [foo]. Since such an iterator only ever produces one element, it barely matters how it is consumed. The docs are talking about CPU and memory performance when working with iterators that can produce millions of elements. Compared to that, picking the faster way to create an iterator is trivial, since the creation strategy can be changed at any time. On the other hand, once code is structured around explicit, non-vectorized loops, it can be very hard to change later without a complete rewrite.
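To make the "leave the iteration to built-in consumers" idea concrete, here is a small sketch. consume() is the well-known recipe from the itertools documentation; the final sum example is my own illustration:

```python
from collections import deque
from itertools import islice

def consume(iterator, n=None):
    "Advance the iterator n steps ahead; if n is None, consume it entirely."
    if n is None:
        # deque with maxlen=0 drains the iterator at C speed.
        deque(iterator, maxlen=0)
    else:
        # islice advances the underlying iterator to position n.
        next(islice(iterator, n, n), None)

# Aggregation without a Python-level loop body: sum() and map() both
# iterate in C, so no bytecode runs per element.
total = sum(map(int, ["1", "2", "3"]))
print(total)  # 6
```

In both cases the per-element work happens inside C-implemented callables, which is what the docs mean by "vectorized" building blocks.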


Source: https://habr.com/ru/post/1441106/
