About memory efficiency: range vs xrange, zip vs izip

I was reading the following question: Make a dictionary from a list using Python

The original problem: transform the tuple (1, 'a', 2, 'b', 3, 'c') into the dictionary {1: 'a', 2: 'b', 3: 'c'}. Many interesting solutions were given, including the following two:

Solution 1:

 dict(x[i:i+2] for i in range(0, len(x), 2)) 

Solution 2:

 dict(zip(*[iter(val_)] * 2)) 

In solution 1, why create an actual list with range? Wouldn't xrange(0, len(x), 2) be more memory efficient? The same question applies to solution 2: zip creates an actual list, so why not use itertools.izip instead?
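A minimal sketch (written for Python 3, where range() and zip() are already lazy) showing that the two solutions build the same dictionary; the variable names are illustrative:

```python
x = (1, 'a', 2, 'b', 3, 'c')

# Solution 1: step through the tuple two at a time, slicing out pairs.
d1 = dict(x[i:i + 2] for i in range(0, len(x), 2))

# Solution 2: zipping the same iterator against itself yields
# consecutive pairs -- equivalent to dict(zip(*[iter(x)] * 2)).
it = iter(x)
d2 = dict(zip(it, it))

print(d1)  # {1: 'a', 2: 'b', 3: 'c'}
print(d2)  # {1: 'a', 2: 'b', 3: 'c'}
```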

+4
2 answers

Why create an actual list with a range?

Yes, xrange(0, len(x), 2) would be more memory efficient.
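The size difference is easy to measure. In Python 3 terms (where range() plays the role that xrange() did in Python 2), a sketch:

```python
import sys

n = 1_000_000
indices_list = list(range(0, n, 2))  # materialized, like Python 2's range()
indices_lazy = range(0, n, 2)        # lazy, like Python 2's xrange()

# The list holds half a million references; the lazy range object
# stores only start/stop/step, a small constant size.
print(sys.getsizeof(indices_list))  # several megabytes
print(sys.getsizeof(indices_lazy))  # a few dozen bytes
```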

Why not use itertools.izip() in solution 2?

Yes, zip() creates an actual list, so you can save memory by using itertools.izip instead.

Does it really matter?

The difference in speed is likely to be small. Memory efficiency improves speed only when the data exceeds the size of the memory caches, and some of the benefit is offset by the overhead of the iterators themselves.

Since the dictionary stores all of the keys and values anyway, the only memory saved is in the intermediate tuples and lists that reference those keys and values. So the savings in this situation are much more modest than for other iterator applications that don't accumulate all of their results in a container.

So this is probably "a lot of noise from nothing."

What about Python 3?

In Python 3, range() returns a lazy range object and zip() returns an iterator, so the question disappears: xrange() and itertools.izip() no longer exist.
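A quick check of this in Python 3 (variable names are illustrative):

```python
import itertools

r = range(5)           # a lazy range object (a sequence), not a list
z = zip('abc', 'xyz')  # a lazy iterator over pairs

print(type(r).__name__)  # 'range' -- not 'list'
print(iter(z) is z)      # True: a zip object is its own iterator

# xrange and itertools.izip are gone entirely in Python 3
print(hasattr(itertools, 'izip'))  # False
```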

+1

As far as I know

 dict(zip(*[iter(val_)] * 2)) 

is the usual "pythonic" way to do this. And the Python approach to optimization is to always profile first and see where time is actually being spent. If the approach above performs well enough for your application, why optimize it?
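A sketch of such profiling with the standard timeit module (the input tuple and function names are mine, chosen for illustration):

```python
import timeit

x = tuple(range(2000))  # a flat tuple of 1000 key/value pairs

def slicing():
    # Solution 1: slice out consecutive pairs
    return dict(x[i:i + 2] for i in range(0, len(x), 2))

def pairing():
    # Solution 2: zip an iterator against itself
    it = iter(x)
    return dict(zip(it, it))

# Time each approach; only worth acting on if this shows up as a
# hotspot in a real profile of your application.
print(timeit.timeit(slicing, number=1000))
print(timeit.timeit(pairing, number=1000))
```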

0

Source: https://habr.com/ru/post/1501070/
