I was timing several different ways of doing complex iterations over some of my data, and I found something strange. It seems that having a large list local to a function significantly slows that function down, even when the code never touches the list. For example, building two independent lists from two instances of the same generator function makes the second one take about 2.5 times longer. If the first list is deleted before the second is built, both iterations run at the same speed.
def f():
    l1, l2 = [], []
    for c1, c2 in generatorFxn():
        l1.append((c1, c2))
    # comment position
    for c3, c4 in generatorFxn():
        l2.append((c3, c4))
The lists end up holding about 3.1 million items each, but I saw the same effect with smaller lists. The first for loop takes about 4.5 seconds, the second about 10.5. If I insert l1 = [] or l1 = len(l1) at the comment position, both loops take 4.5 seconds.
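For reference, a minimal sketch of the kind of timing setup involved; generatorFxn is stubbed out here as a dummy generator that just yields pairs (the real one reads my data), so the absolute numbers will differ:

import time

def generatorFxn(n=3_100_000):
    # dummy stand-in for the real generator: just yields n pairs
    for i in range(n):
        yield i, i + 1

def f():
    l1, l2 = [], []

    t0 = time.perf_counter()
    for c1, c2 in generatorFxn():
        l1.append((c1, c2))
    print("first loop:  %.1f s" % (time.perf_counter() - t0))

    # l1 = []         # uncommenting either of these lines here
    # l1 = len(l1)    # makes the second loop as fast as the first

    t0 = time.perf_counter()
    for c3, c4 in generatorFxn():
        l2.append((c3, c4))
    print("second loop: %.1f s" % (time.perf_counter() - t0))

f()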
Why should the speed of memory allocation inside a function depend on the current size of that function's local variables?
EDIT: Disabling the garbage collector fixes everything, so it must be down to the collector running constantly. Case closed!
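A minimal sketch of that fix, using the same dummy generatorFxn as above: wrapping the work in gc.disable()/gc.enable() stops the cyclic collector from running while the lists are being built (reference counting still frees objects as usual; only cycle detection is paused).

import gc

def generatorFxn(n=3_100_000):
    # dummy stand-in, as in the timing sketch above
    for i in range(n):
        yield i, i + 1

def f():
    gc.disable()              # turn off the cyclic garbage collector
    try:
        l1, l2 = [], []
        for c1, c2 in generatorFxn():
            l1.append((c1, c2))
        for c3, c4 in generatorFxn():
            l2.append((c3, c4))
    finally:
        # re-enable cycle detection; reference counting was never off
        gc.enable()

f()

An alternative to disabling collection outright would be to raise the collection thresholds with gc.set_threshold() so the collector runs less often during the big allocation bursts.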