What makes a heap-based Scheme implementation slower than a stack-based one?

I am developing a compiler for a Scheme-like language, and I am reading Dybvig's thesis. In it, he says that most of his performance gains came from allocating call frames on the stack rather than on the heap. Several tricks are needed to actually make this work in the presence of closures and continuations.

My question is: where does this performance improvement come from? Is it purely because we put less strain on the garbage collector?

To take a different tack: assuming we had an infinite amount of memory, would stack-allocated call frames still be faster than heap-allocated call frames?
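
To make concrete what I mean by the two models, here is a minimal C sketch (my own, not from the thesis; the struct names and slot counts are illustrative, and overflow checks are omitted). In the heap-based model every call allocates a frame object that links back to its caller; in the stack-based model a call and a return are just pointer bumps over one contiguous region:

```c
#include <stdlib.h>

#define MAX_SLOTS 8

/* Heap-based model: each call mallocs a frame that links back to the
 * caller's frame. Returned frames become garbage for the collector,
 * unless a closure or a captured continuation still references them. */
typedef struct HeapFrame {
    struct HeapFrame *caller;  /* dynamic link to the caller */
    void *slots[MAX_SLOTS];    /* arguments and locals */
} HeapFrame;

HeapFrame *heap_call(HeapFrame *caller) {
    HeapFrame *f = malloc(sizeof *f);  /* one heap allocation per call */
    f->caller = caller;
    return f;
}

/* Stack-based model: a call bumps one index into a contiguous region
 * and a return bumps it back. No allocation, no garbage created. */
typedef struct {
    void *slots[1 << 16];
    size_t top;
} FrameStack;

void **stack_call(FrameStack *s)   { void **f = &s->slots[s->top]; s->top += MAX_SLOTS; return f; }
void   stack_return(FrameStack *s) { s->top -= MAX_SLOTS; }
```

Note that in the heap model a return frees nothing by itself; the frame lives until the collector proves it unreachable, which is exactly why closures and continuations are easy there and need tricks in the stack model.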

+6
2 answers

I think Eli answered your question in his comment, so I'm going to paste it here and take credit for it :).

Eli Barzilay writes:

(a) dealing with the heap takes longer because it requires scanning (it is not linear, like a stack); (b) virtually all CPU architectures are tuned to make stack access as fast as possible, much more so than heap access.

To this I would add a general hand-wave in the direction of cache locality. That is, the stack keeps all the action in a very small piece of memory that will almost certainly stay in the cache.
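
To illustrate the locality point, here is a rough microbenchmark sketch (the sizes and names are mine, not Eli's, and a careful measurement would need more care with the optimizer and warm-up). The first loop reuses one small frame-sized buffer, the way a stack reuses the same hot cache lines; the second touches a freshly malloc'd block per simulated call, spreading the working set across memory:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CALLS 1000000
#define FRAME_BYTES 128

int main(void) {
    static char stack_frame[FRAME_BYTES];  /* one reused buffer: stays in cache */
    clock_t t0 = clock();
    for (int i = 0; i < CALLS; i++)
        memset(stack_frame, i & 0xFF, FRAME_BYTES);
    clock_t t1 = clock();

    char **frames = malloc(CALLS * sizeof *frames);
    for (int i = 0; i < CALLS; i++) {      /* fresh block per simulated call */
        frames[i] = malloc(FRAME_BYTES);
        memset(frames[i], i & 0xFF, FRAME_BYTES);
    }
    clock_t t2 = clock();

    /* Print a byte from each buffer so the optimizer cannot drop the work. */
    printf("stack-like: %ld, heap-like: %ld clock ticks (%d %d)\n",
           (long)(t1 - t0), (long)(t2 - t1),
           stack_frame[0], frames[CALLS - 1][0]);

    for (int i = 0; i < CALLS; i++) free(frames[i]);
    free(frames);
    return 0;
}
```

The heap loop pays twice: once for the malloc per call, and again because each new block is a likely cache miss.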

+4

Appel wrote a paper, "Garbage Collection Can Be Faster Than Stack Allocation", arguing that heap allocation can be faster than stack allocation. Mathematically he is right, but he ignores how well modern processors are tuned for code that uses a stack (cache locality, efficient stack instructions, and so on), and the fact that heap accesses involve more indirection, which is costly on modern processors.
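
The core of Appel's argument is that with a copying collector and a large enough nursery, allocation is just a pointer bump, like a stack push, and dead frames cost nothing at collection time because only live objects are copied. A minimal sketch of that allocator (the names are illustrative, and overflow handling is elided):

```c
#include <stddef.h>

#define NURSERY_BYTES (1 << 20)

static char  nursery[NURSERY_BYTES];
static char *alloc_ptr = nursery;

/* Bump allocation: one pointer bump per frame, like a stack push.
 * A real runtime would check alloc_ptr + n against the nursery end and
 * trigger a minor collection there; only live objects get copied out,
 * so dead call frames are reclaimed for free. */
void *nursery_alloc(size_t n) {
    void *p = alloc_ptr;
    alloc_ptr += n;
    return p;
}
```

The objection above is that this counts instructions rather than cache behavior: the bump pointer sweeps through the whole nursery, while a stack keeps hitting the same few cache lines.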

0
