I can't speak for the mainstream languages. Many "minor" languages use heap-allocated activation records, with each call taking a piece of heap space instead of a slice of a linear stack. This allows recursion to go as deep as you have address space to hold it.
Some people here argue that a "big linear stack" is fine for deep recursion. That isn't right. I would agree that if you need the entire address space for your stack, you have some kind of problem anyway. But when you have very large graph or tree structures, you want to allow deep recursion, and you don't want to guess in advance how much linear stack space you need, because your guess will be wrong.
If you decide to go parallel, and you have lots of grains (from a thousand to a million small thread-like tasks), you cannot allocate 10 MB of stack space for each one: a thousand such stacks already cost 10 GB of RAM, and a million would cost terabytes. How could you end up with a million grains? Easily: grains block on one another. When a grain blocks waiting on a lock, you cannot discard it, and yet you still want to run other grains to keep the available CPUs busy. This maximizes the amount of runnable work and thus allows effective use of many physical processors.
The PARLANSE parallel programming language uses this very-large-number-of-grains model together with heap allocation for function calls. We built PARLANSE to do symbolic analysis and transformation of very large source programs (say, several million lines of code). These produce giant abstract syntax trees, giant control/data flow graphs, and giant symbol tables, with tens of millions of nodes. Lots of opportunity for parallel workers.
Heap allocation lets PARLANSE programs stay lexically scoped even across parallelism boundaries, because one can implement the "stack" as a cactus stack: the "stack" forks wherever subgrains are spawned, so each grain can see the activation records (parent scopes) of its callers. This makes passing big data structures cheap while recursing; you just reference them lexically.
You might think that heap allocation of activation records slows a program down. It does; PARLANSE pays about a 5% performance penalty, but gains the ability to process very big structures in parallel, with as many grains as the address space can hold.
Ira Baxter Jul 10 '10 at 3:20