It sounds like you're working through a really awesome garbage collection assignment where you're implementing your own collectors (mark-and-sweep, stop-and-copy, generational).
The general answer to your question is: two-pass algorithms usually use less memory than one-pass algorithms, by trading time for space.
A more specific answer: in stop-and-copy collection, you do this in two passes: (1) first copy the live data over to the new semispace, and (2) fix up the internal references in the live data so they point at elements in the new semispace, mapping old memory addresses to new ones.
You have to realize that the information needed to perform the mapping isn't magically available: you need memory to keep track of how to redirect the moved memory. And remember: your collector itself is a program, and it must use a bounded, small amount of memory! For example, storing an auxiliary hash table in your collector would be verboten: it doesn't play by the rules. So you need to make sure the collector itself works with a bounded amount of memory. That's why the stop-and-copy collector reuses the old semispace and writes the redirection records there.
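To make that concrete, here is a minimal Python sketch (the cell representation and `FWD` tag are invented for illustration) of how a redirection record can be written into the old semispace itself, so that no auxiliary table is needed:

```python
# Minimal sketch: the old semispace doubles as the old->new address map.
FWD = "forwarded"  # marker tag written over a cell that has been moved

def copy_cell(old_heap, new_heap, addr):
    """Copy the cell at `addr` into new_heap, unless it was already
    copied; either way, return its address in new_heap."""
    cell = old_heap[addr]
    if isinstance(cell, tuple) and cell[0] == FWD:
        return cell[1]                # already moved: follow the record
    new_addr = len(new_heap)
    new_heap.append(cell)             # move the cell
    old_heap[addr] = (FWD, new_addr)  # redirection record, written in place
    return new_addr

old = ["a", "b", "c"]
new = []
assert copy_cell(old, new, 1) == 0    # "b" lands at new address 0
assert copy_cell(old, new, 1) == 0    # second call just follows the record
assert old[1] == (FWD, 0)             # the old slot now holds the mapping
```

The only extra storage used is the old cell's own slot, which is dead anyway once the cell has been copied.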
With that constraint in mind: it's important to realize that we need to be careful about how we traverse the live set. Which approach we choose may or may not require additional memory, in some very subtle and surprising ways. In particular, recursion is generally not free! Technically, in the first pass we should use the new semispace not only as the target of our copying, but also as a funky representation of the control stack that we'd otherwise need to implement the recursive process that walks through the live data.
Specifically, if we take a naive one-pass approach like this to copying the live set:
    ;; copy-live-set: number -> void
    ;; copies the live set starting from memory-location.
    Pseudocode:

    to copy-live-set starting at memory-location:
        copy the block at memory-location over to the new semispace,
            and record a redirection record in the old semispace
        for each internal-reference in the block:
            recursively call copy-live-set at the internal-reference
                if it hasn't been copied already
            remap the internal-reference to that new memory location
then you may be surprised to discover that we've messed up with memory. The above will break the promise the collector makes to the runtime, because the recursion here is not iterative! During the live set traversal, it will consume control stack space proportional to the depth of the structures we're walking across. D'oh.
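To see this cost directly, here is a small Python model (the pair representation and the depth counter are invented purely to make the cost visible): a recursive copy of a nested structure uses one stack frame per level of nesting.

```python
def make_deep_list(n):
    """Build a nested pair structure of depth n: (1, (1, (1, ... None)))."""
    node = None
    for _ in range(n):
        node = (1, node)
    return node

depth = 0
max_depth = 0

def recursive_copy(node):
    """Copy the structure recursively, tracking recursion depth."""
    global depth, max_depth
    if node is None:
        return None
    depth += 1
    max_depth = max(max_depth, depth)
    result = (node[0], recursive_copy(node[1]))
    depth -= 1
    return result

recursive_copy(make_deep_list(500))
assert max_depth == 500  # stack use grows with the depth of the data
```

Make the structure deep enough (in Python, past `sys.getrecursionlimit()`) and the copy crashes outright, which is exactly the kind of unbounded memory use a collector must not have.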
If you try alternative approaches for walking through the live set, you should eventually see that there's a good way to traverse the whole live set while maintaining a bounded, small control stack. Hint: think about how graph traversal algorithms can be written as a simple while loop, with an explicit container holding what's left to visit until the container is exhausted. If you squint just right, the intermediate values in the new semispace look an awful lot like that container.
Once you see how to traverse the live set with constant control-stack space, you'll see why you need the two passes to do the complete copy-and-rewrite-internal-references job. Worrying about these details may seem pedantic, but it's important for seeing how garbage collectors really do their work. A real collector needs to do something similar: it has to worry about control stack usage to guarantee that it uses bounded memory during collection.
In summary: a two-pass algorithm is a solution that keeps memory bounded at the cost of some time. But we don't pay that much in performance: although we walk over the live set twice, the process is still linear in the size of the live set.
For history: see Cheney's algorithm, and pay attention to the title of the seminal paper: "A Nonrecursive List Compacting Algorithm". That single word "Nonrecursive" is the key motivation for the two-pass approach: it's trying to avoid consuming the control stack. Cheney's paper is a followup to Fenichel and Yochelson's "A LISP Garbage-Collector for Virtual-Memory Computer Systems", in which the authors first proposed the recursive, stack-using approach. To improve on that, Fenichel and Yochelson then suggested using Schorr-Waite's non-recursive (but complicated!) pointer-reversal DFS technique for the traversal. Cheney's approach is an improvement because the traversal is simpler.