How does an optimizing C++ compiler reuse function stack slots?

How does an optimizing C++ compiler determine when a stack slot (part of a function's stack frame) is no longer needed by the function, so that it can reuse its memory?

By "stack slot" I mean a part of the function's stack frame, not necessarily the whole frame. An example to clarify the question: suppose a function defines six integer variables, and by the time the sixth variable is first used, the fifth has become useless. The compiler could then use the same block of memory for both the fifth and the sixth variable.

Any information on this subject is appreciated.

+2
3 answers

The simple part: when a function exits, all of its local variables are freed, so function exit means the whole stack frame can be released at once. But that is the trivial case, and you would not have said "optimizing compiler" if that were all you were asking about.

In theory, the compiler can analyze the function's control flow, figure out which pieces of memory are live at which times, and possibly even reorder the stack layout based on the order in which variables become dead. Then, if new automatic variables are introduced somewhere in the middle of the function, or in a scope nested inside it (rather than at its start), those newly freed slots can be reused.

In practice, that sounds like a lot of machinery, and I suspect the stack simply grows as variables come into scope at the start of a block and shrinks, by adjusting the stack pointer, when the block ends. But I admit I am not an expert on this topic; someone with more authoritative knowledge may come along and correct me.

+3

EDIT: I interpreted the question as "How does the compiler reuse a specific word of memory on the stack?" Most of what follows answers that question; the note at the end answers the other reading, "How does the compiler release all the stack space a function needs?"

Most compilers do not assign stack slots up front. Rather, for each function body, they treat each assignment to a variable, together with every reference to that variable that can see that particular assignment, as a so-called variable lifetime (a live range). A variable that is assigned several times will therefore cause the compiler to create several live ranges.

(There are complications with this idea that arise when a use can be reached by several assignments along different control-flow paths; this is solved with a clever refinement called static single assignment (SSA) form, which I will not go into here.)

At any point in the code, some set of live ranges is active; as you pick different points in the code, different live ranges are active. The compiler's real problem is to assign a register or stack slot to each live range. You can think of this as a graph-coloring problem: each live range is a node, and if two live ranges overlap at some point in the code, there is an "interference" edge between their two nodes. You then color the graph (equivalently, number the nodes) so that no two nodes connected by an interference edge get the same color (number); you may need quite a few colors, but for most functions the count is not very large. Once you have done this, the colors (numbers) tell you a safe stack slot to use for each live range's value. (This idea is usually applied in roughly two passes: once to allocate registers, and once to allocate stack slots for the live ranges that do not fit in registers.)

Having determined the largest number used as a color, the compiler knows how many slots it needs in the worst case, and can reserve that much storage at function entry.

There are many complications: different values occupy different amounts of space, and so on, but the basic idea is there. Not all compilers use the graph-coloring technique, but almost all of them determine, one way or another, how to assign stack slots and registers so that interfering live ranges do not share storage. And thus they know the stack slot numbers and the size of the stack frame.

EDIT: ... while I was typing, it seems the question was interpreted as "when does the stack frame for the function go away?" Answer: when the function exits. The compiler already knows how big the frame is. While the function runs there is no need to push or pop individual variables on the stack; the generated code knows where everything goes based on the stack-slot numbering determined by the graph coloring.

+10

If I understand the question correctly, it concerns call chains, that is, calling a function from a function without allocating a new stack frame.

This is possible when the call can be converted into a tail call, i.e. it is the last operation before the function returns. At that point all the caller's local (stack) variables are already out of scope, so their space can be reused. The compiler then generates a jump instruction instead of a call; the original return address is still in the right place on the stack.

I have probably glossed over a lot of details here, but that is the idea.

-1

Source: https://habr.com/ru/post/1210174/
