It really comes down to how much information the compiler can extract while analyzing what your code will do at run time. In the worst case, i.e. when the compiler cannot prove that the data will never be used, it must allocate the stack space on every call of the function:
    #include <iostream>
    using std::cin;

    struct SomeLargeStructure {
        double arr[20];   // 160 bytes
    };

    int aRecursiveFunction(const SomeLargeStructure *a, int x) {
        int val;
        cin >> val;
        // (truncated in the original; a plausible completion consistent
        //  with the assembly below)
        if (val > 0) {
            SomeLargeStructure copy = *a;   // large local the compiler cannot prove dead
            return aRecursiveFunction(&copy, x - 1);
        }
        return x;
    }
For this, the full stack allocation is needed on every call (compiled with -O3):
    aRecursiveFunction(SomeLargeStructure const*, int):
        pushq   %r15
        pushq   %r14
        pushq   %rbx
        subq    $176, %rsp        # stack allocation
        movl    %esi, %ebx
        movq    %rdi, %r14
        leaq    172(%rsp), %rsi
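By contrast, when the optimizer can prove that a large local is never observed, it is generally free to drop the allocation altogether. A minimal sketch (hypothetical function; exact codegen varies by compiler and version, so verify the actual output yourself):

    int aTrivialFunction(const SomeLargeStructure *a, int x) {
        SomeLargeStructure copy = *a;   // copy is never read afterwards
        return x + 1;                   // dead copy: -O3 typically elides it,
                                        // and with it the 160-byte stack slot
    }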
Also, even in cases where human reasoning concludes that "it is definitely not required," the compiler can still choose to allocate stack space.
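For instance (a hypothetical sketch; whether and where the frame is reserved depends on the compiler), a human may "know" a cold path is never taken, yet the compiler cannot prove it and must still account for the locals used on that path:

    int rarelyTaken(const SomeLargeStructure *a, int flag) {
        if (flag == 0x7FFFFFFF) {          // "never happens" in practice...
            SomeLargeStructure copy = *a;  // ...but cannot be proven dead,
            return (int)copy.arr[0];       // so the frame may be reserved up front
        }
        return 0;
    }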
It is impossible to answer this question definitively without seeing the full code and/or studying the compiler's behavior for that exact case. You have to compile your code and inspect the generated, optimized assembly yourself. Merely allocating stack space is usually not a performance concern (unless you run out of it).
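For example, with GCC or Clang you can emit the optimized assembly directly (the file names here are placeholders):

    g++ -O3 -S yourfile.cpp -o yourfile.s   # emit assembly instead of an object file

Compiler Explorer (godbolt.org) makes the same comparison interactive across compilers and flag combinations.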
A word of advice, though: this is usually premature optimization. As others have noted, you should not worry about such a low-level detail, but rather focus on your algorithm and on how your data is used. If stack-space consumption does not actually prove to be a problem, the profiling phase will tell you which hot spots genuinely need optimization.
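On Linux, for example, a common profiling workflow looks like this (the program name is a placeholder):

    perf record ./your_program   # sample where time is actually spent
    perf report                  # inspect the hottest functions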