The compiler will almost certainly not be able to perform this optimization. At the lowest level, memory allocation comes down to calls to library functions such as malloc
(and, one level deeper, to the OS API). It is not safe for the compiler to assume that an individual malloc/free
pair may be omitted and its storage reused, because the allocator's implementation lies outside the code the optimizer can see.
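For illustration, a minimal sketch of the kind of pattern in question (the `process` function is hypothetical). Because the optimizer cannot see into the allocator, the malloc/free pair below is generally emitted exactly as written, even though the buffer could in principle be reused across calls:

```c
#include <stdlib.h>
#include <string.h>

/* A temporary buffer allocated and freed on every call. One might
 * hope the compiler would reuse a single allocation across calls,
 * but malloc/free are opaque library calls, so the pair typically
 * stays in the generated code. */
void process(const char *src, size_t n) {
    char *tmp = malloc(n);
    if (!tmp)
        return;
    memcpy(tmp, src, n);
    /* ... work on tmp ... */
    free(tmp);
}
```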
Besides, I don't think this is a good job for the optimizer in the first place: it is an optimization you, the programmer, can perform yourself without much effort, as in the sketch below.
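For example, a hypothetical sketch of that manual optimization: hoisting an allocation out of a loop and reusing a single buffer instead of allocating once per iteration.

```c
#include <stdlib.h>

/* Naive version: one malloc/free pair per iteration. */
void process_items_naive(size_t count, size_t item_size) {
    for (size_t i = 0; i < count; i++) {
        char *buf = malloc(item_size);  /* allocated every iteration */
        if (!buf)
            return;
        /* ... use buf for item i ... */
        free(buf);
    }
}

/* Hand-optimized version: the allocation is hoisted out of the
 * loop, so there is one malloc/free pair total. */
void process_items_reuse(size_t count, size_t item_size) {
    char *buf = malloc(item_size);
    if (!buf)
        return;
    for (size_t i = 0; i < count; i++) {
        /* ... use buf for item i ... */
    }
    free(buf);
}
```

This transformation is trivial for a human who knows the buffer's contents need not survive between iterations, which is precisely the kind of knowledge the compiler cannot safely infer.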
There is no standardized cost for allocating or freeing memory. As a rule, allocation/deallocation time can vary greatly; for example, it takes much longer when the user-space heap implementation has to request fresh pages from the OS kernel's memory manager.
A reasonable rule of thumb is that small allocations tend to be faster than large ones, and that allocations tend to be slower than deallocations.
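A rough microbenchmark sketch, assuming a POSIX system with clock_gettime, that one could use to observe these trends with a given allocator (the sizes and iteration count are arbitrary, and the numbers vary wildly by allocator, OS, and heap state):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Elapsed time between two timespecs, in nanoseconds. */
static double elapsed_ns(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void) {
    const size_t sizes[] = { 16, 4096, 1 << 20 };  /* small, one page, 1 MiB */
    const int iters = 100000;

    for (size_t s = 0; s < sizeof sizes / sizeof sizes[0]; s++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++) {
            void *p = malloc(sizes[s]);
            if (p)
                ((volatile char *)p)[0] = 1;  /* touch the memory so the pair isn't dead code */
            free(p);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("size %8zu: %.1f ns per malloc/free pair\n",
               sizes[s], elapsed_ns(t0, t1) / iters);
    }
    return 0;
}
```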