Sorry, but you've pushed one of my hot buttons.
Low-level optimization only matters when 1) the program counter actually spends a lot of time in that code, and 2) the compiler actually sees that code. For example, if the PC spends most of its time in library routines you don't compile, low-level optimization doesn't really matter.
Whether conditions 1 and 2 are satisfied or not, here is my experience of how optimization goes:
Performance tuning proceeds in several iterations of sampling and fixing. In each iteration a problem is identified, and most often it is not a matter of where the program counter sits. Rather, at mid-levels of the call stack there are function calls that, since performance matters enough, could be removed or replaced. To find them quickly, I do this:
Keep in mind that if a function-call instruction is on the stack for a significant fraction of execution time, whether in a few long calls or in very many short ones, then that call is responsible for that fraction of the time. So removing it, or executing it less often, saves that fraction of the time, and the savings far exceed anything low-level optimization can achieve.
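The principle above can be sketched in code. This is a hypothetical illustration of mine, not from the answer (the names `sample_stacks`, `slow_helper`, and `workload` are invented): a background thread periodically records the main thread's call stack, and any call that appears in a large fraction of the samples is responsible for roughly that fraction of the run time, whether it sits at the top of the stack or in the middle.

```python
import collections
import sys
import threading
import time

def sample_stacks(target_tid, interval=0.005, duration=0.5):
    """Every `interval` seconds, record which functions are on the
    target thread's call stack. A function appearing in most samples
    is responsible for most of the run time."""
    counts = collections.Counter()
    samples = 0
    end = time.monotonic() + duration
    while time.monotonic() < end:
        frame = sys._current_frames().get(target_tid)
        if frame is not None:
            samples += 1
            f = frame
            while f is not None:                 # walk the whole stack,
                counts[f.f_code.co_name] += 1    # so mid-stack calls count too
                f = f.f_back
        time.sleep(interval)
    return counts, samples

def slow_helper():
    # Deliberately wasteful: its call site shows up on the stack in
    # nearly every sample, flagging it as the thing to remove.
    return sum(i * i for i in range(200_000))

def workload():
    for _ in range(100):
        slow_helper()

# Demo: sample the main thread while it runs the workload.
results = {}
tid = threading.get_ident()
sampler = threading.Thread(
    target=lambda: results.update(
        zip(("counts", "samples"), sample_stacks(tid))))
sampler.start()
workload()
sampler.join()
counts, samples = results["counts"], results["samples"]
for name, c in counts.most_common(5):
    print(f"{name}: on stack in {c} of {samples} samples")
```

A handful of such samples is usually enough: a call responsible for half the run time will appear in about half of them.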
By the end of this process the program can be many times faster than it started. I have never seen a program of any significant size, no matter how carefully written, that could not benefit from it. If you have not been through this process, do not assume that low-level optimization is the only way to speed up the program.
Once this process has been carried as far as it can go, and the samples show that the PC is in code the compiler sees, then low-level optimization can make a difference.
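As a hypothetical example of the kind of low-level change meant here (mine, not the answer's, and translated into Python for illustration even though the answer is really about compiled code): once samples show the PC inside a hot loop, hoisting a repeated lookup out of that loop is the sort of tweak that can finally pay off.

```python
import timeit

def square_all(xs):
    out = []
    for x in xs:
        out.append(x * x)      # attribute lookup repeated every iteration
    return out

def square_all_hoisted(xs):
    out = []
    append = out.append        # lookup hoisted out of the hot loop
    for x in xs:
        append(x * x)
    return out

data = list(range(10_000))
# Both versions compute the same result; only the loop body differs.
assert square_all(data) == square_all_hoisted(data)

t_plain = timeit.timeit(lambda: square_all(data), number=200)
t_hoisted = timeit.timeit(lambda: square_all_hoisted(data), number=200)
print(f"plain: {t_plain:.3f}s  hoisted: {t_hoisted:.3f}s")
```

The point stands either way: a tweak like this only helps once sampling has confirmed the loop is where the time actually goes.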
Mike Dunlavey, May 11 '09 at 11:51