Short answer: because at run time it is easier to identify and analyze hot spots, the parts of your program that consume the most time.
Long answer:
If you run the code in interpreted mode first, the virtual machine can count how often, and for how long, different parts of the code are executed. Those parts can then be optimized more aggressively.
Take nested if-then-else clauses: fewer logical checks mean shorter runtimes. If you reorder the checks so that the path taken most often is tested first, you improve the overall execution time.
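As a minimal sketch of that idea (the classifier and the 90% figure are hypothetical, purely for illustration): if profiling shows most inputs hit one branch, putting that branch first means most calls perform only a single comparison.

```java
public class BranchOrder {
    // Hypothetical example: suppose profiling showed ~90% of inputs
    // are lowercase letters. Testing that case first means the hot
    // path does only one range check before returning.
    static String classify(char c) {
        if (c >= 'a' && c <= 'z') return "lowercase"; // hot path first
        if (c >= 'A' && c <= 'Z') return "uppercase";
        if (c >= '0' && c <= '9') return "digit";
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(classify('x'));
        System.out.println(classify('7'));
    }
}
```

A JIT with branch-frequency counters can effectively do this reordering for you, even when the source code lists the branches in an unfavorable order.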
Another thing is that at run time you can make assumptions that are impossible at compile time. The Java VM in server mode, for example, inlines virtual method calls as long as only one loaded class implements the method. Doing that at compile time would be unsafe. The JVM deoptimizes the code again if another implementing class is loaded later, but often that never happens.
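A sketch of the situation the JVM exploits (the Shape/Circle names are illustrative, not from the original): a virtual call site that has only ever seen one receiver class is monomorphic, and HotSpot can compile it as a direct, inlined call guarded by a cheap class check.

```java
interface Shape {
    double area();
}

// While Circle is the only loaded class implementing Shape, the JIT
// can treat s.area() below as a direct call to Circle.area() and
// inline it. Loading a second implementation triggers deoptimization.
class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Devirt {
    static double totalArea(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            sum += s.area(); // monomorphic call site
        }
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.println(totalArea(shapes));
    }
}
```

A static compiler looking at this source cannot safely devirtualize the call, because some other compilation unit might add a second Shape implementation; the JVM can, because it sees exactly which classes are loaded right now.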
At run time, more is also known about the machine the program is running on. If the machine has many registers, the generated code can use them. Again, doing this at compile time would be unsafe, since the binary might run on a different machine.
One thing must be said: runtime optimization also has disadvantages. Most importantly, the time spent optimizing is added to the run time of the program. It is also more complicated, because parts of the program must be compiled and executed on the fly. And errors in the virtual machine are critical: think of a compiler that sometimes crashes; you can simply compile again and everything is fine. A virtual machine crash, however, means your program crashes. Not good.
As a conclusion: every optimization that is possible at compile time can also be done at run time, plus a few more. You have additional information about the program, its execution paths, and the machine it runs on. But you must account for the time the optimization itself takes. It is also harder to do at run time, and errors are more serious than at compile time.