This assumes the compiler cannot inline functions. That is a limitation of specific compilers, not a general problem. Using it as a general solution to a specific problem can bite you: inlining can just as easily bloat your program, because it duplicates code that could otherwise be reused from a single memory address (and so stay warm in the cache), costing you performance through cache misses.
Function size factors into the total cost during optimization; there is a balance between the overhead of local variables and the amount of code in the function. Keeping the number of variables in a function (both parameters and locals) at or below the number of registers on the platform means most of them can stay in registers and need not spill, and a stack frame may not even be required (target-dependent), so the overhead drops significantly. That is hard to do all the time in real-world applications, but with the alternative, a small number of large functions with many local variables, the code will spend a considerable amount of time evicting and reloading registers to/from RAM (again target-dependent).
Try llvm; it can optimize across the whole program, not just function by function. Release 27 held its own against the gcc optimizer, at least for a test or two; I did not do exhaustive performance testing. And 28 is out now, so I assume it is better. Even with multiple files, the number of combinations of optimization knobs is too large to mess with. I believe it is best not to optimize until you combine the entire program into one file, and then do your optimizing, giving the optimizer the whole program to work with; basically what you are trying to do with inlining, but without the baggage.
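A quick sketch of the "whole program in one file" idea versus letting the toolchain do it for you at link time. The file names and their contents are a made-up minimal example; the same `-flto` flags work with clang if you want llvm's optimizer instead of gcc's.

```shell
# Minimal hypothetical two-file program.
printf 'int add(int, int);\nint main(void){ return add(2, 2) == 4 ? 0 : 1; }\n' > main.c
printf 'int add(int a, int b){ return a + b; }\n' > util.c

# Option 1: hand the optimizer the whole program as one translation
# unit, so it can inline and optimize across what were file boundaries.
cat util.c main.c > whole.c
gcc -O2 -o prog_whole whole.c
./prog_whole && echo "whole-program build ok"

# Option 2: keep the files separate and let link-time optimization
# perform cross-file optimization at link time.
gcc -O2 -flto -c main.c util.c
gcc -O2 -flto -o prog_lto main.o util.o
./prog_lto && echo "lto build ok"
```

Both approaches give the optimizer visibility across file boundaries; LTO automates the concatenation without requiring you to restructure the source tree.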
old_timer 2018-10-22 21:56