This is an interesting question, and I don't think anyone knows the answer. I believe a significant part of the problem is that for more complex programs, no one can predict their complexity. So even if you have profiling results, it is very hard to interpret them in terms of the changes that should be made to the program, because you have no theoretical baseline for what the optimal solution would look like.
I think this is why our software has become so bloated. We only optimize enough that fairly simple cases run fast on our fast machines. But as soon as you put these pieces into a large system, or feed them input that is an order of magnitude larger, the wrong algorithms used inside (previously invisible both in theory and in practice) begin to show their true complexity.
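As a minimal, hypothetical sketch of this effect (the function names and the string-joining task are my own illustration, not taken from the answer), here is a routine that looks fine on small inputs but is accidentally quadratic at scale, next to a linear version of the same task:

```cpp
#include <string>
#include <vector>

// Accidentally quadratic: `result = result + piece` copies the whole
// accumulated string on every iteration, so the loop is O(n^2) in total
// characters. Invisible for a dozen pieces, painful for a million.
std::string join_slow(const std::vector<std::string>& pieces) {
    std::string result;
    for (const auto& piece : pieces) {
        result = result + piece;   // full copy each time
    }
    return result;
}

// The same task in O(n): reserve the final size once, then append in place.
std::string join_fast(const std::vector<std::string>& pieces) {
    std::size_t total = 0;
    for (const auto& piece : pieces) total += piece.size();
    std::string result;
    result.reserve(total);
    for (const auto& piece : pieces) {
        result += piece;           // amortized constant-time append
    }
    return result;
}
```

Profiling the slow version on small inputs shows nothing suspicious; only the larger input order reveals the wrong algorithm.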
An example: you create a string class that handles Unicode. You then use it somewhere it really doesn't matter, say, processing machine-generated XML. But the Unicode handling is still there, consuming resources. The string class itself may be very fast, but call it a million times and the program will be slow.
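A rough sketch of that situation, under my own assumptions (the `UnicodeString` type, the `"item"` tag, and the counting functions are hypothetical stand-ins, not anything from the answer):

```cpp
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

// Stand-in for a full Unicode string class: copies and "validates" its
// bytes on construction. Imagine heavier machinery (normalization,
// locale tables) behind the same interface.
struct UnicodeString {
    std::string utf8;
    explicit UnicodeString(std::string_view bytes) : utf8(bytes) {
        // Placeholder for UTF-8 validation work that ASCII-only,
        // machine-generated XML never actually needs.
        for (unsigned char c : utf8) { (void)c; }
    }
};

// Per-element work while scanning a large document: build a UnicodeString
// for every tag just to compare it against a constant.
std::size_t count_tags_unicode(const std::vector<std::string>& tags) {
    std::size_t n = 0;
    for (const auto& t : tags)
        if (UnicodeString(t).utf8 == "item") ++n;  // copy + validation per call
    return n;
}

// The same task on raw bytes: no construction, no extra work per call.
std::size_t count_tags_bytes(const std::vector<std::string>& tags) {
    std::size_t n = 0;
    for (const auto& t : tags)
        if (std::string_view(t) == "item") ++n;
    return n;
}
```

Each individual call is cheap; a million of them is where the program quietly gets slow.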
I believe most of the ongoing software bloat is of this nature. There are ways to reduce it, but they run contrary to OOP. There is an interesting book about various techniques of this kind; it is memory-oriented, but most of the techniques can be turned around to gain speed instead.
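To give a loose illustration of the kind of memory-oriented technique that also buys speed and cuts against the usual OOP shape (this particular example, a struct-of-arrays layout, is my assumption about the general idea, not something named in the answer or taken from the book):

```cpp
#include <cstddef>
#include <vector>

// Typical OOP shape: every particle is a self-contained object, so a pass
// that only needs positions still drags velocities and flags through cache.
struct Particle {
    float x, y, z;
    float vx, vy, vz;
    bool  alive;
};

// Memory-oriented shape: one array per field. A pass over positions touches
// only position data, which is denser in cache and easier to vectorize.
struct ParticleSystem {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<bool>  alive;

    void advance(float dt) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
            z[i] += vz[i] * dt;
        }
    }
};
```

The layout saves memory traffic first, and the speed follows from that, which is the sense in which memory-oriented techniques can be "turned around" for performance.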