What models of algorithm running time exist?
We expect mergesort to be faster than bubblesort, noting that mergesort performs O(n log n) comparisons versus O(n²) for bubblesort.
For other algorithms, you count operations other than comparisons and swaps, such as dereferencing a pointer, indexing into an array, arithmetic on fixed-size integers, and so on.
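To make "counting comparisons" concrete, here is a minimal sketch (the function names and test size are my own choices, not from any particular source) that sorts the same data with both algorithms and reports only the comparison counts:

```python
# A minimal sketch: counting comparisons as the cost measure.
import random

def merge_sort(a):
    """Return (sorted copy of a, number of element comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, comparisons, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

def bubble_sort(a):
    """Return (sorted copy of a, number of element comparisons)."""
    a, comparisons = list(a), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

data = random.sample(range(10_000), 2_000)
print("mergesort comparisons: ", merge_sort(data)[1])   # grows like n*log2(n)
print("bubblesort comparisons:", bubble_sort(data)[1])  # exactly n(n-1)/2 = 1,999,000
```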
What other ways of modeling program runtime exist?
One that I know of counts the number of blocks read from and written to disk; see my answer to "When does Big-O notation fail?" for a detailed description.
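As a toy illustration of that I/O model (the class name, block size, and single-block "memory" below are my own simplifying assumptions), charging one unit per block transfer already separates a sequential scan, which costs about ⌈N/B⌉ I/Os, from scattered accesses, which cost about one I/O each:

```python
# Sketch of the external-memory (I/O) model: only whole-block transfers are charged.
import random

class CountingDisk:
    """An array stored on 'disk' in blocks of B items; counts block reads."""
    def __init__(self, data, block_size):
        self.data = list(data)
        self.B = block_size
        self.block_reads = 0
        self._cached_block = None   # memory holds a single block, for simplicity

    def read(self, i):
        block = i // self.B
        if block != self._cached_block:   # charge one I/O only on a block miss
            self.block_reads += 1
            self._cached_block = block
        return self.data[i]

n, B = 1_000_000, 4096
disk = CountingDisk(range(n), B)
total = sum(disk.read(i) for i in range(n))     # sequential scan
print("scan I/Os:         ", disk.block_reads)  # ceil(n/B) = 245

disk = CountingDisk(range(n), B)
for i in random.sample(range(n), 1000):         # 1000 scattered reads
    disk.read(i)
print("random-access I/Os:", disk.block_reads)  # almost always close to 1000
```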
Another counts the number of cache misses; this extends the I/O model to all levels of the memory hierarchy.
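As an illustration (an idealized fully associative LRU cache with parameters I picked for the example, not any particular hardware), counting misses already distinguishes row-major from column-major traversal of a matrix:

```python
# Sketch: counting misses in an idealized fully associative LRU cache.
from collections import OrderedDict

class LRUCache:
    def __init__(self, num_lines, line_size):
        self.num_lines, self.line_size = num_lines, line_size
        self.cache = OrderedDict()           # cache-line id -> None
        self.misses = 0

    def access(self, address):
        line = address // self.line_size
        if line in self.cache:
            self.cache.move_to_end(line)     # hit: refresh LRU order
        else:
            self.misses += 1                 # miss: fetch the line
            self.cache[line] = None
            if len(self.cache) > self.num_lines:
                self.cache.popitem(last=False)   # evict least recently used

n = 512                                      # n x n matrix of 8-byte items
def traverse(row_major):
    cache = LRUCache(num_lines=64, line_size=64)
    for i in range(n):
        for j in range(n):
            r, c = (i, j) if row_major else (j, i)
            cache.access((r * n + c) * 8)    # address of element (r, c)
    return cache.misses

print("row-major misses:   ", traverse(True))    # ~ n*n/8, one miss per cache line
print("column-major misses:", traverse(False))   # ~ n*n, one miss per element
```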
Third, for distributed algorithms (for example, in secure multi-party computation), one counts the amount of data transmitted over the network (usually measured in communication rounds or in the number of messages).
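For example, a toy synchronous flooding broadcast can be simulated just to count rounds and messages (the graph and the bookkeeping below are my own illustration, not a specific protocol from the literature):

```python
# Sketch: counting rounds and messages for a synchronous broadcast by flooding.

def flood_broadcast(adjacency, source):
    """Every node informed in the previous round forwards to all neighbours.
    Returns (rounds until all nodes are informed, total messages sent)."""
    informed = {source}
    frontier = {source}
    rounds = messages = 0
    while len(informed) < len(adjacency):
        rounds += 1
        next_frontier = set()
        for u in frontier:
            for v in adjacency[u]:
                messages += 1                    # one message per edge used
                if v not in informed:
                    informed.add(v)
                    next_frontier.add(v)
        frontier = next_frontier
    return rounds, messages

# A ring of 8 nodes: node i is connected to i-1 and i+1 (mod 8).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(flood_broadcast(ring, source=0))   # (4, 14): 4 rounds, 14 messages
```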
What other interesting things can one count (rather than measure!) to predict an algorithm's performance?
Also, how good are these models? As far as I have heard, cache-oblivious algorithms are competitive with I/O-efficient algorithms for data on disk, but not for in-memory algorithms.