Consider the problem of finding the k largest elements of an array. There's a nice O(n log k)-time algorithm for solving this that uses only O(k) space, which works by maintaining a binary min-heap of at most k elements. Since k is guaranteed to be at most n, we could rewrite these bounds as O(n log n) time and O(n) space, but doing so would be less precise. The explicit dependence on k makes it clear that the algorithm takes more time and uses more memory as k grows.
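As a sketch of how that algorithm might look in Python (the function name `k_largest` is my own; the standard library's `heapq.nlargest` does essentially the same job):

```python
import heapq

def k_largest(items, k):
    """Return the k largest elements using O(k) extra space.

    A min-heap of size at most k holds the k largest values seen
    so far; each of the n heap operations costs O(log k), for
    O(n log k) time overall.
    """
    heap = []  # min-heap of the current top-k elements
    for x in items:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            # x beats the smallest of the current top-k; swap it in
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)

print(k_largest([5, 1, 9, 3, 7], 3))  # [9, 7, 5]
```

Note that the heap never grows beyond k entries, which is exactly where the O(k) space bound comes from.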
Similarly, consider many standard graph algorithms, such as Dijkstra's algorithm. If you implement Dijkstra's algorithm using a Fibonacci heap, the runtime is O(m + n log n), where n is the number of nodes and m is the number of edges. If you assume your input graph is connected, then n = O(m), so the runtime is also O(m + m log m), but this is a less precise bound than the one we started with.
On the other hand, if you implement Dijkstra's algorithm with a binary heap, the runtime is O(m log n + n log n). In this case (again assuming the graph is connected), the m log n term strictly dominates the n log n term, so rewriting the bound as O(m log n) doesn't lose any precision.
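A minimal sketch of the binary-heap version, using Python's `heapq` (the function name and the adjacency-list representation are my own choices; stale heap entries are skipped rather than using decrease-key, a common workaround for binary heaps):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source using a binary heap.

    adj maps each node to a list of (neighbor, weight) pairs.
    Each edge can trigger at most one heap push, and every push
    and pop costs O(log n), which is where the O(m log n + n log n)
    bound comes from.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter path to u was already found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

On a connected graph m is at least n - 1, which is why the m log n term from the edge relaxations dominates the total cost here.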
Generally speaking, you'll want to give the most precise bounds you can when documenting the runtime and memory usage of a piece of code. If you have several terms and one clearly dominates the others, you can safely drop the lower-order terms. But otherwise, I wouldn't recommend eliminating one of the variables, since that loses precision in the bound you're reporting.