Upper bounds and lower bounds in algorithms

I have seen several articles describing the upper bound as the best case and the lower bound as the worst case, while other articles talk about the upper or lower bounds of the worst case itself.

This left me with three questions:

  • What are the upper / lower bounds?
  • How can they be defined separately in the worst case scenario?
  • Is it possible to define bounds for other cases (Best, Average)?
2 answers

What are the upper / lower bounds?

We are interested in bounds on the growth of functions, as you can read on Wikipedia.

Addressing this part of the question:

For a function f(n), g(n) is an upper bound (big O) if, for sufficiently large n, f(n) <= c*g(n) for some constant c. [g dominates f]
g(n) is a lower bound (big Omega) if, for sufficiently large n, f(n) >= c*g(n) for some constant c. [f dominates g]
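
For example, here is a minimal sketch (with an arbitrarily chosen f, g, constant c and threshold n0 of my own) that checks the upper-bound inequality numerically:

    # Sketch: f(n) = 3n + 10 is O(n), witnessed by c = 4 for all n >= 10.
    def f(n):
        return 3 * n + 10

    def g(n):
        return n

    c, n0 = 4, 10  # chosen so that f(n) <= c*g(n) for every n >= n0
    assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
    print("f(n) <= 4*g(n) for all tested n >= 10, so f(n) is O(g(n))")

The assertion only samples a finite range, of course; the actual claim (3n + 10 <= 4n whenever n >= 10, provable by algebra) holds for all sufficiently large n.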

How can they be defined separately in the worst case scenario?

They will either be different or the same; when they are the same, we denote the tight bound by Θ(n), where n is usually the size of the problem. As Dukeling said:

The worst, best and average cases can each be represented as a function (for deterministic algorithms). Each of these functions has upper and lower bounds (of which there are infinitely many). Performing a constant number of operations per element (for example, the best case of insertion sort, or the average / worst case of linear search) will have a tight bound (both a lower and an upper bound) of Θ(n), as well as, for example, an upper bound of O(n²) or a lower bound of Ω(1).
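
As a concrete illustration of that quote, here is a sketch of linear search (my own example, not from the original answer) with the cases marked:

    def linear_search(items, target):
        """Return the index of target in items, or -1 if absent."""
        for i, value in enumerate(items):
            if value == target:   # best case: target is first, Θ(1)
                return i
        return -1                 # worst case: target absent, Θ(n)

    # Average case (target equally likely at any position): about n/2
    # comparisons, which is still Θ(n). Looser bounds such as O(n²) for
    # the worst case or Ω(1) for the best case are also formally valid.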

Is it possible to define bounds for other cases (Best, Average)?

Yes. All cases have upper and lower bounds.


The best case is almost never discussed. It is simply not that interesting. An algorithm can always be modified to have the smallest theoretically possible best case, which is O(max(input size, output size)), simply by recognizing one specific input and emitting the output precomputed for that input. In the benchmarking business, this is called cheating.
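
A minimal sketch of that trick (the recognized input and its precomputed answer are made up for the example):

    MAGIC_INPUT = [9, 3, 7]    # the one input we choose to recognize
    MAGIC_OUTPUT = [3, 7, 9]   # its answer, computed once in advance

    def cheating_sort(items):
        if items == MAGIC_INPUT:        # O(input size) to compare...
            return list(MAGIC_OUTPUT)   # ...O(output size) to copy
        return sorted(items)            # all other inputs pay full price

The best case (that single input) now costs only the time needed to read the input and write the output, which no algorithm can beat.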

The term "bound" here has the same meaning as in the rest of mathematics: an arbitrary value that is no more (no less) than any element of a given set.

For example, when discussing the set of all sorting algorithms, we can prove that no comparison-based sorting algorithm has better asymptotic efficiency than O(n log n) in the worst case (and in the average case as well). Thus, O(n log n) is a lower bound on the efficiency of all possible comparison-based sorting algorithms in the worst case (as well as in the average case). O(n) is another lower bound. O(n log n) is a better lower bound than O(n). And it just so happens that O(n log n) is a tight lower bound, because there actually are sorting algorithms with this complexity.
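
The classic argument behind that lower bound is the decision tree: a comparison sort must distinguish all n! input orderings, and k yes/no comparisons can distinguish at most 2^k of them, so at least log2(n!) ≈ n log2 n comparisons are needed in the worst case. A small sketch of the arithmetic:

    import math

    # log2(n!) grows like n*log2(n), per the decision-tree argument.
    for n in (10, 100, 1000):
        log2_factorial = sum(math.log2(k) for k in range(2, n + 1))  # log2(n!)
        print(f"n={n:4}: log2(n!) ~ {log2_factorial:7.0f}   n*log2(n) ~ {n * math.log2(n):7.0f}")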

There is no finite upper bound on the complexity of sorting algorithms in general, since an arbitrarily bad sorting algorithm can be constructed.

On the other hand, we can discuss a specific sorting algorithm and prove that it never exceeds a certain number of operations, which would be an upper bound on its complexity. For example, the quicksort algorithm has an upper bound of O(n²). It also has an upper bound of O(n³). It does not have an upper bound of O(n log n), because there are inputs that exceed this number of operations. The O(n²) bound is tight, because it is reached for some inputs.
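
Here is a sketch of a naive quicksort variant (last element as pivot, one common way to exhibit the worst case; library implementations typically pick pivots more carefully):

    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot = items[-1]                              # last element as pivot
        left  = [x for x in items[:-1] if x <= pivot]
        right = [x for x in items[:-1] if x >  pivot]
        return quicksort(left) + [pivot] + quicksort(right)

On an already-sorted input every partition is maximally unbalanced (all remaining elements fall on one side), so the work is n + (n-1) + ... + 1 = O(n²); that input family is what makes the O(n²) bound tight and rules out O(n log n) as an upper bound.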

It is theoretically possible to discuss lower bounds in the same sense as above, but this is almost never done (it would amount to discussing best-case complexity, which usually does not interest us).

We can also discuss the complexity of a particular problem and place both upper and lower bounds on it: how efficient is the most efficient algorithm (in the worst or average case) that solves it? (We do not discuss the best case, because the answer is not interesting; see above.) For comparison-based sorting, we know that both the tight upper bound and the tight lower bound are O(n log n), because O(n log n) sorting algorithms actually exist, and it can be shown that no better algorithm exists. This is not a very interesting case, because we have found the most efficient algorithm possible.

With the knapsack problem, the situation is more interesting. We only know that an upper bound of O(2ⁿ) exists, because an algorithm with this complexity trivially exists (brute force). We suspect, but cannot prove, that this bound is tight. We also cannot provide any good lower bound (we suspect that no algorithm solves it with polynomial complexity, but cannot prove it).
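
For reference, the trivial brute-force algorithm behind that O(2ⁿ) upper bound might look like this sketch (0/1 knapsack, enumerating every subset of items):

    from itertools import combinations

    def knapsack_brute_force(weights, values, capacity):
        n = len(weights)
        best = 0
        for r in range(n + 1):
            for subset in combinations(range(n), r):   # all 2^n subsets overall
                if sum(weights[i] for i in subset) <= capacity:
                    best = max(best, sum(values[i] for i in subset))
        return best

    print(knapsack_brute_force([2, 3, 4], [3, 4, 5], capacity=5))  # -> 7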


Source: https://habr.com/ru/post/1268225/

