Wouldn't it be enough to exhibit a specific input and show that the running time is at least f(n)?
Yes, assuming you are talking about worst-case complexity.
If you are talking about worst-case complexity, and you have already proved that the algorithm runs in O(f(n)), then finding an input on which it runs "worse" than C*f(n) for some constant C effectively proves that the algorithm (under worst-case analysis) is in Ω(f(n)). Since O(f(n)) ∩ Ω(f(n)) = Θ(f(n)), this means your algorithm runs in Θ(f(n)) under worst-case analysis.
Note that this actually has to be a "family" of examples, because if I object "yes, but that only holds for small values of n, not for n > N (for some N)", you can point to a member of the family with n > N, and the claim still holds.
Symmetrically, if you prove that the algorithm's best-case performance is Ω(f(n)), and you find some input on which it runs "better" (faster) than C*f(n) for some constant C, you have effectively proved that the algorithm is Θ(f(n)) under best-case analysis.
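As a concrete sketch of both arguments, here is a small comparison-counting harness using insertion sort as a stand-in for "your algorithm" (the function name, the harness, and the choice of input families are mine, purely for illustration). The reverse-sorted inputs are the family witnessing Ω(n²) in the worst case; the already-sorted inputs witness the O(n) side of the best case:

```python
def insertion_sort_comparisons(a):
    """Insertion sort that returns the number of element comparisons it makes."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1                      # one comparison per loop test
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]  # shift the larger element right
            j -= 1
    return count

for n in (10, 100, 1000):
    # Worst case: each reverse-sorted input forces n*(n-1)/2 comparisons,
    # i.e. at least C*n^2, so the worst case is Ω(n^2); combined with the
    # O(n^2) upper bound, the worst case is Θ(n^2).
    assert insertion_sort_comparisons(range(n, 0, -1)) == n * (n - 1) // 2
    # Best case: an already-sorted input does only n-1 comparisons,
    # i.e. at most C*n, so combined with the Ω(n) best-case lower bound,
    # the best case is Θ(n).
    assert insertion_sort_comparisons(range(n)) == n - 1
```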
This trick does NOT work for average-case analysis, where you have to calculate the expected running time and a single example input will not help.
I read that one way to prove tightness is to "reduce sorting to it." I have no idea what is meant by this.
This is done to prove a much stronger claim: that no algorithm (at all) can solve a certain problem within a given time bound. The usual form is to assume there is some black-box algorithm A that runs in o(g(n)) time, and then use A to build a sorting algorithm that runs in o(n log n) time. Since comparison-based sorting is an Ω(n log n) problem, this is a contradiction, so the assumption about A must be wrong: no such algorithm can run in o(g(n)) time.
This helps us prove a stronger statement: not only does OUR algorithm have this lower bound, but ALL algorithms that solve the problem have this lower bound.
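To make the reduction concrete, here is a hedged sketch of the classic instance of this argument: showing that computing a convex hull (with vertices reported in order) is Ω(n log n). The `convex_hull` black box below is an assumption of the argument, not a real routine, and `_fake_hull` exists only so the plumbing can actually be executed:

```python
def sort_via_convex_hull(xs, convex_hull):
    """Sort distinct numbers using a black-box ordered convex hull routine.

    Lift each number x onto the parabola y = x^2: every lifted point is a
    hull vertex, and walking the hull counter-clockwise from the leftmost
    vertex visits them in increasing x order.  Everything outside the
    convex_hull call is O(n), so if convex_hull ran in o(n log n) we could
    sort in o(n log n), contradicting the Ω(n log n) lower bound for
    comparison sorting.  Hence no such convex_hull routine can exist.
    """
    points = [(x, x * x) for x in xs]       # O(n) lifting step
    hull = convex_hull(points)              # the assumed o(n log n) black box
    start = min(range(len(hull)), key=lambda i: hull[i][0])
    ordered = hull[start:] + hull[:start]   # rotate to start at the leftmost vertex
    return [x for x, _ in ordered]


def _fake_hull(points):
    # Stand-in for the hypothetical black box, only to exercise the code:
    # points on a parabola appear on the hull in increasing x order, so we
    # simply sort.  A real lower-bound argument would plug in the assumed
    # o(n log n) routine here instead.
    return sorted(points)


assert sort_via_convex_hull([3, -1, 4, 1, 5], _fake_hull) == [-1, 1, 3, 4, 5]
```

The same pattern works for any problem that sorting reduces to with only O(n) extra work: that problem inherits sorting's Ω(n log n) lower bound.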