For your first question: there is a difference between the best-case and worst-case running times of an algorithm on the one hand, and big-O and big-Ω notation on the other. The best-case and worst-case running times are actual mathematical functions with exact values, giving the minimum and maximum amount of work the algorithm does. To describe the growth rates of those functions, we can use big-O and big-Ω notation. That said, you can also describe the best-case behavior with a big-Ω bound, or the worst-case behavior with a big-O bound. For example, we might know that the worst-case running time of a function is O(n²) without actually knowing which function the worst-case behavior is. This can happen when we just want to bound the worst case from above, so that we know it isn't catastrophically bad. It's possible that the worst-case behavior is actually Θ(n), in which case O(n²) is a loose upper bound, but saying the worst case is O(n²) still tells us it isn't terrible. In the same way, we can say that the best-case behavior of an algorithm is Ω(n), for example, if we know that it must do at least linear work but can't say whether it must do much more than that.
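As a minimal sketch of the distinction (the function and inputs here are my own illustration, not from the question): linear search has a worst case of exactly n comparisons and a best case of exactly 1 comparison. Its worst case is Θ(n), but it would also be correct, just loose, to say the worst case is O(n²).

```python
def linear_search(xs, target):
    """Return (index, comparisons made); index is -1 if not found."""
    comparisons = 0
    for i, x in enumerate(xs):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

# Best case: target is the first element -> exactly 1 comparison (Theta(1)).
print(linear_search([5, 1, 2], 5))   # (0, 1)

# Worst case: target absent -> exactly n comparisons (Theta(n)).
# Saying the worst case is O(n^2) is also true, just a loose upper bound.
print(linear_search([1, 2, 3], 9))   # (-1, 3)
```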
As for your second question, the worst-case and best-case behavior of an algorithm depend entirely on the structure of the input, and it's usually quite difficult to analyze them without seeing how the algorithm runs on different kinds of inputs. It is perfectly reasonable to analyze the worst case by exhibiting some specific family of inputs (note that it must be a family of inputs, not just a single input, since we need an asymptotic bound) and showing that the algorithm must perform poorly on that family. You can analyze the best case the same way.
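To make the "family of inputs" idea concrete (insertion sort is my own example here, not from the question): counting comparisons on already-sorted inputs of growing size n gives the best-case family, n − 1 comparisons, while reverse-sorted inputs give the worst-case family, n(n−1)/2 comparisons.

```python
def insertion_sort_comparisons(xs):
    """Run insertion sort on a copy of xs; return the number of key comparisons."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0:
            comparisons += 1
            if xs[j] > key:
                xs[j + 1] = xs[j]
                j -= 1
            else:
                break
        xs[j + 1] = key
    return comparisons

for n in (4, 8, 16):
    best = insertion_sort_comparisons(list(range(n)))         # sorted family
    worst = insertion_sort_comparisons(list(range(n, 0, -1)))  # reversed family
    print(n, best, worst)  # best grows like n, worst like n^2
```

Because the counts are measured across a whole family of sizes, you can read off the asymptotic growth: the sorted family witnesses a Θ(n) best case, the reversed family a Θ(n²) worst case.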
Hope this helps!