Algorithm Analysis: Am I analyzing these algorithms correctly? How do I approach such problems?

1)

    int x = 25;
    for (int i = 0; i < myArray.length; i++) {
        if (myArray[i] == x)
            System.out.println("found!");
    }

I think this is O(n).

2)

    for (int r = 0; r < 10000; r++)
        for (int c = 0; c < 10000; c++)
            if (c % r == 0)   // note: as written, r == 0 on the first pass makes c % r throw an ArithmeticException
                System.out.println("blah!");

I think this is O(1), because for any input n it will do 10,000 * 10,000 iterations. I'm not sure if this is correct.

3)

    int a = 0;
    for (int i = 0; i < k; i++) {
        for (int j = 0; j < i; j++)
            a++;
    }

I think this is O(i * k). I really don't know how to approach problems like this, where the inner loop depends on a variable that keeps increasing in the outer loop. Some key ideas here would be greatly appreciated. The outer loop runs k times, and in total the inner loop runs 0 + 1 + 2 + ... + (k - 1) times, so the sum should be k * (k - 1) / 2, which is on the order of k^2. But do I then multiply that by the k iterations of the outer loop, making it O(k^3)? That seems too big. Again, I do not know how to approach this.
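
One way to check a sum like this is simply to count the operations for a few values of k and compare them with the closed form k * (k - 1) / 2. A minimal sketch (class and method names, and the test values of k, are arbitrary):

    // Count how many times a++ actually runs for a given k and compare
    // the measured count with the closed form k * (k - 1) / 2.
    public class CountInnerLoop {
        static long countOps(int k) {
            long ops = 0;
            for (int i = 0; i < k; i++) {
                for (int j = 0; j < i; j++) {
                    ops++;                       // stands in for the a++ from the question
                }
            }
            return ops;
        }

        public static void main(String[] args) {
            for (int k : new int[] {10, 100, 1000}) {
                long measured = countOps(k);
                long predicted = (long) k * (k - 1) / 2;
                System.out.println("k=" + k + "  measured=" + measured + "  k(k-1)/2=" + predicted);
            }
        }
    }

The counts match the closed form, i.e. the total work is the sum itself: quadratic in k, not cubic.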

4)

    int key = 0;   // key may be any value
    int first = 0;
    int last = intArray.length - 1;
    int mid = 0;
    boolean found = false;
    while (!found && first <= last) {
        mid = (first + last) / 2;
        if (key == intArray[mid]) found = true;
        if (key < intArray[mid]) last = mid - 1;
        if (key > intArray[mid]) first = mid + 1;
    }

This, I think, is O(log n). But I came to that conclusion mainly because I recognize it as binary search, and I know from reading that binary search runs in O(log n). I think the reason is that the input size is halved on each iteration of the loop, but I don't know whether that is the right argument, or how to approach similar algorithms that I have not seen before and show in a more verifiable or formal way that they run in logarithmic time.

5)

    int currentMinIndex = 0;
    for (int front = 0; front < intArray.length; front++) {
        currentMinIndex = front;
        // find the index of the smallest element in intArray[front..end]
        for (int i = front; i < intArray.length; i++) {
            if (intArray[i] < intArray[currentMinIndex]) {
                currentMinIndex = i;
            }
        }
        // swap that element to the front of the unsorted part
        int tmp = intArray[front];
        intArray[front] = intArray[currentMinIndex];
        intArray[currentMinIndex] = tmp;
    }

This one confuses me. The outer loop runs n times, and in total the inner loop runs n + (n - 1) + (n - 2) + ... + 1 times? Does that make it O(n^3)?

3 answers

More or less, yes.

1 is right - it looks like you're searching for a specific item in what I assume is an unsorted collection. If so, the worst case is that the item is at the very end of the list (or not there at all), so O(n).

2 is correct, although a little strange. It is O(1), assuming the loop bounds on r and c are constants and not variables. If they are constant, then yes, O(1), because there is no input to grow.

3 works out to O(n^2), I believe (with n = k). There will be some constant coefficient, something like c * n^2; drop the constant and you get O(n^2).

4 is the binary search algorithm over a sorted collection. O(log n) is correct. It is logarithmic because on each iteration you effectively halve the number of positions that could still contain the element you are looking for.

5 is selection sort (find the minimum of the remaining elements and swap it to the front of the unsorted part): O(n^2), for essentially the same reasons as 3.
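
For both 3 and 5 the bound comes from the same triangular sum. Written out (a standard identity, not specific to this code):

    n + (n - 1) + (n - 2) + ... + 2 + 1 = n * (n + 1) / 2 = 1/2 n^2 + 1/2 n, which is O(n^2).

The outer loop's n iterations are already counted inside that sum, so nothing gets multiplied by an extra n (or k) afterwards - that is why the result is quadratic rather than cubic.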


O() by itself does not tell the whole story: you need to say whether you mean worst-case or average-case complexity. Some sorting algorithms, for example, are O(n log n) on average but O(n^2) in the worst case.

Basically you need to calculate the total number of times the innermost body runs, then keep only the fastest-growing term of the result and drop its constant factor (for example, if you have k * (k + 1) / 2 = 1/2 k^2 + 1/2 k, the dominant term is 1/2 k^2, so you are O(k^2)).

For example, your 4) is O(log(n)): the first iteration works on a range of size n, the next on a range of size n/2, then n/4, ..., until the size reaches 1. So that is about log(n) iterations.
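
To see where the log comes from, note that after t halvings the range has size about n / 2^t, and the loop stops once that reaches 1:

    n / 2^t = 1  =>  2^t = n  =>  t = log(n) / log(2),

so the number of iterations is about log2(n); the base of the logarithm only changes a constant factor, hence O(log n).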


Your question mainly concerns the definition of O().

When someone says that an algorithm is O(log(n)), you should read it as:

When the input parameter n becomes very large, the number of operations performed by the algorithm grows no faster than (a constant times) log(n).

Now this means two things:

  • You must have at least one input parameter n. It makes no sense to talk about O() without one (which is the situation in your case 2).
  • You need to decide which operations you are counting. They can be additions, comparisons between two elements, bytes allocated, function calls - but you have to pick. Usually you count the operation that is most expensive for you, or the one that becomes expensive if it is performed too many times.
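
To make the informal reading above precise, the usual formal definition (standard textbook material, not specific to this question) is:

    f(n) = O(g(n)) means: there exist a constant c > 0 and a threshold n0 such that f(n) <= c * g(n) for every n >= n0.

For example, k * (k + 1) / 2 <= k^2 for all k >= 1, so k * (k + 1) / 2 = O(k^2) with c = 1 and n0 = 1; this is the formal version of "drop the lower-order terms and the constant factor".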

With this in mind, back to your problems:

  • n is myArray.length, and the operation you are counting is '=='. In this case the loop performs exactly n of them, which is O(n).

  • there is no input n you can point to, so the notation does not really apply.

  • the only candidate for n is k, and the operation you count is ++. It is executed exactly k * (k - 1) / 2 times, which is O(k^2), as you worked out.

  • this time n is the length of your array, and the operation you count is '=='. Here the number of operations depends on the data, so one usually talks about the worst case: over all possible inputs, the one that takes the longest. In the best case the algorithm needs a single comparison. For the worst case, take an example: if the array is [1,2,3,4,5,6,7,8,9] and you are looking for 4, intArray[mid] will be, in turn, 5, 2, 3 and then 4, so you do 4 comparisons. More generally, each iteration halves the range first..last, so an array of size n is exhausted after at most about log2(n) + 1 iterations, which gives complexity O(log(n)). (A small sketch that counts these comparisons empirically follows this list.)
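
A minimal sketch of that counting idea (class and method names are arbitrary): it runs the same loop as in 4), counts how many '==' comparisons against intArray[mid] are made, and prints the worst case over all keys next to log2(n):

    // Run the loop from example 4), count the == comparisons against intArray[mid],
    // and compare the worst case over all keys with log2(n).
    public class CountBinarySearch {
        static int comparisons(int[] intArray, int key) {
            int first = 0;
            int last = intArray.length - 1;
            int mid;
            int count = 0;
            boolean found = false;
            while (!found && first <= last) {
                mid = (first + last) / 2;
                count++;                               // one == comparison per iteration
                if (key == intArray[mid]) found = true;
                if (key < intArray[mid]) last = mid - 1;
                if (key > intArray[mid]) first = mid + 1;
            }
            return count;
        }

        public static void main(String[] args) {
            for (int n : new int[] {10, 1000, 1000000}) {
                int[] a = new int[n];
                for (int i = 0; i < n; i++) a[i] = i;  // sorted array 0 .. n-1
                int worst = 0;
                for (int key = 0; key < n; key++)      // search for every key, keep the worst case
                    worst = Math.max(worst, comparisons(a, key));
                System.out.println("n=" + n + "  worst comparisons=" + worst
                        + "  log2(n)=" + Math.log(n) / Math.log(2));
            }
        }
    }

For n around a million the worst case should come out to about 20 comparisons, i.e. roughly log2(n) + 1, which is the logarithmic growth described above.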

In any case, I think you are confused mainly because the exact meaning of O(n) is not yet clear to you. Hopefully this is a start.

