Big O - is n always the input size?

I made up my own interview problem and wondered about a good solution. I will state the problem and my solution below, but first let me say that the obvious solution involves a nested loop and is O(n^2). I believe I found an O(n) solution, but then I realized it depends not only on the size of the input, but also on the largest value in the input. It seems that my O(n) time is only technically O(n), and that it could easily run in O(n^2) time or worse in real life.

The problem is this: for each element in a given array of positive integers, print all the other elements of the array that are multiples of the current element.

Input Example:

[2 9 6 8 3] 

Result:

 2: 6 8
 9:
 6:
 8:
 3: 9 6

My solution (in C#):

 private static void PrintAllDivisibleBy(int[] arr)
 {
     if (arr == null || arr.Length < 2)
         return;

     // Record every value and find the maximum in one pass.
     Dictionary<int, bool> dic = new Dictionary<int, bool>();
     int max = arr[0];
     for (int i = 0; i < arr.Length; i++)
     {
         if (arr[i] > max)
             max = arr[i];
         dic[arr[i]] = true;
     }

     for (int i = 0; i < arr.Length; i++)
     {
         Console.Write("{0}: ", arr[i]);

         // Walk the multiples 2*arr[i], 3*arr[i], ... up to max,
         // printing the ones that occur in the array.
         int multiplier = 2;
         while (true)
         {
             int product = multiplier * arr[i];
             if (dic.ContainsKey(product))
                 Console.Write("{0} ", product);
             if (product >= max)
                 break;
             multiplier++;
         }
         Console.WriteLine();
     }
 }

So, if two elements of the array are 1 and n, where n is the length of the array, the inner while loop will execute n times for the element 1, which amounts to O(n^2). But since the running time depends on the magnitude of the input values rather than on the length of the array, that still makes it O(n), right?
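To make this concrete, here is a small instrumented sketch (my own demo, not part of the original question; the name `CountInnerIterations` is made up) that counts how many times the inner while loop body runs. Two arrays of the same length can do wildly different amounts of work depending on their largest value:

```csharp
using System;
using System.Collections.Generic;

class IterationDemo
{
    // Mirrors the loop structure of PrintAllDivisibleBy, but counts
    // inner-loop iterations instead of printing multiples.
    static long CountInnerIterations(int[] arr)
    {
        var dic = new Dictionary<int, bool>();
        int max = arr[0];
        foreach (int x in arr)
        {
            if (x > max) max = x;
            dic[x] = true;
        }

        long iterations = 0;
        foreach (int x in arr)
        {
            int multiplier = 2;
            while (true)
            {
                iterations++;
                int product = multiplier * x;
                if (product >= max) break;
                multiplier++;
            }
        }
        return iterations;
    }

    static void Main()
    {
        // Both arrays have length 2, yet the work differs by five
        // orders of magnitude because it tracks the maximum value.
        Console.WriteLine(CountInnerIterations(new[] { 1, 10 }));      // prints 10
        Console.WriteLine(CountInnerIterations(new[] { 1, 1000000 })); // prints 1000000
    }
}
```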

Do you think this is a true O(n) solution? Or is it O(n) only on a technicality, and slower in real life?

+5
3 answers

Good question! The answer is no: n is not always the size of the input. You cannot talk about O(n) without defining what n means, but people often use imprecise language and let n stand for "the most obvious thing that varies here." Technically, we should say things like "this sorting algorithm performs a number of comparisons that is O(n) in the number of elements in the list": then both what n stands for and what we are counting (comparisons) are explicit.

If you have an algorithm whose cost depends on the product of two different quantities (here, the length of the list and the largest element in it), the correct way to express this is O(m*n), and then to define what m and n mean in your context. So we can say that your algorithm performs O(m*n) multiplications, where m is the length of the list and n is the largest element in the list.

+4

An algorithm is O(n) when you iterate over n elements and perform some constant-time operation at each iteration. The inner while loop of your algorithm is not constant time, since its cost depends on the magnitude of the largest number in your array.

The best-case runtime of your algorithm is O(n). That is the case when all n numbers are equal: the first multiple of each element already reaches the maximum, so the inner loop exits after one iteration.

The worst-case runtime of your algorithm is O(k*n), where k is the maximum int value possible on your machine, if you really insist on putting an upper bound on k. For a 32-bit int, the maximum value is 2,147,483,647. You could call this k a constant, but this constant is clearly

  • not fixed for each case of the input array; and,
  • not insignificant.
0

Do you think this is a true O(n) solution?

The running time is actually O(n*m), where m is the maximum element of arr. If the elements in your array are bounded by a constant, you can consider it an O(n) algorithm.

Can you improve the running time? Here is what else you can do. First, note that you can ensure the elements are distinct (compress the array into a hashmap that stores how many times each element occurs in the array). Then the running time is max/a[0] + max/a[1] + max/a[2] + ... <= max/1 + max/2 + ... + max/max = O(max*log(max)) (assuming your arr array is sorted), since that sum is max times a harmonic series. If you combine this with the obvious O(n^2) algorithm, you get an O(min(n^2, max*log(max))) algorithm.
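Here is a sketch of that idea (my own illustration; the name `DivisibleLines` and the line-building helper are made up). The values are deduplicated into a set, and each element walks its multiples up to the maximum; over distinct values this is at most max/1 + max/2 + ... + max/max = O(max*log(max)) membership checks:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SieveDemo
{
    // Builds one output line per element: the element, then every other
    // value in the array that is a multiple of it, in ascending order.
    static List<string> DivisibleLines(int[] arr)
    {
        var result = new List<string>();
        if (arr == null || arr.Length < 2) return result;

        // "Compress" the array: a set of the distinct values present.
        var present = new HashSet<int>(arr);
        int max = present.Max();

        foreach (int x in arr)
        {
            var parts = new List<string> { x + ":" };
            // Walk multiples 2x, 3x, ... up to max; long avoids int overflow.
            for (long product = 2L * x; product <= max; product += x)
                if (present.Contains((int)product))
                    parts.Add(product.ToString());
            result.Add(string.Join(" ", parts));
        }
        return result;
    }

    static void Main()
    {
        foreach (string line in DivisibleLines(new[] { 2, 9, 6, 8, 3 }))
            Console.WriteLine(line);
        // 2: 6 8
        // 9:
        // 6:
        // 8:
        // 3: 6 9   (multiples come out in ascending order here)
    }
}
```

Note that the harmonic-sum bound applies to the walk over distinct values; duplicate elements would repeat the same walk, which a real implementation could cache using the multiplicity counts the answer mentions.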

0

Source: https://habr.com/ru/post/1233670/

