I made up my own interview problem and have a question about the best solution. I will state the problem and my solution below, but first let me say that the obvious solution involves a nested loop and is O(n²). I believe I found an O(n) solution, but then I realized that it depends not only on the size of the input, but also on the largest value in the input. It seems my O(n) claim is only a technicality, and that the solution can easily run in O(n²) time or worse in real life.
The problem is this: For each element in a given array of positive integers, print all the other elements of the array that are multiples of the current element.
Input Example:
[2 9 6 8 3]
Result:
2: 6 8
9:
6:
8:
3: 9 6
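For reference, this is roughly the obvious nested-loop solution I mean (a sketch of my own; the method name is just for illustration). It is O(n²) in the array length, because every element is compared against every other element:

// Naive approach: for each element, scan the whole array and print
// every other element that is a multiple of it.
private static void PrintAllDivisibleByNaive(int[] arr)
{
    if (arr == null || arr.Length < 2)
        return;

    for (int i = 0; i < arr.Length; i++)
    {
        Console.Write("{0}: ", arr[i]);
        for (int j = 0; j < arr.Length; j++)
        {
            if (j != i && arr[j] % arr[i] == 0)
                Console.Write("{0} ", arr[j]);
        }
        Console.WriteLine();
    }
}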
My solution (in C#):
private static void PrintAllDivisibleBy(int[] arr)
{
    Dictionary<int, bool> dic = new Dictionary<int, bool>();
    if (arr == null || arr.Length < 2)
        return;

    // First pass: find the maximum value and record every value in the dictionary.
    int max = arr[0];
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] > max)
            max = arr[i];
        dic[arr[i]] = true;
    }

    // Second pass: for each element, probe its multiples (2x, 3x, ...) up to max
    // and print the ones that occur in the array.
    for (int i = 0; i < arr.Length; i++)
    {
        Console.Write("{0}: ", arr[i]);
        int multiplier = 2;
        while (true)
        {
            int product = multiplier * arr[i];
            if (dic.ContainsKey(product))
                Console.Write("{0} ", product);
            if (product >= max)
                break;
            multiplier++;
        }
        Console.WriteLine();
    }
}
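For completeness, this is the kind of minimal driver I use to try it on the example input (the class name and Main are just scaffolding of mine, and it assumes the method above is pasted into the same class):

using System;
using System.Collections.Generic;

static class Demo
{
    static void Main()
    {
        // Example input from above; prints one line per element.
        PrintAllDivisibleBy(new[] { 2, 9, 6, 8, 3 });
    }

    // PrintAllDivisibleBy from above goes here, in the same class.
}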
So if two elements of the array are 1 and n, where n is the length of the array, the inner while loop for the element 1 will execute about n times, which looks like it could add up to O(n²). But since the performance depends on the size of the input values, and not on the length of the list, that makes it O(n), right?
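To make that concrete, here is a small self-contained sketch (my own instrumentation, not part of the solution) that reproduces the inner-loop structure on an array of length 2 whose values are 1 and 1,000,000, and just counts the iterations:

using System;

class LoopCountDemo
{
    static void Main()
    {
        // Only 2 elements, but the maximum value is 1,000,000.
        int[] arr = { 1, 1000000 };

        int max = arr[0];
        foreach (int x in arr)
            if (x > max) max = x;

        // Same loop shape as in PrintAllDivisibleBy, with the dictionary
        // lookup and printing stripped out; we only count iterations.
        long innerIterations = 0;
        foreach (int x in arr)
        {
            int multiplier = 2;
            while (true)
            {
                int product = multiplier * x;
                innerIterations++;
                if (product >= max)
                    break;
                multiplier++;
            }
        }

        // Prints roughly 1,000,000 even though the array has only 2 elements.
        Console.WriteLine(innerIterations);
    }
}

So the amount of work tracks the largest value in the array, not its length.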
Do you think this is a true O(n) solution? Or is it O(n) only on a technicality, and slower in real life?