Here is an algorithm that is theoretically better than O(n log(n)) , but may be worse for any reasonable n . I believe that its running time is O(n lg*(n)) , where lg* is the iterated logarithm: https://en.wikipedia.org/wiki/Iterated_logarithm .
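For intuition, lg*(n) is the number of times you have to take a logarithm before the result drops to 1 or below. A tiny sketch (the helper name is mine, purely illustrative):

```python
import math

def iterated_log2(n):
    """lg*(n): how many times log2 must be applied before the result is <= 1."""
    count = 0
    x = float(n)
    while x > 1.0:
        x = math.log2(x)
        count += 1
    return count

print(iterated_log2(65536))  # 4
```

lg* grows absurdly slowly: it is at most 5 for every n up to 2^65536, which is why O(n lg*(n)) is so close to O(n) in practice.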
First of all, you can find all of the primes up to n in O(n) using a Sieve of Atkin. See https://en.wikipedia.org/wiki/Sieve_of_Atkin for details.
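The Sieve of Atkin is fiddly to implement; if you just want all primes up to n in O(n) time, a linear (Euler) sieve is a simpler alternative. A sketch (the function name is mine):

```python
def primes_up_to(n):
    """Linear (Euler) sieve: finds all primes up to n in O(n) time."""
    spf = [0] * (n + 1)  # spf[i] = smallest prime factor of i (0 = unmarked)
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:  # i was never marked as composite, so it is prime
            spf[i] = i
            primes.append(i)
        for p in primes:
            # Each composite i*p is marked exactly once, by its smallest
            # prime factor p, which is what makes the sieve linear.
            if p > spf[i] or i * p > n:
                break
            spf[i * p] = p
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```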
Now the idea is that we will build up our list of answers by inserting each answer only once. We will go through the prime factors one at a time, and insert the values for everything whose largest prime factor is that prime. However, to do that we need a data structure with the following properties:
- We can store a value (in particular, the answer) for each number.
- We can walk the list of inserted values backwards and forwards in O(1) .
- We can find the last number inserted below i "efficiently".
- Insertion must be "efficient".

(The quoted terms are the ones that are hard to estimate.)
The first is trivial: each slot in our data structure needs a place for the value. The second can be done with a doubly linked list. The third can be done with a clever skip-pointer scheme, described below. The fourth falls out of the first three.
We can do this with an array of nodes (which we do not initialize up front) carrying the following doubly-linked-list-style fields:

value The answer we are looking for.
prev The last previous number for which we have an answer.
next The next number for which we have an answer.
Now, if i is in the list and j is the next value in the list, then the skip trick is to fill in prev for the first even number after i , then the first number divisible by 4, then by 8, and so on, until we reach j . So if i = 81 and j = 96 , we fill in prev for 82, 84, 88 , and then 96 .
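To make the skip trick concrete, here is a little helper (my own, purely illustrative) that lists the numbers whose prev gets filled in with i :

```python
def skip_fill_targets(i, j):
    """Numbers whose prev pointer is set to i when j is the next list entry."""
    targets = []
    t, step = i, 2
    while True:
        t = (t // step + 1) * step  # next multiple of `step` strictly above t
        if t >= j:
            targets.append(j)       # stop once we reach j
            break
        targets.append(t)
        step *= 2
    return targets

print(skip_fill_targets(81, 96))  # [82, 84, 88, 96]
```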
Now suppose that we want to insert the value v at k , between the existing entries i and j . How do we do it? I will present pseudocode in terms of k alone, then work it through for i = 81 , j = 96 and k = 90 .
    k.value := v
    for temp in searching down from k for increasing factors of 2:
        if temp has a value:
            our_prev := temp
            break
        else if temp has a prev:
            our_prev := temp.prev
            break
    our_next := our_prev.next
    our_prev.next := k
    k.next := our_next
    our_next.prev := k
    for temp in searching up from k for increasing factors of 2:
        if j <= temp:
            break
        temp.prev := k
    k.prev := our_prev
In our specific example, we stand ready to search down from 90 through 90, 88, 80, 64, 0 . But in fact we learn that prev is 81 as soon as we get to 88 . We would then search up through 90, 92, 96, 128, 256, ... , but we only have to set 92.prev before we reach 96 ( 96.prev was already updated when we linked k in), and we are done.
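Here is a runnable Python sketch of the whole insert, using dicts for the lazily initialized arrays (an absent key means an uninitialized slot; all names are my own):

```python
def insert(k, v, value, prev, nxt):
    """Insert answer v at position k (a sketch of the pseudocode above)."""
    # Search down from k by increasing powers of 2 (e.g. 90, 88, 80, 64, ...)
    # until we find a slot with a value or a prev pointer.
    temp, step = k, 1
    while True:
        if temp in value:
            our_prev = temp
            break
        if temp in prev:
            our_prev = prev[temp]
            break
        step *= 2
        temp = k - k % step          # clear one more low bit of k
    value[k] = v
    our_next = nxt[our_prev]
    nxt[our_prev] = k
    nxt[k] = our_next
    prev[our_next] = k
    # Search up from k by increasing powers of 2, pointing skip entries at k.
    temp, step = k, 2
    while True:
        temp = (temp // step + 1) * step  # next multiple of `step` above temp
        if our_next <= temp:
            break
        prev[temp] = k
        step *= 2
    prev[k] = our_prev

# The initialization described further down, for n = 100.
n = 100
value, nxt = {1: 0}, {1: n + 1}
prev = {2**m: 1 for m in range(1, n.bit_length())}  # 2, 4, 8, ... point at 1

insert(81, "a", value, prev, nxt)   # fills prev for 82, 84, 88, 96
insert(90, "b", value, prev, nxt)   # down-search finds prev[88] == 81
assert nxt[81] == 90 and prev[90] == 81 and prev[92] == 90
```

Note that our_next.prev is fixed when k is linked in, so the upward loop only has to touch the skip pointers strictly between k and our_next .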
Now this is a tricky bit of code, but its cost is O(log(k-i) + log(j-k) + 1) . This means that it starts out at O(log(n)) , but it gets better as more values get filled in.
So how do we initialize this data structure? Well, we allocate an array of uninitialized values, then set 1.value := 0 , 1.next := n+1 and 2.prev := 4.prev := 8.prev := 16.prev := ... := 1 . And then we start processing our primes.
When we reach a prime p , we start by finding the last inserted value below n/p . Working backwards from there, we keep inserting values for x*p, x*p^2, ... until we hit our limit. (The reason for working backwards is that we do not want to try to insert, say, 18 twice: once via 3 and once via 9. Going backwards prevents this.)
Now, what is our running time? Finding the primes is O(n) . Finding the initial insertion points is also easily O(n/log(n)) searches taking O(log(n)) time each, for another O(n) . What about inserting all of the values? That is trivially O(n log(n)) , but can we do better?
Well, first, all of the inserts done at density 1/log(n) can be done in time O(n/log(n)) * O(log(n)) = O(n) . And then all of the inserts at density 1/log(log(n)) can likewise be done in time O(n/log(log(n))) * O(log(log(n))) = O(n) . And so on as the number of logs grows. The number of such factors that we get is O(lg*(n)) , for the O(n lg*(n)) estimate that I gave.
I have not shown that this bound is the best you can do, but I believe that it is.
So, not O(n) , but pretty damn close.