I have a counting algorithm for which I'm trying to get a general big-O description. It's horribly nested and horribly exponential. Here it is:
1. For each T_i in T
2.   For k = 1 to max_k
3.     For each of the 2^k * (n choose k) items
4.       For each t in T_i
5.         check if the item is in t ... etc.
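For concreteness, here is a minimal Python sketch of that loop structure. The names `T`, `n`, and `max_k` come from the description above; since the actual items and the membership test aren't shown, the body just counts how many times the innermost check would run:

```python
import math

def count_inner_checks(T, n, max_k):
    """Count executions of the innermost check in the nested loops above."""
    checks = 0
    for T_i in T:                              # line 1: each part of T
        for k in range(1, max_k + 1):          # line 2: k = 1 .. max_k
            items = 2**k * math.comb(n, k)     # line 3: 2^k * (n choose k) items
            for _ in range(items):
                for t in T_i:                  # line 4: each t in T_i
                    checks += 1                # line 5: the membership test
    return checks
```

For example, with two parts of sizes 2 and 1, n = 10 and max_k = 3, this runs the check 3 * (2*10 + 4*45 + 8*120) = 3480 times, matching the product form derived below.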
Below is a line-by-line idea of each loop's runtime:
- This is a straightforward loop, so I'm just going to give it the constant c1.
- max_k is a small number, always less than n, typically around 4 or 5. I'll write it as k below.
- This loop always executes 2^k * (n choose k) times.
- Treating line 1 as contributing a constant, we can generalize this line: in the worst case it runs no more than 2^n times, but in general it runs some fraction of that, so I'll call it (2^n)/c2.
- This is a simple if-statement check inside all of these loops, therefore c3.
Multiplying all of these together gives:
c1 * k * 2^k * (n choose k) * (2^n)/c2 * c3
Since I want a big-O representation, ignoring the constants gives:
k * 2^k * (n choose k) * (2^n)
It is known that (n choose k) is bounded above by (n * e / k)^k, therefore:
O(k * 2^k * (n * e / k)^k * (2^n))
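As a quick numeric sanity check of that bound (using Python's standard `math.comb` and `math.e`):

```python
import math

# Verify (n choose k) <= (n*e/k)^k for a range of small n and k.
for n in range(1, 30):
    for k in range(1, n + 1):
        assert math.comb(n, k) <= (n * math.e / k) ** k
```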
My question is: what can I ignore here? 2^n is certainly the dominant term, since n is always greater than k and usually much greater. Can this be simplified to O(2^n)? Or O(2^terrible)? Or should the 2^k stay, as in O(2^k * 2^n)? (Or should all the terms stay?)
I understand that if k or max_k could rival or exceed n, then they would be vital. But since n always dominates, can the k terms be discarded the way lower-order polynomial terms are? I suppose all the exponential-time terms are what's confusing me. Any advice is appreciated.
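One way to build intuition about what dominates: for a fixed k, the entire non-2^n factor k * 2^k * (n*e/k)^k grows only polynomially in n (on the order of n^k), while 2^n grows exponentially. A small numeric sketch makes the scaling concrete:

```python
import math

def extra_factor(n, k):
    """The non-2^n part of the bound: k * 2^k * (n*e/k)^k."""
    return k * 2**k * (n * math.e / k) ** k

# For fixed k, doubling n multiplies this factor by exactly (2)^k = 2^k
# (polynomial growth in n), whereas the 2^n term squares when n doubles.
k = 5
ratio = extra_factor(200, k) / extra_factor(100, k)
```

Here `ratio` comes out to 2^k = 32, i.e. polynomial growth, which is why 2^n swamps the rest asymptotically; note, though, that a polynomial factor like n^k is not itself a constant, so O(n^k * 2^n) is not literally the same class as O(2^n).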