Time complexity of a function with the recursive calls f(n/2) and f(n-2)?

I have been given this function and want to know its time complexity:

int f2(int n) { if(n<2) return 1; return f2(n/2)+f2(n-2); } 

I worked out its runtime as O(n^2) using the telescoping method. Is that correct?

Edit: after a second look, I noticed that this function has a similar structure to mergesort, which has complexity Θ(n log n). Is that right?
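One quick way to test both guesses, before any math: count the calls exactly. The sketch below (my addition, not part of the original question) computes the call count C(n) = C(n/2) + C(n-2) + 1 bottom-up to avoid deep recursion; if either O(n^2) or Θ(n log n) were right, the printed ratios would level off, but both keep growing.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    int max = 1 << 20;
    double *c = malloc(((size_t)max + 1) * sizeof *c);
    if (c == NULL)
        return 1;
    c[0] = c[1] = 1.0;                      /* n < 2: a single call */
    for (int n = 2; n <= max; n++)          /* bottom-up: no recursion depth issues */
        c[n] = c[n / 2] + c[n - 2] + 1.0;
    for (int n = 64; n <= max; n *= 4)
        printf("n = %8d  C(n) = %10.4g  C(n)/n^2 = %10.4g  C(n)/(n lg n) = %10.4g\n",
               n, c[n], c[n] / ((double)n * n), c[n] / (n * log2((double)n)));
    free(c);
    return 0;
}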

+6
6 answers

Suppose the running time is polynomial in n, i.e. that T(n) is at most a*n^b for some constants a and b. Then we have:

T(n) = T(n/2) + T(n-2)
     = a*(n/2)^b + a*(n-2)^b
     = a/2^b * n^b + a*n^b + (lower-order terms)
     = a*n^b * (1/2^b + 1) + (lower-order terms)

where the lower-order terms consist of sums and differences of n raised to powers strictly less than b. For sufficiently large n, the term a*n^b * (1/2^b + 1) dominates the lower-order terms and is always greater than a*n^b, so it is not true that T(n) is O(n^b) for any constant b.

Now suppose the running time is exponential in n, i.e. that T(n) is at most a*b^n for some constants a and b. Then:

T(n) = T(n/2) + T(n-2)
     = a*b^(n/2) + a*b^(n-2)
     = a*b^n * (b^(-n/2) + 1/b^2)

Now, if b > 1 and n >= -2 log(1 - 1/b^2) / log(b), then:

n >= -2 log(1 - 1/b^2) / log(b)
-n/2 <= log(1 - 1/b^2) / log(b) = log_b(1 - 1/b^2)
b^(-n/2) <= 1 - 1/b^2
b^(-n/2) + 1/b^2 <= 1
T(n) = a*b^n * (b^(-n/2) + 1/b^2) <= a*b^n * 1 = a*b^n

Therefore, for any a > 0 and b > 1, there exists an N (equal to -2 log(1 - 1/b^2) / log(b)) such that for all n >= N, T(n) <= a*b^n.

So what does that tell us? T(n) grows faster than any polynomial (and this covers polylogarithmic factors too, since any polylogarithmic function is o(a larger, purely polynomial function)), but more slowly than any exponential. I'm not sure there's a simple characterization of the exact growth rate, but these are some useful bounds.

EDIT

As nhahtdh points out, this makes the runtime quasi-polynomial.
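Both conclusions are easy to see numerically. Here is a sketch of mine (reusing the bottom-up call-count table idea from the earlier snippet; C(n) is the exact number of calls): for a polynomial n^b, the ratio C(2n)/C(n) would flatten out at 2^b, and for an exponential b^n, the ratio C(n+2)/C(n) would hold steady at b^2 > 1. In practice the first keeps climbing while the second drifts toward 1.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int max = 1 << 21;
    double *c = malloc(((size_t)max + 1) * sizeof *c);
    if (c == NULL)
        return 1;
    c[0] = c[1] = 1.0;
    for (int n = 2; n <= max; n++)
        c[n] = c[n / 2] + c[n - 2] + 1.0;
    for (int n = 256; n <= max / 2; n *= 4)
        printf("n = %8d  C(2n)/C(n) = %10.1f  C(n+2)/C(n) = %.8f\n",
               n, c[2 * n] / c[n], c[n + 2] / c[n]);
    free(c);
    return 0;
}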

+3

TL;DR: I believe the recurrence is n^O(log n), which is superpolynomial but subexponential. I don't have a matching lower bound, and I doubt the bound is tight, but it's progress!

I think I can get an upper bound that is subexponential but superpolynomial. Combined with @Adam Rosenfield's answer, which rules out any polynomial bound, this shows that

  • the recurrence has no polynomial bound, but
  • any exponential bound must be loose.

The recurrence we want to solve is

T(n) = T(n - 2) + T(n / 2) + 1

T(1) = T(2) = 1

One idea I had was to expand out the recurrence in a particular order. Suppose we want to evaluate T(n). Doing one step of the expansion gives

T(n - 2) + T(n / 2) + 1

Now, suppose we expand out the T(n - 2) term. This gives

T(n - 4) + T(n / 2 - 1) + T(n / 2) + 2

Now expand out that n - 4 term. Doing this gives

T(n - 6) + T(n / 2 - 2) + T(n / 2 - 1) + T(n / 2) + 3

We can repeat this process of expanding out the subtracted term (the term of the form T(n - 2k)) over and over again. So what happens if we expand it out to the point where n - 2k is about n / 2? That will take about n / 4 expansions. If we do this, we get that

T(n) = T(n / 2) + T(n / 2) + T(n / 2 - 1) + T(n / 2 - 2) + ... + T(n / 2 - n / 4) + n / 4

I will make the simplifying assumption that T(n) is nondecreasing. Under that assumption, we get that

T(n) ≤ T(n / 2) + T(n / 2) + T(n / 2) + ... + T(n / 2) + n / 4

T(n) ≤ (n / 4 + 1) T(n / 2) + n / 4

I'm going to play a little fast and loose with the math here and make another simplification: instead of having the coefficient of T(n / 2) be (n / 4 + 1), I'll just make it n / 4. I know this isn't safe, but I'm going to assume it doesn't change the result too much. It would be nice to double-check that everything still works after doing this, though!
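Taking up that invitation to double-check, here is a quick numeric test of mine: compare exact values of the recurrence against the simplified bound T(n) ≤ (n/4) T(n/2) + n/4. As you would expect, the dropped +1 only matters for small n; the sketch reports the last n at which the bound fails.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int max = 1 << 16;
    double *t = malloc(((size_t)max + 1) * sizeof *t);
    if (t == NULL)
        return 1;
    t[0] = t[1] = t[2] = 1.0;               /* base cases T(1) = T(2) = 1 */
    for (int n = 3; n <= max; n++)
        t[n] = t[n / 2] + t[n - 2] + 1.0;
    int last_violation = 0;
    for (int n = 4; n <= max; n += 4)       /* multiples of 4 keep n/2 and n/4 integral */
        if (t[n] > (n / 4.0) * t[n / 2] + n / 4.0)
            last_violation = n;
    printf("bound last violated at n = %d\n", last_violation);
    free(t);
    return 0;
}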

With that simplification, our recurrence (roughly) becomes

T(n) ≤ (n / 4) T(n / 2) + n / 4

This is much easier to work with, and now we can try to solve it directly. We'll use the iteration method: keep plugging the recurrence back into itself until a pattern emerges. Doing so gives the following:

T(n) ≤ (n / 4) T(n / 2) + n / 4
     ≤ (n / 4)((n / 8) T(n / 4) + n / 8) + n / 4
     = (n^2 / 32) T(n / 4) + n / 4 + n^2 / 32
     ≤ (n^2 / 32)((n / 16) T(n / 8) + n / 16) + n / 4 + n^2 / 32
     = (n^3 / 512) T(n / 8) + n / 4 + n^2 / 32 + n^3 / 512
     ≤ (n^3 / 512)((n / 32) T(n / 16) + n / 32) + n / 4 + n^2 / 32 + n^3 / 512
     = (n^4 / 16384) T(n / 16) + n / 4 + n^2 / 32 + n^3 / 512 + n^4 / 16384

Let's look at the pattern that's emerging. After doing this k times, the coefficient of T(n / 2^k) is n^k divided by some power of two. So far, those powers of two have been 4, 32, 512, 16384. Those numbers don't have an obvious pattern to them, but if we rewrite them as 2^2, 2^5, 2^9, 2^14 we see that the exponents follow the pattern 2, 5, 9, 14, ..., each one less than the triangular numbers 3, 6, 10, 15, .... That's no coincidence: each time we expand the recurrence, the denominator gets multiplied by the next power of two, which bumps the exponent from one of these shifted triangular numbers to the next. We should therefore expect the denominator after k iterations to be

2^((k + 1)(k + 2) / 2 - 1)
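As a throwaway sanity check of that formula (my addition), the first few values of k should reproduce the denominators 4, 32, 512, 16384 seen above:

#include <stdio.h>

int main(void)
{
    for (int k = 1; k <= 4; k++)            /* expect 4, 32, 512, 16384 */
        printf("k = %d  denominator = %llu\n",
               k, 1ULL << ((k + 1) * (k + 2) / 2 - 1));
    return 0;
}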

The remaining terms, meanwhile, are a sum of powers of n divided by powers of two at those shifted triangular numbers. So if we expand things out k times, we get

T(n) ≤ (n^k / 2^((k + 1)(k + 2) / 2 - 1)) T(n / 2^k) + Σ_{i=0}^{k} n^i / 2^((i + 1)(i + 2) / 2 - 1)

Okay! So it's a mess, but it's not that bad. We know that the recursion stops as soon as n / 2^k = 1, which happens when k = log n. To make things a little easier, I'm going to change the base case so that the recursion stops when n / 2^k = 2, which happens when k = log n - 1; that changes things by at most a constant factor. So if we substitute k = log n - 1 into the above formula, we can simplify to get a closed-form expression for the upper bound. Doing so gives the following:

T(n) ≤ (n^k / 2^((k + 1)(k + 2) / 2 - 1)) T(n / 2^k) + Σ_{i=0}^{k} n^i / 2^((i + 1)(i + 2) / 2 - 1)

= (n^(log n - 1) / 2^((log n)(log n + 1) / 2 - 1)) T(2) + Σ_{i=0}^{lg n - 1} n^i / 2^((i + 1)(i + 2) / 2 - 1)

Oof, that's ugly. But it's not as bad as it might seem. Let's look at the denominator of the first term:

2^((log n)(log n + 1) / 2 - 1)

Using basic exponent laws, we get that

2^((log n)(log n + 1) / 2 - 1)

= (2^(log n))^((log n + 1) / 2) / 2

= n^((log n + 1) / 2) / 2

That's not so bad! So if we take the expression

n^(lg n - 1) / 2^((lg n)(lg n + 1) / 2 - 1)

we see that it simplifies as follows:

n^(lg n - 1) / 2^((lg n)(lg n + 1) / 2 - 1)

= 2 n^(log n - 1) / n^((log n + 1) / 2)

= 2 n^(log n - 1 - (log n + 1) / 2)

= 2 n^((2 log n - 2 - log n - 1) / 2)

= 2 n^((log n - 3) / 2)

= n^O(log n)

Sweet! So the leading term ends up being of the form n^O(log n)!

What about the rest? Well, we still have that awful summation to deal with:

Σ_{i=0}^{lg n - 1} n^i / 2^((i + 1)(i + 2) / 2 - 1)

Fortunately, you can see that

n^i / 2^((i + 1)(i + 2) / 2 - 1)

≤ n^i / 2^(i(i + 1) / 2)

≤ n^i / 2^i

= (n / 2)^i

So if all we want is an upper bound on the summation, we can use this much simpler sum:

Σ_{i=0}^{log n - 1} (n / 2)^i

Each of the log n terms is at most the largest one, (n / 2)^(log n - 1) (assuming n ≥ 2, so that n / 2 ≥ 1), so this sum is in turn upper-bounded by

Σ_{i=0}^{log n - 1} (n / 2)^(log n - 1)

= (log n)(n^(log n - 1)) / 2^(log n - 1)

= (2 log n)(n^(log n - 1)) / n

= (2 log n) n^(log n - 2)

= n^O(log n)

And bingo: the second term in that awful sum is also n^O(lg n)! So we have T(n) = n^O(log n). This bound is not necessarily tight, but it does show that the recurrence is definitely subexponential, since n^O(log n) grows more slowly than any exponential function!
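Here is a numeric probe of this bound (my addition): if T(n) behaved like n^(c log n), then log2 T(n) / (log2 n)^2 should settle down near the constant c. On exact values of the recurrence, the ratio does appear to level off.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    int max = 1 << 22;
    double *t = malloc(((size_t)max + 1) * sizeof *t);
    if (t == NULL)
        return 1;
    t[0] = t[1] = 1.0;
    for (int n = 2; n <= max; n++)
        t[n] = t[n / 2] + t[n - 2] + 1.0;
    for (int n = 1 << 10; n <= max; n *= 4) {
        double l = log2((double)n);
        printf("n = %9d  log2 T(n) / (log2 n)^2 = %.4f\n", n, log2(t[n]) / (l * l));
    }
    free(t);
    return 0;
}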

Hope this helps!

+1

I think this is O(2^sqrt(n))

You have T(n) = T(n/2) + T(n-2).

Plugging in the proposed solution, dividing through by 2^sqrt(n), and letting n → ∞:

2^sqrt(n)   2^sqrt(n/2)   2^sqrt(n-2)
--------- = ----------- + -----------
2^sqrt(n)    2^sqrt(n)     2^sqrt(n)

1 = 2^(sqrt(n/2) - sqrt(n)) + 2^(sqrt(n-2) - sqrt(n))

And 2^(sqrt(n/2) - sqrt(n)) → 0, while 2^(sqrt(n-2) - sqrt(n)) → 1, so the equation balances in the limit.
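A numeric probe of this guess (my addition, using the same exact-count recurrence as the earlier sketches): print log2 T(n) alongside both sqrt(n) and (log2 n)^2 and watch which one it tracks as n grows. It is a sketch, not a proof in either direction.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    int max = 1 << 22;
    double *t = malloc(((size_t)max + 1) * sizeof *t);
    if (t == NULL)
        return 1;
    t[0] = t[1] = 1.0;
    for (int n = 2; n <= max; n++)
        t[n] = t[n / 2] + t[n - 2] + 1.0;
    for (int n = 1 << 10; n <= max; n *= 4)
        printf("n = %9d  log2 T(n) = %8.1f  sqrt(n) = %8.1f  (log2 n)^2 = %7.1f\n",
               n, log2(t[n]), sqrt((double)n), pow(log2((double)n), 2.0));
    free(t);
    return 0;
}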

0

T(n) = T(n/2) + T(n-2) + 1

We know that T(n-2) > T(n/2),

so we can say that:

T(n) ≤ 2T(n-2) + 1

so T(n-2) ≤ 2T(n-4) + 1, and substituting back in:

T(n) ≤ 4T(n-4) + 3

T(n) ≤ 8T(n-6) + 7

T(n) ≤ 2^k T(n-2k) + 2^k - 1

Since we know that T(1) = 1,

n - 2k = 1 → k = (n-1)/2

T(n) ≤ 2^((n-1)/2) T(1) + 2^((n-1)/2) - 1

T(n) ≤ 2^((n+1)/2) - 1

so T(n) = O(2^(n/2))

However, there should be a tighter bound.
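To get a feel for how loose this bound is, here is a small comparison of mine between exact values of the recurrence and 2^((n-1)/2):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    int max = 1024;
    double *t = malloc(((size_t)max + 1) * sizeof *t);
    if (t == NULL)
        return 1;
    t[0] = t[1] = 1.0;
    for (int n = 2; n <= max; n++)
        t[n] = t[n / 2] + t[n - 2] + 1.0;
    for (int n = 128; n <= max; n *= 2)     /* the bound dwarfs the true value */
        printf("n = %5d  T(n) = %12.4g  2^((n-1)/2) = %12.4g\n",
               n, t[n], pow(2.0, (n - 1) / 2.0));
    free(t);
    return 0;
}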

0

There is always the empirical approach: time it...

#include "timer.h"  /* the answerer's own timing helpers (Clock, clk_init, clk_start, ...) */
#include <stdio.h>

static int f2(int n)
{
    if (n < 2)
        return 1;
    return f2(n/2) + f2(n-2);
}

int main(void)
{
    printf("Fast:\n");
    for (int i = 1; i <= 512; i *= 2)
    {
        Clock clk;
        clk_init(&clk);
        clk_start(&clk);
        long sum = 0;
        for (int j = 0; j < 100; j++)   /* average over 100 runs */
            sum += f2(i);
        clk_stop(&clk);
        char buffer[32];
        printf("%8d: elapsed %10s s; sum = %12ld\n", i,
               clk_elapsed_us(&clk, buffer, sizeof(buffer)), sum/100);
    }
    printf("\nSlow:\n");
    for (int i = 512; i < 1024; i += 16)
    {
        Clock clk;
        clk_init(&clk);
        clk_start(&clk);
        long sum = 0;
        for (int j = 0; j < 100; j++)
            sum += f2(i);
        clk_stop(&clk);
        char buffer[32];
        printf("%8d: elapsed %7s s; sum = %12ld\n", i,
               clk_elapsed_ms(&clk, buffer, sizeof(buffer)), sum/100);
    }
    return 0;
}

Results:

For the record, the test was run on a MacBook Pro (Mac OS X 10.8.5) with a 2.3 GHz Intel Core i7 and 16 GB of RAM.

Fast:
       1: elapsed   0.000000 s; sum =            1
       2: elapsed   0.000001 s; sum =            2
       4: elapsed   0.000001 s; sum =            4
       8: elapsed   0.000002 s; sum =           10
      16: elapsed   0.000008 s; sum =           36
      32: elapsed   0.000045 s; sum =          202
      64: elapsed   0.000408 s; sum =         1828
     128: elapsed   0.005985 s; sum =        27338
     256: elapsed   0.139971 s; sum =       692004
     512: elapsed   5.273834 s; sum =     30251722

Slow:
     512: elapsed   5.286 s; sum =     30251722
     528: elapsed   6.370 s; sum =     36243644
     544: elapsed   7.609 s; sum =     43234278
     560: elapsed   8.949 s; sum =     51361196
     576: elapsed  10.397 s; sum =     60777212
     592: elapsed  12.171 s; sum =     71651844
     608: elapsed  14.394 s; sum =     84172786
     624: elapsed  16.716 s; sum =     98547380
     640: elapsed  19.176 s; sum =    115004102
     656: elapsed  21.985 s; sum =    133794468
     672: elapsed  25.295 s; sum =    155194954
     688: elapsed  29.170 s; sum =    179508916
     704: elapsed  33.456 s; sum =    207068524
     720: elapsed  38.317 s; sum =    238237116
     736: elapsed  43.776 s; sum =    273411566
     752: elapsed  49.878 s; sum =    313024652
     768: elapsed  56.979 s; sum =    357547444
     784: elapsed  64.456 s; sum =    407492292
     800: elapsed  72.980 s; sum =    463415834
     816: elapsed  82.535 s; sum =    525922004
     832: elapsed  93.062 s; sum =    595665060
     848: elapsed 104.886 s; sum =    673353212
     864: elapsed 118.038 s; sum =    759752270
     880: elapsed 132.569 s; sum =    855689292
     896: elapsed 150.487 s; sum =    962056258
     912: elapsed 166.065 s; sum =   1079814524
     928: elapsed 185.616 s; sum =   1209999302
     944: elapsed 206.875 s; sum =   1353724140
     960: elapsed 230.670 s; sum =   1512185428
     976: elapsed 256.718 s; sum =   1686667684
     992: elapsed 285.158 s; sum =   1878548866
    1008: elapsed 316.281 s; sum =   2089305684

I haven't fitted curves to this data. Clearly, you could reduce or eliminate the repeat iterations for the larger sizes, and so on. (That would also avoid the overflow that occurs at 1024 and ...)

If someone with some graphing experience would like to plot the results (on a logarithmic time scale), it could be illuminating.
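Short of a full graph, here is a rough sketch of mine that squeezes an effective polynomial exponent out of the doubling times measured above: for a true O(n^b) the exponent would settle at b, but it keeps climbing, which again points at superpolynomial growth.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* elapsed seconds for n = 64, 128, 256, 512, copied from the table above */
    const double t[] = { 0.000408, 0.005985, 0.139971, 5.273834 };
    for (int i = 1; i < 4; i++)
        printf("n: %3d -> %3d  time ratio %5.1f  effective exponent %.2f\n",
               64 << (i - 1), 64 << i, t[i] / t[i - 1], log2(t[i] / t[i - 1]));
    return 0;
}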

0

I think it is exponential. Why? Each call to f(n) leads to two more calls, f(n/2) and f(n-2).

Check this question for details:

Computational complexity of the Fibonacci sequence

That recursion has Fibonacci-order complexity, and the structure of your function is similar.

-1

Source: https://habr.com/ru/post/954007/

