Does tail call optimization explain this performance difference?

I have seen three different ways to write the recursive Fibonacci function: plain recursion with the math inline, the same but with memoization of the results, and one using tail recursion. I understand why memoization helps: once the answers are cached, it turns the O(n) work into O(1) lookups. But I don't understand how tail call optimization could help here; I was under the impression it had to do with avoiding piling up multiple stack frames or something like that. Yet the tail-recursive version is almost as fast as the memoized one. What makes it so fast in Ruby?

Here is the slow, naive implementation with the math inline. This is by far the slowest: a single call runs in O(n), and when it is looped over for the display it becomes O(n^2).

require 'benchmark'

puts Benchmark.measure {
  # Calculate the nth Fibonacci number, f(n).
  def fibo(n)
    if n <= 1
      return n
    else
      value = fibo(n-1) + fibo(n-2)
      return value
    end
  end

  # Display the Fibonacci sequence.
  (1..40).each do |number|
    puts "fibo(#{number}) = #{fibo(number)}"
  end
}

Times Ruby 1.9.3: 55.989000 0.000000 55.989000 (55.990000)
Times JRuby 1.7.9: 51.629000 0.000000 51.629000 (51.629000)
Source: http://rayhightower.com/blog/2014/04/12/recursion-and-memoization/?utm_source=rubyweekly

Here is a version that memoizes the answers; it is clear to me why this one is fast. Once a value has been computed, any later query for it is answered in O(1), so even inside the display loop it still runs in O(n) overall in the worst case:

puts Benchmark.measure {
  # Fibonacci numbers WITH memoization.

  # Initialize the memoization array.
  @scratchpad = []
  @max_fibo_size = 50
  (1..@max_fibo_size).each do |i|
    @scratchpad[i] = :notcalculated
  end

  # Calculate the nth Fibonacci number, f(n).
  def fibo(n)
    if n > @max_fibo_size
      return "n must be #{@max_fibo_size} or less."
    elsif n <= 1
      return n
    elsif @scratchpad[n] != :notcalculated
      return @scratchpad[n]
    else
      @scratchpad[n] = fibo(n-1) + fibo(n-2)
      return @scratchpad[n]
    end
  end

  # Display the Fibonacci sequence.
  (1..40).each { |number| puts "fibo(#{number}) = #{fibo(number)}" }
}

Times Ruby 1.9.3: 0.000000 0.000000 0.000000 (0.025000)
Times JRuby 1.7.9: 0.027000 0.000000 0.027000 (0.028000)
Source: http://rayhightower.com/blog/2014/04/12/recursion-and-memoization/?utm_source=rubyweekly

The tail-recursive version also runs remarkably fast:

puts Benchmark.measure {
  # Calculate the nth Fibonacci number, f(n), using accumulator invariants.
  def fibo_tr(n, acc1, acc2)
    if n == 0
      0
    elsif n < 2
      acc2
    else
      return fibo_tr(n - 1, acc2, acc2 + acc1)
    end
  end

  def fibo(n)
    fibo_tr(n, 0, 1)
  end

  # Display the Fibonacci sequence.
  (1..50).each do |number|
    puts "fibo(#{number}) = #{fibo(number)}"
  end
}

Times Ruby 1.9.3: 0.000000 0.000000 0.000000 (0.021000)
Times JRuby 1.7.9: 0.041000 0.000000 0.041000 (0.041000)
Source: https://gist.github.com/mvidaurre/11006570

2 answers

Tail call optimization has nothing to do with it. In fact, Ruby does nothing to optimize tail calls.

The difference is that the naive algorithm calls itself recursively twice on every call, which gives O(2^n) performance, meaning the running time grows exponentially as n grows. The tail-call version makes only one recursive call per call, so it runs in linear time.
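
To make that concrete, here is a minimal sketch (my own addition, not from the linked posts) that counts how many method calls each version makes for a single fibo(30); the fibo_naive/fibo_tail names and the call counters are invented purely for the illustration:

# Count how many calls each version makes (counters added only for illustration).
$naive_calls = 0
def fibo_naive(n)
  $naive_calls += 1
  return n if n <= 1
  fibo_naive(n - 1) + fibo_naive(n - 2)
end

$tail_calls = 0
def fibo_tail(n, acc1, acc2)
  $tail_calls += 1
  return 0 if n == 0
  return acc2 if n < 2
  fibo_tail(n - 1, acc2, acc2 + acc1)  # e.g. (5,0,1) -> (4,1,1) -> (3,1,2) -> (2,2,3) -> (1,3,5)
end

fibo_naive(30)
fibo_tail(30, 0, 1)
puts "naive calls for fibo(30):          #{$naive_calls}"  # roughly 2.7 million
puts "tail-recursive calls for fibo(30): #{$tail_calls}"   # 30

The naive version needs millions of calls for n = 30, while the tail-recursive one needs exactly n, which is why it keeps up with the memoized version even without TCO.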

+4

TL;DR: As Chuck already mentioned, Ruby does not have TCO. However, making one recursive call instead of two has a huge impact on how many stack frames you use and how many operations are performed. With this answer I just want to point out that, at some point, the memoized version becomes better than the iterative (tail-recursive) version. NB: I'm not a Ruby programmer, so this may not be idiomatic code.

Your benchmark shows that the iterative (tail-recursive) approach is so fast that it can generate fibo 1..50 from scratch about as quickly as your memoized version, which reuses cached results for every call above 3.

I think 1..50 finishes so quickly that the timing is not reliable enough to tell whether the iterative version is actually faster. So I changed the memoized version to:

# Initialize the memoization array.
@scratchpad = []

# Calculate the nth Fibonacci number, f(n).
def fibo(n)
  if n <= 1
    return n
  end
  if @scratchpad[n].nil?
    @scratchpad[n] = fibo(n-1) + fibo(n-2)
  end
  return @scratchpad[n]
end

Then I changed the loop to this:

(1..5000).each { |number|
  fibo(number) # no need to time character output
}

Here are the results on my computer:

Iteration:   6.260000   0.010000   6.270000 (  6.273362)
Memoization: 0.000000   0.000000   0.000000 (  0.006943)

I used:

$ ruby -v
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

Increasing the range for the memoized version to 1..50,000, it still returned much faster than the iterative version. The reason is that the iterative version starts from scratch on every call, while the memoized version uses a less efficient algorithm but makes at most a couple of recursive calls per number, since fib(n-1) and fib(n-2) are already in the array when calculating fib(n).
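
As an illustration (my own instrumentation, not part of the code above), here is a rough sketch that counts the recursive calls the memoized fibo makes per loop iteration; the @calls counter is invented for the example:

# Memoized Fibonacci with a call counter added for illustration.
@scratchpad = []
@calls = 0

def fibo(n)
  @calls += 1
  return n if n <= 1
  @scratchpad[n] ||= fibo(n - 1) + fibo(n - 2)
end

(1..20).each do |number|
  before = @calls
  fibo(number)
  puts "fibo(#{number}) needed #{@calls - before} call(s)"  # settles at 3 calls per new number
end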

The naive version is O(fib(n)), of course. The iterative version is O(n). With memoization, fib(n-2) is free once you have calculated fib(n-1), so you are back to O(n); but in your test you calculate each previous Fibonacci number before the next one, so in practice each individual call in the 1..x loop is O(1). If you started with the highest number instead, the first call would be O(n) and every following one O(1).
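
A quick sketch of that last point, reusing the instrumented fibo from the previous snippet (again my own example; 2000 is picked just to stay well inside Ruby's default stack depth, since the first cold call recurses n levels deep):

@scratchpad = []
@calls = 0

fibo(2000)                                    # cold cache: one long recursive descent
puts "first call:  #{@calls} calls"           # roughly 2 * 2000
before = @calls
fibo(1999)                                    # already cached while computing fibo(2000)
puts "second call: #{@calls - before} call"   # 1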

+3
