Slow performance of factorial function written in Clojure

I am new to Clojure. While experimenting with it, I wrote a function to calculate n!. My Clojure code is as follows:

 (defn factorial [n] (reduce * (biginteger 1) (range 1 (inc n)))) 

Then I ran the following in the REPL:

 (time (factorial 100)) 

And this was the result:

 "Elapsed time: 0.50832 msecs" 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000N 

Then I created a similar solution in Ruby:

 def factorial(n)
   start = Time.now.to_f
   result = (2..n).inject(1) { |p, f| p * f }
   finish = Time.now.to_f
   time_taken = finish - start
   puts "It took: #{(time_taken * 1000)} msecs"
   result
 end

In irb I ran factorial(100). Result:

 It took: 0.06556510925292969 msecs => 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 

The Ruby version appears to perform significantly better, despite most of the evidence I've seen suggesting that Clojure should have superior performance. Is there something I'm misunderstanding, or some element of my Clojure solution that slows it down?

+6
3 answers

Micro-benchmarking is very often misleading and in general quite hard to get right. The easiest way I have found to get reasonably close in Clojure is the criterium library (thanks Hugo!). If I start with an ugly version of the factorial computation using a simple loop, I get about 3 ns.

 user> (defn loopy-fact [x]
         (loop [y x
                answer-so-far 1]
           (if (pos? y)
             (recur (dec y) (*' answer-so-far y))
             answer-so-far)))
 #'user/loopy-fact
 user> (loopy-fact 100)
 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000N

And then benchmark it:

 user> (criterium.core/bench #(loopy-fact 100))
 WARNING: Final GC required 11.10521514596218 % of runtime
 WARNING: Final GC required 1.069604210579865 % of runtime
 Evaluation count : 12632130300 in 60 samples of 210535505 calls.
              Execution time mean : 2.978360 ns
     Execution time std-deviation : 0.116043 ns
    Execution time lower quantile : 2.874266 ns ( 2.5%)
    Execution time upper quantile : 3.243399 ns (97.5%)
                    Overhead used : 1.844334 ns

 Found 4 outliers in 60 samples (6.6667 %)
 	low-severe	 2 (3.3333 %)
 	low-mild	 2 (3.3333 %)
  Variance from outliers : 25.4468 %
  Variance is moderately inflated by outliers

If we then write the code in the usual Clojure style, with reduce over a range, and make no particular effort to make it fast:

 user> (defn mapy-fact [x]
         (reduce *' (range 1 (inc x))))
 #'user/mapy-fact
 user> (mapy-fact 100)
 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000N

Now let's find out how this compares:

 user> (criterium.core/bench #(mapy-fact 100))
 Evaluation count : 8674569060 in 60 samples of 144576151 calls.
              Execution time mean : 5.208031 ns
     Execution time std-deviation : 0.265287 ns
    Execution time lower quantile : 5.032058 ns ( 2.5%)
    Execution time upper quantile : 5.833466 ns (97.5%)
                    Overhead used : 1.844334 ns

 Found 4 outliers in 60 samples (6.6667 %)
 	low-severe	 1 (1.6667 %)
 	low-mild	 3 (5.0000 %)
  Variance from outliers : 36.8585 %
  Variance is moderately inflated by outliers

This is a bit slower, but only by about two nanoseconds.

This does much better than your test because criterium runs the function enough times for the JVM's HotSpot compiler to kick in and JIT-compile all the relevant parts. This demonstrates why micro-benchmarks on the JVM can be misleading, and why you should almost certainly use criterium for cases like this.
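If you want to see the warm-up effect without criterium, a rough sketch (my own illustration, not part of criterium) is to call the function many times first and only then use time:

 ;; Rough warm-up sketch, assuming factorial is the function from the question.
 ;; Run it many times so HotSpot has a chance to JIT-compile it, then time one call.
 (dotimes [_ 100000] (factorial 100))
 (time (factorial 100))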

PS: *' is the "auto-promoting" variant of multiplication; it promotes the result to BigInt or BigDecimal as necessary.
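To make that concrete, here is a quick REPL check (my addition, not part of the original answer) showing the difference between * and *' on overflow:

 user> (* Long/MAX_VALUE 2)
 ;; => ArithmeticException: integer overflow
 user> (*' Long/MAX_VALUE 2)   ; *' auto-promotes instead of throwing
 18446744073709551614N
 user> (class (*' Long/MAX_VALUE 2))
 clojure.lang.BigInt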

+2

BigInteger comes from Java, while BigInt is implemented in Clojure's core. Right from the start, this involves some interop costs.
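A quick way to see which type you are getting (my own illustration, not from the answer) is to check the class of each constructor's result:

 user> (class (biginteger 1))   ; Java's class, used via interop
 java.math.BigInteger
 user> (class (bigint 1))       ; Clojure's own type
 clojure.lang.BigInt
 user> (class 1N)               ; BigInt literals give the same Clojure type
 clojure.lang.BigInt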


Additionally, a BigInt is represented internally as either a long or a BigInteger. Whenever possible, the long is used. However, if some operation causes it to overflow, the new BigInt will use a BigInteger instead. A Java long maps to a native machine word and is therefore much faster. This is similar to Ruby's magic conversion between Fixnum and Bignum.

Since you are working almost exclusively with small numbers (1 to 100, plus a good chunk of the intermediate products), letting those stay as longs can give you a significant performance boost.
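As a rough illustration of why the intermediate products matter (my own sketch, not part of the answer): with *', the running product stays a plain long until it actually overflows.

 user> (class (reduce *' (range 1 21)))   ; 20! still fits in a long
 java.lang.Long
 user> (class (reduce *' (range 1 22)))   ; 21! overflows, so *' promotes to BigInt
 clojure.lang.BigInt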

+6

In addition to @ndn's answer:

You can get some extra speed by type-hinting the argument n:

 (defn factorial [^long n] (reduce * (bigint 1) (range 1 (inc n)))) 
+2


