Generating a Collatz sequence is an order of magnitude slower in Haskell than in C

On his blog, David Terei, the author of GHC's LLVM backend, used an example that generates Collatz sequences to compare GHC performance with C. I decided to run it myself, and the result was startling: the GHC version is more than an order of magnitude slower. The code is innocent enough:

    import Data.Word

    collatzLen :: Int -> Word32 -> Int
    collatzLen c 1 = c
    collatzLen c n | n `mod` 2 == 0 = collatzLen (c+1) $ n `div` 2
                   | otherwise      = collatzLen (c+1) $ 3*n+1

    pmax x n = x `max` (collatzLen 1 n, n)

    main = print . solve $ 1000000
        where solve xs = foldl pmax (1,1) [2..xs-1]

Except for replacing foldl with foldl' , I don't see anything else I could change. The GHC version finds the answer in 45+ seconds, no matter which backend I use, while the C version takes only 1.5 seconds!
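For reference, here is a minimal sketch of the foldl' variant mentioned above; the type signatures on pmax and main and the comments are added here for clarity and are not part of the original program:

    import Data.List (foldl')
    import Data.Word

    collatzLen :: Int -> Word32 -> Int
    collatzLen c 1 = c
    collatzLen c n | n `mod` 2 == 0 = collatzLen (c+1) $ n `div` 2
                   | otherwise      = collatzLen (c+1) $ 3*n+1

    pmax :: (Int, Word32) -> Word32 -> (Int, Word32)
    pmax x n = x `max` (collatzLen 1 n, n)

    main :: IO ()
    main = print . solve $ 1000000
        where
          -- foldl' forces the accumulator at each step, so the fold itself
          -- does not build up the chain of thunks that lazy foldl can.
          solve xs = foldl' pmax (1,1) [2..xs-1]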

My setup is the 32-bit Haskell Platform 2011.2.0.1 (GHC 7.0.3) on OS X 10.6.6, compared with gcc 4.2.1. David used GHC 6.13 in his post. Is this a known GHC 7.0.3 bug, or have I missed something very obvious?

EDIT: It turns out I missed something obvious. With the -O2 flag, GHC produces very fast code.

1 answer

The question was why GHC emitted such slow code in this particular case. The answer is to use an optimization flag: -O , -O2 , etc. With that, the runtime dropped from 45+ seconds to 0.6 seconds, an improvement of roughly 80x.
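For example, assuming the source file is saved as collatz.hs (the file name here is only for illustration), compiling and running the optimized build looks like:

    ghc -O2 collatz.hs -o collatz
    ./collatz

Without an -O flag, GHC performs very little optimization, which is what produced the slow build in the first place.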


Source: https://habr.com/ru/post/892648/
