This is unsafe in the general case. The reason is that Haskell expressions may be pure, but they may also fail to terminate. The compiler should always terminate, so the best you could do is say "evaluate at most 1000 steps of this expression." 1 But if you add such a limit, what happens when you compile a program meant to run on a supercomputer cluster with terabytes of RAM, and the compiler runs out of memory?
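To make the termination problem concrete, here is a minimal sketch (the module and binding names are mine, not from the question): a perfectly pure binding whose evaluation never finishes, so a compiler that tried to evaluate it during compilation would itself never finish.

```haskell
module Spin where

-- A pure expression that never terminates: forcing it loops forever,
-- because the infinite list [1 ..] has no last element. A compiler that
-- tried to evaluate bindings like this at compile time would hang on
-- this module instead of producing a binary or an error.
spin :: Integer
spin = last [1 ..]
```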
You can add many restrictions, but in the end you will only reduce the optimization to a slow form of constant folding (especially since the vast majority of programs have computations that depend on user input at run time). And since sum [1..10000000] takes about half a second to evaluate, it is unlikely to fall under any reasonable limit.
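As a rough illustration of that difference (assuming optimizations are enabled; the binding names are made up): GHC's constant folding will happily simplify small literal arithmetic, but it will not run an arbitrarily expensive computation for you.

```haskell
module Fold where

-- With -O, GHC's constant folding typically reduces this to the literal 10.
small :: Int
small = 2 * 3 + 4

-- This is just as pure, but GHC leaves it to be computed at run time;
-- evaluating it during compilation would only move the half-second cost
-- from every run of the program to every build of it.
big :: Int
big = sum [1 .. 10000000]
```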
Of course, specific cases like this can often be optimized away, and GHC does often optimize away complex expressions like this one. But the compiler cannot just evaluate any expression at compile time; that would have to be very restricted, and it is debatable how useful it would be under those restrictions. It is a compiler, not an interpreter :)
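If you really do want a specific expression evaluated during compilation, you can opt in explicitly with Template Haskell rather than hoping the optimizer does it; a minimal sketch, with module and binding names of my own choosing:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Precompute where

import Language.Haskell.TH.Syntax (lift)

-- The splice runs at compile time: GHC evaluates the sum while compiling
-- this module and inserts the resulting integer literal into the code.
-- A non-terminating expression here would hang the build, which is
-- exactly the risk described above, but now you have asked for it.
precomputed :: Int
precomputed = $(lift (sum [1 .. 10000000 :: Int]))
```

The same trick works for any type with a Lift instance; the trade-off is simply longer compile times instead of run-time work.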
1 That would significantly slow down the compilation of any program that contains many infinite loops (which, since Haskell is non-strict, is more likely than you might think). Or, more commonly, any program that contains many long-running computations.
— ehird