Imagine a naive compiler for a typed lambda calculus (typed, to rule out the rather painful issues of nontermination and implicit recursion) that uses normalization (including beta-reduction of lambdas by substitution) as its "optimization" pass.
For simple programs, where most or all variables are used only once, normalization shrinks the code and speeds it up.
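To fix ideas, here is a minimal sketch of what I mean by such a compiler (my own toy code, not any particular implementation): lambda terms with let and a built-in Mul, and a normalize pass that inlines every binding by substitution. It assumes all binder names are distinct, so it skips capture-avoiding renaming.

    -- Terms of a small lambda calculus with let and a binary constant Mul.
    data Term
      = Var String
      | Lam String Term
      | App Term Term
      | Let String Term Term      -- let x = e1 in e2
      | Mul Term Term             -- built-in multiplication
      deriving Show

    -- Naive substitution; assumes binder names are distinct,
    -- so no capture-avoiding renaming is done.
    subst :: String -> Term -> Term -> Term
    subst x s t = case t of
      Var y
        | y == x    -> s
        | otherwise -> Var y
      Lam y b
        | y == x    -> Lam y b                            -- x shadowed, stop
        | otherwise -> Lam y (subst x s b)
      Let y e b
        | y == x    -> Let y (subst x s e) b              -- x shadowed in body
        | otherwise -> Let y (subst x s e) (subst x s b)
      App f a       -> App (subst x s f) (subst x s a)
      Mul a b       -> Mul (subst x s a) (subst x s b)

    -- Full normalization used as "optimization": inline every let and
    -- beta-reduce every application, duplicating the substituted term
    -- once per occurrence of its variable.
    normalize :: Term -> Term
    normalize t = case t of
      Var x     -> Var x
      Lam x b   -> Lam x (normalize b)
      Let x e b -> normalize (subst x e b)
      Mul a b   -> Mul (normalize a) (normalize b)
      App f a   ->
        case normalize f of
          Lam x b -> normalize (subst x (normalize a) b)
          f'      -> App f' (normalize a)

    -- The example below:
    -- normalize (Lam "x" (Let "a" (Mul (Var "x") (Var "x"))
    --                             (Mul (Var "a") (Var "a"))))
    -- yields \x -> (x * x) * (x * x).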
It is "obvious" to me that in general this is not a good idea: since normalization destroys sharing, there are terms that the "optimization" makes worse. The term with 2 multiplications
\x -> let a = x * x in a * a
gets "optimized" into
\x -> (x * x) * (x * x)
which has 3 of them.
How can one build an example that gets much worse? Is there a term whose normalization is likely to overflow RAM?
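The sort of thing I have in mind is just the squaring example above iterated n times (a sketch reusing the Term type and normalize from the snippet above; the 2^n - 1 count follows by construction, since each level doubles the normal form):

    -- n nested let-squarings: a1 = x*x, a2 = a1*a1, ..., an = a(n-1)*a(n-1).
    -- The shared form has n multiplications; its normal form has 2^n - 1.
    squares :: Int -> Term
    squares n = Lam "x" (go 1 (Var "x"))
      where
        go k prev
          | k > n     = prev
          | otherwise = let a = "a" ++ show k
                        in Let a (Mul prev prev) (go (k + 1) (Var a))

    -- Count Mul nodes, i.e. multiplications in the compiled code.
    size :: Term -> Int
    size t = case t of
      Var _     -> 0
      Lam _ b   -> size b
      App f a   -> size f + size a
      Let _ e b -> size e + size b
      Mul a b   -> 1 + size a + size b

    -- size (squares n)             == n
    -- size (normalize (squares n)) == 2^n - 1
    -- so already for n around 30-40 the normal form itself no longer fits
    -- in memory, which is the "overflow RAM" behaviour I am asking about.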
We work in a strongly normalizing type system, so divergence is impossible; for example, in a suitable subset of System F with constants and delta rules.
Or, alternatively, with a "free" approach to adding typed constants such as mul, for example:
\mul x -> let a = mul x x in mul a a
That is, instead of adding constants, they are simply "extra parameters supplied at run time."
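Concretely, I read that "free" approach as something like the following Haskell sketch (the name term and the sample call are mine), where the multiplication is an ordinary argument supplied at run time rather than a constant with a delta rule:

    -- mul abstracted out instead of being a built-in constant with a
    -- delta rule; the real multiplication is passed in at run time.
    term :: (a -> a -> a) -> a -> a
    term = \mul x -> let a = mul x x in mul a a

    -- term (*) 3 == 81; the term itself stays a closed,
    -- System-F-style lambda term with no constants at all.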
The question may seem to belong on SE Computer Science, but IMO it is really entry-level, so I find it more appropriate here.