In the SICP lecture on streams (part 2), Abelson gives an example of modeling how an analog computer would solve a differential equation. He then programs this in Scheme, using explicit lazy evaluation (delay and force) to break the circular dependency in the definition.
The problem with this method, he says, is that as programs grow more complex, you end up with explicit delayed expressions everywhere, which makes the code hard to understand. To solve the problem elegantly, he says, you must make the whole language lazy, at the cost of some expressiveness, and you run into the "dragging tail" problem (chains of deferred computations accumulating in memory).
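To make the explicit-delay technique concrete, here is a minimal sketch (in Python rather than Scheme, using zero-argument lambdas as thunks) of how a stream's tail can be delayed so that a definition may refer to itself without looping forever; the helper names are my own, not from the lecture:

```python
# Sketch of SICP-style streams: the tail of a stream is a thunk
# (a zero-argument function), so forcing it is deferred until needed.

def cons_stream(head_val, delayed_tail):
    # delayed_tail is a thunk; calling it ("forcing") yields the next cell
    return (head_val, delayed_tail)

def head(s):
    return s[0]

def tail(s):
    return s[1]()  # force the delayed tail

def add_streams(a, b):
    # Element-wise sum; note the explicit lambda delaying the recursion
    return cons_stream(head(a) + head(b),
                       lambda: add_streams(tail(a), tail(b)))

def take(s, n):
    out = []
    for _ in range(n):
        out.append(head(s))
        s = tail(s)
    return out

# A stream defined in terms of itself -- the lambda breaks the cycle
ones = cons_stream(1, lambda: ones)
integers = cons_stream(1, lambda: add_streams(ones, integers))

print(take(integers, 5))  # -> [1, 2, 3, 4, 5]
```

Note how every delayed spot needs its own explicit `lambda` wrapper; this is exactly the clutter the lecture complains about, and it is what a fully lazy language removes by delaying everything implicitly.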
This is the approach taken by Miranda and Haskell. But in Haskell I find it difficult to reason about big-O complexity, and it is easy to write programs that consume far too much memory and time.
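The memory cost is easiest to see when deferred work piles up unforced. The following is a hypothetical illustration (in Python, emulating a lazy left fold by chaining thunks by hand, since Python itself is eager): the unevaluated chain grows linearly with the input before a single addition happens, which is the shape of a classic lazy-evaluation space leak:

```python
# Emulate a lazy left fold: instead of adding eagerly, wrap each
# step in a thunk. Nothing is computed until the final force.

def lazy_sum(xs):
    acc = lambda: 0
    for x in xs:
        # Each iteration defers one addition; we now hold O(n) closures.
        acc = (lambda prev, v: (lambda: prev() + v))(acc, x)
    return acc  # no arithmetic has happened yet

thunk = lazy_sum(range(500))
result = thunk()  # forcing performs all 500 deferred additions at once
print(result)     # -> 124750, i.e. sum(range(500))
```

An eager fold would keep only one integer live at a time; the lazy version keeps the whole chain of closures (and forcing it even consumes call-stack depth proportional to n), which is why complexity reasoning gets harder.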
I once spoke to Robert Harper about this issue; he disagrees that you have to make the whole language lazy to keep it elegant, and he considers pervasive laziness a design flaw in Haskell. How exactly could a language be partially lazy and still solve this problem? Are there examples of such languages? I would like to learn more about functional languages, but a language that disallows side effects and eager evaluation everywhere, including for I/O, makes things a little ... contradictory.
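One common answer to "partially lazy" is a strict language with an explicit, memoized lazy type, as in OCaml's `Lazy` module or Scala's `lazy val`: evaluation is eager by default, and laziness is opted into only where it pays off. A minimal Python sketch of such a type (the class and its names are my own, purely illustrative):

```python
# Opt-in laziness inside an otherwise eager language: a memoized
# suspension, in the spirit of OCaml's Lazy.t or Scala's `lazy val`.

class Lazy:
    """Defers a computation until first force, then caches the result."""

    def __init__(self, thunk):
        self._thunk = thunk
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._thunk()
            self._forced = True
            self._thunk = None  # drop the closure so it can be collected
        return self._value

calls = []
expensive = Lazy(lambda: calls.append("ran") or 42)

# Nothing has run yet; the rest of the program evaluates eagerly.
print(expensive.force())  # -> 42 (computed now)
print(expensive.force())  # -> 42 (cached; the thunk ran only once)
print(len(calls))         # -> 1
```

With this design, circular stream definitions are still expressible where you need them, but ordinary code keeps strict cost semantics, so big-O reasoning stays straightforward.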