Why do most programming languages eagerly evaluate the arguments passed to a function?

In most programming languages, the arguments passed to a function are evaluated before the function body runs; that is, they are evaluated eagerly.

It seems much more reasonable to me to evaluate an argument only once the function actually uses it, i.e. lazily. That makes more sense to me because it looks like it would be a performance win: why evaluate things that are never needed?

Also, suppose you wanted to implement an if function that takes a boolean, an object to return if the boolean is true, and another object to return if it is false:

    object if(bool condition, object valueIfTrue, object valueIfFalse) {
        if (condition)
            return valueIfTrue;
        return valueIfFalse;
    }

In a language that eagerly evaluates arguments, both objects are always evaluated, even though the function only ever needs one of them. In the best case that is just minor unnecessary overhead; in the worst case (say, a recursive call sitting in the unused branch) it causes an infinite loop.
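In an eager language you can recover the behaviour asked for here by wrapping each branch in a thunk that is only forced when needed. A minimal sketch, assuming Java and a hypothetical lazyIf helper (neither appears in the question):

    import java.util.function.Supplier;

    public class LazyIf {
        // Hypothetical helper: each branch is passed as a thunk (Supplier)
        // and only the chosen branch is ever forced.
        static <T> T lazyIf(boolean condition, Supplier<T> ifTrue, Supplier<T> ifFalse) {
            return condition ? ifTrue.get() : ifFalse.get();
        }

        public static void main(String[] args) {
            // Only the first thunk runs; the failing branch is never evaluated.
            String result = lazyIf(true,
                    () -> "cheap value",
                    () -> { throw new RuntimeException("never evaluated"); });
            System.out.println(result);   // prints "cheap value"
        }
    }

The cost of this workaround is exactly the syntactic noise of wrapping every branch in a lambda, which is part of why languages tend not to make it the default.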

However, since most programming languages do evaluate function arguments eagerly, I assume there must be a reason it is usually done this way. Is there some big benefit to eager evaluation that I am overlooking, was it simply easier to implement languages this way, is it just tradition, or what?

+6
5 answers

There are a couple of reasons for eager evaluation that I have seen, both of which are important (and both are illustrated in the sketch after this list):

  • Eager evaluation means that side effects happen immediately and predictably. With lazy evaluation you cannot rely on the side effects of something you did earlier having already taken place.
  • Lazy evaluation brings a certain amount of memory bloat with it. It typically takes much less memory to store the result of a computation than to store the thunk that describes the computation. Holding on to thunks can therefore cost a lot of memory (the usual time-versus-memory trade-off) and, more importantly, makes the memory behaviour of a program or algorithm much harder to characterize.
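A rough sketch of both problems, assuming Java and plain Supplier thunks as a stand-in for real lazy evaluation (class and variable names are illustrative only):

    import java.util.function.Supplier;

    public class LazyPitfalls {
        static int counter = 0;

        public static void main(String[] args) {
            // Problem 1: the side effect (counter++) does not happen where the
            // thunk is written, only when and if it is forced.
            Supplier<Integer> deferred = () -> { counter++; return 42; };
            System.out.println("before force: counter = " + counter); // 0
            deferred.get();                                           // side effect happens here
            System.out.println("after force:  counter = " + counter); // 1

            // Problem 2: the thunk keeps everything it needs alive. Here it holds
            // a large array even though the eventual result is a single int, so
            // the unevaluated thunk costs far more memory than its value would.
            int[] big = new int[10_000_000];
            Supplier<Integer> sum = () -> {
                int s = 0;
                for (int x : big) s += x;
                return s;
            };
            System.out.println(sum.get()); // ~40 MB stays reachable until this point
        }
    }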

Lazy evaluation can be a powerful tool, but it is not without its costs. Purely functional languages generally avoid problem #1 because they have (in general) no side effects, but they are still bitten by problem #2. Languages that let you defer evaluation on demand (Lisp macros are one form of this, though not quite the same thing as lazy evaluation) can get the best of both worlds, but at the cost of extra effort on the programmer's part.

+5

Option 1: load all the arguments into registers, call the function.

Option 2: load the first argument, evaluate it if necessary, wait for the processor pipeline to drain, fetch the next argument, evaluate it if necessary... then load the needed parameters into registers and execute the function, with extra logic to track which registers actually hold evaluated values.

An 'if' already causes a pipeline stall anyway while the CPU waits to see which code path will execute (mitigated somewhat by branch prediction).

+3

For lazy evaluation to work, there has to be extra code and data somewhere to keep track of whether each expression has been evaluated yet. In some cases that bookkeeping costs more than eager evaluation would have. Deciding whether an expression would actually benefit from lazy evaluation can require a very high-level understanding of how the program behaves; the compiler and/or interpreter certainly does not have that knowledge.
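As a rough illustration of that bookkeeping, here is a hand-rolled memoizing cell, a sketch only and not any particular library's API; it carries exactly the extra pieces mentioned above: a flag, a cached value and the retained thunk.

    import java.util.function.Supplier;

    // Sketch of a memoizing lazy cell (illustrative, not a real runtime's type).
    final class Lazy<T> {
        private Supplier<T> thunk;   // kept around until the value is needed
        private T value;             // cached result of the first force
        private boolean forced;      // "has this expression been evaluated yet?"

        Lazy(Supplier<T> thunk) { this.thunk = thunk; }

        T force() {
            if (!forced) {
                value = thunk.get();
                forced = true;
                thunk = null;        // let the GC reclaim whatever the thunk captured
            }
            return value;
        }
    }

Every lazily evaluated expression pays for a cell like this (or its moral equivalent), plus a branch on every use, which is the overhead being pointed at here.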

In addition, if a function or expression has side effects, a lazy evaluation strategy can make programs behave in ways that are inconsistent and hard to debug. That is of course not a problem in purely functional languages, where there are no side effects by design; indeed, lazy evaluation is the default strategy in most, if not all, purely functional programming languages.

That said, nothing prevents a language from using both strategies in different places, and I would not be surprised if non-trivial programs ended up using a hybrid approach.
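One concrete hybrid that exists today: Java evaluates ordinary arguments eagerly, but its Stream pipelines are lazy, with intermediate operations deferred until a terminal operation demands a value. A small sketch:

    import java.util.stream.Stream;

    public class HybridDemo {
        public static void main(String[] args) {
            // The map() below does no work yet: intermediate stream operations
            // are deferred until a terminal operation asks for a value.
            Stream<Integer> pipeline = Stream.of(1, 2, 3, 4, 5)
                    .map(n -> {
                        System.out.println("mapping " + n);
                        return n * n;
                    });
            System.out.println("nothing mapped yet");
            // findFirst() is the terminal operation; only the first element is mapped.
            System.out.println(pipeline.findFirst().orElse(-1)); // prints "mapping 1", then 1
        }
    }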

+3

Besides the excellent answers already given, there is another practical problem with lazy evaluation: if you have a chain of expressions that is only evaluated when the last one is finally "used", it can be quite difficult to pinpoint performance bottlenecks.
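A sketch of that difficulty, again assuming Java and Supplier thunks (hypothetical names): each definition looks cheap, and a profiler attributes all the cost to the single line that finally forces the chain, far from where the work was actually specified.

    import java.util.function.Supplier;

    public class WhereDidTheTimeGo {
        public static void main(String[] args) {
            // Three cheap-looking definitions; none of them does any work yet.
            Supplier<long[]> a = () -> new long[10_000_000];   // allocation deferred
            Supplier<long[]> b = () -> { long[] x = a.get(); x[0] = 1; return x; };
            Supplier<Long>   c = () -> { long s = 0; for (long v : b.get()) s += v; return s; };

            long t0 = System.nanoTime();
            long result = c.get();  // all the allocation and summing cost shows up here
            long t1 = System.nanoTime();
            System.out.println(result + " computed in " + (t1 - t0) / 1_000_000 + " ms");
        }
    }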

+2

Back in the mists of time there were several languages that did this. SNOBOL, for example. ALGOL 68 had call-by-name parameters, which did something similar. And C (like many of its descendants) does it in one very specific situation, known as "short-circuit" evaluation of Boolean expressions. In general it has proved to be a source of more confusion and errors than of expressive power.
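The short-circuit behaviour carried over into most of C's descendants. In Java, for example, the right operand of && is only evaluated when the left operand is true, which is commonly used as a guard:

    public class ShortCircuit {
        public static void main(String[] args) {
            String s = null;
            // The right-hand side runs only if the left-hand side is true,
            // so this guard never throws a NullPointerException.
            if (s != null && s.length() > 3) {
                System.out.println("long string");
            } else {
                System.out.println("null or short");
            }
        }
    }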

0

Source: https://habr.com/ru/post/905581/

