Lazy Evaluation vs Macros

I'm used to lazy evaluation from Haskell, and now that I know how to use laziness properly I find myself getting irritated with eager-by-default languages. This is actually quite harmful, because the other languages I use mostly make lazily evaluating things very awkward, usually involving rolling out custom iterators and so on. So simply by acquiring some knowledge, I've made myself less productive in my original languages. Sigh.

But I hear that AST macros offer another clean way of doing the same thing. I've often heard statements to the effect that "lazy evaluation makes macros redundant" and vice versa, mostly from the sparring Lisp and Haskell communities.

I've dabbled with macros in various Lisp variants. They just seemed like a really organized way of copying and pasting chunks of code around to be executed at compile time. They certainly weren't the holy grail that Lispers make them out to be. But that's almost certainly because I can't use them properly. Of course, having the macro system work on the same fundamental data structure that the language itself is assembled from is really useful, but it's still basically an organized way of copying and pasting code around. I acknowledge that basing a macro system on the same AST as the language, with full ability to alter what runs, is powerful.

What I want to know is: how can macros be used to concisely and succinctly do what lazy evaluation does? If I want to process a file line by line without slurping up the whole thing, I just return a list that a line-reading routine has been mapped over. It's the perfect example of DWIM (do what I mean). I don't even have to think about it.

I clearly don't get macros. I've used them and wasn't especially impressed given the hype. So there's something I'm missing that I'm not getting from reading documentation online. Can someone explain all of this to me?

+44
macros lisp haskell lazy-evaluation scheme
Aug 12 '11
5 answers

Lazy evaluation can substitute for certain uses of macros (those which delay evaluation to create control constructs), but the converse isn't the case. You can use macros to make delayed-evaluation constructs more transparent -- see SRFI 41 (Streams) for an example of how: http://download.plt-scheme.org/doc/4.1.5/html/srfi-std/srfi-41/srfi-41.html

Alternatively, you could write your own lazy I/O primitives.
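To illustrate the first point, here is a minimal Scheme-flavored sketch (not the actual SRFI 41 code, just the core idea) of how a macro can hide the delay/force plumbing behind a transparent stream constructor:

    ;; A minimal sketch, assuming an R7RS-style Scheme with delay/force.
    ;; The macro delays both fields, so users never write delay themselves.
    (define-syntax stream-cons
      (syntax-rules ()
        ((_ head tail) (cons (delay head) (delay tail)))))

    (define (stream-car s) (force (car s)))
    (define (stream-cdr s) (force (cdr s)))

    ;; An infinite stream of naturals; only the elements actually asked for
    ;; are ever computed.
    (define (naturals-from n) (stream-cons n (naturals-from (+ n 1))))
    (stream-car (stream-cdr (naturals-from 0)))   ; => 1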

However, in my experience, pervasively lazy code in a strict language tends to introduce overhead compared to pervasively lazy code in a runtime designed to support it efficiently from the start, which is, I suppose, really an implementation issue.

+21
Aug 12 '11

Lazy evaluation makes macros redundant

This is pure nonsense (not your fault; I've heard it before). It's true that you can use macros to change the order, context, etc. of expression evaluation, but that's the most basic use of macros, and it's really not convenient to simulate a lazy language using ad hoc macros instead of functions. So if you came at macros from that direction, you would indeed be disappointed.

Macros are for extending the language with new syntactic forms. Some of the specific capabilities of macros are:

  1. Affecting the order, context, etc. of expression evaluation.
  2. Creating new binding forms (i.e. affecting the scope an expression is evaluated in).
  3. Performing compile-time computation, including code analysis and transformation.

Macros that do (1) can be fairly simple. For example, in Racket, the exception-handling form with-handlers is just a macro that expands into call-with-exception-handler, some conditionals, and some continuation code. It's used like this:

    (with-handlers ([(lambda (e) (exn:fail:network? e))
                     (lambda (e)
                       (printf "network seems to be broken\n")
                       (cleanup))])
      (do-some-network-stuff))

The macro implements the notion of "predicate and handler clauses in the dynamic context of the exception" in terms of the call-with-exception-handler primitive, which handles all exceptions at the point where they are raised.
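Capability (2) can be just as lightweight. As a hedged sketch (my-let is a made-up name, not a Racket form), a macro can introduce a new binding form by rewriting it into an immediately applied lambda:

    ;; my-let is hypothetical; it shows how a macro creates a binding form:
    ;; the names are only in scope inside the body of the expansion.
    (define-syntax my-let
      (syntax-rules ()
        ((_ ((name val) ...) body ...)
         ((lambda (name ...) body ...) val ...))))

    (my-let ((x 1) (y 2)) (+ x y))   ; => 3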

A more sophisticated use of macros is an implementation of an LALR(1) parser generator. Instead of a separate file that needs preprocessing, the parser form is just another kind of expression. It takes a grammar description, computes the parse tables at compile time, and produces a parser function. The action routines are lexically scoped, so they can refer to other definitions in the file or even to lambda-bound variables. You can even use other language extensions in the action routines.

At the extreme end, Typed Racket is a typed dialect of Racket implemented via macros. It has a sophisticated type system designed to match the idioms of Racket/Scheme code, and it interoperates with untyped modules by protecting typed functions with dynamic software contracts (also implemented via macros). It is implemented by a "typed module" macro that expands, type-checks, and transforms the module body, plus auxiliary macros for attaching type information to definitions, etc.

FWIW, there's also Lazy Racket, a lazy dialect of Racket. It is not implemented by turning every function into a macro, but by rebinding lambda, define, and the function-application syntax to macros that create and force promises.
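As a rough sketch of that rebinding idea (much simplified, and not how Lazy Racket is actually implemented; lazy-app is a hypothetical name), an application form can delay its arguments so that a strict Scheme behaves lazily at the call sites that use it:

    ;; lazy-app wraps each argument in a promise, so a callee that never
    ;; forces an argument never evaluates it.
    (define-syntax lazy-app
      (syntax-rules ()
        ((_ f arg ...) (f (delay arg) ...))))

    (define (first-of a b) (force a))   ; only forces the argument it needs

    (lazy-app first-of (+ 1 2) (error "never evaluated"))   ; => 3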

So lazy evaluation and macros have a small point of intersection, but they are mostly very different things. And macros are certainly not subsumed by lazy evaluation.

+58
Aug 12 '11 at 23:32

Laziness is denotative, whereas macros are not. More precisely: if you add non-strictness to a denotative language, the result is still denotative, but if you add macros, the result is not denotative. In other words, the meaning of an expression in a lazy pure language is a function solely of the meanings of its component expressions, whereas macros can produce semantically different results from semantically equal arguments.
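A tiny Scheme-flavored sketch of that last point (show-source is a made-up macro, purely for illustration):

    ;; Both calls below pass an argument whose meaning is the number 2,
    ;; yet the results differ, so show-source is not a function of the
    ;; arguments' meanings alone -- it also sees their syntax.
    (define-syntax show-source
      (syntax-rules ()
        ((_ e) (list 'e e))))

    (show-source (+ 1 1))   ; => ((+ 1 1) 2)
    (show-source 2)         ; => (2 2)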

In this sense, macros are more powerful, and laziness is correspondingly better behaved semantically.

Edit: more precisely, macros are non-denotative except with respect to the identity/trivial denotation (in which case the notion of "denotative" becomes vacuous).

+21
Aug 14 '11 at 19:47

Lisp began in the late 50s of the last millennium. See Recursive Functions of Symbolic Expressions and Their Computation by Machine. Macros were not part of that Lisp. The idea was to compute with symbolic expressions that can represent all kinds of formulas and programs: mathematical expressions, logical expressions, sentences in natural language, computer programs, ...

Lisp macros were invented later, and they are an application of the above idea to Lisp itself: macros transform Lisp (or Lisp-like) expressions into other Lisp expressions, using the full Lisp language as the transformation language.

You can imagine that with macros you can implement powerful preprocessors and compilers as a Lisp user.

A typical Lisp dialect uses strict evaluation of arguments: all arguments to a function are evaluated before the function is executed. Lisp also has a few built-in forms that use different evaluation rules; IF is one example. In Common Lisp, IF is a so-called special operator.
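For illustration, here is a small Scheme-flavored sketch (my-if is hypothetical) of how such a form can be provided as a macro rather than as a built-in; an ordinary function could not do this, because it would evaluate both branches:

    ;; my-if defers both branches by expanding into cond, which only
    ;; evaluates the branch that is selected.
    (define-syntax my-if
      (syntax-rules ()
        ((_ test then-expr else-expr)
         (cond (test then-expr)
               (else else-expr)))))

    (my-if (> 2 1) 'yes (error "not evaluated"))   ; => yes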

But we can define a new Lisp-like (sub)language that uses lazy evaluation, and we can write macros to transform that language into Lisp. This is one application for macros, but far from the only one.

A (relatively old) example of such a Lisp extension, which uses macros to implement a code transformer that provides lazily evaluated data structures, is the SERIES extension to Common Lisp.

+9
Aug 13 '11 at 18:36

Macros can be used to handle lazy evaluation, but that is only part of the story. The main point of macros is that with them, essentially nothing in the language is fixed.

If programming is like playing with LEGO bricks, with macros you can also change the shape of the bricks, or the material they are made of.

Macros are not just deferred evaluation. That was already available as fexprs (a predecessor of macros in Lisp history). Macros are program rewriting, of which fexprs are just a special case...

As an example, consider the tiny Lisp-to-JavaScript compiler I write in my free time. Originally (in the JavaScript core) I only had lambda with support for &rest arguments. Now there is support for keyword arguments, and that is because I redefined what lambda means in my Lisp.

Now I can write:

    (defun foo (x y &key (z 12) w) ...)

and call the function with

    (foo 12 34 :w 56)

When this call is made, inside the function body the parameter w will be bound to 56 and the parameter z to 12, because it was not passed. I also get a runtime error if an unsupported keyword argument is passed to the function. I could even add compile-time checking by redefining what compiling an expression means (for example, adding checks that call forms of "statically known" functions pass the correct parameters).

The central point is that the original (core) language had no support for keyword arguments at all, and I was able to add it using the language itself. The result is just as if it had been there from the very beginning; it is simply part of the language.
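To make the flavor of such an extension concrete, here is a heavily simplified, hypothetical Scheme sketch (not the author's implementation; it uses plain symbols instead of :keywords and skips defaults, required positionals, and error checking):

    ;; kw-ref scans a flat argument list for `key value` pairs.
    (define (kw-ref args key default)
      (cond ((null? args) default)
            ((eq? (car args) key) (cadr args))
            (else (kw-ref (cdr args) key default))))

    ;; lambda/kw adds keyword-argument support on top of a lambda that
    ;; only has a rest list, by binding each named key from that list.
    (define-syntax lambda/kw
      (syntax-rules ()
        ((_ (key ...) body ...)
         (lambda rest-args
           (let ((key (kw-ref rest-args (quote key) #f)) ...)
             body ...)))))

    (define greet
      (lambda/kw (name greeting)
        (string-append (or greeting "hello") ", " (or name "world"))))

    (greet 'greeting "hi" 'name "Ada")   ; => "hi, Ada"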

Syntax matters (even if it is technically possible to get by with just a Turing machine). Syntax shapes the thoughts you think. Macros (and reader macros) give you complete control over the syntax.

A key point is that the code that rewrites code is not written in a crippled, dumb, brainf**k-like language, as with C++ template metaprogramming (where merely building an "if" is an amazing achievement), or in something even dumber with less-than-regexp power, like the C preprocessor.

The code that rewrites code is written in the same full-blown (and extensible) language. It's Lisp all the way down ;-)

Sure, writing macros is harder than writing regular code; but it is "essential complexity" of the problem, not artificial complexity imposed because you are forced to use a dumb half-language, as with metaprogramming in C++.

Writing macros is harder because code is a complex thing, and when writing macros you write complex things that themselves build complex things. It is not even that unusual to go up one more level and write macro-generating macros (which is where the old Lisp joke comes from: "I'm writing code that writes code that writes code that I'm being paid for").
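A small Scheme-flavored sketch of a macro-generating macro (define-alias is made up purely for illustration):

    ;; define-alias defines a *new macro* that rewrites uses of the new
    ;; name into the old form.  The (... ...) escape emits a literal
    ;; ellipsis into the generated macro's own syntax-rules.
    (define-syntax define-alias
      (syntax-rules ()
        ((_ new old)
         (define-syntax new
           (syntax-rules ()
             ((_ args (... ...)) (old args (... ...))))))))

    (define-alias fn lambda)
    ((fn (x) (* x x)) 5)   ; => 25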

But the power of macros is simply unlimited.

+5
Aug 13 '11


