A common arrangement is that the parser requests tokens. The parsing loop usually takes the form "read the next token, decide what to do with it, update the partially built tree, repeat", and this is easier to achieve if the parser calls the lexer itself (instead of a third piece of code reading from the lexer and feeding tokens to the parser).
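A minimal sketch of this pull-style interface (all names here are illustrative): the lexer exposes a next_token() method, and the parser calls it whenever it needs the next token, rather than some driver pushing tokens in.

```python
# Illustrative pull-style lexer: the parser calls next_token() on demand.
class Lexer:
    def __init__(self, text):
        self.text, self.pos = text, 0

    def next_token(self):
        # Skip whitespace between tokens.
        while self.pos < len(self.text) and self.text[self.pos].isspace():
            self.pos += 1
        if self.pos >= len(self.text):
            return ('EOF', None)
        ch = self.text[self.pos]
        if ch.isdigit():
            # Consume a run of digits as a single NUM token.
            start = self.pos
            while self.pos < len(self.text) and self.text[self.pos].isdigit():
                self.pos += 1
            return ('NUM', int(self.text[start:self.pos]))
        # Any other single character is its own token kind.
        self.pos += 1
        return (ch, ch)

lex = Lexer('12 + 3')
tokens = [lex.next_token() for _ in range(4)]
# [('NUM', 12), ('+', '+'), ('NUM', 3), ('EOF', None)]
```

The parser's loop then simply interleaves `tok = lex.next_token()` with tree-building steps, with no intermediate token buffer needed.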
Of course, the heart of the parser depends on the algorithm you are using (recursive descent, LR, ...), but it usually consists of determining whether the new token directly matches the rule you are currently parsing (for example, finding '-' while reading EXPR = '-' EXPR | ...), matches a sub-rule (for example, finding 'class' while reading DEF = CLASS | ..., where CLASS = 'class' ...), or does not match at all (at which point you finish the current rule by building the corresponding AST node and continue the process according to the parent rule).
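A hedged sketch of those three cases for the tiny grammar EXPR = '-' EXPR | NUM (the token tuples and function names are assumptions, not from any particular library):

```python
# Recursive descent for the grammar:  EXPR = '-' EXPR | NUM
# Tokens are (kind, value) pairs, e.g. ('NUM', 7) or ('-', '-').
def parse_expr(tokens, pos):
    """Return (ast_node, next_pos), or raise on a token that fits no rule."""
    kind, value = tokens[pos]
    if kind == '-':
        # Case 1: the token directly matches the current rule; the nested
        # EXPR is a sub-rule, handled by a recursive call (case 2).
        inner, pos = parse_expr(tokens, pos + 1)
        return ('neg', inner), pos
    if kind == 'NUM':
        # The NUM alternative matches: build a leaf AST node.
        return ('num', value), pos + 1
    # Case 3: the token fits no alternative of this rule.
    raise SyntaxError(f"unexpected token {kind!r}")

tokens = [('-', '-'), ('-', '-'), ('NUM', 7), ('EOF', None)]
parse_expr(tokens, 0)
# (('neg', ('neg', ('num', 7))), 3)
```

Returning the new position alongside the AST node is what lets the parent rule resume exactly where the sub-rule stopped consuming input.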
Recursive descent parsers do this using subroutine calls (to descend into sub-rules) and return values (to come back to the parent rule), while LR parsers tend to compress multiple rules and sub-rules into a single state, either shifting to stay within the current set of rules, or reducing to complete a rule and build one or more AST nodes.
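To make shift and reduce concrete, here is a hand-rolled shift-reduce loop for the grammar EXPR = EXPR '+' NUM | NUM; it is a deliberate simplification (a real LR parser drives these decisions from generated state tables rather than ad-hoc stack checks):

```python
# Hand-rolled shift-reduce loop for:  EXPR = EXPR '+' NUM | NUM
def tokenize(text):
    return [('NUM', int(ch)) if ch.isdigit() else (ch, ch)
            for ch in text if not ch.isspace()]

def parse(tokens):
    stack, i = [], 0
    while True:
        # Reduce: EXPR '+' NUM  ->  EXPR, building an AST node.
        if (len(stack) >= 3 and stack[-3][0] == 'EXPR'
                and stack[-2][0] == '+' and stack[-1][0] == 'NUM'):
            right = stack.pop(); stack.pop(); left = stack.pop()
            stack.append(('EXPR', ('add', left[1], right[1])))
        # Reduce: NUM -> EXPR (only at the start of an expression).
        elif len(stack) == 1 and stack[0][0] == 'NUM':
            stack[0] = ('EXPR', stack[0][1])
        # Otherwise shift the next token onto the stack.
        elif i < len(tokens):
            stack.append(tokens[i]); i += 1
        else:
            break
    assert len(stack) == 1 and stack[0][0] == 'EXPR', "parse error"
    return stack[0][1]

parse(tokenize('1 + 2 + 3'))
# ('add', ('add', 1, 2), 3)
```

Note how the stack replaces the recursion of the descent parser: each reduce pops a rule's body and pushes one AST node, which is why the result comes out left-associative here.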