What is the use of an incremental SLR parser generator?

I once wrote an SLR parser generator that produces incremental parsers. Such a parser can parse a piece of text from start to finish, but when you later delete or insert text, it does the minimum amount of work and makes the minimum number of changes to the token stream and syntax tree, instead of reparsing everything from scratch. The problem is that I have not been able to find a use for this: the parser does slightly more work than a regular parser. Is there a use for it? PS. If you want to know how it works: Google for the DIKU compiler development book (it is freely available); all I had to do was change the algorithm a bit so that it keeps the parser state at every point, which is the extra work I mentioned above.
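
A minimal sketch of the idea described above: keep a parser-state snapshot after every token, and on an edit resume from the state just before the edit, stopping as soon as the recomputed states re-synchronise with the saved ones for the unchanged suffix. The `IncrementalParser` class and the bracket-depth "state" below are hypothetical stand-ins for a real LR stack snapshot, not the asker's actual generator.

```python
# Toy illustration: the "parser state" is just a bracket-nesting depth,
# standing in for an LR stack snapshot stored after every token.

class IncrementalParser:
    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.states = []                 # states[i] = state after token i
        self._reparse(0)

    @staticmethod
    def _step(state, tok):
        # Stand-in for one LR shift/reduce step.
        if tok == '(':
            return state + 1
        if tok == ')':
            return state - 1
        return state

    def _reparse(self, start, saved=None):
        """Parse from `start`; stop early once the states re-synchronise
        with the saved states of the unchanged suffix."""
        state = self.states[start - 1] if start > 0 else 0
        del self.states[start:]
        work = 0
        for i in range(start, len(self.tokens)):
            state = self._step(state, self.tokens[i])
            work += 1
            if saved is not None:
                k = i - saved['offset']
                if k >= 0 and saved['states'][k] == state:
                    # Same state, same remaining tokens: the rest of the old
                    # state table is still valid, so splice it back in.
                    self.states.append(state)
                    self.states.extend(saved['states'][k + 1:])
                    return work
            self.states.append(state)
        return work

    def edit(self, pos, delete, insert):
        """Replace `delete` tokens at `pos` with the tokens in `insert`."""
        saved = {'states': self.states[pos + delete:],   # states after the edit
                 'offset': pos + len(insert)}
        self.tokens[pos:pos + delete] = insert
        return self._reparse(pos, saved)


p = IncrementalParser(list("(a(b)c)(d)"))
print(p.edit(2, 0, list("(x)")))   # prints 4: only 4 tokens reparsed, not 11
```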

1 answer

The obvious answer is to support a structured editor, i.e., an editor that stores the program as an AST rather than as plain text. This allows the editor to suggest how to continue when only partial input has been provided (for example, after the keyword "while" the editor knows that "(" must follow and can suggest it; it can insert a full "if" statement template after you type only the keyword; it can complain that the syntax you entered is incorrect as you type; and so on).
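
A sketch of how an editor backed by such a parser could use the parser state at the cursor for completion and error reporting: the set of tokens acceptable in the current state is exactly what the editor should suggest next. The tiny hand-written tables below are a toy stand-in for a real SLR ACTION table, covering only `while ( ID ) ID ;`, and are purely illustrative.

```python
# EXPECTED: state -> set of terminals the parser can accept in that state.
EXPECTED = {
    0: {'while'},
    1: {'('},
    2: {'ID'},
    3: {')'},
    4: {'ID'},
    5: {';'},
}
# TRANSITION: (state, terminal) -> next state.
TRANSITION = {
    (0, 'while'): 1, (1, '('): 2, (2, 'ID'): 3,
    (3, ')'): 4, (4, 'ID'): 5, (5, ';'): 0,
}

def completions(prefix_tokens):
    """Run the toy automaton over the tokens typed so far and report which
    tokens the editor should suggest next, or where the input went wrong."""
    state = 0
    for i, tok in enumerate(prefix_tokens):
        nxt = TRANSITION.get((state, tok))
        if nxt is None:
            return f"syntax error at token {i}: expected {EXPECTED[state]}"
        state = nxt
    return EXPECTED[state]

print(completions(['while']))             # {'('}  -> editor suggests "("
print(completions(['while', '(', 'ID']))  # {')'}
print(completions(['while', ';']))        # error with the expected set
```

With an incremental parser, the state at the cursor is already available after each keystroke instead of being recomputed from the start of the file, which is what makes this kind of as-you-type feedback cheap.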

Many such editors have been built, and most of them were not successful; people seem to either love or hate editors that behave this way.

For a well-known example of such a system built on incremental parsing, see the Harmonia project. Note: it uses GLR.


Source: https://habr.com/ru/post/1773613/

