In my studies, I learned how to write parsers and compilers using ANTLR. But in the real world, there is often a need to analyze and extract relevant content from a large stream of incoming data. Each language has its own regular-expression mechanism, which is convenient for this kind of analysis. Alternatively, we can write an EBNF grammar and use a mature tool such as ANTLR to generate the parser automatically. The latter approach is less error-prone and more reliable than the first (especially when the input contains extra spaces or newlines).
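To make the whitespace point concrete, here is a minimal Java sketch (the key/value format and pattern names are hypothetical, just for illustration). A hand-written regex tends to hard-code the exact spacing, while a grammar-based parser typically skips whitespace as a matter of course; with plain regexes you have to remember to sprinkle `\s*` everywhere yourself:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexFragility {
        public static void main(String[] args) {
            // Naive pattern: assumes exactly one space on each side of '='.
            Pattern naive = Pattern.compile("(\\w+) = (\\d+)");
            // Whitespace-tolerant pattern: allows any run of spaces/tabs/newlines.
            Pattern tolerant = Pattern.compile("(\\w+)\\s*=\\s*(\\d+)");

            String input = "count  =   42";  // extra spaces, as real data often has

            Matcher m1 = naive.matcher(input);
            System.out.println("naive matches: " + m1.matches());  // prints: false

            Matcher m2 = tolerant.matcher(input);
            if (m2.matches()) {
                // prints: key = count, value = 42
                System.out.println("key = " + m2.group(1) + ", value = " + m2.group(2));
            }
        }
    }

In an ANTLR grammar the equivalent robustness usually comes from a single lexer rule like `WS : [ \t\r\n]+ -> skip ;`, applied uniformly, rather than from per-pattern fixes.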
I would just like to know where the boundary between these two worlds lies: when would you sit down and write a whole grammar and generate your own parser, versus quickly using the language's built-in regular-expression engine and putting together a small parser that gets the job done fast? Again, I'm not looking to start an argument; I'm trying to understand how people approach the decision of when to write a real parser.