A rules engine has a fact database and a set of rules that can inspect, modify, insert, or delete facts. Typically the database is a set of tagged structures (T V1 V2 ... Vn), where the values V_i may have different types. A rule is essentially a template saying: if some set of structure instances satisfies a condition on their values (which may be conjunctive or disjunctive), then change one or more values of a matched structure, or delete a matched structure, or insert a new structure with some computed set of values. A really sophisticated rules engine treats rules themselves as such structures, and so can also insert and delete rules, but that is fairly unusual. The engine (efficiently, and this is the hard part) determines which rules can match at any moment, selects one, executes it, and repeats. The appeal of this idea is that you can have an arbitrary bucket of "facts" (each represented by a tagged structure) that are roughly independent, and a set of rules that are equally independent, and combine them all in a uniform way. The hope is that it is easy to identify structures representing aspects of the world, and equally easy to write rules that manipulate them. It is a way of encoding lots of scattered bits of knowledge, which is why the "business" folks love rules engines. (The idea comes from the AI world.)
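To make the match-select-execute loop concrete, here is a minimal forward-chaining sketch in Python (the fact tags, rule shapes, and the `run` helper are all made up for illustration; a real engine like CLIPS uses Rete-style matching to avoid rescanning every fact):

```python
# Facts are tagged tuples: (tag, v1, v2, ...). A rule is a pair
# (condition, action): the condition scans the fact set and returns a
# match (or None), the action mutates the fact set using that match.

def run(facts, rules, max_steps=100):
    """Repeatedly fire the first rule whose condition matches, until
    no rule matches (quiescence) or the step budget runs out."""
    for _ in range(max_steps):
        for condition, action in rules:
            match = condition(facts)
            if match is not None:
                action(facts, match)
                break
        else:
            break  # no rule fired: the database is stable
    return facts

# Example: derive transitive "ancestor" facts from "parent" facts.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def missing_ancestor(fs):
    # condition: find a transitive link not yet recorded as a fact
    pairs = {(a, b) for (tag, a, b) in fs if tag in ("parent", "ancestor")}
    for a, b in pairs:
        for c, d in pairs:
            if b == c and ("ancestor", a, d) not in fs:
                return (a, d)
    return None

rules = [
    (missing_ancestor, lambda fs, m: fs.add(("ancestor", m[0], m[1]))),
]

run(facts, rules)
# facts now contains ("ancestor", "alice", "carol")
```

Note that the condition rescans the whole database on every cycle; doing this incrementally and quickly over thousands of rules is exactly the "difficult part" mentioned above.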
Compiler parsers have two tasks entangled in one activity: 1) deciding whether the input text stream (broken into language tokens) is a legitimate instance of a particular programming language, and 2) if so, building the compiler data structures (usually abstract syntax trees and symbol tables) that represent the program, so the rest of the compiler can generate code. Compiler people have spent some 50 years figuring out how to do this fast, and use highly specialized algorithms (such as LALR parser generators with custom actions attached to each grammar rule) to get the job done.
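The two entangled tasks can be seen even in a toy hand-written parser. This sketch (the token format and nested-tuple AST are illustrative choices, and a recursive-descent parser stands in for the generated LALR parsers real compilers use) both rejects malformed token streams and builds a tree in a single pass:

```python
# Toy parser for expressions like "a + b * c": it validates the token
# stream (task 1) and builds a nested-tuple AST (task 2) simultaneously.

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():          # expr := term ("+" term)*
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = ("+", node, term())
        return node

    def term():          # term := atom ("*" atom)*
        nonlocal pos
        node = atom()
        while peek() == "*":
            pos += 1
            node = ("*", node, atom())
        return node

    def atom():          # atom := any identifier token
        nonlocal pos
        tok = peek()
        if tok is None or tok in ("+", "*"):
            raise SyntaxError(f"unexpected token: {tok!r}")
        pos += 1
        return tok

    tree = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing tokens")
    return tree

parse(["a", "+", "b", "*", "c"])  # → ("+", "a", ("*", "b", "c"))
```

A generated LALR parser does the same work table-driven, with the tree-building code supplied as per-rule semantic actions.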
You could implement a compiler's parser with a rules engine; you would need a fact type for token streams and other fact types corresponding to AST nodes and symbol table entries. But it would almost certainly be harder to encode the parser that way, and it is unlikely to come anywhere near the speed of a conventional generated parser, so nobody does this.
You cannot use a parser generator to implement a rules engine, full stop. So the rules engine is strictly more powerful.