Here is a simple grammar that is LR(1) but not LALR(1):
    G -> S
    S -> c X t
      -> c Y n
      -> r Y t
      -> r X n
    X -> a
    Y -> a
An LALR(1) parser generator gives you an LR(0) state machine. An LR(1) parser generator gives you an LR(1) state machine. With this grammar, the LR(1) state machine has one more state than the LR(0) state machine.
The LR(0) state machine contains this state:
    X -> a .
    Y -> a .
The LR(1) state machine contains these two states instead of the one shown above:
    X -> a . { t }
    Y -> a . { n }

    X -> a . { n }
    Y -> a . { t }
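If you want to see this happen mechanically, here is a minimal sketch of the canonical LR(1) construction in Python. It is my own illustration, not LRSTAR's code; the grammar encoding and the helper names first_of, closure, and goto are assumptions of the sketch. Running it prints exactly the two states above:

    from collections import deque

    # The grammar above, augmented with a new start symbol G'.
    # This encoding and the helper names are my own illustration.
    productions = {
        "G'": [("G",)],
        "G":  [("S",)],
        "S":  [("c", "X", "t"), ("c", "Y", "n"),
               ("r", "Y", "t"), ("r", "X", "n")],
        "X":  [("a",)],
        "Y":  [("a",)],
    }
    nonterminals = set(productions)

    def first_of(symbols, lookahead):
        # FIRST of the string `symbols` followed by `lookahead`.
        # Nothing in this grammar derives the empty string, so the
        # first symbol (or the lookahead, if the string is empty) decides.
        if not symbols:
            return {lookahead}
        head = symbols[0]
        if head not in nonterminals:
            return {head}
        result = set()
        for rhs in productions[head]:
            result |= first_of(rhs, lookahead)
        return result

    def closure(items):
        # LR(1) closure; an item is (lhs, rhs, dot position, lookahead).
        items = set(items)
        queue = deque(items)
        while queue:
            lhs, rhs, dot, la = queue.popleft()
            if dot < len(rhs) and rhs[dot] in nonterminals:
                for prod in productions[rhs[dot]]:
                    for b in first_of(rhs[dot + 1:], la):
                        item = (rhs[dot], prod, 0, b)
                        if item not in items:
                            items.add(item)
                            queue.append(item)
        return frozenset(items)

    def goto(state, symbol):
        # Advance the dot over `symbol`, then close the result.
        return closure({(lhs, rhs, dot + 1, la)
                        for lhs, rhs, dot, la in state
                        if dot < len(rhs) and rhs[dot] == symbol})

    # Build the canonical LR(1) collection with a simple worklist.
    start = closure({("G'", ("G",), 0, "$")})
    states, work = {start}, [start]
    while work:
        state = work.pop()
        for symbol in {rhs[dot] for _, rhs, dot, _ in state if dot < len(rhs)}:
            nxt = goto(state, symbol)
            if nxt not in states:
                states.add(nxt)
                work.append(nxt)

    # Print the two states reached on 'a' -- the ones shown above.
    for state in states:
        if any(lhs == "X" and dot == len(rhs) for lhs, rhs, dot, _ in state):
            for lhs, rhs, dot, la in sorted(state):
                print(f"{lhs} -> {' '.join(rhs)} . {{ {la} }}")
            print("---")

The key point is that the canonical construction never merges these two states, because an LR(1) state is identified by its items together with their lookaheads.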
The problem with LALR is that the states are built first, without any knowledge of the lookaheads; the lookaheads are computed only after the states have been built. So LALR has this single merged state, and the lookaheads, added afterwards, look like this:
    X -> a . { t, n }
    Y -> a . { n, t }
Can anybody see the problem here? If the lookahead is 't', which reduction do you choose? It's ambiguous! So an LALR(1) parser generator gives you a reduce-reduce conflict report, which can be confusing to an inexperienced grammar writer.
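Here is a short sketch of what the LALR-style merge does to those two states, again just my own illustration with made-up variable names: it unions the lookahead sets per core item, and any terminal that lands in two reduce items' sets is a reduce-reduce conflict.

    # The two LR(1) states above, as {reduce item: lookahead set}.
    # Variable names here are made up for the illustration.
    state_after_ca = {("X", ("a",)): {"t"}, ("Y", ("a",)): {"n"}}  # reached on "c a"
    state_after_ra = {("X", ("a",)): {"n"}, ("Y", ("a",)): {"t"}}  # reached on "r a"

    # LALR(1) keeps one state per core and unions the lookahead sets.
    merged = {}
    for state in (state_after_ca, state_after_ra):
        for item, lookaheads in state.items():
            merged.setdefault(item, set()).update(lookaheads)
    print(merged)  # both items end up with lookaheads {'t', 'n'}

    # A terminal shared by two reduce items' lookahead sets means the
    # parser cannot decide which reduction to make: a reduce-reduce conflict.
    items = sorted(merged.items())
    for i, (item1, las1) in enumerate(items):
        for item2, las2 in items[i + 1:]:
            for token in sorted(las1 & las2):
                print(f"reduce-reduce conflict on '{token}': "
                      f"{item1[0]} -> {' '.join(item1[1])} "
                      f"vs {item2[0]} -> {' '.join(item2[1])}")

This reports conflicts on both 'n' and 't', matching the merged state shown above.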
That's why I made LRSTAR an LR(1) parser generator. It can handle the above grammar.