First, for each nonterminal the set of FIRST tokens is determined. The FIRST set of a nonterminal defines all terminal tokens that can be encountered when beginning to recognize that nonterminal.
Next, for each nonterminal the set of FOLLOW tokens is determined. A FOLLOW set of a nonterminal defines all terminal tokens that can be encountered next, following the recognition of that nonterminal.
Then, using the FIRST and FOLLOW sets, the grammar itself is analyzed. Starting from the start rule all possible syntactically correct derivations of the grammar are determined.
Finally, the parsing function (the parse() function) will process input according to the tables generated by the parser generator.
All the above phases will be illustrated and discussed in the next sections. Additional details of the parsing process can be found in various books about compiler construction, e.g., in Aho, Sethi and Ullman's (2003) book Compilers (Addison-Wesley).
In the sections below, the following grammar will be used to illustrate the various phases:
%token NR
%left '+'
%%
start:
    start expr
|
    // empty
;
expr:
    NR
|
    expr '+' expr
;

The grammar is interesting since it has a rule containing an empty alternative and since it formally suffers from a shift-reduce conflict. The shift-reduce conflict is solved by explicitly assigning a precedence and associativity to the '+' token.
The analysis starts by defining an additional rule, which is recognized at
end of input. This rule and the rules specified in the grammar together define
what is known as the augmented grammar. In the coming sections
the symbol $
is used to indicate `end of
input'. From the above grammar the following augmented grammar is derived:
1. start:   start expr
2. start:   // empty
3. expr:    NR
4. expr:    expr '+' expr
5. start_$: start (input ends here)
bisonc++ itself will produce an analysis of any grammar it is offered when the
option --construction
is provided.
A FIRST set defines all terminal tokens that can be encountered when beginning to recognize a grammatical symbol. For each grammatical symbol (terminal and nonterminal) a FIRST set can be determined as follows:
- the FIRST set of a terminal symbol is the symbol itself;
- the FIRST set of an empty alternative is the empty set. The empty set is indicated by e and is considered an actual element of the FIRST set (so, a FIRST set could contain two elements: '+' and e);
- for a production rule X: X1 X2 X3 ..., Xi, ... Xn, initialize FIRST(X) to empty (i.e., not even holding e). Then, for each Xi (1..n):
  - add FIRST(Xi) to FIRST(X);
  - stop if FIRST(Xi) does not contain e.
  If FIRST(Xn) does not contain e, remove e from FIRST(X) (unless (when analyzing another production rule) e is already part of FIRST(X)).
When starting this algorithm, only the nonterminals need to be considered. Also, required FIRST sets may not yet be available. Therefore the above algorithm iterates over all nonterminals until no further changes are observed. In this algorithm $ is not considered.
Applying the above algorithm to the rules of our grammar we get:

nonterminal | rule           | FIRST set
------------+----------------+-------------------
start_$     | start          | not yet available
start       | start expr     | not yet available
start       | // empty       | e
expr        | NR             | NR
expr        | expr '+' expr  | NR

changes in the next cycle:
start       | start expr     | NR e
start       | // empty       | NR e

changes in the next cycle:
start_$     | start          | NR e

no further changes
A FOLLOW set defines all terminal tokens that can be encountered following the recognition of a grammatical symbol. For each nonterminal a FOLLOW set can be determined as follows (remember that EOF is indicated by $):
- $ is put in the FOLLOW set of the start rule.
- For each production rule, initialize a set `firstSet' to FIRST(lastSymbol), where lastSymbol is the production rule's last element. Then, for each element E preceding the rule's last element (visiting the elements from the last to the first):
  - if E is a nonterminal, compute FOLLOW(E) += firstSet;
  - if FIRST(E) contains e, compute firstSet += FIRST(E);
  - if FIRST(E) does not contain e, compute firstSet = FIRST(E).
- Repeat the following steps until no FOLLOW set has changed. At each production rule (whose LHS is called L), visit the rule's elements from the last to the first element E:
  - stop processing this production rule if E is a terminal;
  - if E is not equal to L, compute FOLLOW(E) += FOLLOW(L);
  - if FIRST(E) does not contain e, stop processing this production rule.
Applying the above algorithm to the example grammar we get: first, FOLLOW(start_$) = { $ }.

start_$: start
    (no element precedes the rule's last element, so nothing happens)
start: start expr
    firstSet = FIRST(expr) = {NR}
    start is a nonterminal, so FOLLOW(start) += firstSet; so FOLLOW(start) = {NR}
expr: NR
    (no element precedes the rule's last element, so nothing happens)
expr: expr '+' expr
    firstSet = FIRST(expr) = {NR}
    FIRST('+') does not contain e, so firstSet = '+'
    FOLLOW(expr) += firstSet, so FOLLOW(expr) = {'+'}

At this point the FOLLOW sets are:

FOLLOW(start_$) = { $ }
FOLLOW(start)   = { NR }
FOLLOW(expr)    = { '+' }

Next, the FOLLOW sets of the rules' LHSs are propagated:

start_$: start (LHS: start_$)
    FOLLOW(start) += FOLLOW(start_$), so FOLLOW(start) = { NR $ }
start: start expr (LHS: start)
    FOLLOW(expr) += FOLLOW(start), so FOLLOW(expr) = { '+' NR $ }

(the remaining rules do not modify the FOLLOW sets). A next iteration does not change the FOLLOW sets, so the eventual sets become:

FOLLOW(start_$): { $ }
FOLLOW(start):   { NR $ }
FOLLOW(expr):    { '+' NR $ }
FIRST
and FOLLOW
sets, bisonc++ determines the
states of the grammar. The analysis starts at the augmented grammar rule
and proceeds until all possible states have been determined.
For this analysis the concept of the dot symbol is used. The dot shows the position we have reached when analyzing the production rules defined by a grammar. Using the provided example grammar the analysis proceeds as follows:
The initial state (state 0) contains the kernel item of the augmented grammar:

    start_$ -> . start

From the above kernel item the following non-kernel items are derived:

    start -> . start expr
    start -> .

Once a production rule has completely been recognized it is reduced, and a transition on the LHS of the reduced production rule is performed. This process is discussed in more detail in section 7.1.6. Looking at the state's items, two actions are detected:
- a transition on start, to a state in which start has been seen (state 1);
- a reduction according to the item start -> .
State 1 contains these items:

    start_$ -> start .
    start -> start . expr
    expr -> . NR
    expr -> . expr '+' expr

In this state the start_$ rule has been recognized, and so the input could be recognized by the grammar. Other transitions are possible too, though:
- a shift on expr to state 2;
- a shift on NR to state 3.

State 2 contains these items:

    start -> start expr .
    expr -> expr . '+' expr

Its actions are:
- a shift on '+' to state 4;
- a reduction to start according to its first item (removing two elements from the parser's stack).
State 3 contains a single item:

    expr -> NR .

Its action is a reduction to expr (removing one element from the parser's stack).

State 4 contains these items:

    expr -> expr '+' . expr
    expr -> . NR
    expr -> . expr '+' expr

Its actions are:
- a shift on expr to state 5;
- a shift on NR to state 3, the state that is reached when encountering an NR token.
State 5 contains these items:

    expr -> expr '+' expr .
    expr -> expr . '+' expr

Its actions are:
- a shift on '+' to state 4;
- a reduction to expr according to its first item (removing three elements from the parser's stack).
With the current grammar it turns out (and the reason why this is so will be discussed in the next section) that the first action will never take place: in this state there will always be a reduction.
The parsing function does not always reduce immediately as soon as the last n tokens and groupings match a rule. This is because such a simple strategy is inadequate to handle most languages. Instead, when a reduction is possible, the parser sometimes "looks ahead" at the next token in order to decide what to do.
When a token is read, it is not immediately shifted; first it becomes the look-ahead token, which is not on the stack. Now the parser can perform one or more reductions of tokens and groupings on the stack, while the look-ahead token remains off to the side. When no more reductions should take place, the look-ahead token is shifted onto the stack. This does not mean that all possible reductions have been done; depending on the token type of the look-ahead token, some rules may choose to delay their application.
Here is a simple case where look-ahead is needed. These three rules define
expressions which contain binary addition operators and postfix unary
factorial operators (`!'), and allow parentheses for grouping:

expr:
    term '+' expr
|
    term
;
term:
    '(' expr ')'
|
    term '!'
|
    NUMBER
;

Suppose that the tokens `1 + 2' have been read and shifted; what should be done? If the following token is `)', then the first three tokens must be reduced to form an expr. This is the only valid course, because shifting the `)' would produce a sequence of symbols term ')', and no rule allows this.
If the following token is `!
', then it must be shifted immediately so that
`2 !
' can be reduced to make a term. If instead the parser were to reduce
before shifting, `1 + 2
' would become an expr
. It would then be
impossible to shift the `!
' because doing so would produce on the stack
the sequence of symbols expr '!
'. No rule allows that sequence.
The current look-ahead token is stored in the parser's private data member d_token. This data member should not normally be modified by member functions that were not generated by bisonc++. See section 6.6.
In the previous section it was stated that although state 5 has two possible
actions, in fact only one is used. This is a direct consequence of the
%left '+'
specification, as will be discussed in this section.
When analyzing a grammar all states that can be reached from the augmented
start rule are determined. In state 5 bisonc++ is confronted with a choice: either
a shift on '+'
or a reduction according to the item `expr -> expr '+'
expr .
'. What choice will bisonc++ make?
At this point the fact that bisonc++ implements a parser for a Look Ahead Left to Right (1) (LALR(1)) grammar becomes relevant. Bisonc++ will use computed lookahead sets to determine which alternative to select, when confronted with a choice. The lookahead set can be used to favor one transition over the other when eventually generating the tables for the parsing function.
Sometimes the lookahead sets allow bisonc++ simply to remove one action from
the set of possible actions. When bisonc++ is called to process the example grammar while specifying the --construction option, state 5 will only show the reduction and not the shifting action: bisonc++ has removed that alternative from the action set.
conflict in the grammar: in state 5 the choice is between shifting or reducing
when encountering a '+'
token. As we'll see in this section, '+'
is in
the lookahead set of the reduce-item, and thus bisonc++ is faced with a conflict:
what to do on '+'
?
In this case the grammar designer has provided bisonc++ with a way out: the
%left
directive tells bisonc++ to favor a reduction over a shift, and so it
removed expr -> expr . '+' expr
from its set of actions in state 5.
In this section we'll have a look at the way bisonc++ determines lookahead (LA) sets.
To determine which items have LA sets that depend on a particular item, the symbol following the item's dot position is inspected. If it is a nonterminal, then all items whose LHSs are equal to that nonterminal depend on the item being considered. Inspecting the states of our example grammar, using offsets (0-based) to indicate their items, the dependencies shown below are observed.
The LA set of the augmented grammar's kernel item is initialized to $. All other LA sets are initialized to an empty set. The LA sets of the dependent items are equal to the FIRST set of the subrule of their parent items, starting at the symbol following their parent item's dot positions. If that FIRST set contains e, then the parent item's LA set is added to the dependent item's LA set, removing the e.
Applying the above algorithm to the example grammar we get:

State 0:
    0: start_$ -> . start           LA: {$}
The kernel item's LA set is propagated (as {$}) to the items resulting from the start productions:
    1: start -> . start expr        LA: {$}
    2: start -> .                   LA: {$}
Item 1's dot position precedes the nonterminal start, so we're back at the start rules, adding FIRST(expr) = {NR} to the LA sets of those rules:
    1: start -> . start expr        LA: {$ NR}
    2: start -> .                   LA: {$ NR}

State 1:
    0: start_$ -> start .           inherits LA: {$} from item 0, state 0.
    1: start -> start . expr        inherits LA: {NR $} from item 1, state 0.
Item 1's LA set is propagated (as {NR $}) to the items resulting from the expr productions:
    2: expr -> . NR                 LA: {NR $}
    3: expr -> . expr '+' expr      LA: {NR $}
Item 3's dot position precedes the nonterminal expr, so we're back at the expr rules, adding FIRST('+') = {'+'} to the LA sets of those rules:
    2: expr -> . NR                 LA: {+ NR $}
    3: expr -> . expr '+' expr      LA: {+ NR $}

State 2:
    0: start -> start expr .        inherits LA: {NR $} from item 1, state 1.
    1: expr -> expr . '+' expr      inherits LA: {+ NR $} from item 3, state 1.

State 3:
    0: expr -> NR .                 inherits LA: {+ NR $} from item 2, state 1.

State 4:
    0: expr -> expr '+' . expr      inherits LA: {+ NR $} from item 1, state 2.
Item 0's LA set is propagated (as {+ NR $}) to the items resulting from the expr productions:
    1: expr -> . NR                 LA: {+ NR $}
    2: expr -> . expr '+' expr      LA: {+ NR $}
Since item 2's dot position precedes the nonterminal expr, the expr production rules need to be considered again. This time no LA sets change, so the LA sets of all items in this state have been determined.

State 5:
    0: expr -> expr '+' expr .      inherits LA: {+ NR $} from item 0, state 4.
    1: expr -> expr . '+' expr      inherits LA: {+ NR $} from item 2, state 4.
Once again, look at state 5. In this state, item 0 calls for a reduction
on tokens '+', NR
or EOF
. However, according to item 1 a shift
must be performed when the next token is a '+'
. This choice represents a
shift-reduce conflict which is reported by bisonc++ unless special actions are
taken. One of the actions is to tell bisonc++ what to do. A %left
directive
tells bisonc++ to prefer a reduction over a shift when encountering a shift-reduce
conflict for the token(s) mentioned with the %left
directive. Analogously,
a %right
tells bisonc++ to perform a shift rather than a reduction.
Since a %left '+'
was specified, bisonc++ drops the shift alternative, and a
listing of the grammar's construction process (using the option
--construction
) shows for state 5:
State 5:
0: [P4 3] expr -> expr '+' expr .   { NR '+' <EOF> }   1, () -1
1: [P4 1] expr -> expr . '+' expr   { NR '+' <EOF> }   0, () 0
0: Reduce item(s): 0

The shift action (implied by item 1) is not reported.
The parsing function parse() is implemented using a finite-state
machine. The values pushed on the parser stack are not simply token type
machine. The values pushed on the parser stack are not simply token type
codes; they represent the entire sequence of terminal and nonterminal symbols
at or near the top of the stack. The current state collects all the
information about previous input which is relevant to deciding what to do
next.
Each time a look-ahead token is read, the current parser state together with the current (not yet processed) token are looked up in a table. This table entry can say Shift the token. This also specifies a new parser state, which is then pushed onto the top of the parser stack. Or it can say Reduce using rule number n. This means that a certain number of tokens or groupings are taken off the top of the stack, and that the rule's grouping becomes the `next token' to be considered. That `next token' is then used in combination with the state then at the stack's top, to determine the next state to consider. This (next) state is then again pushed on the stack, and a new token is requested from the lexical scanner, and the process repeats itself.
There are two special situations the parsing algorithm must consider:
- once the augmented grammar's start rule has been reduced the input has been accepted: parse() returns the value 0, indicating a successful parsing;
- if no action is defined for the current token a syntactic error has been encountered, and the parser starts its error recovery.
Once bisonc++ has successfully analyzed the grammar it generates the tables that are used by the parsing function to parse input according to the provided grammar. Each state results in a state transition table. For the example grammar used so far there are five states. Each table consists of rows having two elements. The meaning of the elements depends on their position in the table.
NORMAL      | Despite its name, it's not used
ERR_ITEM    | The state allows error recovery
REQ_TOKEN   | The state requires a token (which may already be available)
ERR_REQ     | combines ERR_ITEM and REQ_TOKEN
DEF_RED     | This state has a default reduction
ERR_DEF     | combines ERR_ITEM and DEF_RED
REQ_DEF     | combines REQ_TOKEN and DEF_RED
ERR_REQ_DEF | combines ERR_ITEM, REQ_TOKEN and DEF_RED
In these tables the symbolic value PARSE_ACCEPT (defined when the option --thread-safe was specified), rather than 0, may be used as well, indicating that the input is accepted.
SR__ s_0[] =
{
    { { DEF_RED },   {  2 } },
    { { 258 },       {  1 } },              // start
    { { 0 },         { -2 } },
};
SR__ s_1[] =
{
    { { REQ_TOKEN }, {  4 } },
    { { 259 },       {  2 } },              // expr
    { { 257 },       {  3 } },              // NR
    { { _EOF_ },     { PARSE_ACCEPT } },
    { { 0 },         {  0 } },
};
SR__ s_2[] =
{
    { { REQ_DEF },   {  2 } },
    { { 43 },        {  4 } },              // '+'
    { { 0 },         { -1 } },
};
SR__ s_3[] =
{
    { { DEF_RED },   {  1 } },
    { { 0 },         { -3 } },
};
SR__ s_4[] =
{
    { { REQ_TOKEN }, {  3 } },
    { { 259 },       {  5 } },              // expr
    { { 257 },       {  3 } },              // NR
    { { 0 },         {  0 } },
};
SR__ s_5[] =
{
    { { REQ_DEF },   {  1 } },
    { { 0 },         { -4 } },
};
parse()
. This
function obtains its tokens from the member lex()
and processes all tokens
until a syntactic error, a non-recoverable error, or the end of input is
encountered.
The algorithm used by parse()
is the same, irrespective of the used
grammar. In fact, the parse()
member's behavior is completely determined
by the tables generated by bisonc++.
The parsing algorithm is known as the shift-reduce (S/R) algorithm, and it allows parse() to perform two actions while processing series of tokens:
- a shift, pushing a new state on the stack (e.g., when an NR token is observed in state 1 of the example's grammar a transition to state 3 is performed);
- a reduce, removing the elements of a completely recognized production rule from the stack, after which the rule's LHS determines the next transition.
The parsing function maintains two stacks, which are manipulated by the above
two actions: a state stack and a value stack. These stacks are not directly
accessible: they are private data structures defined in the parser's base
class. The parsing member parse()
may use the following member functions
to manipulate these stacks:
push__(stateIdx)
pushes stateIdx
on the state stack and pushes
the current semantic value (i.e., LTYPE_ d_val__
) on the value stack;
pop__(size_t count = 1)
removes count
elements from the two
stacks;
top__()
returns the state currently on top of the state stack;
Apart from the state- and semantic stacks, the S/R algorithm itself sometimes
needs to push a token on a two-element stack. Rather than using a formal
stack, two variables (d_token__
and d_nextToken__
) are used to
implement this little token-stack. The member function pushToken__()
pushes a new value on the token stack, the member popToken__()
pops a previously pushed value from the token stack. At any time,
d_token__
contains the topmost element of the token stack.
The member nextToken()
determines the next token to be processed. If the
token stack contains a value it is returned. Otherwise, lex()
is called to
obtain the next token to be pushed on the token stack.
The member lookup()
looks up the current token in the current state's
SR__
table. For this a simple linear search algorithm is used. If
searching fails to find an action for the token an UNEXPECTED_TOKEN__
exception is thrown, which starts the error recovery. If an action was found,
it is returned.
Rules may have actions associated with them. These actions are executed when a
grammatical rule has been completely recognized. This is always at the end of
a rule: mid-rule actions are converted by bisonc++ into pseudo nonterminals,
replacing mid-rule action blocks by these pseudo nonterminals. The pseudo
nonterminals show up in the verbose grammar output as rules having LHSs
starting with #
. So, once a rule has been recognized its action (if
defined) is executed. For this the member function executeAction()
is
available.
Finally, the token stack can be cleared using the member clearin()
.
Now that the relevant support functions have been introduced, the S/R algorithm itself turns out to be a fairly simple algorithm. First, the parser's stack is initialized with state 0 and the token stack is cleared. Then, in a never ending loop:
- if the current state requires a token (i.e., REQ_TOKEN has been specified for that state), nextToken() is called to obtain the next token;
- next, lookup() determines the next action;
- if the action is a reduction (performed by the member reduce__()): the semantic and state stacks are reduced by the number of elements found in that production rule, and the production rule's LHS is pushed on the token stack;
- if the action is a shift, the next state is pushed on the state stack;
- if the input is accepted (EOF is encountered in state 1) then the parsing function terminates, returning 0.
The following table shows the S/R algorithm in action when the example grammar
is given the input 3 + 4 + 5
. The first column shows the (remaining)
input, the second column the current token stack (with -
indicating an
empty token stack), the third column the state stack. The fourth column
provides a short description. The leftmost elements of
the stacks represent the tops of the stacks. The information shown below is
also (in more elaborate form) shown when the --debug
option is provided to
Bisonc++ when generating the parsing function.
remaining input | token stack | state stack | description
3 + 4 + 5       | -           | 0           | initialization
3 + 4 + 5       | start       | 0           | reduction by rule 2
3 + 4 + 5       | -           | 1 0         | shift `start'
+ 4 + 5         | NR          | 1 0         | obtain NR token
+ 4 + 5         | -           | 3 1 0       | shift NR
+ 4 + 5         | expr        | 1 0         | reduction by rule 3
+ 4 + 5         | -           | 2 1 0       | shift `expr'
4 + 5           | +           | 2 1 0       | obtain `+' token
4 + 5           | -           | 4 2 1 0     | shift `+'
+ 5             | NR          | 4 2 1 0     | obtain NR token
+ 5             | -           | 3 4 2 1 0   | shift NR
+ 5             | expr        | 4 2 1 0     | reduction by rule 3
+ 5             | -           | 5 4 2 1 0   | shift `expr'
5               | +           | 5 4 2 1 0   | obtain `+' token
5               | expr +      | 1 0         | reduction by rule 4
5               | +           | 2 1 0       | shift `expr'
5               | -           | 4 2 1 0     | shift `+'
                | NR          | 4 2 1 0     | obtain NR token
                | -           | 3 4 2 1 0   | shift NR
                | expr        | 4 2 1 0     | reduction by rule 3
                | -           | 5 4 2 1 0   | shift `expr'
                | EOF         | 5 4 2 1 0   | obtain EOF
                | expr EOF    | 1 0         | reduction by rule 4
                | EOF         | 2 1 0       | shift `expr'
                | start EOF   | 0           | reduction by rule 1
                | EOF         | 1 0         | shift `start'
                | EOF         | 1 0         | ACCEPT
A classic situation in which a shift-reduce conflict appears is in the handling of if and if-else statements, with a pair of rules like this:

if_stmt:
    IF '(' expr ')' stmt
|
    IF '(' expr ')' stmt ELSE stmt
;

Here we assume that IF and ELSE are terminal symbols for specific keywords, and that expr and stmt are defined nonterminals.
When the ELSE
token is read and becomes the look-ahead token, the contents
of the stack (assuming the input is valid) are just right for reduction by
the first rule. But it is also legitimate to shift the ELSE
, because
that would lead to eventual reduction by the second rule.
This situation, where either a shift or a reduction would be valid, is called
a shift/reduce
conflict. Bisonc++ is designed to resolve these conflicts
by implementing a shift, unless otherwise directed by operator precedence
declarations. To see the reason for this, let's contrast it with the other
alternative.
Since the parser prefers to shift the ELSE
, the result is to attach the
else-clause to the innermost if-statement, making these two inputs
equivalent:
    if (x) if (y) win(); else lose();

    if (x) { if (y) win(); else lose(); }

But if the parser would perform a reduction whenever possible rather than a shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:

    if (x) if (y) win(); else lose();

    if (x) { if (y) win(); } else lose();

The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what bisonc++ accomplishes by implementing a shift rather than a reduce. This particular ambiguity was first encountered in the specifications of Algol 60 and is called the dangling else ambiguity.
To avoid warnings from bisonc++ about predictable, legitimate shift/reduce
conflicts, use the %expect n
directive. There will be no warning as long
as the number of shift/reduce conflicts is exactly n
. See section
5.6.5.
The definition of if_stmt above is solely to blame for the conflict, but a plain stmt rule consisting of just the two recursive alternatives will of course never be able to match actual input, since there's no way for the grammar to eventually derive a sentence that way. Adding one non-recursive alternative is enough to convert the grammar into one that does derive sentences. Here is a complete bisonc++ input file that actually manifests the conflict:
%token IF ELSE VAR
%%
stmt:
    VAR ';'
|
    IF '(' VAR ')' stmt
|
    IF '(' VAR ')' stmt ELSE stmt
;
Another situation in which shift-reduce conflicts appear is with ambiguous expression grammars (where, e.g., `1 - 2 * 3' can be parsed in two different ways):

expr:
    expr '-' expr
|
    expr '*' expr
|
    expr '<' expr
|
    '(' expr ')'
    ...
;

Suppose the parser has seen the tokens `1', `-' and `2'; should it reduce them via the rule for the subtraction operator? It depends on
the next token. Of course, if the next token is `)', we must reduce;
shifting is invalid because no single rule can reduce the token sequence `-
2
)' or anything starting with that. But if the next token is `*
'
or `<
', we have a choice: either shifting or reduction would allow the
parse to complete, but with different results.
To decide which one bisonc++ should do, we must consider the results. If
the next operator token op
is shifted, then it must be reduced first in
order to permit another opportunity to reduce the sum. The result is (in
effect) `1 - (2 op 3)
'. On the other hand, if the subtraction is reduced
before shifting op
, the result is `(1 - 2) op 3
'. Clearly, then, the
choice of shift or reduce should depend on the relative precedence of the
operators `-
' and op
: `*
' should be shifted first, but not
`<
'.
What about input such as `1 - 2 - 5
'; should this be `(1 - 2) - 5
' or
should it be `1 - (2 - 5)
'? For most operators we prefer the former, which
is called left association. The latter alternative, right association,
is desirable for, e.g., assignment operators. The choice of left or right
association is a matter of whether the parser chooses to shift or reduce when
the stack contains `1 - 2
' and the look-ahead token is `-
': shifting
results in right-associativity.
Operator precedence and associativity are specified using the directives %left and %right. Each such directive contains a list of tokens, which are operators whose precedence and associativity is being declared. The %left directive makes all those operators left-associative and the %right directive makes them right-associative. A third alternative is %nonassoc, which declares that it is a syntax error to find the same operator twice `in a row'. Actually, such input is not currently (0.98.004) punished that way by bisonc++: %nonassoc and %left are handled identically.
The relative precedence of different operators is controlled by the order in
which they are declared. The first %left
or %right
directive in the
file declares the operators whose precedence is lowest, the next such
directive declares the operators whose precedence is a little higher, and so
on.
%left '<'
%left '-'
%left '*'

In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-':

%left '<' '>' '=' NE LE GE
%left '+' '-'
%left '*' '/'

(Here NE and so on stand for the operators for `not equal' and so on. We assume that these tokens are more than one character long and therefore are represented by names, not character literals.)
Finally, the resolution of conflicts works by comparing the precedence of the
rule being considered with that of the look-ahead token. If the token's
precedence is higher, the choice is to shift. If the rule's precedence is
higher, the choice is to reduce. If they have equal precedence, the choice is
made based on the associativity of that precedence level. The verbose output
file made by `-V
' (see section 9) shows how each conflict was
resolved.
Not all rules and not all tokens have precedence. If either the rule or the look-ahead token has no precedence, then the default is to shift.
The bisonc++ precedence directives, %left, %right and %nonassoc, can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules.
The %prec modifier declares the precedence of a particular rule by specifying a terminal symbol whose precedence should be used for that rule. It's not necessary for that symbol to appear otherwise in the rule. The modifier's syntax is:
%prec terminal-symbol
and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence).
Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence:
...
%left '+' '-'
%left '*'
%left UMINUS
Now the precedence of UMINUS can be used in specific rules:
exp:
    ...
|
    exp '-' exp
    ...
|
    '-' exp %prec UMINUS
For example, here is an erroneous attempt to define a sequence of zero or more word groupings:

%stype char *
%token WORD
%%
sequence:
    // empty
    { cout << "empty sequence\n"; }
|
    maybeword
|
    sequence WORD
    { cout << "added word " << $2 << endl; }
;
maybeword:
    // empty
    { cout << "empty maybeword\n"; }
|
    WORD
    { cout << "single word " << $1 << endl; }
;
The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.
There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule.
You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes.
Bisonc++ resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:
sequence:
    // empty
    { cout << "empty sequence\n"; }
|
    sequence word
    { cout << "added word " << $2 << endl; }
;
Here is another common error that yields a reduce/reduce conflict:
sequence:
    // empty
|
    sequence words
|
    sequence redirects
;
words:
    // empty
|
    words word
;
redirects:
    // empty
|
    redirects redirect
;
The intention here is to define a sequence which can contain either word or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways!
Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.
Here are two ways to correct these rules. First, to make it a single level of sequence:
sequence:
    // empty
|
    sequence word
|
    sequence redirect
;
Second, to prevent either a words or a redirects from being empty:
sequence:
    // empty
|
    sequence words
|
    sequence redirects
;
words:
    word
|
    words word
;
redirects:
    redirect
|
    redirects redirect
;
Sometimes conflicts are reported for grammars that at first sight do not seem to justify them. Consider:

%token ID
%%
def:
    param_spec return_spec ','
;
param_spec:
    type
|
    name_list ':' type
;
return_spec:
    type
|
    name ':' type
;
type:
    ID
;
name:
    ID
;
name_list:
    name
|
    name ',' name_list
;

It would seem that this grammar can be parsed with only a single token of look-ahead: when a param_spec is being read, an ID is a name if a comma or colon follows, or a type if another ID follows. In other words, this grammar is LR(1).
However, bisonc++, like most parser generators, cannot actually handle all LR(1)
grammars. In this grammar, two contexts, that after an ID
at the beginning
of a param_spec
and likewise at the beginning of a return_spec
, are
similar enough that bisonc++ assumes they are the same. They appear similar
because the same set of rules would be active--the rule for reducing to a name
and that for reducing to a type. Bisonc++ is unable to determine at that stage of
processing that the rules would require different look-ahead tokens in the two
contexts, so it makes a single parser state for them both. Combining the two
contexts causes a conflict later. In parser terminology, this occurrence means
that the grammar is not LALR(1).
In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, bisonc++ is more useful as it is now.
When the problem arises, you can often fix it by identifying the two parser
states that are being confused, and adding something to make them look
distinct. In the above example, adding one rule to return_spec
as follows
makes the problem go away:
%token BOGUS
...
%%
...
return_spec:
    type
|
    name ':' type
|
    ID BOGUS    // This rule is never used.
;

This corrects the problem because it introduces the possibility of an additional active rule in the context after the ID at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token BOGUS is never generated by the parser's member function lex(), the added rule cannot alter the way actual input is parsed.
In this particular example, there is another way to solve the problem: rewrite
the rule for return_spec
to use ID
directly instead of via name. This
also causes the two confusing contexts to have different sets of active rules,
because the one for return_spec
activates the altered rule for
return_spec
rather than the one for name.
param_spec:
    type
|
    name_list ':' type
;
return_spec:
    type
|
    ID ':' type
;