
Simple Grammars in Compiler Design
In compiler design, parsing is one of the most widely discussed topics. Parsers are classified into different types, primarily top-down and bottom-up. Since parsers rely on grammars, understanding program structure requires a solid grasp of grammatical rules.
Among various grammars, simple grammars represent a specific type of context-free grammar designed to simplify parsing. However, they are not commonly used in complex programming languages. Instead, simple grammars serve as an excellent introduction to the fundamental concepts of top-down parsing and compiler construction.
In this chapter, we will explore simple grammars, their structure, and their role in compiler design.
What are Simple Grammars?
A grammar is treated as a simple grammar if it follows specific restrictions that simplify the parsing process. Every production rule must be of the following form −
A → aα
Here, A is a non-terminal, a is a terminal, and α is a (possibly empty) string of terminals and non-terminals.
This form already suggests why the grammar is called simple: for every nonterminal A, the production rules defining A must begin with distinct terminal symbols. As a result, simple grammars are easy to parse, because there is never any ambiguity about which rule to apply.
Examples of Simple and Non-Simple Grammars
What makes a grammar non-simple? Let us consider the following examples −
Simple Grammar G1
G1:
S → aSb
S → b
This grammar is simple because the two rules defining S start with different terminal symbols (a and b).
There is no ambiguity in deciding which rule to use during parsing.
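To make this concrete, here is a minimal sketch of a predictive parser for G1; the function and variable names are our own, not from the text. The current input symbol alone decides which rule to apply.

```c
/* Sketch of a predictive parser for G1 (S → aSb, S → b).
   Names here (g1_parse, g1_match, ...) are illustrative only. */
static const char *g1_in;              /* current input position */

static int g1_match(char c) {
    if (*g1_in == c) { g1_in++; return 1; }
    return 0;
}

static int g1_S(void) {
    if (*g1_in == 'a')                 /* rule S → aSb */
        return g1_match('a') && g1_S() && g1_match('b');
    if (*g1_in == 'b')                 /* rule S → b */
        return g1_match('b');
    return 0;                          /* no rule applies */
}

/* Accepts exactly the strings of the form a^n b b^n derived by G1. */
int g1_parse(const char *s) {
    g1_in = s;
    return g1_S() && *g1_in == '\0';   /* all input must be consumed */
}
```

Because the two rules begin with different terminals, the parser never needs to backtrack.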
Non-Simple Grammar G2
Next, consider another grammar −
G2:
S → aSb
S → ε
This grammar is not simple because it includes an epsilon (ε) rule, which makes it difficult to decide between rules during parsing.
Non-Simple Grammar G3
Another example of non-simple grammar −
G3:
S → aSb
S → a
This grammar is not simple because both rules defining S start with the same terminal symbol (a), which creates ambiguity in rule selection.
Parsing with Simple Grammars
One of the key advantages of simple grammars is that they make top-down parsing straightforward. In top-down parsing, the parser constructs a derivation tree by expanding grammar rules, starting from the root (the starting nonterminal) and working toward the input string.
For simple grammars, each rule's selection set (the set of input symbols that trigger the rule) contains exactly one terminal symbol, and the selection sets of rules for the same nonterminal never overlap. This makes it easy to choose and apply the correct rule.
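As an illustration, the disjointness condition can be checked mechanically. In the sketch below (our own simplification, not from the text), each rule for a nonterminal is represented only by the terminal it starts with:

```c
/* A grammar is simple only if, for each nonterminal, the rules begin
   with pairwise-distinct terminals, i.e. their one-symbol selection
   sets do not overlap. Representing each rule by its leading terminal
   is our own simplification for illustration. */
static int selection_sets_disjoint(const char *first_terminals, int n) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (first_terminals[i] == first_terminals[j])
                return 0;   /* overlap: rule choice would be ambiguous */
    return 1;               /* disjoint: rule choice is deterministic */
}
```

For G1 (rules starting with a and b) the check succeeds, while for G3 (both rules starting with a) it fails.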
Example: Parsing with Simple Grammar G4
Consider the following grammar −
G4:
S → a b S d
S → b a S d
S → d
Using the input "abbaddd", we can construct a derivation tree step by step −
Input String: abbaddd
Step 1 − Start with S (the root of the tree). The first symbol in the input is a, so apply rule 1 −
S → a b S d

Step 2 − Expand the S introduced in Step 1. The remaining input now begins with b, so apply rule 2 −
S → b a S d

Step 3 − Expand the S introduced in Step 2. The remaining input now begins with d, so apply rule 3 −
S → d

Final Derivation Tree
This is the completed derivation tree for the input abbaddd.

This process demonstrates how simple grammars guide the parser step by step without ambiguity.
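The three steps above can be sketched as a predictive parser for G4; the function names below are our own, not from the text. As in the derivation, the first input symbol alone selects the rule.

```c
/* Sketch of a predictive parser for G4 (S → abSd | baSd | d).
   Names (g4_parse, g4_match, ...) are illustrative only. */
static const char *g4_in;              /* current input position */

static int g4_match(char c) {
    if (*g4_in == c) { g4_in++; return 1; }
    return 0;
}

static int g4_S(void) {
    if (*g4_in == 'a')                 /* rule 1: S → a b S d */
        return g4_match('a') && g4_match('b') && g4_S() && g4_match('d');
    if (*g4_in == 'b')                 /* rule 2: S → b a S d */
        return g4_match('b') && g4_match('a') && g4_S() && g4_match('d');
    if (*g4_in == 'd')                 /* rule 3: S → d */
        return g4_match('d');
    return 0;                          /* no rule applies */
}

int g4_parse(const char *s) {
    g4_in = s;
    return g4_S() && *g4_in == '\0';   /* accept only if all input is consumed */
}
```

Tracing g4_parse("abbaddd") reproduces exactly the rule choices made in Steps 1 to 3.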
Advantages and Limitations of Simple Grammars
The following table highlights the advantages and disadvantages of Simple Grammars −
| Advantages of Simple Grammars | Limitations of Simple Grammars |
|---|---|
| Ease of Parsing − The unique terminal symbol in each selection set ensures that parsing decisions are straightforward. | Limited Usefulness − Simple grammars are not practical for real-world programming languages because they lack expressive power. |
| No Ambiguity − Simple grammars eliminate ambiguity, making them ideal for introducing top-down parsing concepts. | Restrictions on Rules − The requirement that all rules for a nonterminal must start with different terminal symbols limits the types of languages that can be described. |
| Direct Parser Construction − Parsers for simple grammars can be constructed mechanically, making them a good learning tool. | |
Parsing Simple Grammars with Pushdown Machines
Pushdown machines are often used to parse simple grammars. For example, consider the following grammar −
G5:
S → aSB
S → b
B → a
B → bBa
To parse input strings for this grammar, we can build a one-state pushdown machine. The stack holds the grammar symbols that still have to be expanded or matched against the input. For each input symbol, the machine performs the following steps.
- Pushes or pops symbols from the stack based on the grammar rules.
- Advances through the input if the top of the stack matches the current input symbol.
- Accepts the input if the stack is empty (except for the bottom marker) and the input is fully read.
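The steps above can be sketched as a one-state pushdown machine for G5. The text does not give an implementation, so the code below is our own sketch: grammar symbols are pushed in reverse so that the leftmost symbol of each right-hand side is on top of the stack.

```c
/* Sketch of a one-state pushdown machine for G5
   (S → aSB, S → b, B → a, B → bBa). The stack holds the part of the
   predicted sentential form not yet matched against the input. */
static int pda_parse_g5(const char *in) {
    char stack[256];
    int top = 0;
    stack[top++] = 'S';                        /* start symbol on the stack */
    while (top > 0) {
        char t = stack[--top];
        if (t == 'S') {                        /* expand S by its selection set */
            if (*in == 'a') {                  /* S → aSB */
                stack[top++] = 'B'; stack[top++] = 'S'; stack[top++] = 'a';
            } else if (*in == 'b') {           /* S → b */
                stack[top++] = 'b';
            } else return 0;
        } else if (t == 'B') {                 /* expand B by its selection set */
            if (*in == 'a') {                  /* B → a */
                stack[top++] = 'a';
            } else if (*in == 'b') {           /* B → bBa */
                stack[top++] = 'a'; stack[top++] = 'B'; stack[top++] = 'b';
            } else return 0;
        } else {                               /* terminal on top must match input */
            if (*in == t) in++;
            else return 0;
        }
    }
    return *in == '\0';   /* accept on empty stack with input exhausted */
}
```

For example, "aba" is accepted via S → aSB → abB → aba, while "aa" is rejected because no rule for S starts with the end of input.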
Recursive Descent Parsers for Simple Grammars
Another way to parse simple grammars is by using a recursive descent parser, where −
- Each nonterminal in the grammar corresponds to a method in the parser.
- The parser processes input by calling these methods recursively, expanding rules as needed.
For example, a recursive descent parser for grammar G5 could look like this. Each nonterminal is handled by its own function, which reduces parsing to a series of recursive calls −

    void S() {
        if (inp == 'a') {        // Apply rule 1: S → aSB
            inp = getInp();
            S();
            B();
        } else if (inp == 'b') { // Apply rule 2: S → b
            inp = getInp();
        } else {
            reject();
        }
    }

    void B() {
        if (inp == 'a') {        // Apply rule 3: B → a
            inp = getInp();
        } else if (inp == 'b') { // Apply rule 4: B → bBa
            inp = getInp();
            B();
            if (inp == 'a') {
                inp = getInp();
            } else {
                reject();
            }
        } else {
            reject();
        }
    }
Conclusion
In this chapter, we explored the concept of simple grammars and their role in compiler design. We examined their structure, rules, and how they simplify top-down parsing. Through examples, we saw how simple grammars guide parsing decisions without ambiguity. Additionally, we discussed parsing methods such as pushdown machines and recursive descent parsers.
While simple grammars may not be practical for real-world programming languages, they serve as a foundational tool for understanding grammars and parsing techniques.