Parsers
Question 1 
1024  
1025  
1026  
1027 
Question 1 Explanation:
To get 10 'if's, we expand the grammar repeatedly:
if then else ; stmt
if then else ; if then else ; stmt
:
:
:
(keep expanding until there are 10 'if's)
We know that every if statement has 2 control flows as given in question. Hence,
We have 2 control flow choices for 1st 'if'
We have 2 control flow choices for 2nd 'if'
:
:
:
We have 2 control flow choices for 10th 'if'
Since the 10 choices are independent of one another, the total number of control flows is
2 × 2 × 2 × ........ 10 times = 2^{10} = 1024
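The counting argument above can be checked with a short script. This is an illustrative sketch only; the enumeration of control-flow choices as tuples is my own framing, not part of the original question.

```python
# Each of the 10 'if' statements independently has 2 control-flow
# choices, so the total number of control flows is 2 ** 10.
from itertools import product

num_ifs = 10
flows = list(product((0, 1), repeat=num_ifs))  # all 10-tuples of choices
total = len(flows)
print(total)  # 1024
```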
Question 2 
I only  
II only  
III only  
II and III only 
Question 2 Explanation:
Canonical LR is more powerful than SLR, since every grammar that can be parsed by an SLR parser can also be parsed by a CLR parser. In increasing order of power:
LR(0) < SLR < LALR < CLR
Hence only I is true.
Question 3 
Which one of the following is True at any valid state in shift-reduce parsing?
Viable prefixes appear only at the bottom of the stack and not inside
 
Viable prefixes appear only at the top of the stack and not inside  
The stack contains only a set of viable prefixes  
The stack never contains viable prefixes

Question 3 Explanation:
A handle is always at the top of the stack. A viable prefix is a prefix of a right-sentential form that does not extend past the right end of the handle, i.e., past the top of the stack. So the stack contents always form a viable prefix, and hence the stack contains only viable prefixes.
Question 4 
Among simple LR (SLR), canonical LR, and lookahead LR (LALR), which of the following pairs identify the method that is very easy to implement and the method that is the most powerful, in that order?
SLR, LALR
 
Canonical LR, LALR
 
SLR, canonical LR
 
LALR, canonical LR 
Question 4 Explanation:
SLR is the easiest to implement, and canonical LR (CLR) is the most powerful method.
Question 5 
Only S1  
Only S2  
Both S1 and S2  
Neither S1 nor S2 
Question 5 Explanation:
For LL(1):
The terminal 'c' is common to the FIRST sets of both alternatives of S, so the grammar is not LL(1).
For LR(1):
An RR (reduce-reduce) conflict is present, so it is not LR(1).
Hence, option (D) is the correct answer.
Question 6 
a shift-reduce conflict and a reduce-reduce conflict.  
a shift-reduce conflict but not a reduce-reduce conflict.  
a reduce-reduce conflict but not a shift-reduce conflict.  
neither a shift-reduce nor a reduce-reduce conflict. 
Question 6 Explanation:
The input symbol is “<”, which does not appear in the canonical set of items, so there is neither a shift-reduce nor a reduce-reduce conflict with respect to the “<” symbol.
Had the question asked about “>”, there would have been a shift-reduce (SR) conflict.
Question 7 
FIRST(A) = {a,b,ε} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = {a,b,$}  
FIRST(A) = {a,b,$} FIRST(B) = {a,b,ε} FOLLOW(A) = {a,b} FOLLOW(B) = {$}  
FIRST(A) = {a,b,ε} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = ∅  
FIRST(A) = {a,b} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = {a,b} 
Question 7 Explanation:
FIRST(P) is the set of terminals that can begin the strings derivable from nonterminal P. If P derives ε, then ε is included in FIRST(P).
FOLLOW(P): is the set of terminals that can appear immediately to the right of P in some sentential form.
FIRST(A) = FIRST(S) // because of production A → S
FIRST(S) = FIRST(aAbB) ∪ FIRST(bAaB) ∪ FIRST(ϵ)
FIRST(S) = {a, b, ϵ}
FIRST(B) = FIRST(S) = {a, b, ϵ} = FIRST(A)
FOLLOW(A) ⊇ {b} // because of production S → aAbB
FOLLOW(A) ⊇ {a} // because of production S → bAaB
So FOLLOW(A) = {a, b}
FOLLOW(B) ⊇ FOLLOW(S) // because B is rightmost in S → aAbB and S → bAaB
FOLLOW(S) ⊇ FOLLOW(A) // because of productions A → S and B → S
So FOLLOW(S) = {$, a, b} = FOLLOW(B)
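The FIRST/FOLLOW sets above can be reproduced with a small fixed-point routine. This is a hedged sketch for the grammar S → aAbB | bAaB | ε, A → S, B → S; the representation (productions as lists, 'eps' for ε) is my own, not from the original.

```python
# Fixed-point computation of FIRST and FOLLOW for the grammar
#   S -> aAbB | bAaB | eps,  A -> S,  B -> S
# Uppercase symbols are nonterminals; 'eps' denotes the empty string.
EPS = 'eps'
grammar = {
    'S': [['a', 'A', 'b', 'B'], ['b', 'A', 'a', 'B'], []],
    'A': [['S']],
    'B': [['S']],
}
nonterminals = set(grammar)
first = {nt: set() for nt in nonterminals}

def first_of_seq(seq):
    """FIRST of a sequence of grammar symbols (using current `first`)."""
    out = set()
    for sym in seq:
        if sym not in nonterminals:      # terminal: stop here
            out.add(sym)
            return out
        out |= first[sym] - {EPS}
        if EPS not in first[sym]:
            return out
    out.add(EPS)                         # every symbol was nullable
    return out

changed = True
while changed:                           # iterate FIRST to a fixed point
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            new = first_of_seq(prod)
            if not new <= first[nt]:
                first[nt] |= new
                changed = True

follow = {nt: set() for nt in nonterminals}
follow['S'].add('$')                     # S is the start symbol
changed = True
while changed:                           # iterate FOLLOW to a fixed point
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            for i, sym in enumerate(prod):
                if sym not in nonterminals:
                    continue
                rest = first_of_seq(prod[i + 1:])
                new = (rest - {EPS}) | (follow[nt] if EPS in rest else set())
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(first['A'], first['B'])    # both equal {a, b, eps}
print(follow['A'], follow['B'])  # {a, b} and {a, b, $}
```

The output agrees with the derivation above: FIRST(A) = FIRST(B) = {a, b, ε}, FOLLOW(A) = {a, b}, FOLLOW(B) = {a, b, $}, which matches option (A).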
Question 8 
E1: S → aAbB,A → S E2: S → bAaB,B→S E3: B → S  
E1: S → aAbB,S→ ε E2: S → bAaB,S → ε E3: S → ε  
E1: S → aAbB,S → ε E2: S → bAaB,S→ε E3: B → S  
E1: A → S,S →ε E2: B → S,S → ε E3: B →S 
Question 8 Explanation:
The entries E1, E2 and E3 relate to S and B, so we need only those productions which have S or B on the LHS.
S → aAbB | bAaB | ε
The production S → aAbB goes under the columns in FIRST(aAbB) = {a}, so S → aAbB is in E1.
S → bAaB goes under the columns in FIRST(bAaB) = {b}, so S → bAaB is in E2.
S → ε goes under the columns in FOLLOW(S) = FOLLOW(B) = {a, b, $}, so S → ε goes in E1, E2 and under the $ column.
So E1 contains: S → aAbB and S → ε.
E2 contains: S → bAaB and S → ε.
Now, B → S goes under the columns in FIRST(S) = {a, b, ε}.
Since ε ∈ FIRST(S), B → S also goes under the columns in FOLLOW(B) = {a, b, $}.
So E3 contains B → S.
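The table-filling rules above can be run mechanically. In this sketch, the FIRST/FOLLOW sets are hardcoded from the explanation and productions are written as tuples (() is ε); the representation is my own.

```python
# Fill LL(1) table entries for S and B of the grammar
#   S -> aAbB | bAaB | eps,  B -> S,
# using FIRST/FOLLOW sets taken from the explanation.
FIRST = {('S', ('a', 'A', 'b', 'B')): {'a'},
         ('S', ('b', 'A', 'a', 'B')): {'b'},
         ('S', ()): {'eps'},                    # S -> eps
         ('B', ('S',)): {'a', 'b', 'eps'}}
FOLLOW = {'S': {'a', 'b', '$'}, 'B': {'a', 'b', '$'}}

table = {}  # (nonterminal, terminal) -> list of RHSs placed in that cell
for (nt, rhs), fset in FIRST.items():
    # Rule 1: place under FIRST(rhs); Rule 2: if eps in FIRST(rhs),
    # also place under FOLLOW(nt).
    cols = (fset - {'eps'}) | (FOLLOW[nt] if 'eps' in fset else set())
    for t in cols:
        table.setdefault((nt, t), []).append(rhs)

print(table[('S', 'a')])  # S -> aAbB and S -> eps  (E1)
print(table[('S', 'b')])  # S -> bAaB and S -> eps  (E2)
print(table[('B', '$')])  # B -> S                  (E3)
```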
Question 9 
Consider two binary operators ‘↑’ and ‘↓’, with the precedence of operator ↓ being lower than that of operator ↑. Operator ↑ is right associative while operator ↓ is left associative. Which one of the following represents the parse tree for the expression (7↓3↑4↑3↓2)?
Question 9 Explanation:
7 ↓ 3 ↑ 4 ↑ 3 ↓ 2
⇒ 7 ↓ (3 ↑ (4 ↑ 3)) ↓ 2, since ↑ has higher precedence and is right associative
⇒ (7 ↓ (3 ↑ (4 ↑ 3))) ↓ 2, since ↓ is left associative
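The grouping can be verified with a tiny precedence-climbing parser that builds the fully parenthesized form instead of a value. The operator table is an assumption matching the question ('v' stands in for ↓, '^' for ↑), and the function is an illustrative sketch.

```python
# Precedence climbing over the tokens of 7 ↓ 3 ↑ 4 ↑ 3 ↓ 2.
# 'v' stands in for ↓ (precedence 1, left associative) and
# '^' for ↑ (precedence 2, right associative).
PREC = {'v': 1, '^': 2}
RIGHT_ASSOC = {'^'}

def parenthesize(tokens):
    pos = [0]
    def expr(min_prec):
        lhs = tokens[pos[0]]            # operand
        pos[0] += 1
        while pos[0] < len(tokens):
            op = tokens[pos[0]]
            if PREC[op] < min_prec:
                break
            pos[0] += 1
            # a right-associative operator may grab equal-precedence
            # operators on its right; a left-associative one may not
            nxt = PREC[op] if op in RIGHT_ASSOC else PREC[op] + 1
            lhs = f'({lhs}{op}{expr(nxt)})'
        return lhs
    return expr(0)

print(parenthesize(['7', 'v', '3', '^', '4', '^', '3', 'v', '2']))
# ((7v(3^(4^3)))v2)
```

The output matches the grouping derived above: (7 ↓ (3 ↑ (4 ↑ 3))) ↓ 2.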
Question 10 
The grammar S → aSa | bS | c is
LL(1) but not LR(1)  
LR(1) but not LL(1)  
Both LL(1) and LR(1)
 
Neither LL(1) nor LR(1) 
Question 10 Explanation:
The LL(1) parsing table for the given grammar has no multiply-defined entries.
As there is no conflict in the LL(1) parsing table, the given grammar is LL(1), and since every LL(1) grammar is also LR(1), the grammar is both LL(1) and LR(1).
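Reading the grammar as S → aSa | bS | c, the LL(1) property can be checked mechanically: with no ε-production, it suffices that the FIRST sets of the alternatives are pairwise disjoint. The sets below are hardcoded from that reading; this is an illustrative sketch, not a general LL(1) checker.

```python
# LL(1) check for S -> aSa | bS | c: each alternative begins with a
# distinct terminal, so the FIRST sets are pairwise disjoint.
first_of_alt = {'aSa': {'a'}, 'bS': {'b'}, 'c': {'c'}}

seen = set()
conflict = False
for alt, fset in first_of_alt.items():
    if fset & seen:          # overlapping FIRST sets -> LL(1) conflict
        conflict = True
    seen |= fset

print("LL(1) conflict:", conflict)  # LL(1) conflict: False
```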
Question 11 
I and II
 
I and IV
 
III and IV
 
I, III and IV 
Question 11 Explanation:
Statement II is false: a programming language that allows recursion requires dynamic storage allocation. Statement III is false: an L-attributed definition (for instance, one with synthesized attributes only) can be evaluated in a bottom-up framework.
Statement I is true: the bottom-up and top-down parsers used in practice take O(n) time to parse the string, i.e., only one scan of the input is required.
Statement IV is true: code-improving transformations can be performed at both the source-language and intermediate-code level. For example, implicit type casting is a kind of code improvement done during the semantic-analysis phase, and intermediate-code optimization is a topic in itself that uses various techniques such as loop unrolling and loop-invariant code motion.
Question 12 
Which of the following describes a handle (as applicable to LR-parsing) appropriately?
It is the position in a sentential form where the next shift or reduce operation will occur.
 
It is nonterminal whose production will be used for reduction in the next step.  
It is a production that may be used for reduction in a future step along with a position in the sentential form where the next shift or reduce operation will occur.
 
It is the production p that will be used for reduction in the next step along with a position in the sentential form where the right hand side of the production may be found.

Question 12 Explanation:
A handle is the production p that will be used for reduction in the next step, along with a position in the sentential form where the right-hand side of the production may be found.
Question 13 
Which one of the following is a topdown parser?
Recursive descent parser.
 
Operator precedence parser.  
An LR(k) parser.
 
An LALR(k) parser.

Question 13 Explanation:
A recursive descent parser is a top-down parser, while the others are bottom-up parsers.
Question 14 
it is left recursive
 
it is right recursive  
it is ambiguous
 
it is not context-free 
Question 14 Explanation:
The given grammar is not left recursive, and it is context-free (Type 2), so options A and D are wrong. Right recursion is not an issue for LL(1) grammars, so even though the grammar is right recursive, that is not the reason it fails to be LL(1).
The grammar has two parse trees for the string “ibtibtaea”, hence it is ambiguous.
Question 15 
Both P and Q are true
 
P is true and Q is false
 
P is false and Q is true
 
Both P and Q are false

Question 15 Explanation:
"Every regular grammar is LL(1)" is false, as the grammar may have left recursion, may need left factoring, or may be ambiguous.
For example, consider the regular grammar
S → aS | a | ε
This grammar is ambiguous, as two parse trees are possible for the string "a" (via S → a, or via S → aS → a).
Hence it is regular but not LL(1).
But every regular set is accepted by a DFA, so every regular set has at least one unambiguous grammar.
Hence, every regular set has an LR(1) grammar.
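The ambiguity claim can be checked by brute force: counting leftmost derivations of "a" in S → aS | a | ε. This is a small illustrative sketch (the depth bound and string encoding are my own); two distinct leftmost derivations mean two parse trees.

```python
# Brute-force count of leftmost derivations of the string "a" in the
# grammar S -> aS | a | eps ('' below). Two distinct derivations
# (S => a, and S => aS => a) mean two parse trees, i.e. ambiguity.
PRODUCTIONS = ['aS', 'a', '']

def count(sent, depth=4):
    """Leftmost derivations from sentential form `sent` to the string 'a'."""
    i = sent.find('S')               # leftmost nonterminal
    if i == -1:
        return 1 if sent == 'a' else 0
    if depth == 0:                   # expansions only add terminals,
        return 0                     # so a small depth bound suffices
    return sum(count(sent[:i] + rhs + sent[i + 1:], depth - 1)
               for rhs in PRODUCTIONS)

print(count('S'))  # 2
```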
Question 16 
(i) and (ii)
 
(ii) and (iii)  
(i) and (iii)  
None of the above

Question 16 Explanation:
Constructing the LR(0) item sets shows that all three items belong to different states (sets).
Question 17 
{S → FR} and {R → ε}
 
{S → FR} and { }  
{S → FR} and {R → *S}  
{F → id} and {R → ε} 
Question 17 Explanation:
Predictive parsing table for the mentioned grammar:
In the notation M[X, Y], X is a variable (rows) and Y is a terminal (columns).
The productions are placed in the parsing table by the following rules.
For every production P → α:
Rule 1: Add P → α to M[P, a] for each terminal a in FIRST(α).
Rule 2: If ε ∈ FIRST(α), then add P → α to M[P, b] for each terminal b in FOLLOW(P) (including $).
By these rules, S → FR goes in M[S, a] for each a in FIRST(FR) = FIRST(F) = {id}; so S → FR goes in M[S, id].
For the production R → ε, FIRST(ε) = {ε}, so the production goes in M[R, b] for each b in FOLLOW(R) = {$}; hence R → ε goes in M[R, $].
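Rules 1 and 2 can be applied mechanically for the grammar S → FR, R → *S | ε, F → id. In this sketch the FIRST/FOLLOW sets are hardcoded from the derivation above; the representation is my own.

```python
# LL(1) table entries for S -> FR, R -> *S | eps, F -> id.
EPS = 'eps'
first_of_rhs = {('S', 'FR'): {'id'},   # FIRST(FR) = FIRST(F) = {id}
                ('R', '*S'): {'*'},
                ('R', EPS): {EPS},
                ('F', 'id'): {'id'}}
follow = {'S': {'$'}, 'R': {'$'}, 'F': {'*', '$'}}

table = {}  # (nonterminal, terminal) -> list of RHSs placed in that cell
for (nt, rhs), fset in first_of_rhs.items():
    # Rule 1: columns in FIRST(rhs); Rule 2: if eps in FIRST(rhs),
    # also the columns in FOLLOW(nt).
    cols = (fset - {EPS}) | (follow[nt] if EPS in fset else set())
    for t in cols:
        table.setdefault((nt, t), []).append(rhs)

print(table[('S', 'id')])  # ['FR']   -> M[S, id] = {S -> FR}
print(table[('R', '$')])   # ['eps']  -> M[R, $]  = {R -> eps}
```

The two printed cells match option (A): M[S, id] = {S → FR} and M[R, $] = {R → ε}.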
Question 18 
The grammar A → AA | (A) | ε is not suitable for predictive parsing because the grammar is:
ambiguous
 
leftrecursive  
rightrecursive  
an operator-grammar

Question 18 Explanation:
The given grammar is ambiguous, since a string can be derived with more than one parse tree (e.g., A ⇒ AA ⇒ AAA by expanding either A). It also has left recursion through A → AA.
Question 19 
Equal precedence and left associativity; expression is evaluated to 7
 
Equal precedence and right associativity; expression is evaluated to 9
 
Precedence of '×' is higher than that of '+', and both operators are left associative; expression is evaluated to 7
 
Precedence of '+' is higher than that of '×', and both operators are left associative; expression is evaluated to 9

Question 19 Explanation:
First of all, the grammar is ambiguous; hence the operators get equal precedence and the same associativity. Since Yacc resolves a shift-reduce conflict in favor of shift, the parser keeps shifting until the last operator and only then starts reducing.
Hence, the expression evaluates to 9 and the operators behave as right associative.
Question 20 
n_{1} < n_{2} < n_{3}
 
n_{1} = n_{3} < n_{2}
 
n_{1} = n_{2} = n_{3}
 
n_{1} ≥ n_{3} ≥ n_{2} 
Question 20 Explanation:
→ SLR(1) and LALR(1) parsers are both built from the LR(0) item sets, so they have the same number of states: SLR(1) = LALR(1).
→ The LR(1) parser is built from the LR(1) item sets.
→ The number of LR(0) states is never greater than the number of LR(1) states, so SLR(1) = LALR(1) < LR(1), i.e.,
n_{1} = n_{3} < n_{2}
Question 21 
(i) only
 
(i) and (iii) only
 
(ii) and (iii) only
 
(iii) and (iv) only

Question 21 Explanation:
An operator grammar contains no ε-productions and no two adjacent nonterminals on the RHS of any production.
(i) has two adjacent nonterminals on the RHS.
(ii) has an ε-production (nullable).
Question 22 
Assume that the SLR parser for a grammar G has n_{1} states and the LALR parser for G has n_{2} states. The relationship between n_{1} and n_{2} is
n_{1} is necessarily less than n_{2}  
n_{1} is necessarily equal to n_{2}
 
n_{1} is necessarily greater than n_{2}
 
None of the above

Question 22 Explanation:
The number of states in SLR and LALR is equal, and the number of states in SLR and LALR is less than or equal to that in LR(1).
Question 23 
{S'→e S} and {S'→ε}
 
{S'→e S} and { }  
{S'→ε} and {S'→ε}  
{S'→e S, S'→ε} and {S'→ε}

Question 23 Explanation:
First(S) = {i,a}
First(S') = {e,ε}
First(E) = {b}
Follow(S') = {e,$}
Only when 'First' contains ε, we need to consider FOLLOW for getting the parse table entry.
Hence, option (D) is correct.
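The two table cells for S' can be filled mechanically from FIRST(S') = {e, ε} and FOLLOW(S') = {e, $}. This sketch hardcodes those sets from the explanation; the cells printed correspond to M[S', e] and M[S', $].

```python
# Parse-table entries for S' -> eS | eps (the dangling-else case),
# using FIRST(S') = {e, eps} and FOLLOW(S') = {e, $}.
EPS = 'eps'
follow_S_prime = {'e', '$'}

table = {}  # terminal -> list of S'-productions placed in that cell
for rhs, fset in [('eS', {'e'}), (EPS, {EPS})]:
    # place under FIRST(rhs); if eps in FIRST(rhs), under FOLLOW(S') too
    cols = (fset - {EPS}) | (follow_S_prime if EPS in fset else set())
    for t in cols:
        table.setdefault(t, []).append(rhs)

print(table['e'])  # ['eS', 'eps'] -- the multiply-defined (conflict) cell
print(table['$'])  # ['eps']
```

The entries {S' → eS, S' → ε} and {S' → ε} match option (D).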
Question 24 
LL(1)
 
SLR(1) but not LL(1)
 
LALR(1) but not SLR(1)
 
LR(1) but not LALR(1)

Question 24 Explanation:
The LL(1) parsing table for the grammar has no conflicts; hence, it is LL(1).
Question 25 
Which of the following derivations does a topdown parser use while parsing an input string? The input is assumed to be scanned in left to right order.
Leftmost derivation  
Leftmost derivation traced out in reverse  
Rightmost derivation  
Rightmost derivation traced out in reverse 
Question 25 Explanation:
Top-down parser — leftmost derivation
Bottom-up parser — reverse of rightmost derivation
Question 26 
Which of the following is the most powerful parsing method?
LL (1)  
Canonical LR  
SLR  
LALR 
Question 26 Explanation:
Canonical LR is most powerful.
LR > LALR > SLR
Question 27 
Which of the following statements is true?
SLR parser is more powerful than LALR  
LALR parser is more powerful than Canonical LR parser  
Canonical LR parser is more powerful than LALR parser  
The parsers SLR, Canonical LR, and LALR have the same power 
Question 27 Explanation:
LR > LALR > SLR
Canonical LR parser is more powerful than LALR parser.
Question 28 
23131  
11233  
11231  
33211 
Question 28 Explanation:
⇒ 23131
Note: an SR (shift-reduce) parser is a bottom-up parser.
Question 29 
Consider the SLR(1) and LALR (1) parsing tables for a context free grammar. Which of the following statements is/are true?
The go to part of both tables may be different.  
The shift entries are identical in both the tables.  
The reduce entries in the tables may be different.  
The error entries in the tables may be different.  
B, C and D. 
Question 29 Explanation:
The goto parts and the shift entries must be the same in both tables.
Reduce entry and error entry may be different due to conflicts.
Question 30 
Recursive descent parsing cannot be used for grammar with left recursion.  
The intermediate form for representing expressions that is best suited for code optimization is the postfix form.
 
A programming language not supporting either recursion or pointer type does not need the support of dynamic memory allocation.  
Although C does not support call by name parameter passing, the effect can be correctly simulated in C.
 
No feature of Pascal violates strong typing in Pascal.  
A and D 
Question 30 Explanation:
(A) True. A left recursive grammar, if used directly in recursive descent parsing, causes an infinite loop; so left recursion must be removed before the grammar is given to a recursive descent parser.
(B) False.
(C) False. The language can have dynamic data types, which require dynamically growing memory as the data size increases.
(D) True, and using macros we can simulate this.
(E) Out of syllabus now.
Question 31 
Which of the following statements is false?
An unambiguous grammar has same leftmost and rightmost derivation  
An LL(1) parser is a topdown parser  
LALR is more powerful than SLR  
An ambiguous grammar can never be LR(k) for any k 
Question 31 Explanation:
Option B: An LL parser is a top-down parser for a subset of context-free languages. It parses the input from left to right, performing a leftmost derivation of the sentence.
Option C: LALR is more powerful than SLR.
Option D: An ambiguous grammar can never be LR(k) for any k, because LR(k) algorithms are not designed to handle ambiguous grammars: the table construction yields conflicts no matter how large the constant k is.
Question 33 
Merging states with a common core may produce __________ conflicts and does not produce ___________ conflicts in an LALR parser.
Reduce-Reduce, Shift-Reduce 
Question 33 Explanation:
Merging states with a common core may produce reduce-reduce conflicts and does not produce shift-reduce conflicts in an LALR parser.
Question 34 
An operator precedence parser is a
Bottomup parser.  
Topdown parser.  
Back tracking parser.  
None of the above. 
Question 34 Explanation:
An operator precedence parser is a Bottomup parser.