1 of 87

Pushdown Automata

Robert M. Keller

April 2016

2 of 87

Wanted: A Machine Characterizing Context-Free Languages

  • Similar to the DFA/NFA characterization of regular languages, we’d like a machine characterization of context-free languages.
  • We know that finite-state machines are inadequate.
  • We need to add an extra memory component, one that permits unbounded storage.
  • We’d like something more structured than a Turing machine.

3 of 87

Applications

Parsing a language, such as a programming language

Creating a grammar by starting with a machine viewpoint is sometimes easier.

4 of 87

A Proper Context-Free Language

  • Example: {xcxR | x ∈ Σ*} where c ∉ Σ.
  • Supposing Σ = {a, b}, give a context-free grammar for this language.
  • Is this language regular?

5 of 87

Using a Single Stack: PDA = Pushdown Automaton

  • By adding a stack to a finite-state machine, we can recognize non-regular languages.
  • Example: {xcxR | x ∈ Σ*} where c ∉ Σ.
    • Begin reading symbols.
    • Until we encounter a ‘c’, push the symbols read onto a stack.
    • After ‘c’ is encountered, pop the symbols from the stack, comparing each with the next symbol read, if any.
    • Accept when the stack is empty.
    • Must also check for no c’s, more than one c, etc., and reject these cases.
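The steps above can be sketched directly as a one-stack recognizer (a minimal Python sketch of the procedure, not a formal PDA; the function name is ours):

```python
def accepts_xcxr(s, sigma=("a", "b")):
    """One-stack recognizer for {x c x^R | x in sigma*}:
    push until the single c, then pop and match."""
    stack = []
    it = iter(s)
    # Phase 1: push input symbols until the 'c' marker is seen.
    for ch in it:
        if ch == "c":
            break
        if ch not in sigma:
            return False
        stack.append(ch)
    else:
        return False          # no 'c' at all: reject
    # Phase 2: pop and compare with the remaining input
    # (a second 'c' can never match a pushed symbol, so it is rejected too).
    for ch in it:
        if not stack or stack.pop() != ch:
            return False
    return not stack          # accept iff the stack emptied exactly
```

For example, accepts_xcxr("abbcbba") succeeds, while abbcbb (too short) and abba (no c) fail.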

6 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q0; unread input: abbcbba; stack: (empty)]

7 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q1; unread input: abbcbba; stack (top first): $]

$ indicates otherwise-empty stack, so the machine can test for that when necessary.

8 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q1; unread input: bbcbba; stack (top first): a $]

The input symbols are pushed as they are read.

9 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q1; unread input: bcbba; stack (top first): b a $]

The input symbols are pushed as they are read.

10 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q1; unread input: cbba; stack (top first): b b a $]

… until a c is encountered in the input, causing a change of control state.

11 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q2; unread input: bba; stack (top first): b b a $]

The control state is changed. The c is not pushed.

12 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q2; unread input: ba; stack (top first): b a $]

In the new control state, input symbols are matched against the stack symbols, which are discarded. A mismatch stops the process, as no successor is specified.

13 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q2; unread input: a; stack (top first): a $]

Matching continues.

14 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q2; unread input: (empty); stack (top first): $]

The input is empty. However, there is no way to react to input emptiness as such.

The machine can go to an accepting state whenever the current control state is q2 and $ is on top of the stack (according to the way we have defined δ in this case).

15 of 87

Movie of the PDA accepting a string in {xcxR | x ∈ {a, b}*}

[Control state: q3; stack (top first): $]

q3 is the accepting state, so the original input abbcbba is accepted.

Had the input been abbcbb only, the input would not be accepted, because $ would not have been on top of the stack when the input became empty, and q3 would not have been entered.

16 of 87

Pushdown Automata (PDAs)

  • The type of behavior described on the preceding slides is that of a deterministic PDA (DPDA).
  • In general, by convention, PDA’s are non-deterministic.
  • The situation for PDA’s is different from DFA’s. There is no subset construction: a PDA is not finite-state (its full state includes the stack), so the set of all subsets of states is generally uncountably infinite.

17 of 87

PDAs in practice

  • PDA’s are a mathematical model designed to capture context-free language recognition.

  • They also model, in a skeletal way, how actual parsers work.

  • In practice, one wants restrictions on the language so that parsing can be done deterministically (without backtracking) using a little bit of “look-ahead”.

18 of 87

PDA Historical

The PDA was invented by �Anthony Oettinger at Harvard, �my “academic grandfather”.

Robert Keller

http://genealogy.math.ndsu.nodak.edu/id.php?id=13305

19 of 87

PDA Defined: (Q, Σ, Γ, δ, q0, F)

Sipser’s Version:

  • Q is a finite set of control states
  • Σ is the input alphabet
  • Γ is the stack alphabet
  • q0 is the initial control state
  • F ⊆ Q is the set of accepting control states
  • δ is the state transition relation: δ: Q x Σε x Γε → subsets of Q x Γε. (The non-determinism is in choosing a member of δ(q, σ, γ).)

Notation: Σε = Σ ∪ {ε}, Γε = Γ ∪ {ε}.

20 of 87

PDA Diagram

The state is generally of the form (q, x, γ) ∈ Q x Σ* x Γ*,

i.e. (control-state, remaining input, stack-contents).

Sometimes this state is called the configuration or instantaneous description (ID), to distinguish it from the control state. I prefer to call it the state, which is what it is.

The initial state is (q0, x, ε). We will describe δ presently.

[Diagram: control state q ∈ Q; a one-way read-only input tape holding a string in Σ*; a two-way stack holding a string in Γ*.]

21 of 87

Generalized Version: 5-Tuple Notation

In lieu of δ: Q x Σε x Γε → subsets of Q x Γε,

it is more convenient to use a set of 5-tuples:

q, x, y → q′, z

where

  • q, q′ ∈ Q
  • x ∈ Σε is what is read from the input (either a letter or nothing)
  • y ∈ Γ* is what is on the top of the stack and popped
  • z ∈ Γ* is what is pushed

JFLAP supports this model.

22 of 87

δ can also be described by a graph

An edge from control state q (before) to control state q′ (after) is labeled

σ, γ → γ′

where σ is the input symbol read (or ε), γ is the stack symbol popped (or ε), and γ′ is the stack symbol pushed (or ε).

This is in lieu of writing (q′, γ′) ∈ δ(q, σ, γ).

23 of 87

Sipser’s vs. Other Models

I think that Sipser’s model was conceived with a certain proof in mind. It is definitely not the easiest to program.

Knowing that these models can be inter-converted means that we can have ease of programming as well as elegance of proof.

24 of 87

PDA Acceptance

  • A PDA in general is a non-deterministic recognizer.
  • A PDA accepts or does not accept the input seen so far according to whether it is in an accepting control state.

  • Due to non-determinism, the machine could be in various control states after seeing a given input.
  • It is only required that the machine be in some accepting control state for acceptance.

  • Acceptance is not final, in that providing more input can take the PDA to a non-accepting state.

25 of 87

PDA Non-Acceptance

  • As a PDA is non-deterministic, it is important that it not be able to enter an accepting state for an input string that is not in the language.
  • If there is no way for the PDA to be in an accepting state having read the input, the input is implicitly rejected.

  • Rejection can be ensured by making a transition to a state from which there is no escape to an accepting state.
  • However, it is also possible for a PDA to neither accept nor halt, but loop forever.

26 of 87

Empty-Stack Acceptance

  • This is an alternate common model.

  • Empty-stack acceptance means that the PDA accepts when its stack is empty after the input has been read.
  • The control state does not matter in this case.

  • The two versions of acceptance can be shown inter-convertible.

27 of 87

Initial Stack Symbol

  • Some versions specify an initial stack symbol (Z in the case of JFLAP) as part of the model. This symbol is always on the stack at the start of a run.
  • Sipser does not, although many of his examples begin by placing a special symbol, such as $, on the stack before anything else happens.

  • This symbol is commonly used to test whether the stack would otherwise be empty and react to this condition.

28 of 87

Other PDA Alternatives

  • Most versions allow a string, rather than just a letter, to be written to the stack in one step.

  • The same effect can be achieved in Sipser’s model, which only allows one letter at a time to be added, by introducing additional control states that add the string one symbol at a time.

29 of 87

Worksheet

Work out the 5-tuples for a PDA recognizing the language {xcxR | x ∈ {a, b}*}.

30 of 87

PDA accepting {xxR | x ∈ {a, b}*}

  • While it appears similar to {xcxR | x ∈ {a, b}*}, the set {xxR | x ∈ {a, b}*} is, in some sense, more difficult to recognize.
  • The reason is that for a given input, we don’t have an indicator of when x ends and xR starts.
  • Here a PDA can use its non-determinism: It can guess at the point it thinks x has ended.
    • If it guessed right, it will get to an accepting state.
    • If it guessed wrong, nothing is lost.
    • What is important is that guessing never leads to an accepting state when the input is not in the language.

31 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q1; unread input: abbbba; stack (top first): $]

$ indicates otherwise-“empty” stack, so the machine can test for that when necessary.

32 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q1; unread input: bbbba; stack (top first): a $]

Symbols from the input are pushed as read.

33 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q1; unread input: bbba; stack (top first): b a $]

Symbols from the input are pushed as read.

34 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q1; unread input: bba; stack (top first): b b a $]

At any point, such as this one, the machine can guess that it’s at the middle of the string and switch modes (without necessarily using any input).

If it makes the “wrong” guess, that particular sequence will end up not accepting. There need only be at least one correct guess.

35 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q2; unread input: bba; stack (top first): b b a $]

In control state q2, the machine begins matching the input against the stack.

36 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q2; unread input: ba; stack (top first): b a $]

Matching continues.

37 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q2; unread input: a; stack (top first): a $]

Matching continues. Any mismatch would result in no further transitions.

38 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q2; unread input: (empty); stack (top first): $]

In this state, with $ on top of the stack, the machine can decide to accept what it has seen so far.

That doesn’t mean that it accepts an extension of that input, however.

39 of 87

Movie of the PDA accepting a string in

{xxR | x ∈ {a, b}*}

[Control state: q3; stack (top first): $]

State q3 is accepting. The original input abbbba is accepted.

40 of 87

Worksheet

Work out the 5-tuples for the

{xxR | x ∈ {a, b}*} PDA.

41 of 87

PDA described by a graph

[Graph: control states q0, q1, q2, q3, with q3 accepting, and edges]

  q0 → q1:  ε, ε → $
  q1 → q1:  a, ε → a
  q1 → q1:  b, ε → b
  q1 → q2:  c, ε → ε
  q2 → q2:  a, a → ε
  q2 → q2:  b, b → ε
  q2 → q3:  ε, $ → $

42 of 87

δ can also be described by “transition rules”

q1, a, ε → q1, a

q1, b, ε → q1, b

q1, ε, ε → q2, ε

q2, a, a → q2, ε

q2, b, b → q2, ε

q2, ε, $ → q3, $

Each rule is essentially a 5-tuple:

  current-state, input-read, stack-popped → next-state, stack-pushed
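Rules in this 5-tuple form can be executed by a generic nondeterministic-PDA simulator, sketched below in Python. The breadth-first search over configurations, and the explicit q0 rule that plants the $ marker before q1 starts, are our additions for illustration:

```python
from collections import deque

# 5-tuple rules (state, input-read, stack-popped, next-state, stack-pushed),
# with "" standing for ε. These are the rules above, plus the q0 rule
# that plants the $ marker.
RULES = [
    ("q0", "",  "",  "q1", "$"),
    ("q1", "a", "",  "q1", "a"),
    ("q1", "b", "",  "q1", "b"),
    ("q1", "",  "",  "q2", ""),   # guess the middle
    ("q2", "a", "a", "q2", ""),
    ("q2", "b", "b", "q2", ""),
    ("q2", "",  "$", "q3", "$"),
]

def pda_accepts(w, rules=RULES, start="q0", accepting=frozenset({"q3"})):
    """Breadth-first search over configurations (state, input-index, stack):
    accept iff some branch consumes all input in an accepting state."""
    seen = set()
    frontier = deque([(start, 0, ())])   # stack as a tuple, top at the end
    while frontier:
        q, i, stack = frontier.popleft()
        if (q, i, stack) in seen:
            continue
        seen.add((q, i, stack))
        if i == len(w) and q in accepting:
            return True
        for (p, read, pop, p2, push) in rules:
            if p != q:
                continue
            if read and (i >= len(w) or w[i] != read):
                continue                 # required input symbol not available
            if pop and (not stack or stack[-1] != pop):
                continue                 # required stack symbol not on top
            new_stack = (stack[:-1] if pop else stack) + tuple(push)
            frontier.append((p2, i + len(read), new_stack))
    return False
```

With these rules, pda_accepts("abbbba") succeeds (the movie's string), and the empty string is also accepted, since ε = εεR.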

43 of 87

JFLAP Version of PDA

  • Graphical specification
  • λ rather than ε is used for the empty string.

  • Initial stack symbol Z is always on the stack at the start.
  • A string (rather than just a symbol) can be pushed: the top of the stack is on the left.
  • A string can also be popped as an option.
  • Acceptance is by accepting (“final”) state, OR by empty stack (choose one).
  • JFLAP uses “NPDA” for what is usually called “PDA”. Non-determinism is usually implicit. JFLAP’s “PDA” is usually “DPDA”.

44 of 87

http://www.jflap.org/tutorial/pda/construct/

JFLAP defines a pushdown automaton M as the septuple M = (Q, Σ, Γ, δ, qs, Z, F) where

  • Q is a finite set of states {qi | i is a nonnegative integer}
  • Σ is the finite input alphabet
  • Γ is the finite stack alphabet
  • δ is the transition function, � δ : Q × Σ* × Γ* → finite subsets of Q × Γ*
  • qs (is member of Q) is the initial state
  • Z is the start stack symbol (must be a capital Z)
  • F (a subset of Q) is the set of final states

45 of 87

Example: Well-balanced Paren Strings

[JFLAP machine diagram; transition labels read Input, Pop; Push]

(Z is atop the stack at the start; Z is popped on acceptance.)

Intentionally not accepted: ε, ()(), etc.

46 of 87

Running Examples

47 of 87

CFL Characterization Theorem

  • A language is context-free iff there is a pushdown acceptor that recognizes it.

  • (In general, the PDA will be a non-deterministic machine.)

48 of 87

The ⇒ and ⇒* notation for PDA states

  • Let q, q’ ∈ Q; x, x’ ∈ Σ*; γ, γ’ ∈ Γ*.

      (q, x, γ) ⇒ (q’, x’, γ’)

    means that there is a 1-step transition of the PDA from state (q, x, γ) to (q’, x’, γ’).
  • ⇒* is the reflexive transitive closure of ⇒, meaning:
    • (q, x, γ) ⇒* (q, x, γ)
    • If (q, x, γ) ⇒* (q’, x’, γ’) and (q’, x’, γ’) ⇒ (q”, x”, γ”), then (q, x, γ) ⇒* (q”, x”, γ”).

49 of 87

A key property (“stack property”) of PDA’s:

  • If (q, x, γ) ⇒* (q’, x’, γ’), where x, x’ ∈ Σ* and γ, γ’ ∈ Γ*, then for any y ∈ Σ* and ζ ∈ Γ*, also (q, xy, γζ) ⇒* (q’, x’y, γ’ζ).

  • In other words, steps that can take place with a given input and stack contents can also take place with additional input and lower-down stack contents, without depending upon or using the additional input or stack symbols.

50 of 87

Leftmost/Rightmost Derivation Nomenclature

  • A derivation in a context-free grammar is called leftmost if it is always the leftmost auxiliary that is replaced.

  • A derivation in a context-free grammar is called rightmost if it is always the rightmost auxiliary that is replaced.

  • Observation: For each derivation tree there is exactly one leftmost and one rightmost derivation.

51 of 87

Parallel Between CFG and PDA

Each derivation S ⇒* x in a CFG

corresponds to a series of moves

(q, x, S) ⇒* (q′, ε, ε)

of a PDA accepting by empty stack.

52 of 87

Left/Rightmost Derivations

A → V | V + A

V → a | b | c

leftmost: A ⇒ V+A ⇒ c+A ⇒ c+V+A ⇒ c+a+A ⇒ c+a+V ⇒ c+a+b

rightmost: A ⇒ V+A ⇒ V+V+A ⇒ V+V+V ⇒ V+V+b ⇒ V+a+b ⇒ c+a+b

[Derivation tree for c+a+b, with A at the root.]

53 of 87

CFL PDA Lemma

  • Every context-free language is accepted by some PDA.

  • This discussion is simplified by the model that allows a transition to push a sequence of symbols in a single step.

(No generality is lost.)

54 of 87

Proof of the CFL → PDA Lemma using �Top-Down, LL, or Produce-Match technique

  • Let G be a context-free grammar for the CFL.
  • Construct from G a PDA M that simulates an arbitrary leftmost derivation in G. Initially M pushes onto the stack an empty-stack marker, say $, followed by the start symbol of G.

  • In a leftmost derivation, the leftmost auxiliary in a string is replaced. The stack holds a suffix of the current working string, with the top of stack corresponding to the leftmost symbol in the string.

55 of 87

Produce-Match technique continued

  • Match Step: If the top of stack is a terminal, it is matched against the corresponding symbol in the input and removed.

  • Produce Step: If the top of stack is a non-terminal A, then a production with A as LHS is chosen and the RHS is pushed onto the stack.

  • When the stack is otherwise empty (e.g. $ or Z is on top of the stack), then, and only then, can the machine enter an accepting state.
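A minimal Python sketch of the Produce-Match procedure, using the paren grammar from the example that follows. The depth-first search and the fuel bound (our additions) explore the nondeterministic Produce choices and cut off unproductive branches:

```python
def ll_accepts(w, productions, start="S"):
    """Produce-Match sketch: the stack holds the unmatched suffix of the
    working string, top (leftmost symbol) first. `productions` maps a
    nonterminal to its list of right-hand sides; `fuel` bounds the number
    of steps so a pathological grammar cannot loop forever."""
    def go(i, stack, fuel):
        if fuel == 0:
            return False
        if not stack:                       # stack empty:
            return i == len(w)              # accept iff input consumed
        top, rest = stack[0], stack[1:]
        if top in productions:              # Produce: push a chosen RHS
            return any(go(i, rhs + rest, fuel - 1)
                       for rhs in productions[top])
        # Match: terminal on top must equal the next input symbol
        return i < len(w) and w[i] == top and go(i + 1, rest, fuel - 1)
    return go(0, start, 4 * len(w) + 8)

# Paren grammar used in the example that follows:
PARENS = {"S": ["(T", "(ST", "(TS", "(STS"], "T": [")"]}
```

With this grammar, ll_accepts("(()())", PARENS) and ll_accepts("()()", PARENS) succeed; ε is (intentionally) not accepted.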

56 of 87

Example of CFG to PDA Transition Rules

Production      Transition (top of stack at left)

                q0, ε, ε → q1, S$        initial
S → (T          q1, ε, S → q1, (T        produce
S → (ST         q1, ε, S → q1, (ST       produce
S → (TS         q1, ε, S → q1, (TS       produce
S → (STS        q1, ε, S → q1, (STS      produce
T → )           q1, ε, T → q1, )         produce

Extra rules:    q1, (, ( → q1, ε         match
                q1, ), ) → q1, ε         match
                q1, ε, $ → q2, ε         accept

q0 is initial, q2 is accepting

57 of 87

Derivation vs. Transition Sequence

Derivation      State, Input, Stack      Transition

                q0, ()(), ε              init
S ⇒             q1, ()(), S$             produce S → (TS
(TS ⇒           q1, ()(), (TS$           match (
                q1, )(), TS$             produce T → )
()S ⇒           q1, )(), )S$             match )
                q1, (), S$               produce S → (T
()(T ⇒          q1, (), (T$              match (
                q1, ), T$                produce T → )
()()            q1, ), )$                match )
                q1, ε, $                 accept
                q2, ε, $

Derivation: S ⇒ (TS ⇒ ()S ⇒ ()(T ⇒ ()()

58 of 87

Another Way of Showing

  • Productions: S → (T,  S → (ST,  S → (TS,  S → (STS,  T → )
  • Input | Stack (match steps are implicit):

      (()()) | S$
      (()()) | (ST$
      (()()) | ((TST$
      (()()) | (()ST$
      (()()) | (()(TT$
      (()()) | (()()T$
      (()()) | (()())$

  • On the original slide, red marked the matched portion of the input (no longer remaining in the input and not actually on the stack), and an underscore marked the LHS symbol being rewritten in the next step.

59 of 87

Stack vs. Leftmost Derivation

[Figure: a sequence of derivation trees for the paren grammar, one per derivation step, showing that the stack contents at each step correspond to the unmatched frontier of the growing leftmost-derivation tree.]

60 of 87

Produce-Match Version PDA in JFLAP

61 of 87

JFLAP Tests of Top-Down Version

62 of 87

A JFLAP-generated Tree for ()()

using Produce Match (LL)

63 of 87

A 2nd Proof of the CFG ⇒ PDA Lemma: �Bottom-Up, LR, or Shift-Reduce technique

  • Assume L is a context-free language. Then L has a context-free grammar G.
  • Create a PDA that accepts by accepting state.
  • For each terminal symbol σ, create a transition:

      q0, σ, ε → q0, σ

    These have the effect of shifting the input string onto the stack (and reversing it in the process).
  • For each production A → x1x2x3…xn create a transition:

      q0, ε, xnxn-1xn-2…x1 → q0, A      Note reversal!

  • Add “accept” transitions:

      q0, ε, S → q1, ε
      q1, ε, $ → qa, $

  • This PDA simulates a rightmost derivation of an accepted input string (in reverse).
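The shift-reduce construction can likewise be sketched as a recognizer (a hypothetical Python illustration, using the example grammar on the next slide). The stack is kept as a string with its top at the right, so the slide's explicit reversal becomes a simple suffix test:

```python
def sr_accepts(w, productions, start="S"):
    """Shift-Reduce sketch: shift input symbols onto a stack kept as a
    string (top at the right), or reduce when the top of the stack spells
    out a production's RHS. Nondeterminism is explored by depth-first
    search; `seen` cuts off repeated configurations."""
    prods = [(lhs, rhs) for lhs, rhss in productions.items() for rhs in rhss]
    seen = set()
    def go(i, stack):
        if (i, stack) in seen:
            return False
        seen.add((i, stack))
        if i == len(w) and stack == start:
            return True                  # all input shifted, reduced to start
        for lhs, rhs in prods:           # Reduce: replace a RHS by its LHS
            if rhs and stack.endswith(rhs) and go(i, stack[:-len(rhs)] + lhs):
                return True
        # Shift the next input symbol onto the stack
        return i < len(w) and go(i + 1, stack + w[i])
    return go(0, "")

# Grammar from the example on the next slide:
GRAMMAR = {"S": ["(T", "()", "SS"], "T": ["S)"]}
```

For instance, sr_accepts("(()())", GRAMMAR) succeeds, tracing (in reverse) the rightmost derivation shown on the next slide.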

64 of 87

Example of Bottom-Up

Production      Transition

S → (T          q0, ε, T( → q0, S
T → S)          q0, ε, )S → q0, T
S → ()          q0, ε, )( → q0, S
S → SS          q0, ε, SS → q0, S

shift transitions:    q0, (, ε → q0, (
                      q0, ), ε → q0, )

accept transitions:   q0, ε, S → q1, ε
                      q1, ε, $ → qa, $

Input | State | Stack       Rule applied

(()()) | q0 | $             q0, (, ε → q0, (
(()()) | q0 | ($            q0, (, ε → q0, (
(()()) | q0 | (($           q0, ), ε → q0, )
(()()) | q0 | )(($          q0, ε, )( → q0, S
(()()) | q0 | S($           q0, (, ε → q0, (
(()()) | q0 | (S($          q0, ), ε → q0, )
(()()) | q0 | )(S($         q0, ε, )( → q0, S
(()()) | q0 | SS($          q0, ε, SS → q0, S
(()()) | q0 | S($           q0, ), ε → q0, )
(()()) | q0 | )S($          q0, ε, )S → q0, T
(()()) | q0 | T($           q0, ε, T( → q0, S
(()()) | q0 | S$            q0, ε, S → q1, ε
(()()) | q1 | $             q1, ε, $ → qa, $
(()()) | qa | $             accept

Derivation: S ⇒ (T ⇒ (S) ⇒ (SS) ⇒ (S()) ⇒ (()())

65 of 87

Shift-Reduce Version in JFLAP

66 of 87

JFLAP Shift-Reduce Acceptance

67 of 87

How these methods differ from actual parsers

  • Both methods are used in practical parsing.
  • Non-determinism is eliminated by using look-ahead in the input to determine which rule to use.
  • A produce-match parser with 1-letter look-ahead is called LL(1).
  • A shift-reduce parser with 1-letter look-ahead is called LR(1).

68 of 87

LL(k) vs. LR(k)

  • Both methods require the grammar to be in a certain form for determinism.

  • LL(k) is easier to understand.

  • LR(k) is described in Sipser, 3rd ed. pp 130-154.

  • LR(1) grammars generate exactly the languages accepted by deterministic PDA’s.

69 of 87

PDA → CFG

  • For every PDA M, there is a CFG G that generates the language accepted by M.
  • Read Sipser’s elegant proof: Lemma 2.27.

70 of 87

PDA → CFG Lemma

Additional constraints are imposed on PDA M, without loss of generality:

  • On each transition, M either
      • pushes a single symbol, or
      • pops a single symbol,

but not both.

[If a transition does both, add an intermediate state and have it pop, then push. If a transition does neither, have it push an arbitrary symbol, then immediately pop it.]

  • M has only one accept state qaccept and empties its stack before entering it.

71 of 87

Construction of G from M

We want G to generate a terminal string x

iff

M goes from its initial state to its accepting state with x as input, with the stack empty before and after.

Construction (which is slightly more general than the main goal):

For each pair of states p, q in M, define an auxiliary Apq in G.

We want:

∀x ∈ Σ* [(p, x, ε) ⇒* (q, ε, ε) in M iff Apq ⇒* x in G].

How to achieve this?

72 of 87

Construction continued

  • Suppose (p, x, ε) ⇒+ (q, ε, ε) (a non-empty transition sequence).
  • Since the stack starts empty, the first transition must be a push.
  • Similarly, the stack ends empty, so the last transition must be a pop.
  • There are two possibilities:
    1. The stack becomes empty one or more times between the first and last transitions, or
    2. The stack never becomes empty between the first and last transitions.

73 of 87

Two Possibilities

[Figure: two plots of stack size vs. amount of input x read, going from state p to state q.

A. The stack becomes empty at some intermediate state r, splitting the input x = yz. (It may become empty again and again.)

B. The stack is never empty in between the first and last transitions.]

74 of 87

Simulating Transition Sequences with G

  • In case A, suppose that r is a state in which the stack becomes empty in between. This is achievable by a rule in G of the form:

      Apq → Apr Arq

because x can be written yz, where �(p, yz, ε) ⇒* (r, z, ε) and (r, z, ε) ⇒* (q, ε, ε)

[which combine to (p, yz, ε) ⇒* (q, ε, ε)].

75 of 87

Simulating Transition Sequences with G

  • In case B, let a be the first input symbol read and b the last. Let u be the first stack symbol pushed, which is also the last popped. Then in M we must have, for some r, s, u:

      (r, u) ∈ δ(p, a, ε)
      (q, ε) ∈ δ(s, b, u)

    so include in G a production

      Apq → a Ars b

Note that x can be written ayb, where (p, ayb, ε) ⇒ (r, yb, u), (r, yb, u) ⇒* (s, b, u), and (s, b, u) ⇒ (q, ε, ε)

[which combine to (p, ayb, ε) ⇒* (q, ε, ε)].

76 of 87

Summary Construction of G

  • For each 4-tuple of states p, q, r, s, pair of input symbols a, b, and stack symbol u, there are rules of G:
    • App → ε
    • Apq → Apr Arq
    • Apq → a Ars b, provided (r, u) ∈ δ(p, a, ε) and (q, ε) ∈ δ(s, b, u) for some u

  • The start symbol of G is Aq0,qaccept.
  • Note: Not every production will necessarily get used. The productions are enabling, rather than coercing.
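The production families can be generated mechanically. The following Python sketch assumes a PDA in push-or-pop-but-not-both normal form, encoded as 5-tuples (state, input-read, stack-popped, next-state, stack-pushed) with "" for ε; the small {a^n b^n} machine used to exercise it is our assumption for illustration:

```python
def grammar_from_pda(states, delta, q0, qacc):
    """Sipser-style construction sketch. `delta` holds 5-tuples
    (p, read, pop, r, push) where exactly one of pop/push is a
    stack symbol and the other is "". Nonterminal A_pq is encoded
    as the state pair (p, q)."""
    prods = set()
    for p in states:
        prods.add(((p, p), ()))                        # App -> ε
        for q in states:
            for r in states:
                prods.add(((p, q), ((p, r), (r, q))))  # Apq -> Apr Arq
    # Apq -> a Ars b whenever p pushes u reading a (going to r)
    # and s pops that same u reading b (going to q).
    for (p, a, pop1, r, u) in delta:
        if pop1 or not u:
            continue                                   # keep only pure pushes
        for (s, b, pop2, q, push2) in delta:
            if push2 or pop2 != u:
                continue                               # pure pops of the same u
            prods.add(((p, q), (a, (r, s), b)))
    return (q0, qacc), prods

# Hypothetical normal-form PDA for {a^n b^n | n >= 1}:
DELTA = {
    ("q0", "",  "",  "q1", "$"),   # plant the $ marker
    ("q1", "a", "",  "q1", "A"),   # push an A for each a
    ("q1", "b", "A", "q2", ""),    # the first b pops an A
    ("q2", "b", "A", "q2", ""),    # later b's pop more A's
    ("q2", "",  "$", "q3", ""),    # pop $; q3 is the accept state
}
START, PRODS = grammar_from_pda({"q0", "q1", "q2", "q3"}, DELTA, "q0", "q3")
```

Among the generated rules are A_{q1,q2} → a A_{q1,q1} b and the start rule A_{q0,q3} → ε A_{q1,q2} ε (a "" terminal reads as ε), which together generate exactly the a^n b^n strings; the many unused splitting rules are, as noted above, enabling rather than coercing.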

77 of 87

Proof that L(G) = L(M)

  • Having constructed G from M, it must be shown that G generates the same language that M accepts.

  • This is done in two parts:
    • L(G) ⊆ L(M): More generally, Apq ⇒* x in G implies (p, x, ε) ⇒* (q, ε, ε) in M (Claim 2.30).
    • L(M) ⊆ L(G): More generally, (p, x, ε) ⇒* (q, ε, ε) in M implies Apq ⇒* x in G (Claim 2.31).

78 of 87

Claim 2.30 (Sipser): L(G) ⊆ L(M)

  • For every p, q: If Apq ⇒* x in G, then (p, x, ε) ⇒* (q, ε, ε) in M.
  • Proof by induction on the number n of steps of the derivation of x in G.
    • Basis: n = 1. The only rules in G that yield a terminal string in one step are of the form App → ε. But in M, (p, ε, ε) ⇒* (p, ε, ε) follows from the definition of ⇒*.
    • Induction Step: n = k+1 > 1. If Apq ⇒* x in k+1 steps, then we consider the two possible cases A vs. B for the first step in the derivation.

79 of 87

Claim 2.30 Induction, case A.

  • In case A, the first step is Apq → Apr Arq and x = yz for some y, z such that Apr ⇒* y and Arq ⇒* z.
  • By the induction hypothesis (and the stack property), (p, yz, ε) ⇒* (r, z, ε) and (r, z, ε) ⇒* (q, ε, ε) in M, so by transitivity of ⇒*, (p, yz, ε) ⇒* (q, ε, ε).

80 of 87

Claim 2.30 Induction, case B.

  • In case B, the first step is Apq → a Ars b. Also x = ayb for some y such that Ars ⇒* y.

  • Apq → a Ars b is a rule of G only because (r, u) ∈ δ(p, a, ε) and (q, ε) ∈ δ(s, b, u) in M.
  • By the induction hypothesis, (r, y, ε) ⇒* (s, ε, ε) in M, so using the transitions above, we have (p, ayb, ε) ⇒ (r, yb, u) ⇒* [by IH and the stack property] (s, b, u) ⇒ (q, ε, ε).

81 of 87

Claim 2.31 (Sipser) L(M) ⊆ L(G)

  • If (p, x, ε) ⇒* (q, ε, ε) in M, then Apq ⇒* x in G.

  • Proof is by induction on the number of steps in the transition sequence.

  • The induction step decomposes into the two cases A and B per the construction.

  • Please see text for details.

82 of 87

Sipser Proof vs. JFLAP

  • Generally, the proof will generate a very large set of productions.

  • JFLAP has automation for converting a PDA to a grammar.

  • JFLAP’s PDA requirements are slightly different: every transition pops 1 symbol and pushes 0 or 2 symbols.

83 of 87

PDA to CFG as a way to get a grammar

  • Example: L = {x | #1(x) = #0(x)}
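Before looking at the JFLAP machine, here is a minimal one-stack recognizer sketch for L (our illustration, not the machine on the next slide): the stack holds whichever symbol is currently in surplus.

```python
def equal_01(w):
    """One-stack recognizer for {x | #1(x) = #0(x)}: the stack holds
    whichever symbol is currently in surplus. A new symbol either
    cancels the top of the stack or extends the surplus."""
    stack = []
    for ch in w:
        if ch not in "01":
            return False
        if stack and stack[-1] != ch:
            stack.pop()            # opposite symbol on top: cancel it
        else:
            stack.append(ch)       # same surplus (or empty stack): push
    return not stack               # accept iff the counts balanced
```

For example, 0101 and 1100 are accepted; 011 is not.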

84 of 87

PDA for L

85 of 87

JFLAP-Generated Grammar for L

86 of 87

Conversion of Grammar back to PDA Using LL (Produce-Match)

87 of 87

Conversion of Grammar back to PDA Using LR (Shift-Reduce)