LL grammar
In formal language theory, an LL grammar is a context-free grammar that can be parsed by an LL parser, which reads the input from Left to right and constructs a Leftmost derivation of the sentence (hence LL, in contrast to an LR parser, which constructs a rightmost derivation in reverse). A language that has an LL grammar is known as an LL language. LL grammars and LL languages form subsets of the deterministic context-free grammars (DCFGs) and deterministic context-free languages (DCFLs), respectively. One says that a given grammar or language "is an LL grammar/language" or simply "is LL" to indicate that it belongs to this class.
LL parsers are table-based parsers, similar to LR parsers. LL grammars can alternatively be characterized as precisely those that can be parsed by a predictive parser – a recursive descent parser without backtracking – and these can be readily written by hand. This article is about the formal properties of LL grammars; for parsing, see LL parser or recursive descent parser.
Formal definition
Finite case
Given a natural number [math]\displaystyle{ k \geq 0 }[/math], a context-free grammar [math]\displaystyle{ G = (V, \Sigma, R, S) }[/math] is an LL(k) grammar if
- for each terminal symbol string [math]\displaystyle{ w \in \Sigma^* }[/math] of length up to [math]\displaystyle{ k }[/math] symbols,
- for each nonterminal symbol [math]\displaystyle{ A \in V }[/math], and
- for each terminal symbol string [math]\displaystyle{ w_1 \in \Sigma^* }[/math],
there is at most one production rule [math]\displaystyle{ r \in R }[/math] such that for some terminal symbol strings [math]\displaystyle{ w_2, w_3 \in \Sigma^* }[/math],
- the string [math]\displaystyle{ w_1 A w_3 }[/math] can be derived from the start symbol [math]\displaystyle{ S }[/math],
- [math]\displaystyle{ w_2 }[/math] can be derived from [math]\displaystyle{ A }[/math] after first applying rule [math]\displaystyle{ r }[/math], and
- the first [math]\displaystyle{ k }[/math] symbols of [math]\displaystyle{ w }[/math] and of [math]\displaystyle{ w_2 w_3 }[/math] agree.[2]
An alternative, but equivalent, formal definition is the following: [math]\displaystyle{ G = (V, \Sigma, R, S) }[/math] is an LL(k) grammar if, for arbitrary derivations
- [math]\displaystyle{ \begin{array}{ccccccc} S & \Rightarrow^L & w_1 A \chi & \Rightarrow & w_1 \nu \chi & \Rightarrow^* & w_1 w_2 w_3 \\ S & \Rightarrow^L & w_1 A \chi & \Rightarrow & w_1 \omega \chi & \Rightarrow^* & w_1 w'_2 w'_3, \\ \end{array} }[/math]
when the first [math]\displaystyle{ k }[/math] symbols of [math]\displaystyle{ w_2 w_3 }[/math] agree with those of [math]\displaystyle{ w'_2 w'_3 }[/math], then [math]\displaystyle{ \nu = \omega }[/math].[3][4]
Informally, when a parser has derived [math]\displaystyle{ w_1 A w_3 }[/math], with [math]\displaystyle{ A }[/math] its leftmost nonterminal and [math]\displaystyle{ w_1 }[/math] already consumed from the input, then by looking at that [math]\displaystyle{ w_1 }[/math] and peeking at the next [math]\displaystyle{ k }[/math] symbols [math]\displaystyle{ w }[/math] of the current input, the parser can identify with certainty the production rule [math]\displaystyle{ r }[/math] for [math]\displaystyle{ A }[/math].
When rule identification is possible even without considering the past input [math]\displaystyle{ w_1 }[/math], then the grammar is called a strong LL(k) grammar.[5] In the formal definition of a strong LL(k) grammar, the universal quantifier for [math]\displaystyle{ w_1 }[/math] is omitted, and [math]\displaystyle{ w_1 }[/math] is added to the "for some" quantifier for [math]\displaystyle{ w_2, w_3 }[/math]. For every LL(k) grammar, a structurally equivalent strong LL(k) grammar can be constructed.[6]
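For [math]\displaystyle{ k = 1 }[/math], this condition is commonly checked with FIRST and FOLLOW sets: a grammar is (strong) LL(1) exactly when the one-symbol predict sets of the different alternatives of each nonterminal are pairwise disjoint. The following sketch illustrates this test in Python for the toy grammar [math]\displaystyle{ S \rightarrow aSb \mid c }[/math]; the grammar encoding, the toy grammar itself, and all helper names are illustrative assumptions, not part of the cited sources or any standard library.

```python
# Sketch of the classical LL(1) test: the predict sets of the alternatives
# of each nonterminal must be pairwise disjoint.  Everything here
# (encoding, names, toy grammar) is illustrative.

EPS, END = "", "$"             # empty-string marker and end-of-input marker

grammar = {                    # toy grammar:  S -> a S b | c
    "S": [["a", "S", "b"], ["c"]],
}
start = "S"
nonterminals = set(grammar)

def first_of(seq, first):
    """FIRST set of a sentential form, given FIRST sets of the nonterminals."""
    out = set()
    for sym in seq:
        syms = first[sym] if sym in nonterminals else {sym}
        out |= syms - {EPS}
        if EPS not in syms:
            return out
    out.add(EPS)               # every symbol of seq can derive the empty string
    return out

# Compute FIRST and FOLLOW sets by iterating to a fixed point.
first = {A: set() for A in nonterminals}
follow = {A: ({END} if A == start else set()) for A in nonterminals}
changed = True
while changed:
    changed = False
    for A, alts in grammar.items():
        for alt in alts:
            f = first_of(alt, first)
            if not f <= first[A]:
                first[A] |= f
                changed = True
            for i, sym in enumerate(alt):
                if sym in nonterminals:
                    tail = first_of(alt[i + 1:], first)
                    new = (tail - {EPS}) | (follow[A] if EPS in tail else set())
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True

def predict(A, alt):
    """Terminals that select this alternative with one symbol of lookahead."""
    f = first_of(alt, first)
    return (f - {EPS}) | (follow[A] if EPS in f else set())

for A, alts in grammar.items():
    for i in range(len(alts)):
        for j in range(i + 1, len(alts)):
            clash = predict(A, alts[i]) & predict(A, alts[j])
            if clash:
                print(f"LL(1) conflict for {A}: {alts[i]} vs {alts[j]} on {clash}")
```

For the toy grammar the predict sets are {a} and {c}, so no conflict is reported and the grammar is LL(1).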
The class of LL(k) languages forms a strictly increasing sequence of sets: LL(0) ⊊ LL(1) ⊊ LL(2) ⊊ ….[7] For a fixed k, it is decidable whether a given grammar G is LL(k), but it is not decidable whether an arbitrary grammar is LL(k) for some k. It is also decidable whether a given LR(k) grammar is also an LL(m) grammar for some m.[8]
Every LL(k) grammar is also an LR(k) grammar. An ε-free LL(1) grammar is also an SLR(1) grammar. An LL(1) grammar with symbols that have both empty and non-empty derivations is also an LALR(1) grammar. An LL(1) grammar with symbols that have only the empty derivation may or may not be LALR(1).[9]
LL grammars cannot have rules containing left recursion.[10] Each LL(k) grammar that is ε-free can be transformed into an equivalent LL(k) grammar in Greibach normal form (which by definition does not have rules with left recursion).[11]
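For example (an illustrative grammar, not taken from the cited sources), the left-recursive rules [math]\displaystyle{ E \rightarrow E + a \mid a }[/math] cannot belong to any LL(k) grammar, but the standard transformation to [math]\displaystyle{ E \rightarrow a E' }[/math], [math]\displaystyle{ E' \rightarrow {+}\, a E' \mid \varepsilon }[/math] yields an equivalent grammar without left recursion that is LL(1).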
Regular case
Let [math]\displaystyle{ \Sigma }[/math] be a terminal alphabet. A partition [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ \Sigma^* }[/math] is called a regular partition if for every [math]\displaystyle{ R \in \pi }[/math] the language [math]\displaystyle{ R }[/math] is regular.
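For example (an illustrative partition, not taken from the cited sources), for [math]\displaystyle{ \Sigma = \{a, b\} }[/math], the three sets consisting of the strings beginning with [math]\displaystyle{ a }[/math], the strings beginning with [math]\displaystyle{ b }[/math], and the empty string alone form a regular partition of [math]\displaystyle{ \Sigma^* }[/math].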
Let [math]\displaystyle{ G = (V, \Sigma, R, S) }[/math] be a context-free grammar and let [math]\displaystyle{ \pi = \{ R_1, \dotso, R_n \} }[/math] be a regular partition of [math]\displaystyle{ \Sigma^* }[/math]. We say that [math]\displaystyle{ G }[/math] is an LL([math]\displaystyle{ \pi }[/math]) grammar if, for arbitrary derivations
- [math]\displaystyle{ \begin{array}{ccccccc} S & \Rightarrow^L & w_1 A \chi_1 & \Rightarrow & w_1 \nu \chi_1 & \Rightarrow^* & w_1 x\\ S & \Rightarrow^L & w_2 A \chi_2 & \Rightarrow & w_2 \omega \chi_2 & \Rightarrow^* & w_2 y, \\ \end{array} }[/math]
such that [math]\displaystyle{ x \equiv y \mod \pi }[/math], it follows that [math]\displaystyle{ \nu = \omega }[/math].[12]
A grammar G is said to be LL-regular (LLR) if there exists a regular partition of [math]\displaystyle{ \Sigma^* }[/math] such that G is LL([math]\displaystyle{ \pi }[/math]). A language is LL-regular if it is generated by an LL-regular grammar.
LLR grammars are unambiguous and cannot be left-recursive.
Every LL(k) grammar is LLR. Every LL(k) grammar is deterministic, but there exists an LLR grammar that is not deterministic.[13] Hence the class of LLR grammars is strictly larger than the union of the classes LL(k) over all k.
It is decidable whether a given grammar is LL([math]\displaystyle{ \pi }[/math]) for a given regular partition [math]\displaystyle{ \pi }[/math]. It is, however, not decidable whether an arbitrary grammar G is LLR. This is because constructing a regular partition for G would require deciding whether G generates a regular language, a problem that is itself undecidable (the Post correspondence problem can be reduced to it).
Every LLR grammar is LR-regular (LRR, the corresponding notion for LR(k) grammars), but there exists an LR(1) grammar that is not LLR.[13]
Historically, LLR grammars followed the invention of LRR grammars. Given a regular partition, a Moore machine can be constructed that transduces the input from right to left, identifying instances of regular productions. Once that has been done, an LL(1) parser suffices to handle the transduced input in linear time. Thus, LLR parsers can handle a class of grammars strictly larger than that of LL(k) parsers while remaining equally efficient. Despite this, the theory of LLR has found no major applications. One plausible reason is that, although there are generation algorithms for LL(k) and LR(k) parsers, the problem of generating an LLR/LRR parser is undecidable unless a regular partition has been constructed upfront, and even the problem of constructing a suitable regular partition for a given grammar is undecidable.
Simple deterministic languages
A context-free grammar is called simple deterministic,[14] or just simple,[15] if
- it is in Greibach normal form (i.e. each rule has the form [math]\displaystyle{ Z \rightarrow aY_1 \ldots Y_n, n \geq 0 }[/math]), and
- different right hand sides for the same nonterminal [math]\displaystyle{ Z }[/math] always start with different terminals [math]\displaystyle{ a }[/math].
A set of strings is called a simple deterministic, or just simple, language, if it has a simple deterministic grammar.
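For instance (an illustrative grammar, not taken from the cited sources), the grammar with rules [math]\displaystyle{ S \rightarrow aSB \mid b }[/math] and [math]\displaystyle{ B \rightarrow c }[/math] is simple deterministic: it is in Greibach normal form, and the two alternatives for [math]\displaystyle{ S }[/math] begin with the distinct terminals [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math]. It generates the language [math]\displaystyle{ \{ a^n b c^n : n \geq 0 \} }[/math].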
The class of languages having an ε-free LL(1) grammar in Greibach normal form equals the class of simple deterministic languages.[16] This language class includes the regular sets not containing ε.[15] Equivalence is decidable for it, while inclusion is not.[14]
Applications
LL grammars, particularly LL(1) grammars, are of great practical interest, as they are easy to parse, either by LL parsers or by recursive descent parsers, and many programming languages are designed to be LL(1) for this reason. Languages based on grammars with a high value of k have traditionally been considered difficult to parse, although this is less true now, given the availability and widespread use of parser generators supporting LL(k) grammars for arbitrary k.
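As an illustration of how directly an LL(1) grammar translates into a hand-written predictive parser, the following sketch parses the same toy grammar [math]\displaystyle{ S \rightarrow aSb \mid c }[/math] used above (the grammar, function names, and error handling are illustrative assumptions, not drawn from the cited sources); each nonterminal becomes one function, and a single symbol of lookahead selects the production without backtracking.

```python
# A minimal sketch of a predictive (recursive descent, no backtracking) parser
# for the illustrative LL(1) grammar  S -> a S b | c.

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(t):
        nonlocal pos
        if peek() != t:
            raise SyntaxError(f"expected {t!r} at position {pos}, got {peek()!r}")
        pos += 1

    def S():
        # One symbol of lookahead uniquely selects the production for S.
        if peek() == "a":          # S -> a S b
            expect("a")
            S()
            expect("b")
        elif peek() == "c":        # S -> c
            expect("c")
        else:
            raise SyntaxError(f"unexpected {peek()!r} at position {pos}")

    S()
    if pos != len(tokens):
        raise SyntaxError(f"trailing input after position {pos}")

parse(list("aacbb"))   # accepted; parse(list("ab")) would raise SyntaxError
```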
See also
- Comparison of parser generators for a list of LL(k) and LL(*) parsers
Notes
- ↑ Kernighan & Ritchie 1988, Appendix A.13 "Grammar", p. 193 ff.
- ↑ Rosenkrantz & Stearns (1970), Def. 1. The authors do not consider the case k=0.
- ↑ where "[math]\displaystyle{ \Rightarrow^L }[/math]" denotes derivability by leftmost derivations, and [math]\displaystyle{ w_1,w_2,w_3,w'_2,w'_3 \in \Sigma^* }[/math], [math]\displaystyle{ A \in V }[/math], and [math]\displaystyle{ \chi, \nu, \omega \in (\Sigma \cup V)^* }[/math]
- ↑ Waite & Goos (1984), Def. 5.22
- ↑ Rosenkrantz & Stearns (1970), Def. 2
- ↑ Rosenkrantz & Stearns (1970), Theorem 2
- ↑ Rosenkrantz & Stearns (1970): Using "[math]\displaystyle{ + }[/math]" to denote "or", the string set [math]\displaystyle{ \{ a^n(b^k d + b + cc)^n: n \geq 1 \} }[/math] has an [math]\displaystyle{ LL(k+1) }[/math] grammar, but no ε-free [math]\displaystyle{ LL(k) }[/math] grammar, for each [math]\displaystyle{ k \geq 1 }[/math].
- ↑ Rosenkrantz & Stearns (1970)
- ↑ (Beatty 1982)
- ↑ Rosenkrantz & Stearns (1970), Lemma 5
- ↑ Rosenkrantz & Stearns (1970), Theorem 4
- ↑ Poplawski, David (1977). Properties of LL-Regular Languages. Purdue University.
- ↑ 13.0 13.1 David A. Poplawski (Aug 1977). Properties of LL-Regular Languages (Technical Report). https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1176&context=cstech.
- ↑ 14.0 14.1 Korenjak & Hopcroft (1966)
- ↑ 15.0 15.1 Hopcroft & Ullman (1979), Exercise 9.3
- ↑ Rosenkrantz & Stearns (1970)
Sources
- Beatty, J. C. (1982). "On the relationship between LL(1) and LR(1) grammars". Journal of the ACM 29 (4 (Oct)): 1007–1022. doi:10.1145/322344.322350. https://cs.uwaterloo.ca/research/tr/1979/CS-79-36.pdf.
- Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. ISBN 978-0-201-02988-8. https://archive.org/details/introductiontoau00hopc.
- Kernighan, Brian W.; Ritchie, Dennis M. (April 1988). The C Programming Language. Prentice Hall Software Series (2nd ed.). Englewood Cliffs/NJ: Prentice Hall. ISBN 978-013110362-7. https://archive.org/details/cprogramminglang00bria.
- Korenjak, A.J.; Hopcroft, J.E. (1966). "Simple deterministic languages". IEEE Conf. Rec. 7th Ann. Symp. on Switching and Automata Theory (SWAT). IEEE Pub. No. 16-C-40. pp. 36–46. doi:10.1109/SWAT.1966.22.
- Parr, T.; Fisher, K. (2011). "LL(*): The Foundation of the ANTLR Parser Generator". ACM SIGPLAN Notices 46 (6): 425–436. doi:10.1145/1993316.1993548. http://www.antlr.org/papers/LL-star-PLDI11.pdf.
- Rosenkrantz, D. J.; Stearns, R. E. (1970). "Properties of Deterministic Top Down Grammars". Information and Control 17 (3): 226–256. doi:10.1016/s0019-9958(70)90446-8.
- Waite, William M.; Goos, Gerhard (1984). Compiler Construction. Texts and Monographs in Computer Science. Heidelberg: Springer. ISBN 978-3-540-90821-0.
Further reading
- Sippu, Seppo; Soisalon-Soininen, Eljas (1990). Parsing Theory: LR(k) and LL(k) Parsing. Springer Science & Business Media. ISBN 978-3-540-51732-0.
Original source: https://en.wikipedia.org/wiki/LL_grammar