Earley parser
Class: Parsing grammars that are context-free
Data structure: String
Worst-case performance: O(n³)
Average performance: Θ(n³)
In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.^{[1]} The algorithm, named after its inventor, Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation^{[2]} in 1968 (and later appeared in an abbreviated, more legible form in a journal^{[3]}).
Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time O(n³) in the general case, where n is the length of the parsed string, in quadratic time O(n²) for unambiguous grammars,^{[4]} and in linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.
Earley recogniser
The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.
The algorithm
In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.
Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.
Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of
 the production currently being matched (X → α β)
 the current position in that production (visually represented by the dot •)
 the position i in the input at which the matching of this production began: the origin position
(Earley's original algorithm included a lookahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)
A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.
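As a concrete illustration, a state can be held in a small record type; the names `State` and `finished` below are this sketch's own, not part of Earley's formulation:

```python
from typing import NamedTuple, Tuple

class State(NamedTuple):
    head: str              # X, the left-hand side of the production
    body: Tuple[str, ...]  # the right-hand side alpha-beta
    dot: int               # index of the dot within body
    origin: int            # i, the input position where matching began

def finished(s: State) -> bool:
    # Finished: no symbol remains to the right of the dot.
    return s.dot == len(s.body)

# (S -> S . + M, 0) is not finished; (M -> T ., 0) is.
partial = State('S', ('S', '+', 'M'), 1, 0)
done = State('M', ('T',), 1, 0)
```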
The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the toplevel rule. The parser then repeatedly executes three operations: prediction, scanning, and completion.
 Prediction: For every state in S(k) of the form (X → α • Y β, j) (where j is the origin position as above), add (Y → • γ, k) to S(k) for every production in the grammar with Y on the lefthand side (Y → γ).
 Scanning: If a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1).
 Completion: For every state in S(k) of the form (Y → γ •, j), find all states in S(j) of the form (X → α • Y β, i) and add (X → α Y • β, i) to S(k).
Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n is the input length; otherwise it rejects.
Pseudocode
Adapted from Speech and Language Processing^{[5]} by Daniel Jurafsky and James H. Martin,
DECLARE ARRAY S;

function INIT(words)
    S ← CREATE_ARRAY(LENGTH(words) + 1)
    for k ← from 0 to LENGTH(words) do
        S[k] ← EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ → •S, 0), S[0])
    for k ← from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a non-terminal then
                    PREDICTOR(state, k, grammar)  // non-terminal
                else do
                    SCANNER(state, k, words)      // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return chart  // the array S of state sets

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if k < LENGTH(words) and a ⊂ PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD_TO_SET((A → αB•β, j), S[k])
    end
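The pseudocode can also be rendered as a runnable Python recogniser. This is a sketch: the function name, the encoding of the grammar as a dict from non-terminal to a list of right-hand-side tuples, and the state representation are choices of this sketch. Unlike the Jurafsky–Martin version it matches terminals directly against input tokens rather than parts of speech, and, as noted earlier, the simple completion step shown here shares the classic difficulty with nullable rules.

```python
def earley_recognise(tokens, grammar, start):
    """Earley recogniser (a sketch): True iff tokens is in L(grammar).

    grammar maps each non-terminal to a list of right-hand sides
    (tuples of symbols); a symbol is a non-terminal iff it is a key
    of grammar.  A state is a tuple (head, body, dot, origin).
    """
    n = len(tokens)
    S = [[] for _ in range(n + 1)]          # S[k]: state set at position k

    def add(k, state):
        if state not in S[k]:               # duplicate states are not added
            S[k].append(state)

    for rhs in grammar[start]:              # seed S(0) with the top-level rule
        add(0, (start, rhs, 0, 0))

    for k in range(n + 1):
        i = 0
        while i < len(S[k]):                # S[k] can grow during this loop
            head, body, dot, origin = S[k][i]
            i += 1
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:          # prediction
                    for rhs in grammar[sym]:
                        add(k, (sym, rhs, 0, k))
                elif k < n and tokens[k] == sym:   # scanning
                    add(k + 1, (head, body, dot + 1, origin))
            else:                           # completion (a snapshot of
                for st in list(S[origin]):  # S[origin]; nullable rules
                    h2, b2, d2, o2 = st     # would need extra care here)
                    if d2 < len(b2) and b2[d2] == head:
                        add(k, (h2, b2, d2 + 1, o2))

    # Accept iff a finished top-level state with origin 0 is in S(n).
    return any(h == start and d == len(b) and o == 0
               for (h, b, d, o) in S[n])

# The example grammar below, encoded as this sketch assumes:
GRAMMAR = {
    'P': [('S',)],
    'S': [('S', '+', 'M'), ('M',)],
    'M': [('M', '*', 'T'), ('T',)],
    'T': [('1',), ('2',), ('3',), ('4',)],
}
```

For instance, `earley_recognise(['2', '+', '3', '*', '4'], GRAMMAR, 'P')` returns True, matching the worked example that follows.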
Example
Consider the following simple grammar for arithmetic expressions:
<P> ::= <S>              # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"
With the input:
2 + 3 * 4
This is the sequence of state sets:
(state no.)  Production  (Origin)  Comment 

S(0): • 2 + 3 * 4  
1  P → • S  0  start rule 
2  S → • S + M  0  predict from (1) 
3  S → • M  0  predict from (1) 
4  M → • M * T  0  predict from (3) 
5  M → • T  0  predict from (3) 
6  T → • number  0  predict from (5) 
S(1): 2 • + 3 * 4  
1  T → number •  0  scan from S(0)(6) 
2  M → T •  0  complete from (1) and S(0)(5) 
3  M → M • * T  0  complete from (2) and S(0)(4) 
4  S → M •  0  complete from (2) and S(0)(3) 
5  S → S • + M  0  complete from (4) and S(0)(2) 
6  P → S •  0  complete from (4) and S(0)(1) 
S(2): 2 + • 3 * 4  
1  S → S + • M  0  scan from S(1)(5) 
2  M → • M * T  2  predict from (1) 
3  M → • T  2  predict from (1) 
4  T → • number  2  predict from (3) 
S(3): 2 + 3 • * 4  
1  T → number •  2  scan from S(2)(4) 
2  M → T •  2  complete from (1) and S(2)(3) 
3  M → M • * T  2  complete from (2) and S(2)(2) 
4  S → S + M •  0  complete from (2) and S(2)(1) 
5  S → S • + M  0  complete from (4) and S(0)(2) 
6  P → S •  0  complete from (4) and S(0)(1) 
S(4): 2 + 3 * • 4  
1  M → M * • T  2  scan from S(3)(3) 
2  T → • number  4  predict from (1) 
S(5): 2 + 3 * 4 •  
1  T → number •  4  scan from S(4)(2) 
2  M → M * T •  2  complete from (1) and S(4)(1) 
3  M → M • * T  2  complete from (2) and S(2)(2) 
4  S → S + M •  0  complete from (2) and S(2)(1) 
5  S → S • + M  0  complete from (4) and S(0)(2) 
6  P → S •  0  complete from (4) and S(0)(1) 
The state (P → S •, 0) represents a completed parse. This state also appears in S(1) and S(3), since the prefixes "2" and "2 + 3" are themselves complete sentences of the grammar.
Constructing the parse forest
Earley's dissertation^{[6]} briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed^{[7]} that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.
Another method^{[8]} is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
 Predicted items have a null SPPF pointer.
 The scanner creates an SPPF node representing the nonterminal it is scanning.
 Then when the scanner or completer advance an item, they add a derivation whose children are the node from the item whose dot was advanced, and the one for the new symbol that was advanced over (the nonterminal or completed item).
SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
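The node structure described above can be sketched as follows; the class and attribute names are invented for illustration, following Scott's (s, i, j) labelling:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class Label:
    symbol: str   # s: a grammar symbol (or, internally, an LR(0) item)
    start: int    # i: left extent of the derived section of the input
    end: int      # j: right extent

@dataclass
class SPPFNode:
    label: Label
    # Each family is one derivation: a tuple of child nodes.  An
    # ambiguous parse gives a node more than one family.
    families: List[Tuple["SPPFNode", ...]] = field(default_factory=list)

    def add_family(self, children: Tuple["SPPFNode", ...]) -> None:
        if children not in self.families:   # nodes are unique per label;
            self.families.append(children)  # derivations accumulate

# For S -> SS | b on input bbb, the root S node spanning 0..3 packs both
# derivations (S[0..1] S[1..3] and S[0..2] S[2..3]) under one label.
s01 = SPPFNode(Label('S', 0, 1))
s13 = SPPFNode(Label('S', 1, 3))
s02 = SPPFNode(Label('S', 0, 2))
s23 = SPPFNode(Label('S', 2, 3))
root = SPPFNode(Label('S', 0, 3))
root.add_family((s01, s13))
root.add_family((s02, s23))
```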
Optimizations
Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve a speed improvement of an order of magnitude.
Citations
 ↑ Kegler, Jeffrey. "What is the Marpa algorithm?". http://blogs.perl.org/users/jeffrey_kegler/2011/11/what-is-the-marpa-algorithm.html.
 ↑ Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm. Carnegie-Mellon Dissertation. http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf.
 ↑ "An efficient context-free parsing algorithm", Communications of the ACM 13 (2): 94–102, 1970, doi:10.1145/362007.362035, http://www2.cs.cmu.edu/afs/cs.cmu.edu/project/cmt55/lti/Courses/711/Classnotes/p94-earley.pdf
 ↑ John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Reading/MA: Addison-Wesley. ISBN 9780201029888. https://archive.org/details/introductiontoau00hopc. p. 145
 ↑ Jurafsky, D. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Pearson Prentice Hall. ISBN 9780131873216. https://books.google.com/books?id=fZmj5UNK8AQC.
 ↑ Earley, Jay (1968). An Efficient Context-Free Parsing Algorithm. Carnegie-Mellon Dissertation. p. 106. http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/scan/CMU-CS-68-earley.pdf.
 ↑ Tomita, Masaru (April 17, 2013). Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Springer Science and Business Media. p. 74. ISBN 9781475718850. https://books.google.com/books?id=DAjkBwAAQBAJ&q=Tomita%20Efficient%20Parsing%20for%20natural%20Language&pg=PA74. Retrieved 16 September 2015.
 ↑ Scott, Elizabeth (April 1, 2008). "SPPF-Style Parsing From Earley Recognizers". Electronic Notes in Theoretical Computer Science 203 (2): 53–67. doi:10.1016/j.entcs.2008.03.044.
Other reference materials
 Aycock, John (2002). "Practical Earley Parsing". The Computer Journal 45 (6): 620–630. doi:10.1093/comjnl/45.6.620.
 Leo, Joop M. I. M. (1991), "A general context-free parsing algorithm running in linear time on every LR(k) grammar without using lookahead", Theoretical Computer Science 82 (1): 165–176, doi:10.1016/0304-3975(91)90180-A
 Tomita, Masaru (1984). "LR parsers for natural languages". 10th International Conference on Computational Linguistics. pp. 354–357. https://aclanthology.info/pdf/P/P84/P84-1073.pdf.
Implementations
C, C++
 'Yet Another Earley Parser (YAEP)' – C/C++ libraries
Haskell
Java
 [1] – a Java implementation of the Earley algorithm
 PEN – a Java library that implements the Earley algorithm
 Pep – a Java library that implements the Earley algorithm and provides charts and parse trees as parsing artifacts
 digitalheir/java-probabilistic-earley-parser – a Java library that implements the probabilistic Earley algorithm, which is useful to determine the most likely parse tree from an ambiguous sentence
C#
 coonsta/earley – an Earley parser in C#
 patrickhuber/pliant – an Earley parser that integrates the improvements adopted by Marpa and demonstrates Elizabeth Scott's tree-building algorithm
 ellisonch/CFGLib – a probabilistic context-free grammar (PCFG) library for C# (Earley + SPPF, CYK)
JavaScript
 Nearley – an Earley parser that's starting to integrate the improvements that Marpa adopted
 A Pint-sized Earley Parser – a toy parser (with annotated pseudocode) demonstrating Elizabeth Scott's technique for building the shared packed parse forest
 lagodiuk/earleyparserjs – a tiny JavaScript implementation of Earley parser (including generation of the parsingforest)
 digitalheir/probabilistic-earley-parser-javascript – a JavaScript implementation of the probabilistic Earley parser
OCaml
 Simple Earley – an implementation of a simple Earley-like parsing algorithm, with documentation
Perl
 Marpa::R2 – a Perl module. Marpa is an implementation of Earley's algorithm that includes the improvements made by Joop Leo, and by Aycock and Horspool.
 Parse::Earley – a Perl module implementing Jay Earley's original algorithm
Python
 Lark – an object-oriented, procedural implementation of an Earley parser that outputs an SPPF
 NLTK – a Python toolkit with an Earley parser
 Spark – an object-oriented little-language framework for Python implementing an Earley parser
 spark_parser – updated and packaged version of the Spark parser above, which runs in both Python 3 and Python 2
 earley3.py – a standalone implementation of the algorithm in less than 150 lines of code, including generation of the parsingforest and samples
 tjr_python_earley_parser – a minimal Earley parser in Python
 Earley Parsing – a well-explained and complete Earley parser tutorial in Python with epsilon handling and Leo's optimization for right-recursion
Rust
 Santiago – A lexing and parsing toolkit for Rust implementing an Earley parser.
Common Lisp
 CLEarleyparser – a Common Lisp library implementing an Earley parser
Scheme, Racket
 Charty-Racket – a Scheme / Racket implementation of an Earley parser
Wolfram
 properEarleyParser – a basic, minimal implementation of an Earley parser in the Wolfram Language with some essential test cases
Resources
Original source: https://en.wikipedia.org/wiki/Earley_parser