Categorial grammar
Categorial grammar is a family of formalisms in natural language syntax that share the central assumption that syntactic constituents combine as functions and arguments. Categorial grammar posits a close relationship between syntax and semantic composition, since it typically treats syntactic categories as corresponding to semantic types. Categorial grammar was developed in the 1930s by Kazimierz Ajdukiewicz and in the 1950s by Yehoshua Bar-Hillel and Joachim Lambek. It saw a surge of interest in the 1970s following the work of Richard Montague, whose Montague grammar assumed a similar view of syntax. It continues to be a major paradigm, particularly within formal semantics.
Basics
A categorial grammar consists of two parts: a lexicon, which assigns a set of types (also called categories) to each basic symbol, and some type inference rules, which determine how the type of a string of symbols follows from the types of the constituent symbols. It has the advantage that the type inference rules can be fixed once and for all, so that the specification of a particular language grammar is entirely determined by the lexicon.
A categorial grammar shares some features with the simply typed lambda calculus. Whereas the lambda calculus has only one function type [math]\displaystyle{ A \rightarrow B }[/math], a categorial grammar typically has two function types, one type that is applied on the left, and one on the right. For example, a simple categorial grammar might have two function types [math]\displaystyle{ B/A\,\! }[/math] and [math]\displaystyle{ A\backslash B }[/math]. The first, [math]\displaystyle{ B/A\,\! }[/math], is the type of a phrase that results in a phrase of type [math]\displaystyle{ B\,\! }[/math] when followed (on the right) by a phrase of type [math]\displaystyle{ A\,\! }[/math]. The second, [math]\displaystyle{ A\backslash B\,\! }[/math], is the type of a phrase that results in a phrase of type [math]\displaystyle{ B\,\! }[/math] when preceded (on the left) by a phrase of type [math]\displaystyle{ A\,\! }[/math].
The notation is based upon algebra. A fraction when multiplied by (i.e. concatenated with) its denominator yields its numerator. As concatenation is not commutative, it makes a difference whether the denominator occurs to the left or right. The concatenation must be on the same side as the denominator for it to cancel out.
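Since the types form a small recursive grammar, they can be represented directly as an algebraic datatype. The following Python sketch is purely illustrative (the names `Prim`, `RSlash`, and `LSlash` are inventions for this article, not an established library):

```python
from dataclasses import dataclass

# A type is a primitive type or one of the two function types (Python 3.10+).
@dataclass(frozen=True)
class Prim:
    name: str                   # e.g. "N", "NP", "S"

@dataclass(frozen=True)
class RSlash:                   # B/A: yields B when followed by an A on the right
    result: "Type"
    arg: "Type"

@dataclass(frozen=True)
class LSlash:                   # A\B: yields B when preceded by an A on the left
    arg: "Type"
    result: "Type"

Type = Prim | RSlash | LSlash

# Example: the type (NP\S)/NP of a transitive verb (used below).
NP, S = Prim("NP"), Prim("S")
tv = RSlash(LSlash(NP, S), NP)
```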
The first and simplest kind of categorial grammar is called a basic categorial grammar, or sometimes an AB-grammar (after Ajdukiewicz and Bar-Hillel). Given a set of primitive types [math]\displaystyle{ \text{Prim}\,\! }[/math], let [math]\displaystyle{ \text{Tp}(\text{Prim})\,\! }[/math] be the set of types constructed from primitive types. In the basic case, this is the least set such that [math]\displaystyle{ \text{Prim}\subseteq \text{Tp}(\text{Prim}) }[/math] and if [math]\displaystyle{ X, Y\in \text{Tp}(\text{Prim}) }[/math] then [math]\displaystyle{ (X/Y), (Y\backslash X) \in \text{Tp}(\text{Prim}) }[/math]. Think of these as purely formal expressions freely generated from the primitive types; any semantics will be added later. Some authors assume a fixed infinite set of primitive types used by all grammars, but by making the primitive types part of the grammar, the whole construction is kept finite.
A basic categorial grammar is a tuple [math]\displaystyle{ (\text{Prim},\, \Sigma,\, \triangleleft,\, S) }[/math] where [math]\displaystyle{ \text{Prim}\,\! }[/math] is a finite set of primitive types, [math]\displaystyle{ \Sigma\,\! }[/math] is a finite set of symbols, and [math]\displaystyle{ S \in \text{Tp}(\text{Prim}) }[/math] is a distinguished type, the type of the complete strings accepted by the grammar.
The relation [math]\displaystyle{ \triangleleft }[/math] is the lexicon, which relates types to symbols [math]\displaystyle{ (\triangleleft) \subseteq \text{Tp}(\text{Prim}) \times \Sigma }[/math]. Since the lexicon is finite, it can be specified by listing a set of pairs like [math]\displaystyle{ TYPE\triangleleft\text{symbol} }[/math].
Such a grammar for English might have three basic types [math]\displaystyle{ (N,NP, \text{ and } S)\,\! }[/math], assigning count nouns the type [math]\displaystyle{ N\,\! }[/math], complete noun phrases the type [math]\displaystyle{ NP\,\! }[/math], and sentences the type [math]\displaystyle{ S\,\! }[/math]. Then an adjective could have the type [math]\displaystyle{ N/N\,\! }[/math], because if it is followed by a noun then the whole phrase is a noun. Similarly, a determiner has the type [math]\displaystyle{ NP/N\,\! }[/math], because it forms a complete noun phrase when followed by a noun. Intransitive verbs have the type [math]\displaystyle{ NP\backslash S }[/math], and transitive verbs the type [math]\displaystyle{ (NP\backslash S)/NP }[/math]. Then a string of words is a sentence if it has overall type [math]\displaystyle{ S\,\! }[/math].
For example, take the string "the bad boy made that mess". Here "the" and "that" are determiners, "boy" and "mess" are nouns, "bad" is an adjective, and "made" is a transitive verb, so the lexicon is {[math]\displaystyle{ NP/N\triangleleft\text{the} }[/math], [math]\displaystyle{ NP/N\triangleleft\text{that} }[/math], [math]\displaystyle{ N\triangleleft\text{boy} }[/math], [math]\displaystyle{ N\triangleleft\text{mess} }[/math], [math]\displaystyle{ N/N\triangleleft\text{bad} }[/math], [math]\displaystyle{ (NP\backslash S)/NP\triangleleft\text{made} }[/math]}.
The sequence of types in the string is then
[math]\displaystyle{ {\text{the}\atop {NP/N,}} {\text{bad}\atop {N/N,}} {\text{boy}\atop {N,}} {\text{made}\atop {(NP\backslash S)/NP,}} {\text{that}\atop {NP/N,}} {\text{mess}\atop {N}} }[/math]
Now find functions and appropriate arguments, and reduce them according to the two inference rules [math]\displaystyle{ X\leftarrow X/Y,\; Y }[/math] and [math]\displaystyle{ X\leftarrow Y,\; Y\backslash X }[/math]:
[math]\displaystyle{ .\qquad NP/N,\; N/N,\; N,\; (NP\backslash S)/NP,\; \underbrace{NP/N,\; N} }[/math]
[math]\displaystyle{ .\qquad NP/N,\; N/N,\; N,\; \underbrace{(NP\backslash S)/NP, \quad NP} }[/math]
[math]\displaystyle{ .\qquad NP/N,\; \underbrace{N/N,\; N}, \qquad (NP\backslash S) }[/math]
[math]\displaystyle{ .\qquad \underbrace{NP/N,\; \quad N},\; \qquad (NP\backslash S) }[/math]
[math]\displaystyle{ .\qquad \qquad\underbrace{NP,\; \qquad (NP\backslash S)} }[/math]
[math]\displaystyle{ .\qquad \qquad\qquad\quad\;\;\; S }[/math]
The fact that the result is [math]\displaystyle{ S\,\! }[/math] means that the string is a sentence, while the sequence of reductions shows that it can be parsed as ((the (bad boy)) (made (that mess))).
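The two reduction rules are easy to implement on top of the datatype sketched above. The following brute-force reducer is a minimal illustration rather than a practical parser (a real implementation would use a CYK-style chart), with a one-type-per-word lexicon mirroring the example:

```python
def step(ts):
    """Yield every sequence reachable by one application of
    X/Y, Y => X (forward) or Y, Y\\X => X (backward)."""
    for i in range(len(ts) - 1):
        a, b = ts[i], ts[i + 1]
        if isinstance(a, RSlash) and a.arg == b:       # X/Y applied to Y on its right
            yield ts[:i] + [a.result] + ts[i + 2:]
        if isinstance(b, LSlash) and b.arg == a:       # Y\X applied to Y on its left
            yield ts[:i] + [b.result] + ts[i + 2:]

def reduces_to(ts, goal):
    """True if the type sequence ts can be reduced to the single type goal."""
    return ts == [goal] or any(reduces_to(u, goal) for u in step(ts))

N = Prim("N")
lexicon = {"the": RSlash(NP, N), "that": RSlash(NP, N),
           "boy": N, "mess": N, "bad": RSlash(N, N),
           "made": RSlash(LSlash(NP, S), NP)}
words = "the bad boy made that mess".split()
assert reduces_to([lexicon[w] for w in words], S)      # the string is a sentence
```

Each successful call of `step` corresponds to one underbraced reduction line in the derivation above.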
Categorial grammars of this form (having only function application rules) are equivalent in generative capacity to context-free grammars and are thus often considered inadequate for theories of natural language syntax. Unlike CFGs, categorial grammars are lexicalized, meaning that only a small number of (mostly language-independent) rules are employed, and all other syntactic phenomena derive from the lexical entries of specific words.
Another appealing aspect of categorial grammars is that it is often easy to assign them a compositional semantics, by first assigning interpretation types to all the basic categories, and then associating all the derived categories with appropriate function types. The interpretation of any constituent is then simply the value of a function at an argument. With some modifications to handle intensionality and quantification, this approach can be used to cover a wide variety of semantic phenomena.
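To make this concrete, one can pair each lexical type with a denotation and let each syntactic application perform an ordinary function call. The sketch below is a toy built on the same illustrative datatype; the denotations are placeholder strings, not a serious model-theoretic semantics:

```python
# Each word is paired with (type, meaning); syntactic application
# triggers semantic application of the functor's meaning to the argument's.
lexicon_sem = {
    "Bill":   (NP, "bill"),
    "sleeps": (LSlash(NP, S), lambda subj: f"sleep({subj})"),
}

def apply_backward(arg, fn):
    """Y, Y\\X => X syntactically; fn_meaning(arg_meaning) semantically."""
    (arg_type, arg_meaning), (fn_type, fn_meaning) = arg, fn
    assert isinstance(fn_type, LSlash) and fn_type.arg == arg_type
    return fn_type.result, fn_meaning(arg_meaning)

print(apply_backward(lexicon_sem["Bill"], lexicon_sem["sleeps"]))
# (Prim(name='S'), 'sleep(bill)')
```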
Lambek calculus
A Lambek grammar is an elaboration of this idea that has a concatenation operator for types, and several other inference rules. Mati Pentus has shown that these still have the generative capacity of context-free grammars.
For the Lambek calculus, there is a type concatenation operator [math]\displaystyle{ \star }[/math], so that [math]\displaystyle{ \text{Prim}\subseteq \text{Tp}(\text{Prim}) }[/math] and if [math]\displaystyle{ X, Y\in \text{Tp}(\text{Prim}) }[/math] then [math]\displaystyle{ (X/Y), (X\backslash Y), (X\star Y)\in \text{Tp}(\text{Prim}) }[/math].
The Lambek calculus consists of several deduction rules, which specify how type inclusion assertions can be derived. In the following rules, uppercase Roman letters stand for types and uppercase Greek letters stand for sequences of types. A sequent of the form [math]\displaystyle{ X \leftarrow \Gamma }[/math] can be read: a string is of type X if it consists of the concatenation of strings of each of the types in Γ. If a type is interpreted as a set of strings, then the ← may be interpreted as ⊇, that is, "includes as a subset". A horizontal line means that the inclusion above the line implies the one below the line.
The process is begun by the Axiom rule, which has no antecedents and just says that any type includes itself.
- [math]\displaystyle{ \text{(Axiom)}\quad {{}\over X \leftarrow X} }[/math]
The Cut rule says that inclusions can be composed.
- [math]\displaystyle{ \text{(Cut)} \quad {Z \leftarrow \Delta X \Delta' \qquad X \leftarrow \Gamma \over Z \leftarrow \Delta \Gamma \Delta'} }[/math]
The other rules come in pairs, one pair for each type construction operator; each pair consists of one rule with the operator in the target of the arrow and one with it in the source. The name of a rule consists of the operator and an arrow, with the operator written on the side of the arrow on which it occurs in the conclusion.
Rules with the operator in the target of the arrow:
- [math]\displaystyle{ (\backslash \leftarrow) \quad {Y\leftarrow X \Gamma \over X\backslash Y\leftarrow\Gamma} }[/math]
- [math]\displaystyle{ (/\leftarrow) \quad {Y\leftarrow \Gamma X \over Y/X\leftarrow\Gamma} }[/math]
- [math]\displaystyle{ (\star\leftarrow) \quad {X\leftarrow \Gamma \qquad Y \leftarrow \Gamma' \over X \star Y \leftarrow \Gamma\Gamma'} }[/math]
Rules with the operator in the source of the arrow:
- [math]\displaystyle{ (\leftarrow \backslash) \quad {Z \leftarrow \Delta Y \Delta' \qquad X\leftarrow\Gamma \over Z \leftarrow \Delta \Gamma(X\backslash Y) \Delta'} }[/math]
- [math]\displaystyle{ (\leftarrow/) \quad {Z\leftarrow \Delta Y \Delta' \qquad X\leftarrow\Gamma \over Z\leftarrow \Delta (Y/X)\Gamma \Delta'} }[/math]
- [math]\displaystyle{ (\leftarrow\star) \quad {Z\leftarrow \Delta X Y \Delta' \over Z\leftarrow \Delta (X \star Y) \Delta'} }[/math]
For an example, here is a derivation of "type raising", which says that [math]\displaystyle{ (B/A)\backslash B \leftarrow A }[/math]. The names of rules and the substitutions used are to the right.
- [math]\displaystyle{ \dfrac {\dfrac{}{B \leftarrow B} \qquad \dfrac{}{A \leftarrow A} } {\dfrac {B \leftarrow (B/A), \;\; A} {(B/A)\backslash B \leftarrow A} } \qquad \begin{matrix} \mbox{(Axioms)}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad{ }\\ {(\leftarrow/)\,\,[Z=Y=B,X=A,\Gamma=(A),\Delta=\Delta'=()]}\\ {(\backslash\leftarrow)\,\,[Y=B,X=(B/A),\Gamma=(A)]}\qquad\qquad\qquad{ }\\ \end{matrix} }[/math]
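Since Lambek (1958) showed that the Cut rule can be eliminated, and every cut-free rule has premises strictly smaller than its conclusion, derivability of a sequent is decidable by exhaustive backward search. Below is a naive decision procedure in the same illustrative Python encoding, adding a `Star` constructor for [math]\displaystyle{ \star }[/math]; it is a sketch for clarity, not an efficient prover:

```python
from functools import lru_cache

@dataclass(frozen=True)
class Star:                    # X * Y: the concatenation type
    left: "Type"
    right: "Type"

@lru_cache(maxsize=None)
def prove(gamma, z):
    """Decide the sequent Z <- gamma by cut-free backward search.
    gamma is a nonempty tuple of types, z a single type."""
    if gamma == (z,):
        return True                                    # Axiom
    # Rules with the operator in the target of the arrow.
    if isinstance(z, LSlash) and prove((z.arg,) + gamma, z.result):
        return True                                    # (\ <-)
    if isinstance(z, RSlash) and prove(gamma + (z.arg,), z.result):
        return True                                    # (/ <-)
    if isinstance(z, Star) and any(
            prove(gamma[:k], z.left) and prove(gamma[k:], z.right)
            for k in range(1, len(gamma))):
        return True                                    # (* <-)
    # Rules with the operator in the source of the arrow.
    for i, t in enumerate(gamma):
        if isinstance(t, Star) and prove(
                gamma[:i] + (t.left, t.right) + gamma[i+1:], z):
            return True                                # (<- *)
        if isinstance(t, LSlash) and any(              # (<- \): X\Y consumes material on its left
                prove(gamma[j:i], t.arg) and
                prove(gamma[:j] + (t.result,) + gamma[i+1:], z)
                for j in range(i)):
            return True
        if isinstance(t, RSlash) and any(              # (<- /): Y/X consumes material on its right
                prove(gamma[i+1:j], t.arg) and
                prove(gamma[:i] + (t.result,) + gamma[j:], z)
                for j in range(i + 2, len(gamma) + 1)):
            return True
    return False

# The type-raising sequent above: (B/A)\B <- A
A, B = Prim("A"), Prim("B")
assert prove((A,), LSlash(RSlash(B, A), B))
```

Termination is guaranteed because each recursive call operates on a strictly smaller sequent.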
Relation to context-free grammars
Recall that a context-free grammar is a 4-tuple [math]\displaystyle{ G = (V,\, \Sigma,\, ::=,\, S) }[/math] where
- [math]\displaystyle{ V\, }[/math] is a finite set of non-terminals or variables.
- [math]\displaystyle{ \Sigma\, }[/math] is a finite set of terminal symbols.
- [math]\displaystyle{ ::=\, }[/math] is a finite set of production rules, that is, a finite relation [math]\displaystyle{ (::=)\subseteq V \times (V \cup \Sigma)^* }[/math].
- [math]\displaystyle{ S\, }[/math] is the start variable.
From the point of view of categorial grammars, a context-free grammar can be seen as a calculus with a set of special purpose axioms for each language, but with no type construction operators and no inference rules except Cut.
Specifically, given a context-free grammar as above, define a categorial grammar [math]\displaystyle{ (\text{Prim},\, \Sigma,\, \triangleleft,\, S) }[/math] where [math]\displaystyle{ \text{Prim}=V\cup\Sigma }[/math], and [math]\displaystyle{ \text{Tp}(\text{Prim})=\text{Prim}\,\! }[/math]. Let there be an axiom [math]\displaystyle{ {x \leftarrow x} }[/math] for every symbol [math]\displaystyle{ x \in V\cup\Sigma }[/math], an axiom [math]\displaystyle{ {X \leftarrow \Gamma} }[/math] for every production rule [math]\displaystyle{ X ::= \Gamma\,\! }[/math], a lexicon entry [math]\displaystyle{ {s \triangleleft s} }[/math] for every terminal symbol [math]\displaystyle{ s \in \Sigma }[/math], and Cut for the only rule. This categorial grammar generates the same language as the given CFG.
Of course, this is not a basic categorial grammar, since it has special axioms that depend upon the language; i.e. it is not lexicalized. Also, it makes no use at all of non-primitive types.
To show that any context-free language can be generated by a basic categorial grammar, recall that any context-free language can be generated by a context-free grammar in Greibach normal form.
The grammar is in Greibach normal form if every production rule is of the form [math]\displaystyle{ A ::= s A_0 \ldots A_{N-1} }[/math], where capital letters are variables, [math]\displaystyle{ s \in \Sigma }[/math], and [math]\displaystyle{ N\ge 0 }[/math], that is, the right side of the production is a single terminal symbol followed by zero or more (non-terminal) variables.
Now given a CFG in Greibach normal form, define a basic categorial grammar with a primitive type for each non-terminal variable [math]\displaystyle{ \text{Prim}=V\,\! }[/math], and with an entry in the lexicon [math]\displaystyle{ A/A_{N-1}/ \ldots /A_0 \triangleleft s }[/math], for each production rule [math]\displaystyle{ A ::= s A_0 \ldots A_{N-1} }[/math]. It is fairly easy to see that this basic categorial grammar generates the same language as the original CFG. Note that the lexicon of this grammar will generally assign multiple types to each symbol.
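This conversion is mechanical, as the following sketch shows (reusing the illustrative datatype from above; the function name and the production format are inventions for this article):

```python
def gnf_to_lexicon(productions):
    """Map each GNF production  A ::= s A0 ... A(N-1)  to a lexicon
    entry assigning the (left-associated) type A/A(N-1)/.../A0 to s."""
    lexicon = {}
    for lhs, (s, *vars_) in productions:
        t = Prim(lhs)
        for v in reversed(vars_):          # innermost slash first: ((A/A(N-1))/...)/A0
            t = RSlash(t, Prim(v))
        lexicon.setdefault(s, set()).add(t)
    return lexicon

# Toy GNF grammar generating a^n b^n:  S ::= a S B | a B,  B ::= b
prods = [("S", ("a", "S", "B")), ("S", ("a", "B")), ("B", ("b",))]
print(gnf_to_lexicon(prods))
# 'a' is assigned both (S/B)/S and S/B; 'b' is assigned B.
```

A string is then parsed by choosing one of the listed types for each symbol and reducing with forward application alone, which is why the lexicon generally must assign multiple types to a symbol.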
The same construction works for Lambek grammars, since they are an extension of basic categorial grammars. It is necessary to verify that the extra inference rules do not change the generated language. This can be done and shows that every context-free language is generated by some Lambek grammar.
To show the converse, that every language generated by a Lambek grammar is context-free, is much more difficult. It was an open problem for nearly thirty years, from the early 1960s until about 1991 when it was proven by Pentus.
The basic idea is, given a Lambek grammar, [math]\displaystyle{ (\text{Prim},\, \Sigma,\, \triangleleft,\, S) }[/math] construct a context-free grammar [math]\displaystyle{ (V,\, \Sigma,\, ::=,\, S) }[/math] with the same set of terminal symbols, the same start symbol, with variables some (not all) types [math]\displaystyle{ V\subseteq \text{Tp}(\text{Prim})\,\! }[/math], and with a production rule [math]\displaystyle{ T::=\text{s}\,\! }[/math] for each entry [math]\displaystyle{ T\triangleleft\text{s} }[/math] in the lexicon, and production rules [math]\displaystyle{ T::=\Gamma\,\! }[/math] for certain sequents [math]\displaystyle{ T\leftarrow\Gamma }[/math] that are derivable in the Lambek calculus.
Of course, there are infinitely many types and infinitely many derivable sequents, so in order to make a finite grammar it is necessary to put a bound on the size of the types and sequents that are needed. The heart of Pentus's proof is to show that there is such a finite bound.
Notation
The notation in this field is not standardized. The notations used in formal language theory, logic, category theory, and linguistics conflict with each other. In logic, arrows point to the more general from the more particular, that is, to the conclusion from the hypotheses. This article follows that convention, i.e. the target of the arrow is the more general (inclusive) type.
In logic, arrows usually point from left to right. In this article, this convention is reversed for consistency with the notation of context-free grammars, where the single non-terminal symbol is always on the left. We use the symbol [math]\displaystyle{ ::= }[/math] in a production rule, as in Backus–Naur form. Some authors use an arrow, which unfortunately may point in either direction, depending on whether the grammar is thought of as generating or recognizing the language.
Some authors on categorial grammars write [math]\displaystyle{ B\backslash A }[/math] instead of [math]\displaystyle{ A\backslash B }[/math]. The convention used here follows Lambek and algebra.
Historical notes
The basic ideas of categorial grammar date from work by Kazimierz Ajdukiewicz (in 1935) and other scholars from the Polish tradition of mathematical logic, including Stanisław Leśniewski, Emil Post and Alfred Tarski. Ajdukiewicz's formal approach to syntax was influenced by Edmund Husserl's pure logical grammar, which was formalized by Rudolf Carnap. It represents a development of the historical idea of universal logical grammar as an underlying structure of all languages. A core concept of the approach is the substitutability of syntactic categories—hence the name categorial grammar. The membership of an element (e.g., a word or phrase) in a syntactic category (word class, phrase type) is established by the commutation test, and the formal grammar is constructed through a series of such tests.[1]
The term categorial grammar was coined by Yehoshua Bar-Hillel (in 1953). In 1958, Joachim Lambek introduced a syntactic calculus that formalized the function type constructors along with various rules for the combination of functions. This calculus is a forerunner of linear logic in that it is a substructural logic.
Montague grammar uses an ad hoc syntactic system for English that is based on the principles of categorial grammar. Although Montague's work is sometimes regarded as syntactically uninteresting, it helped to bolster interest in categorial grammar by associating it with a highly successful formal treatment of natural language semantics. More recent work in categorial grammar has focused on the improvement of syntactic coverage. One formalism that has received considerable attention in recent years is Steedman and Szabolcsi's combinatory categorial grammar, which builds on combinatory logic invented by Moses Schönfinkel and Haskell Curry.
There are a number of related formalisms of this kind in linguistics, such as type logical grammar and abstract categorial grammar.
Some definitions
- Derivation
- A derivation is a binary tree that encodes a proof.
- Parse tree
- A parse tree displays a derivation, showing the syntactic structure of a sentence.
- Functor and argument
- In a right (left) function application, where the functor stands to the right (left) of its argument, the node of type A\B (B/A) is called the functor, and the node of type A is called the argument.
- Functor–argument structure
- The tree structure obtained from a derivation by recursively dividing each constituent into a functor and its argument.
Refinements of categorial grammar
A variety of changes to categorial grammar have been proposed to improve syntactic coverage. Some of the most common are listed below.
Features and subcategories
Most systems of categorial grammar subdivide categories. The most common way to do this is by tagging them with features, such as person, gender, number, and tense. Sometimes only atomic categories are tagged in this way. In Montague grammar, it is traditional to subdivide function categories using a multiple-slash convention, so A/B and A//B would be two distinct categories of left-applying functions that take the same arguments but can be distinguished by other functions taking them as arguments.
Function composition
Rules of function composition are included in many categorial grammars. An example of such a rule would be one that allowed the concatenation of a constituent of type A/B with one of type B/C to produce a new constituent of type A/C. The semantics of such a rule would simply involve the composition of the functions involved. Function composition is important in categorial accounts of conjunction and extraction, especially as they relate to phenomena like right node raising. The introduction of function composition into a categorial grammar leads to many kinds of derivational ambiguity that are vacuous in the sense that they do not correspond to semantic ambiguities.
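In the toy reducer sketched earlier, forward composition would be one additional clause; the semantics of the new constituent is literally the composition of the two meanings. A minimal sketch, again with illustrative names:

```python
def compose_step(ts):
    """Forward composition:  A/B, B/C  =>  A/C  (used alongside application)."""
    for i in range(len(ts) - 1):
        f, g = ts[i], ts[i + 1]
        if isinstance(f, RSlash) and isinstance(g, RSlash) and f.arg == g.result:
            yield ts[:i] + [RSlash(f.result, g.arg)] + ts[i + 2:]

# Semantically:  composed_meaning = lambda x: f_meaning(g_meaning(x))
```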
Conjunction
Many categorial grammars include a typical conjunction rule, of the general form X CONJ X → X, where X is a category. Conjunction can generally be applied to nonstandard constituents resulting from type raising or function composition.
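As a sketch only, the schema can be added to the toy reducer with a dedicated marker category for conjunctions (the marker and the rule form here are illustrative simplifications, not a standard treatment):

```python
CONJ = Prim("CONJ")            # illustrative category for "and", "or", ...

def conj_step(ts):
    """Conjunction schema:  X CONJ X  =>  X,  for any category X."""
    for i in range(len(ts) - 2):
        if ts[i + 1] == CONJ and ts[i] == ts[i + 2]:
            yield ts[:i] + [ts[i]] + ts[i + 3:]
```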
Discontinuity
The grammar is extended to handle linguistic phenomena such as discontinuous idioms, gapping and extraction.
See also
- Combinatory categorial grammar
- Link grammar
- Noncommutative logic
- Pregroup grammar
- Scope
- Type shifter
References
- ↑ Wybraniec-Skardowska, Urszula; Rogalski, Andrzej K. (1998). "On universal grammar and its formalization". The Paideia Archive: Twentieth World Congress of Philosophy 8: 153–172. https://www.pdcnet.org/collection/fshow?id=wcp20-paideia_1998_0008_0153_0172&pdfname=wcp20-paideia_1998_0008_0153_0172.pdf&file_type=pdf. Retrieved 2023-09-05.
- Curry, Haskell B.; Feys, Richard (1958), Combinatory Logic, 1, North-Holland
- Jacobson, Pauline (1999), "Towards a variable-free semantics.", Linguistics and Philosophy 22 (2): 117–184, doi:10.1023/A:1005464228727
- Lambek, Joachim (1958), "The mathematics of sentence structure", Amer. Math. Monthly 65 (3): 154–170, doi:10.1080/00029890.1958.11989160
- Pentus, Mati (1997), Lambek Calculus and Formal Grammars, Amer. Math. Soc. Transl., http://158.250.33.126/~pentus/ftp/papers/ams.pdf
- Steedman, Mark (1987), "Combinatory grammars and parasitic gaps", Natural Language and Linguistic Theory 5 (3): 403–439, doi:10.1007/bf00134555
- Steedman, Mark (1996), Surface Structure and Interpretation, The MIT Press
- Steedman, Mark (2000), The Syntactic Process, The MIT Press
- Szabolcsi, Anna (1989). "Bound variables in syntax (are there any?)". in Bartsch; van Benthem. Semantics and Contextual Expression. Foris. pp. 294–318. https://philpapers.org/archive/SZABVI.pdf.
- Szabolcsi, Anna (1992). "Combinatory grammar and projection from the lexicon". Lexical Matters. 24. Stanford: CSLI Publications. pp. 241–269. http://www.u.tsukuba.ac.jp/~kubota.yusuke.fn/lsa/szabolcsi92.pdf.
- Szabolcsi, Anna (2003), "Binding on the Fly: Cross-Sentential Anaphora in Variable-Free Semantics", in Kruijff; Oehrle, Resource-Sensitivity, Binding and Anaphora, Studies in Linguistics and Philosophy, 80, Kluwer, pp. 215–229, doi:10.1007/978-94-010-0037-6_8, ISBN 978-1-4020-1692-9
- Morrill, Glyn (1995), "Discontinuity in categorial grammar", Linguistics and Philosophy 18 (2): 175–219, doi:10.1007/bf00985216
Further reading
- Michael Moortgat, Categorial Type Logics, Chapter 2 in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997, ISBN 0-262-22053-9
- Wojciech Buszkowski, Mathematical linguistics and proof theory, Chapter 12 in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997, ISBN 0-262-22053-9
- Gerhard Jäger (2005). Anaphora and Type Logical Grammar. Springer. ISBN 978-1-4020-3904-1.
- Glyn Morrill (2010). Categorial Grammar: Logical Syntax, Semantics, and Processing. Oxford University Press. ISBN 978-0-19-958986-9.
- Richard Moot; Christian Retoré (2012). The Logic of Categorial Grammars: A Deductive Account of Natural Language Syntax and Semantics. Springer Verlag. ISBN 978-3-642-31554-1.
External links
- Grammar, categorial at Springer Encyclopaedia of Mathematics
- Typelogical Grammar at the Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/entries/typelogical-grammar/
Original source: https://en.wikipedia.org/wiki/Categorial grammar.