# Type theory

Short description: Concept in mathematical logic

In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system.[lower-alpha 1] Type theory is the academic study of type systems.

Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory.

Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions.

## History

Main page: History of type theory

Type theory was created to avoid a paradox that arises in naive set theory and formal logic. Russell's paradox (which Bertrand Russell discovered in the foundational system of Gottlob Frege's Basic Laws of Arithmetic) is that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Russell proposed various solutions to this problem.

By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type,[lower-alpha 2] thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo–Fraenkel set theory.[3]

Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types[4] helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated[lower-alpha 3] that it could serve as a foundation of mathematics and it was referred to as a higher-order logic.

In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Coq, Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory.

## Applications

### Mathematical foundations

The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf developed intuitionistic type theory specifically to encode all of mathematics and thereby serve as a new foundation for it. There is ongoing research into mathematical foundations using homotopy type theory.

Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS).[6] Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy).

### Proof assistants

Main page: Proof assistant

Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages.

Many type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory.

### Programming languages

Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system.

The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them.

### Linguistics

Type theory is also widely used in formal theories of semantics of natural languages,[7][8] especially Montague grammar[9] and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words.

The most common construction takes the basic types $\displaystyle{ e }$ and $\displaystyle{ t }$ for individuals and truth-values, respectively, and defines the set of types recursively as follows:

• if $\displaystyle{ a }$ and $\displaystyle{ b }$ are types, then so is $\displaystyle{ \langle a,b\rangle }$;
• nothing except the basic types, and what can be constructed from them by means of the previous clause, are types.

A complex type $\displaystyle{ \langle a,b\rangle }$ is the type of functions from entities of type $\displaystyle{ a }$ to entities of type $\displaystyle{ b }$. Thus one has types like $\displaystyle{ \langle e,t\rangle }$ which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type $\displaystyle{ \langle\langle e,t\rangle,t\rangle }$ is a function from sets of entities to truth-values, i.e. an (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981).[10]
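The recursive definition of semantic types can be sketched as an inductive datatype. The following is an illustrative Lean sketch (the names `Ty` and `quantifier` are ours, not standard):

```lean
-- Basic semantic types e (individuals) and t (truth-values),
-- closed under the complex type ⟨a, b⟩
inductive Ty where
  | e  : Ty
  | t  : Ty
  | fn : Ty → Ty → Ty   -- ⟨a, b⟩: functions from a-entities to b-entities

-- the generalized-quantifier type ⟨⟨e,t⟩,t⟩
def quantifier : Ty := .fn (.fn .e .t) .t
```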

Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.[11][12]

### Social sciences

Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.

## Type theory as a logic

A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "The proposition $\displaystyle{ \varphi }$ is true", or "The formula $\displaystyle{ \varphi }$ is a well-formed formula".[13] A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as $\displaystyle{ \mathrm{term}:\mathsf{type} }$.

### Terms

A term in logic is recursively defined as a constant symbol, variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number $\displaystyle{ 0 }$, the Boolean value $\displaystyle{ \mathrm{true} }$, and functions such as the successor function $\displaystyle{ \mathrm{S} }$ and conditional operator $\displaystyle{ \mathrm{if} }$. Thus some terms could be $\displaystyle{ 0 }$, $\displaystyle{ (\mathrm{S}\,0) }$, $\displaystyle{ (\mathrm{S}\,(\mathrm{S}\,0)) }$, and $\displaystyle{ (\mathrm{if}\,\mathrm{true}\,0\,(\mathrm{S}\,0)) }$.
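The example terms above can be reproduced directly in a proof assistant such as Lean, where `Nat.zero`, `Nat.succ`, and `if` play the roles of $0$, $\mathrm{S}$, and $\mathrm{if}$ (a sketch, using Lean's built-in names rather than the article's notation):

```lean
-- atomic constants and function applications built from them
#check (0 : Nat)                        -- the constant 0
#check Nat.succ 0                       -- (S 0)
#check Nat.succ (Nat.succ 0)            -- (S (S 0))
#eval if true then 0 else Nat.succ 0    -- (if true 0 (S 0)), evaluates to 0
```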

### Judgments

Most type theories have 4 judgments:

• "$\displaystyle{ T }$ is a type"
• "$\displaystyle{ t }$ is a term of type $\displaystyle{ T }$"
• "Type $\displaystyle{ T_1 }$ is equal to type $\displaystyle{ T_2 }$"
• "Terms $\displaystyle{ t_1 }$ and $\displaystyle{ t_2 }$ both of type $\displaystyle{ T }$ are equal"

Judgments may follow from assumptions. For example, one might say "assuming $\displaystyle{ x }$ is a term of type $\displaystyle{ \mathsf{bool} }$ and $\displaystyle{ y }$ is a term of type $\displaystyle{ \mathsf{nat} }$, it follows that $\displaystyle{ (\mathrm{if}\,x\,y\,y) }$ is a term of type $\displaystyle{ \mathsf{nat} }$". Such judgments are formally written with the turnstile symbol $\displaystyle{ \vdash }$.

$\displaystyle{ x:\mathsf{bool},y:\mathsf{nat}\vdash(\textrm{if}\,x\,y\,y): \mathsf{nat} }$

If there are no assumptions, there will be nothing to the left of the turnstile.

$\displaystyle{ \vdash \mathrm{S}:\mathsf{nat}\to\mathsf{nat} }$
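The two judgments above can be checked mechanically. As a sketch in Lean, assumptions in the context become named hypotheses, and an empty context corresponds to a closed term:

```lean
-- x : bool, y : nat ⊢ (if x y y) : nat
example (x : Bool) (y : Nat) : Nat := if x then y else y

-- ⊢ S : nat → nat, with nothing to the left of the turnstile
#check (Nat.succ : Nat → Nat)
```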

The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as $\displaystyle{ \Gamma }$ and $\displaystyle{ \Delta }$, are common choices to represent some or all of the assumptions. The 4 different judgments are thus usually written as follows.

| Formal notation for judgments | Description |
|---|---|
| $\displaystyle{ \Gamma \vdash T }$ | Type $\displaystyle{ T }$ is a type (under assumptions $\displaystyle{ \Gamma }$). |
| $\displaystyle{ \Gamma \vdash t : T }$ | $\displaystyle{ t }$ is a term of type $\displaystyle{ T }$ (under assumptions $\displaystyle{ \Gamma }$). |
| $\displaystyle{ \Gamma \vdash T_1 = T_2 }$ | Type $\displaystyle{ T_1 }$ is equal to type $\displaystyle{ T_2 }$ (under assumptions $\displaystyle{ \Gamma }$). |
| $\displaystyle{ \Gamma \vdash t_1 = t_2 : T }$ | Terms $\displaystyle{ t_1 }$ and $\displaystyle{ t_2 }$ are both of type $\displaystyle{ T }$ and are equal (under assumptions $\displaystyle{ \Gamma }$). |

Some textbooks use a triple equal sign $\displaystyle{ \equiv }$ to stress that this is judgmental equality and thus an extrinsic notion of equality.[14] The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term.

### Rules of Inference

A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line.[15] For example, the following inference rule states a substitution rule for judgmental equality.

$\displaystyle{ \begin{array}{c} \Gamma\vdash t:T_1 \qquad \Delta\vdash T_1 = T_2 \\ \hline \Gamma,\Delta\vdash t:T_2 \end{array} }$

The rules are syntactic and work by rewriting. The metavariables $\displaystyle{ \Gamma }$, $\displaystyle{ \Delta }$, $\displaystyle{ t }$, $\displaystyle{ T_1 }$, and $\displaystyle{ T_2 }$ may actually consist of complex terms and types that contain many function applications, not just single symbols.

To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term $\displaystyle{ 0 }$ of type $\displaystyle{ \mathsf{nat} }$, one would write the following.

$\displaystyle{ \begin{array}{c} \hline \vdash 0 : \mathsf{nat} \end{array} }$

#### Type inhabitation

Main page: Type inhabitation

Generally, the desired conclusion of a proof in type theory is one of type inhabitation.[16] The decision problem of type inhabitation (abbreviated by $\displaystyle{ \exists t.\Gamma \vdash t : \tau? }$) is:

Given a context $\displaystyle{ \Gamma }$ and a type $\displaystyle{ \tau }$, decide whether there exists a term $\displaystyle{ t }$ that can be assigned the type $\displaystyle{ \tau }$ in the type environment $\displaystyle{ \Gamma }$.

Girard's paradox shows that type inhabitation is strongly related, via the Curry–Howard correspondence, to the consistency of a type system. To be sound as a logic, such a system must have uninhabited types.

A type theory usually has several rules, including ones to:

• create a context (a list of assumptions)
• add an assumption to the context (context weakening)
• rearrange the assumptions
• use an assumption to create a variable
• define reflexivity, symmetry and transitivity for judgmental equality
• define substitution for application of lambda terms
• list all the interactions of equality, such as substitution
• define a hierarchy of type universes
• assert the existence of new types

Also, for each type created "by rule", there are 4 different kinds of rules:

• "type formation" rules say how to create the type
• "term introduction" rules define the canonical terms and constructor functions, like "pair" and "S".
• "term elimination" rules define the other functions like "first", "second", and "R".
• "computation" rules specify how computation is performed with the type-specific functions.

For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book,[14] or read Martin-Löf's Intuitionistic Type Theory.[17]

## Connections to foundations

The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic.[17] Additionally, connections can be made to category theory and computer programs.

### Intuitionistic logic

When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it has neither the law of excluded middle nor double negation elimination.

Under this intuitionistic interpretation, there are common types that act as the logical operators:

| Logic Name | Logic Notation | Type Notation | Type Name |
|---|---|---|---|
| True | $\displaystyle{ \top }$ | $\displaystyle{ \top }$ | Unit Type |
| False | $\displaystyle{ \bot }$ | $\displaystyle{ \bot }$ | Empty Type |
| Implication | $\displaystyle{ A \to B }$ | $\displaystyle{ A \to B }$ | Function |
| Not | $\displaystyle{ \neg A }$ | $\displaystyle{ A \to \bot }$ | Function to Empty Type |
| And | $\displaystyle{ A \land B }$ | $\displaystyle{ A \times B }$ | Product Type |
| Or | $\displaystyle{ A \lor B }$ | $\displaystyle{ A + B }$ | Sum Type |
| For All | $\displaystyle{ \forall a \in A, P(a) }$ | $\displaystyle{ \Pi a:A.P(a) }$ | Dependent Product |
| Exists | $\displaystyle{ \exists a \in A, P(a) }$ | $\displaystyle{ \Sigma a:A.P(a) }$ | Dependent Sum |
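These correspondences can be exercised directly in a proof assistant. A minimal sketch in Lean, whose `∧`, `∨`, and `False` play the roles of product, sum, and empty type:

```lean
-- And as a product type: the projections extract each conjunct
example (A B : Prop) (h : A ∧ B) : B ∧ A := ⟨h.right, h.left⟩

-- Not as a function into the empty type
example (A : Prop) (a : A) (n : A → False) : False := n a

-- Or as a sum type: the eliminator performs case analysis
example (A B : Prop) (h : A ∨ B) : B ∨ A := h.elim Or.inr Or.inl
```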

Because the law of excluded middle does not hold, there is no term of type $\displaystyle{ \Pi A.A+ (A\to\bot) }$. Likewise, double negation does not hold, so there is no term of type $\displaystyle{ \Pi A.((A\to\bot)\to\bot)\to A }$.

It is possible to include the law of excluded middle and double negation in a type theory, by rule or assumption. However, terms may then fail to compute down to canonical terms, which interferes with the ability to determine whether two terms are judgmentally equal.[citation needed]

#### Constructive mathematics

Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics.[13] Constructive mathematics requires that, when proving "there exists an $\displaystyle{ x }$ with property $\displaystyle{ P(x) }$", one must construct a particular $\displaystyle{ x }$ and a proof that it has property $\displaystyle{ P }$. In type theory, existence is expressed using the dependent sum type, and its proof requires a term of that type.

An example of a non-constructive proof is proof by contradiction. The first step is assuming that $\displaystyle{ x }$ does not exist and deriving a contradiction. The conclusion from that step is "it is not the case that $\displaystyle{ x }$ does not exist". The last step is, by double negation elimination, concluding that $\displaystyle{ x }$ exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that $\displaystyle{ x }$ exists.[18]
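The asymmetry is visible in a proof assistant: one direction of double negation is constructive, while the other requires a classical axiom. A sketch in Lean:

```lean
-- A → ¬¬A is provable constructively: a proof of A refutes any refutation
example (A : Prop) (a : A) : ¬¬A := fun n => n a

-- the converse, ¬¬A → A, is not derivable without a classical axiom;
-- in Lean it requires Classical.byContradiction
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h
```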

Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants.[citation needed] It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity.

### Curry–Howard correspondence

The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A $\displaystyle{ \to }$ B", resembles a function from type "A" to type "B". For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes further, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs".
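"Proofs as programs" can be made concrete: a proof of a chained implication is literally a program that composes functions. A sketch in Lean (the name `syllogism` is ours):

```lean
-- hypothetical syllogism (from A → B and B → C, conclude A → C)
-- is just function composition
def syllogism (A B C : Prop) (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)
```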

The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information.[19]

#### Type inference

Main page: Type inference

Many programs that work with type theory (e.g., interactive theorem provers) also perform type inference. It lets them select the rules the user intends, with fewer actions by the user.

### Research areas

#### Category theory

Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way:[20]

• cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970);
• C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980);
• locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984).

The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance.

#### Homotopy type theory

Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly in its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization.[21][22]

## Definitions

### Terms and types

#### Atomic terms

The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers, often notated with the type $\displaystyle{ \mathsf{nat} }$, Boolean logic values ($\displaystyle{ \mathrm{true} }$/$\displaystyle{ \mathrm{false} }$), notated with the type $\displaystyle{ \mathsf{bool} }$, and formal variables, whose type may vary.[16] For example, the following may be atomic terms.

• $\displaystyle{ 42:\mathsf{nat} }$
• $\displaystyle{ \mathrm{true}:\mathsf{bool} }$
• $\displaystyle{ x:\mathsf{nat} }$
• $\displaystyle{ y:\mathsf{bool} }$

#### Function terms

In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol, and are defined inductively: If $\displaystyle{ \sigma }$ and $\displaystyle{ \tau }$ are types, then the notation $\displaystyle{ \sigma\to\tau }$ is the type of a function which takes a parameter of type $\displaystyle{ \sigma }$ and returns a term of type $\displaystyle{ \tau }$. Types of this form are known as simple types.[16]

Some terms may be declared directly as having a simple type, such as the following term, $\displaystyle{ \mathrm{add} }$, which takes in two natural numbers in sequence and returns one natural number.

$\displaystyle{ \mathrm{add}:\mathsf{nat}\to (\mathsf{nat}\to\mathsf{nat}) }$

Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that $\displaystyle{ \mathrm{add} }$ is a function which takes in a natural number and returns a function of the form $\displaystyle{ \mathsf{nat}\to\mathsf{nat} }$. The parentheses clarify that $\displaystyle{ \mathrm{add} }$ does not have the type $\displaystyle{ (\mathsf{nat}\to \mathsf{nat})\to\mathsf{nat} }$, which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from $\displaystyle{ \mathrm{add} }$'s type.[16]
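The curried reading of $\mathrm{add}$'s type can be observed directly. A sketch in Lean, where `#check` reports the type of a partial application:

```lean
-- add : nat → (nat → nat), written with the parentheses explicit
def add : Nat → (Nat → Nat) := fun m n => m + n

#check add 1            -- add 1 : Nat → Nat  (a function awaiting one more input)
#eval add 1 (add 2 0)   -- 3
```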

#### Lambda terms

New function terms may be constructed using lambda expressions, and are called lambda terms. These terms are also defined inductively: a lambda term has the form $\displaystyle{ (\lambda v .t) }$, where $\displaystyle{ v }$ is a formal variable and $\displaystyle{ t }$ is a term, and its type is notated $\displaystyle{ \sigma\to\tau }$, where $\displaystyle{ \sigma }$ is the type of $\displaystyle{ v }$, and $\displaystyle{ \tau }$ is the type of $\displaystyle{ t }$.[16] The following lambda term represents a function which doubles an input natural number.

$\displaystyle{ (\lambda x.\mathrm{add}\,x\,x): \mathsf{nat}\to\mathsf{nat} }$

The variable is $\displaystyle{ x }$ and (implicit from the lambda term's type) must have type $\displaystyle{ \mathsf{nat} }$. The term $\displaystyle{ \mathrm{add}\,x\,x }$ has type $\displaystyle{ \mathsf{nat} }$, which is seen by applying the function application inference rule twice. Thus, the lambda term has type $\displaystyle{ \mathsf{nat}\to\mathsf{nat} }$, which means it is a function taking a natural number as an argument and returning a natural number.

A lambda term is often referred to[lower-alpha 4] as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages.

### Inference Rules

#### Function application

The power of type theories is in specifying how terms may be combined by way of inference rules.[4] Type theories which have functions also have the inference rule of function application: if $\displaystyle{ t }$ is a term of type $\displaystyle{ \sigma\to\tau }$, and $\displaystyle{ s }$ is a term of type $\displaystyle{ \sigma }$, then the application of $\displaystyle{ t }$ to $\displaystyle{ s }$, often written $\displaystyle{ (t\,s) }$, has type $\displaystyle{ \tau }$. For example, if one knows the type notations $\displaystyle{ 0:\textsf{nat} }$, $\displaystyle{ 1:\textsf{nat} }$, and $\displaystyle{ 2:\textsf{nat} }$, then the following type notations can be deduced from function application.[16]

• $\displaystyle{ (\mathrm{add}\,1): \textsf{nat}\to\textsf{nat} }$
• $\displaystyle{ ((\mathrm{add}\,2)\,0): \textsf{nat} }$
• $\displaystyle{ ((\mathrm{add}\,1)((\mathrm{add}\,2)\,0)): \textsf{nat} }$

Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate.[16] In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to $\displaystyle{ \mathrm{add}\,1\, (\mathrm{add}\,2\,0): \textsf{nat} }$.

#### Reductions

Type theories that allow for lambda terms also include inference rules known as $\displaystyle{ \beta }$-reduction and $\displaystyle{ \eta }$-reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written

• $\displaystyle{ (\lambda v. t)\,s\rightarrow t[v := s] }$ ($\displaystyle{ \beta }$-reduction).
• $\displaystyle{ (\lambda v. t\, v)\rightarrow t }$, if $\displaystyle{ v }$ is not a free variable in $\displaystyle{ t }$ ($\displaystyle{ \eta }$-reduction).

The first reduction describes how to evaluate a lambda term: if a lambda expression $\displaystyle{ (\lambda v .t) }$ is applied to a term $\displaystyle{ s }$, one replaces every occurrence of $\displaystyle{ v }$ in $\displaystyle{ t }$ with $\displaystyle{ s }$. The second reduction makes explicit the relationship between lambda expressions and function types: if $\displaystyle{ (\lambda v. t\, v) }$ is a lambda term, then it must be that $\displaystyle{ t }$ is a function term because it is being applied to $\displaystyle{ v }$. Therefore, the lambda expression is equivalent to just $\displaystyle{ t }$, as both take in one argument and apply $\displaystyle{ t }$ to it.[4]

For example, the following term may be $\displaystyle{ \beta }$-reduced.

$\displaystyle{ (\lambda x.\mathrm{add}\,x\,x)\,2\rightarrow \mathrm{add}\,2\,2 }$

In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of $\displaystyle{ \beta }$-equality and $\displaystyle{ \eta }$-equality.[16]
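Both reductions can be observed in a proof assistant. A sketch in Lean, where evaluation performs $\beta$-reduction and $\eta$-equality holds judgmentally (so `rfl` proves it):

```lean
-- β: (λx. add x x) 2 reduces to add 2 2, which evaluates to 4
#eval (fun x => Nat.add x x) 2   -- 4

-- η: a function and its η-expansion are definitionally equal
example : (fun x => Nat.succ x) = Nat.succ := rfl
```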

### Common terms and types

#### Empty type

The empty type has no terms. The type is usually written $\displaystyle{ \bot }$ or $\displaystyle{ \mathbb 0 }$. One use for the empty type is in proofs of type inhabitation. If, for a type $\displaystyle{ a }$, a function of type $\displaystyle{ a\to\bot }$ can be derived, then $\displaystyle{ a }$ is uninhabited, which is to say it has no terms.

#### Unit type

The unit type has exactly 1 canonical term. The type is written $\displaystyle{ \top }$ or $\displaystyle{ \mathbb 1 }$ and the single canonical term is written $\displaystyle{ \ast }$. The unit type is also used in proofs of type inhabitation. If, for a type $\displaystyle{ a }$, a function of type $\displaystyle{ \top\to a }$ can be derived, then $\displaystyle{ a }$ is inhabited, which is to say it must have one or more terms.

#### Boolean type

The Boolean type has exactly 2 canonical terms. The type is usually written $\displaystyle{ \textsf{bool} }$ or $\displaystyle{ \mathbb B }$ or $\displaystyle{ \mathbb 2 }$. The canonical terms are usually $\displaystyle{ \mathrm{true} }$ and $\displaystyle{ \mathrm{false} }$.

#### Natural numbers

Natural numbers are usually implemented in the style of Peano arithmetic. There is a canonical term $\displaystyle{ 0:\mathsf{nat} }$ for zero. Canonical terms larger than zero use iterated applications of a successor function $\displaystyle{ \mathrm{S}:\mathsf{nat}\to\mathsf{nat} }$.
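The Peano-style construction is a two-constructor inductive type. A sketch in Lean (`MyNat` and `two` are illustrative names; Lean's built-in `Nat` is defined the same way):

```lean
-- zero and successor as the only ways to build a natural number
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- the number 2 as S (S 0)
def two : MyNat := .succ (.succ .zero)
```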

### Dependent typing

Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of their arguments. For example, a type theory could have the dependent type $\displaystyle{ \mathsf{list}\,a }$, which should correspond to lists of terms, where each term must have type $\displaystyle{ a }$. In this case, $\displaystyle{ \mathsf{list} }$ has the type $\displaystyle{ U\to U }$, where $\displaystyle{ U }$ denotes the universe of all types in the theory.

Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type $\displaystyle{ \mathsf{vector}\,n }$, where $\displaystyle{ n }$ is a term of type $\displaystyle{ \mathsf{nat} }$ encoding the length of the vector. This allows for greater specificity and type safety: functions with vector length restrictions or length matching requirements, such as the dot product, can encode this requirement as part of the type.[24]
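A length-indexed vector is the standard example of a type depending on a term. A sketch in Lean (`Vec` and `v` are illustrative names, not Lean's built-ins):

```lean
-- the Nat index is part of the type: cons adds one to the length
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- the length 2 is checked statically by the type checker
def v : Vec Nat 2 := .cons 1 (.cons 2 .nil)
```

A dot product over `Vec Nat n` can then require both arguments to carry the same `n`, making a length mismatch a type error rather than a runtime failure.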

There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's paradox. The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing.[25]

#### Product type

The product type depends on two types, and its terms are commonly written as ordered pairs $\displaystyle{ (s,t) }$ or with the symbol $\displaystyle{ \times }$. The pair $\displaystyle{ (s,t) }$ has the product type $\displaystyle{ \sigma\times\tau }$, where $\displaystyle{ \sigma }$ is the type of $\displaystyle{ s }$ and $\displaystyle{ \tau }$ is the type of $\displaystyle{ t }$. The product type is usually defined with eliminator functions $\displaystyle{ \mathrm{first}:(\Pi\,\sigma\,\tau.\sigma\times\tau\to\sigma) }$ and $\displaystyle{ \mathrm{second}:(\Pi\,\sigma\,\tau.\sigma\times\tau\to\tau) }$.

• $\displaystyle{ \mathrm{first}\,(s,t) }$ returns $\displaystyle{ s }$, and
• $\displaystyle{ \mathrm{second}\,(s,t) }$ returns $\displaystyle{ t }$.

Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection.
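Product types and their eliminators appear directly in programming languages. A sketch in Lean, whose `×`, `Prod.fst`, and `Prod.snd` play the roles of $\times$, $\mathrm{first}$, and $\mathrm{second}$:

```lean
-- (s, t) : σ × τ, here with σ = Nat and τ = Bool
def p : Nat × Bool := (7, true)

#eval p.fst   -- 7
#eval p.snd   -- true
```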

#### Sum type

The sum type depends on two types, and it is commonly written with the symbol $\displaystyle{ + }$ or $\displaystyle{ \sqcup }$. In programming languages, sum types may be referred to as tagged unions. The type $\displaystyle{ \sigma\sqcup\tau }$ is usually defined with constructors $\displaystyle{ \mathrm{left}:\sigma\to(\sigma\sqcup\tau) }$ and $\displaystyle{ \mathrm{right}:\tau\to(\sigma\sqcup\tau) }$, which are injective, and an eliminator function $\displaystyle{ \mathrm{match}:(\Pi\,\rho.(\sigma\to\rho)\to(\tau\to\rho)\to(\sigma\sqcup\tau)\to\rho) }$ such that

• $\displaystyle{ \mathrm{match}\,f\,g\,(\mathrm{left}\,x) }$ returns $\displaystyle{ f\,x }$, and
• $\displaystyle{ \mathrm{match}\,f\,g\,(\mathrm{right}\,y) }$ returns $\displaystyle{ g\,y }$.

The sum type is used for the concepts of logical disjunction and union.
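Sum types likewise appear directly in programming languages as tagged unions. A sketch in Lean, where `Sum.inl`/`Sum.inr` play the roles of $\mathrm{left}$/$\mathrm{right}$ and pattern matching plays the role of $\mathrm{match}$ (the name `toNat` is ours):

```lean
-- case analysis on which injection built the term
def toNat : Sum Nat Bool → Nat
  | .inl n => n
  | .inr b => if b then 1 else 0

#eval toNat (.inl 5)      -- 5
#eval toNat (.inr true)   -- 1
```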

#### Dependent products and sums

Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by the Curry–Howard correspondence.[24] As they also connect to products and sums in set theory, they are often written with the symbols $\displaystyle{ \Pi }$ and $\displaystyle{ \Sigma }$, respectively.[17] Dependent product and sum types commonly appear in function types and are frequently incorporated in programming languages.[26]

For example, consider a function $\displaystyle{ \mathrm{append} }$, which takes in a $\displaystyle{ \mathsf{list}\,a }$ and a term of type $\displaystyle{ a }$, and returns the list with the element at the end. The type annotation of such a function would be $\displaystyle{ \mathrm{append}:(\Pi\,a.\mathsf{list}\,a\to a\to\mathsf{list}\,a) }$, which can be read as "for any type $\displaystyle{ a }$, pass in a $\displaystyle{ \mathsf{list}\,a }$ and an $\displaystyle{ a }$, and return a $\displaystyle{ \mathsf{list}\,a }$".
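The $\Pi$-quantified `append` corresponds to a polymorphic function. A sketch in Lean, with the type argument passed explicitly (the name `append` is from the text above; `List` and `++` are Lean's):

```lean
-- Π a . list a → a → list a: the first argument is the type itself
def append (α : Type) (xs : List α) (x : α) : List α := xs ++ [x]

#eval append Nat [1, 2] 3   -- [1, 2, 3]
```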

Sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function $\displaystyle{ \mathrm{if} }$, which takes three arguments and behaves as follows.

• $\displaystyle{ \mathrm{if}\,\mathrm{true}\,x\,y }$ returns $\displaystyle{ x }$, and
• $\displaystyle{ \mathrm{if}\,\mathrm{false}\,x\,y }$ returns $\displaystyle{ y }$.

The return type of this function depends on its $\displaystyle{ \mathsf{bool} }$ input. If the type theory allows for dependent types, then it is possible to define a function $\displaystyle{ \mathrm{TF}\colon\mathsf{bool}\to U\to U\to U }$ such that

• $\displaystyle{ \mathrm{TF}\,\mathrm{true}\,\sigma\,\tau }$ returns $\displaystyle{ \sigma }$, and
• $\displaystyle{ \mathrm{TF}\,\mathrm{false}\,\sigma\,\tau }$ returns $\displaystyle{ \tau }$.

The type of $\displaystyle{ \mathrm{if} }$ may then be written as $\displaystyle{ (\Pi\,\sigma\,\tau.\mathsf{bool}\to\sigma\to\tau\to(\Sigma\,x:\textsf{bool}.\mathrm{TF}\,x\,\sigma\,\tau)) }$.
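The function $\mathrm{TF}$, which computes a type from a Boolean, can be written down in a theory with universes. A sketch in Lean, where `Type` plays the role of the universe $U$:

```lean
-- a function from a term (a Bool) to a type
def TF : Bool → Type → Type → Type
  | true,  σ, _ => σ
  | false, _, τ => τ

-- TF true Nat Bool reduces to Nat, so a Nat inhabits it
example : TF true Nat Bool := (5 : Nat)
```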

#### Identity type

Following the Curry–Howard correspondence, the identity type is a type introduced to mirror propositional equality, as opposed to the judgmental (syntactic) equality that type theory already provides.

An identity type requires two terms of the same type and is written with the symbol $\displaystyle{ = }$. For example, if $\displaystyle{ x+1 }$ and $\displaystyle{ 1+x }$ are terms, then $\displaystyle{ x+1=1+x }$ is a possible type. Canonical terms are created with a reflexivity function, $\displaystyle{ \mathrm{refl} }$. For a term $\displaystyle{ t }$, the call $\displaystyle{ \mathrm{refl}\,t }$ returns the canonical term inhabiting the type $\displaystyle{ t=t }$.
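As a small illustration, in Lean (one proof assistant built on such a type theory) the reflexivity term is spelled `rfl`:

```lean
-- `rfl` is the canonical inhabitant of the identity type `t = t`.
example (t : Nat) : t = t := rfl

-- Judgmental computation identifies `1 + 1` with `2`,
-- so `rfl` also proves this propositional equality.
example : 1 + 1 = 2 := rfl
```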

The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory.

#### Inductive types

Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding.
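To give a flavor of Scott encoding, here is a sketch in Python lambdas: a Scott-encoded value is a function that, given one handler per constructor, invokes the handler for the constructor that built it (the names `zero`, `succ`, and `to_int` are illustrative):

```python
# Scott-encoded natural numbers: a number dispatches on its own constructor.
zero = lambda on_zero, on_succ: on_zero
succ = lambda n: (lambda on_zero, on_succ: on_succ(n))

def to_int(n) -> int:
    # peel off one successor at a time until we reach zero
    return n(0, lambda pred: 1 + to_int(pred))
```

Here `to_int(succ(succ(zero)))` evaluates to `2`, showing how a data type can be represented using only functions.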

Some proof assistants, such as Coq and Lean, are based on the calculus of inductive constructions, which is a calculus of constructions with inductive types.

## Differences from set theory

The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories with sufficient expressive power may also act as a foundation of mathematics. There are a number of differences between these two approaches.

• Set theory has both rules and axioms, while type theories in general have only rules of inference and no axioms.[14]
• Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, and does not necessarily have the law of excluded middle.[17]
• In set theory, an element is not restricted to one set: it can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function, or use a dependent pair type, where each element $\displaystyle{ x }$ is paired with a proof that the subset's property holds for $\displaystyle{ x }$. Where a union would be used, type theory uses the sum type, whose canonical terms are new terms built by injection from either side.
• Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms. In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does, however, require a more complicated notion of equality.
• Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms.
• In type theory, proofs are types whereas in set theory, proofs are part of the underlying first-order logic.[14]
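The inductive encoding of numbers mentioned above can be sketched in Python, with one class per constructor (the names `Nat`, `Zero`, `Succ`, and `add` are illustrative, mirroring Peano's axioms):

```python
from dataclasses import dataclass

class Nat:
    """An inductive type with two constructors: Zero and Succ."""

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat

def add(m: Nat, n: Nat) -> Nat:
    # defined by structural recursion on the first argument,
    # as in Peano arithmetic: 0 + n = n, (m + 1) + n = (m + n) + 1
    return n if isinstance(m, Zero) else Succ(add(m.pred, n))
```

With `one = Succ(Zero())`, the expressions `add(one, one)` and `Succ(Succ(Zero()))` are distinct terms that compute to equal values, echoing the "1+1" versus "2" distinction above.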

Proponents of type theory also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory.

### Properties of type theories

Terms usually belong to a single type. However, there are type theories that define "subtyping".

Computation takes place by repeated application of rules. Many type theories are strongly normalizing, meaning that every sequence of rule applications terminates; when the theory is also confluent, every order of applying the rules ends in the same result. However, some type theories are not normalizing. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying a rule "reduces" the term. If a rule is not one-directional, it is called a "conversion rule".

Some combinations of types are equivalent to other combinations of types. When function types are viewed as exponentiation, the combinations of types can be written similarly to algebraic identities.[26] Thus, $\displaystyle{ {\mathbb 0} + A \cong A }$, $\displaystyle{ {\mathbb 1} \times A \cong A }$, $\displaystyle{ {\mathbb 1} + {\mathbb 1} \cong {\mathbb 2} }$, $\displaystyle{ A^{B+C} \cong A^B \times A^C }$, $\displaystyle{ A^{B\times C} \cong (A^B)^C }$.
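For finite types these isomorphisms can be checked by counting inhabitants, since a sum type has $|B|+|C|$ terms, a product $|B|\times|C|$, and a function type $B\to A$ has $|A|^{|B|}$ terms. A quick numeric sanity check (the cardinalities 3, 2, 4 are arbitrary):

```python
# |A| = a, |B| = b, |C| = c; each type isomorphism becomes an
# arithmetic identity on the number of inhabitants.
a, b, c = 3, 2, 4

assert a ** (b + c) == a**b * a**c    # A^(B+C) ≅ A^B × A^C
assert a ** (b * c) == (a ** b) ** c  # A^(B×C) ≅ (A^B)^C
assert 0 + a == a                     # 𝟘 + A ≅ A
assert 1 * a == a                     # 𝟙 × A ≅ A
```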

### Axioms

Most type theories do not have axioms, because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets.

Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. They are often added to ensure properties that cannot be added cleanly through the rules.

Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory.[27]

Some commonly encountered axioms are:

• "Axiom K" ensures "uniqueness of identity proofs". That is, every term of an identity type is equal to reflexivity.[28]
• "Univalence Axiom" holds that equivalence of types is equality of types. Research into this axiom led to cubical type theory, in which the property holds without needing to be assumed as an axiom.[22]
• "Law of Excluded Middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic.

The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is due to the constructive nature of type theory: proving that a value exists requires a method to compute it. The Axiom of Choice is less powerful in type theory than in most set theories, because type theory's functions must be computable and, since terms are syntactic objects, each type has at most countably many terms. (See Axiom of choice § In constructive mathematics.)

## List of type theories

### Active research

• Homotopy type theory explores equality of types
• Cubical Type Theory is an implementation of homotopy type theory

## Further reading

• Aarts, C.; Backhouse, R.; Hoogendijk, P.; Voermans, E.; van der Woude, J. (December 1992). "A Relational Theory of Datatypes". Technische Universiteit Eindhoven.
• Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.). Kluwer. ISBN 978-1-4020-0763-7.
• Jacobs, Bart (1999). Categorical Logic and Type Theory. Studies in Logic and the Foundations of Mathematics. 141. Elsevier. ISBN 978-0-444-50170-7. Retrieved 2020-07-19.  Covers type theory in depth, including polymorphic and dependent type extensions. Gives categorical semantics.
• Cardelli, Luca (1996). "Type Systems". in Tucker, Allen B.. The Computer Science and Engineering Handbook. CRC Press. pp. 2208–36. ISBN 9780849329098. Retrieved 2004-06-26.
• Collins, Jordan E. (2012). A History of the Theory of Types: Developments After the Second Edition of 'Principia Mathematica'. Lambert Academic Publishing. ISBN 978-3-8473-2963-3.  Provides a historical survey of the developments of the theory of types with a focus on the decline of the theory as a foundation of mathematics over the four decades following the publication of the second edition of 'Principia Mathematica'.
• Constable, Robert L. (2012). "Naïve Computational Type Theory". in Schwichtenberg, H.. Proof and System-Reliability. Nato Science Series II. 62. Springer. pp. 213–259. ISBN 9789401004138.  Intended as a type theory counterpart of Paul Halmos's (1960) Naïve Set Theory
• Coquand, Thierry (2018). "Type Theory". Stanford Encyclopedia of Philosophy.
• Thompson, Simon (1991). Type Theory and Functional Programming. Addison–Wesley. ISBN 0-201-41667-0. Retrieved 2006-04-03.
• Hindley, J. Roger (2008). Basic Simple Type Theory. Cambridge University Press. ISBN 978-0-521-05422-5.  A good introduction to simple type theory for computer scientists; the system described is not exactly Church's STT though. Book review
• Kamareddine, Fairouz D.; Laan, Twan; Nederpelt, Rob P. (2004). A modern perspective on type theory: from its origins until today. Springer. ISBN 1-4020-2334-0.
• Ferreirós, José; Domínguez, José Ferreirós (2007). "X. Logic and Type Theory in the Interwar Period". Labyrinth of thought: a history of set theory and its role in modern mathematics (2nd ed.). Springer. ISBN 978-3-7643-8349-7.
• Laan, T.D.L. (1997). The evolution of type theory in logic and mathematics (PDF) (PhD). Eindhoven University of Technology. doi:10.6100/IR498552. ISBN 90-386-0531-5. Archived (PDF) from the original on 2022-10-09.
• Montague, R. (1973) "The proper treatment of quantification in ordinary English". In K. J. J. Hintikka, J. M. E. Moravcsik, and P. Suppes (eds.), Approaches to Natural Language (Synthese Library, 49), Dordrecht: Reidel, 221–242; reprinted in Portner and Partee (eds.) 2002, pp. 17–35. See: Montague Semantics, Stanford Encyclopedia of Philosophy.

## Notes

1. In Julia's type system, for example, abstract types have no instances, but can have subtypes,[1]:110 whereas concrete types do not have subtypes but can have instances, for "documentation, optimization, and dispatch".[2]
2. Church demonstrated his logistic method with his simple theory of types,[4] and explained his method in 1956,[5] pages 47-68.
3. In Julia, for example, a function with no name, but with two parameters in some tuple (x,y) can be denoted by say, (x,y) -> x^5+y, as an anonymous function.[23]

## References

1. Balbaert, Ivo (2015) Getting Started With Julia Programming ISBN 978-1-78328-479-5
2. docs.julialang.org v.1 Types
3. Stanford Encyclopedia of Philosophy (rev. Mon Oct 12, 2020) Russell’s Paradox 3. Early Responses to the Paradox
4. Church, Alonzo (1940). "A formulation of the simple theory of types". The Journal of Symbolic Logic 5 (2): 56–68. doi:10.2307/2266170.
5. Alonzo Church (1956) Introduction To Mathematical Logic Vol 1
6. ETCS in nLab
7. Chatzikyriakidis, Stergios; Luo, Zhaohui (2017-02-07) (in en). Modern Perspectives in Type-Theoretical Semantics. Springer. ISBN 978-3-319-50422-3. Retrieved 2022-07-29.
8. Winter, Yoad (2016-04-08) (in en). Elements of Formal Semantics: An Introduction to the Mathematical Theory of Meaning in Natural Language. Edinburgh University Press. ISBN 978-0-7486-7777-1. Retrieved 2022-07-29.
9. Cooper, Robin. "Type theory and semantics in flux ." Handbook of the Philosophy of Science 14 (2012): 271-323.
10. Barwise, Jon; Cooper, Robin (1981) Generalized quantifiers and natural language Linguistics and Philosophy 4 (2):159--219 (1981)
11. Cooper, Robin (2005). "Records and Record Types in Semantic Theory". Journal of Logic and Computation 15 (2): 99–112. doi:10.1093/logcom/exi004.
12. Cooper, Robin (2010). Type theory and semantics in flux. Handbook of the Philosophy of Science. Volume 14: Philosophy of Linguistics. Elsevier.
13. Martin-Löf, Per (1987-12-01). "Truth of a proposition, evidence of a judgement, validity of a proof" (in en). Synthese 73 (3): 407–420. doi:10.1007/BF00484985. ISSN 1573-0964.
14. The Univalent Foundations Program (2013). Homotopy Type Theory: Univalent Foundations of Mathematics. Homotopy Type Theory.
15. Henk Barendregt; Wil Dekkers; Richard Statman (20 June 2013). Lambda Calculus with Types. Cambridge University Press. pp. 1–66. ISBN 978-0-521-76614-2.
16. Heineman, George T.; Bessai, Jan; Düdder, Boris; Rehof, Jakob (2016). "A long and winding road towards modular synthesis". ISoLA 2016. 9952. Springer. pp. 303–317. doi:10.1007/978-3-319-47166-2_21. ISBN 978-3-319-47165-5.
17. Bell, John L. (2012). "Types, Sets and Categories". in Kanamory, Akihiro. Sets and Extensions in the Twentieth Century. Handbook of the History of Logic. 6. Elsevier. ISBN 978-0-08-093066-4. Retrieved 2012-11-03.
18. Sterling, Jonathan; Angiuli, Carlo (2021-06-29). "Normalization for Cubical Type Theory". 2021 36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). Rome, Italy: IEEE. pp. 1–15. doi:10.1109/LICS52264.2021.9470719. ISBN 978-1-6654-4895-6. Retrieved 2022-06-21.
19. Cohen, Cyril; Coquand, Thierry; Huber, Simon; Mörtberg, Anders (2016). "Cubical Type Theory: A constructive interpretation of the univalence axiom". 21st International Conference on Types for Proofs and Programs (TYPES 2015). doi:10.4230/LIPIcs.CVIT.2016.23.
20. Balbaert, Ivo (2015) Getting Started with Julia
21. Bove, Ana; Dybjer, Peter (2009), Bove, Ana; Barbosa, Luís Soares; Pardo, Alberto et al., eds., "Dependent Types at Work" (in en), Language Engineering and Rigorous Software Development: International LerNet ALFA Summer School 2008, Piriapolis, Uruguay, February 24 - March 1, 2008, Revised Tutorial Lectures, Lecture Notes in Computer Science (Berlin, Heidelberg: Springer): pp. 57–99, doi:10.1007/978-3-642-03153-3_2, ISBN 978-3-642-03153-3, retrieved 2024-01-18
22. Barendregt, Henk (April 1991). "Introduction to generalized type systems". Journal of Functional Programming 1 (2): 125–154. doi:10.1017/S0956796800020025.