Intuitionistic type theory

Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory) is a type theory and an alternative foundation of mathematics. Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory, and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.

Design

Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplishes this design goal by internalizing the BHK interpretation. An interesting consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.

Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication ([math]\displaystyle{ A \implies B }[/math]) corresponds to the type of a function ([math]\displaystyle{ A \to B }[/math]). This correspondence is called the Curry–Howard isomorphism. Previous type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
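
As an illustration (not part of Martin-Löf's own notation), the correspondence can be seen directly in any proof assistant based on dependent type theory. The following Lean 4 sketch reads a proof of an implication as a function on proofs:

```lean
-- Curry–Howard in practice: the implication A → B is the type of functions
-- sending proofs of A to proofs of B.
variable (A B C : Prop)

-- Modus ponens is just function application.
example (f : A → B) (a : A) : B := f a

-- Chaining two implications is composing two functions.
example (f : A → B) (g : B → C) : A → C := fun a => g (f a)
```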

Type theory

Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.

For readers unfamiliar with type theory but familiar with set theory, a quick summary: types contain terms just as sets contain elements; a term belongs to one and only one type; and terms like [math]\displaystyle{ 2+2 }[/math] and [math]\displaystyle{ 2\cdot 2 }[/math] compute ("reduce") down to canonical terms like 4. For more, see the article on type theory.

0 type, 1 type and 2 type

There are three finite types: the 0 type contains 0 terms, the 1 type contains 1 canonical term, and the 2 type contains 2 canonical terms.

Because the 0 type contains 0 terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written [math]\displaystyle{ \bot }[/math] and represents anything unprovable. (That is, a proof of it cannot exist.) As a result, negation is defined as a function to it: [math]\displaystyle{ \neg A := A \to \bot }[/math].

Likewise, the 1 type contains 1 canonical term and represents existence. It also is called the unit type. It often represents propositions that can be proven and is, therefore, sometimes written [math]\displaystyle{ \top }[/math].[citation needed]

Finally, the 2 type contains 2 canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions.

Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions, i.e. the law of excluded middle does not hold for propositions in intuitionistic type theory.
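
The following Lean 4 sketch illustrates the three finite types, using Lean's built-in Empty, Unit and Bool as stand-ins for the 0, 1 and 2 types:

```lean
-- Negation as a function into the empty type, as described above.
def Neg' (A : Type) : Type := A → Empty

-- The 1 type (Unit) has exactly one canonical term.
example : Unit := ()

-- The 2 type (Bool) has two canonical terms and is used for Boolean values.
example : Bool := true

-- From a term of the empty type, anything follows.
def fromEmpty {A : Type} (e : Empty) : A := nomatch e
```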

Σ type constructor

Σ-types contain ordered pairs. As with typical ordered pair (or 2-tuple) types, a Σ-type can describe the Cartesian product, [math]\displaystyle{ A \times B }[/math], of two other types, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math]. Logically, such an ordered pair would hold a proof of [math]\displaystyle{ A }[/math] and a proof of [math]\displaystyle{ B }[/math], so one may see such a type written as [math]\displaystyle{ A \wedge B }[/math].

Σ-types are more powerful than typical ordered pair types because of dependent typing. In the ordered pair, the type of the second term can depend on the value of the first term. For example, the first term of the pair might be a natural number and the second term's type might be a sequence of reals of length equal to the first term. Such a type would be written:

[math]\displaystyle{ \sum_{n \mathbin{:} {\mathbb N}} \operatorname{Vec}({\mathbb R}, n) }[/math]

Using set-theory terminology, this is similar to an indexed disjoint union of sets. In the case of usual ordered pairs, the type of the second term does not depend on the value of the first term. Thus the type describing the cartesian product [math]\displaystyle{ {\mathbb N} \times {\mathbb R} }[/math] is written:

[math]\displaystyle{ \sum_{n \mathbin{:} {\mathbb N}} {\mathbb R} }[/math]

It is important to note here that the type of the second term, [math]\displaystyle{ {\mathbb R} }[/math], does not depend on the value of the first term, [math]\displaystyle{ n }[/math].

Σ-types can be used to build up longer dependently-typed tuples used in mathematics and the records or structs used in most programming languages. An example of a dependently-typed 3-tuple is two integers and a proof that the first integer is smaller than the second integer, described by the type:

[math]\displaystyle{ \sum_{m \mathbin{:} {\mathbb Z}} {\sum_{n \mathbin{:} {\mathbb Z}} ((m \lt n) = \text{True})} }[/math]

Dependent typing allows Σ-types to serve the role of existential quantifier. The statement "there exists an [math]\displaystyle{ n }[/math] of type [math]\displaystyle{ {\mathbb N} }[/math], such that [math]\displaystyle{ P(n) }[/math] is proven" becomes the type of ordered pairs where the first item is the value [math]\displaystyle{ n }[/math] of type [math]\displaystyle{ {\mathbb N} }[/math] and the second item is a proof of [math]\displaystyle{ P(n) }[/math]. Notice that the type of the second item (proofs of [math]\displaystyle{ P(n) }[/math]) depends on the value in the first part of the ordered pair ([math]\displaystyle{ n }[/math]). Its type would be:

[math]\displaystyle{ \sum_{n \mathbin{:} {\mathbb N}} P(n) }[/math]
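
A hedged Lean 4 sketch of these Σ-types follows; Lean writes them with Σ (and Σ' when the second component is a proposition), and Vec is a toy length-indexed vector defined here to stand in for Vec(ℝ, n):

```lean
-- A toy length-indexed vector type, standing in for Vec(ℝ, n) in the text.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- A dependent pair: the type of the second component mentions the first.
example : Σ n : Nat, Vec Float n := ⟨1, Vec.cons 2.5 Vec.nil⟩

-- The non-dependent special case is an ordinary Cartesian product.
example : Nat × Float := (3, 2.5)

-- Σ as an existential quantifier: a witness paired with a proof about it.
example : Σ' n : Nat, n > 1000 := ⟨1009, by decide⟩

-- The dependently typed 3-tuple from the text: two integers and a proof m < n.
example : Σ' (m n : Int), m < n := ⟨1, 2, by decide⟩
```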

Π type constructor

Π-types contain functions. As with typical function types, they consist of an input type and an output type. They are more powerful than typical function types, however, in that the return type can depend on the input value. Functions in type theory are different from those in set theory. In set theory, you look up the argument's value in a set of ordered pairs. In type theory, the argument is substituted into a term and then computation ("reduction") is applied to the term.

As an example, the type of a function that, given a natural number [math]\displaystyle{ n }[/math], returns a vector containing [math]\displaystyle{ n }[/math] real numbers is written:

[math]\displaystyle{ \prod_{n \mathbin{:} {\mathbb N}} \operatorname{Vec}({\mathbb R}, n) }[/math]

When the output type does not depend on the input value, the function type is often simply written with a [math]\displaystyle{ \to }[/math]. Thus, [math]\displaystyle{ {\mathbb N} \to {\mathbb R} }[/math] is the type of functions from natural numbers to real numbers. Such Π-types correspond to logical implication. The logical proposition [math]\displaystyle{ A \implies B }[/math] corresponds to the type [math]\displaystyle{ A \to B }[/math], containing functions that take proofs-of-A and return proofs-of-B. This type could be written more consistently as:

[math]\displaystyle{ \prod_{a \mathbin{:} A} B }[/math]

Π-types are also used in logic for universal quantification. The statement "for every [math]\displaystyle{ n }[/math] of type [math]\displaystyle{ {\mathbb N} }[/math], [math]\displaystyle{ P(n) }[/math] is proven" becomes a function from [math]\displaystyle{ n }[/math] of type [math]\displaystyle{ {\mathbb N} }[/math] to proofs of [math]\displaystyle{ P(n) }[/math]. Thus, given the value for [math]\displaystyle{ n }[/math] the function generates a proof that [math]\displaystyle{ P(\,\cdot\,) }[/math] holds for that value. The type would be

[math]\displaystyle{ \prod_{n \mathbin{:} {\mathbb N}} P(n) }[/math]
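
A Lean 4 sketch of these Π-types follows; in Lean a dependent function type is written (n : Nat) → B n, and Vec is the same toy vector type as in the Σ-type sketch, repeated so the block stands alone:

```lean
-- A toy length-indexed vector type, standing in for Vec(ℝ, n) in the text.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- A dependent function: the return type Vec Float n depends on the input n.
def zeros : (n : Nat) → Vec Float n
  | 0     => Vec.nil
  | n + 1 => Vec.cons 0.0 (zeros n)

-- The non-dependent special case is an ordinary function type.
def double : Nat → Nat := fun n => 2 * n

-- Π as a universal quantifier: each n is sent to a proof of the statement.
example : ∀ n : Nat, n + 0 = n := fun _ => rfl
```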

= type constructor

=-types are created from two terms. Given two terms like [math]\displaystyle{ 2+2 }[/math] and [math]\displaystyle{ 2 \cdot 2 }[/math], you can create a new type [math]\displaystyle{ 2+2=2\cdot 2 }[/math]. The terms of that new type represent proofs that the two terms reduce to the same canonical term. Thus, since both [math]\displaystyle{ 2+2 }[/math] and [math]\displaystyle{ 2\cdot 2 }[/math] compute to the canonical term [math]\displaystyle{ 4 }[/math], there will be a term of the type [math]\displaystyle{ 2+2=2\cdot 2 }[/math]. In intuitionistic type theory, there is a single way to introduce =-types, and that is by reflexivity:

[math]\displaystyle{ \operatorname{refl} \mathbin{:} \prod_{a \mathbin{:} A} (a = a). }[/math]

It is possible to create =-types such as [math]\displaystyle{ 1=2 }[/math] where the terms do not reduce to the same canonical term, but you will be unable to create terms of that new type. In fact, if you were able to create a term of [math]\displaystyle{ 1=2 }[/math], you could create a term of [math]\displaystyle{ \bot }[/math]. Putting that into a function would generate a function of type [math]\displaystyle{ 1=2 \to \bot }[/math]. Since [math]\displaystyle{ \ldots \to \bot }[/math] is how intuitionistic type theory defines negation, you would have [math]\displaystyle{ \neg (1=2) }[/math] or, finally, [math]\displaystyle{ 1 \neq 2 }[/math].
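
The behaviour just described can be checked in Lean 4, whose = and rfl play the roles of the =-type and of reflexivity (a sketch for illustration only):

```lean
-- Both sides compute to the canonical term 4, so reflexivity proves equality.
example : 2 + 2 = 2 * 2 := rfl

-- refl as a dependent function, matching the introduction rule above.
example : ∀ a : Nat, a = a := fun a => Eq.refl a

-- The type 1 = 2 has no terms: a function sending any hypothetical proof of
-- it to the empty type is exactly what ¬(1 = 2) means.
example : ¬ (1 = 2) := by decide
```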

Equality of proofs is an area of active research in proof theory and has led to the development of homotopy type theory and other type theories.

Inductive types

Main page: Inductive type

Inductive types allow the creation of complex, self-referential types. For example, a linked list of natural numbers is either an empty list or a pair of a natural number and another linked list. Inductive types can be used to define unbounded mathematical structures like trees, graphs, etc. In fact, the natural numbers type may be defined as an inductive type, either being [math]\displaystyle{ 0 }[/math] or the successor of another natural number.

Inductive types define new constants, such as zero [math]\displaystyle{ 0 \mathbin{:} {\mathbb N} }[/math] and the successor function [math]\displaystyle{ S \mathbin{:} {\mathbb N} \to {\mathbb N} }[/math]. Since [math]\displaystyle{ S }[/math] does not have a definition and cannot be evaluated using substitution, terms like [math]\displaystyle{ S 0 }[/math] and [math]\displaystyle{ S S S 0 }[/math] become the canonical terms of the natural numbers.

Proofs on inductive types are made possible by induction. Each new inductive type comes with its own inductive rule. To prove a predicate [math]\displaystyle{ P(\,\cdot\,) }[/math] for every natural number, you use the following rule:

[math]\displaystyle{ {\operatorname{{\mathbb N}-elim}}\, \mathbin{:} P(0)\, \to \left(\prod_{n \mathbin{:} {\mathbb N}} P(n) \to P(S(n))\right) \to \prod_{n \mathbin{:} {\mathbb N}} P(n) }[/math]
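
In Lean 4, for instance, the built-in natural numbers come with an eliminator Nat.rec that plays the role of this ℕ-elim rule, and the induction tactic applies it (a sketch for illustration):

```lean
-- Nat.rec: from a proof of P 0 and a step taking P n to P (n + 1),
-- conclude P n for every natural number n.
#check @Nat.rec

-- Proving a predicate for every natural number by induction.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero      => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```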

Inductive types in intuitionistic type theory are defined in terms of W-types, the type of well-founded trees. Later work in type theory generated coinductive types, induction-recursion, and induction-induction for working on types with more obscure kinds of self-referentiality. Higher inductive types allow equality to be defined between terms.
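
A W-type can be written out directly; the following Lean 4 sketch defines it by hand (the names W, Arity and NatW are this sketch's own, not a standard library API):

```lean
-- Well-founded trees: a node carries a label a : α and has one immediate
-- subtree for every element of β a.
inductive W (α : Type) (β : α → Type) : Type
  | node : (a : α) → (β a → W α β) → W α β

-- Branching arities for an encoding of the natural numbers:
-- a `false` node has no subtrees (zero), a `true` node has one (successor).
def Arity : Bool → Type
  | true  => Unit
  | false => Empty

-- Natural numbers as the W-type of well-founded trees over these arities.
def NatW : Type := W Bool Arity

-- Zero is a leaf; the successor wraps a tree in one more `true` node.
def NatW.zero : NatW := W.node false (fun e => nomatch e)
def NatW.succ (n : NatW) : NatW := W.node true (fun _ => n)
```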

Universe types

The universe types allow proofs to be written about all the types created with the other type constructors. Every term in the universe type [math]\displaystyle{ \mathcal{U}_0 }[/math] can be mapped to a type created with any combination of [math]\displaystyle{ 0,1,2,\Sigma,\Pi,=, }[/math] and the inductive type constructor. However, to avoid paradoxes, there is no term in [math]\displaystyle{ \mathcal{U}_0 }[/math] that maps to [math]\displaystyle{ \mathcal{U}_0 }[/math].

To write proofs about all "the small types" and [math]\displaystyle{ \mathcal{U}_0 }[/math], you must use [math]\displaystyle{ \mathcal{U}_1 }[/math], which does contain a term for [math]\displaystyle{ \mathcal{U}_0 }[/math], but not a term for [math]\displaystyle{ \mathcal{U}_1 }[/math] itself. Similarly for [math]\displaystyle{ \mathcal{U}_2 }[/math]. There is a predicative hierarchy of universes, so to quantify a proof over any fixed number [math]\displaystyle{ k }[/math] of universes, you can use [math]\displaystyle{ \mathcal{U}_{k+1} }[/math].
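
In Lean 4, for example, the predicative hierarchy appears as Type 0, Type 1, Type 2, and so on; each universe is a term of the next one, never of itself. (Lean additionally has an impredicative universe Prop, which Martin-Löf's predicative theories do not have.)

```lean
-- A small type lives in the first universe.
#check (Nat : Type)

-- The first universe is itself a term of the second, and so on up the hierarchy.
#check (Type : Type 1)
#check (Type 1 : Type 2)

universe u

-- A definition quantifying over every type in the fixed universe u.
def idFun {α : Type u} (a : α) : α := a
```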

Universe types are a tricky feature of type theories. Martin-Löf's original type theory had to be changed to account for Girard's paradox. Later research covered topics such as "super universes", "Mahlo universes", and impredicative universes.

Judgements

The formal definition of intuitionistic type theory is written using judgements. For example, in the statement "if [math]\displaystyle{ A }[/math] is a type and [math]\displaystyle{ B }[/math] is a type then [math]\displaystyle{ \textstyle \sum_{a:A} B }[/math] is a type" there are judgements of "is a type", "and", and "if ... then ...". The expression [math]\displaystyle{ \textstyle \sum_{a:A} B }[/math] is not a judgement; it is the type being defined.

This second level of the type theory can be confusing, particularly where it comes to equality. There is a judgement of term equality, which might say [math]\displaystyle{ 4=2+2 }[/math]. It is a statement that two terms reduce to the same canonical term. There is also a judgement of type equality, say that [math]\displaystyle{ A=B }[/math], which means every element of [math]\displaystyle{ A }[/math] is an element of the type [math]\displaystyle{ B }[/math] and vice versa. At the type level, there is a type [math]\displaystyle{ 4=2+2 }[/math] and it contains terms if there is a proof that [math]\displaystyle{ 4 }[/math] and [math]\displaystyle{ 2+2 }[/math] reduce to the same value. (Terms of this type are generated using the term-equality judgement.) Lastly, there is an English-language level of equality, because we use the word "four" and symbol "[math]\displaystyle{ 4 }[/math]" to refer to the canonical term [math]\displaystyle{ S S S S 0 }[/math]. Synonyms like these are called "definitionally equal" by Martin-Löf.
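
In Lean 4 (used here only as an illustration), the judgemental level shows up as equalities that hold by computation alone, while the equality type is a proposition whose proofs may also require genuine reasoning:

```lean
-- 2 + 2 computes to the canonical term 4, so reflexivity is enough.
example : 4 = 2 + 2 := rfl

-- n + 2 = 2 + n does not hold by computation alone; it needs a proof,
-- here a previously established lemma about commutativity of addition.
example (n : Nat) : n + 2 = 2 + n := Nat.add_comm n 2
```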

The description of judgements below is based on the discussion in Nordström, Petersson, and Smith.

The formal theory works with types and objects.

A type is declared by:

  • [math]\displaystyle{ A\ \mathsf{Type} }[/math]

An object exists and is in a type if:

  • [math]\displaystyle{ a \mathbin{:} A }[/math]

Objects can be equal

  • [math]\displaystyle{ a = b }[/math]

and types can be equal

  • [math]\displaystyle{ A = B }[/math]

A type that depends on an object from another type is declared

  • [math]\displaystyle{ (x \mathbin{:} A)B }[/math]

and removed by substitution

  • [math]\displaystyle{ B[x / a] }[/math], replacing the variable [math]\displaystyle{ x }[/math] with the object [math]\displaystyle{ a }[/math] in [math]\displaystyle{ B }[/math].

An object that depends on an object from another type can be declared in two ways. If the object is "abstracted", then it is written

  • [math]\displaystyle{ [x]b }[/math]

and removed by substitution

  • [math]\displaystyle{ b[x / a] }[/math], replacing the variable [math]\displaystyle{ x }[/math] with the object [math]\displaystyle{ a }[/math] in [math]\displaystyle{ b }[/math].

The object-depending-on-object can also be declared as a constant as part of a recursive type. An example of a recursive type is:

  • [math]\displaystyle{ 0 \mathbin{:} \mathbb{N} }[/math]
  • [math]\displaystyle{ S \mathbin{:} \mathbb{N} \to \mathbb{N} }[/math]

Here, [math]\displaystyle{ S }[/math] is a constant object-depending-on-object. It is not associated with an abstraction. Constants like [math]\displaystyle{ S }[/math] can be removed by defining equality. Here, the relationship with addition is defined using equality, with pattern matching to handle the recursive aspect of [math]\displaystyle{ S }[/math]:

[math]\displaystyle{ \begin{align} \operatorname{add} &\mathbin{:}\ (\mathbb{N} \times \mathbb{N}) \to \mathbb{N} \\ \operatorname{add}(0, b) &= b \\ \operatorname{add}(S(a), b) &= S(\operatorname{add}(a, b)) \end{align} }[/math]

[math]\displaystyle{ S }[/math] is manipulated as an opaque constant: it has no internal structure for substitution.
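
The same definitions can be rendered in Lean 4 (a sketch: the article's add takes a pair ℕ × ℕ, while the version below is curried, which is the idiomatic Lean form):

```lean
-- A recursive type with the constants zero and S; S has no definition of its
-- own and is only ever eliminated by pattern matching.
inductive N : Type
  | zero : N
  | S    : N → N

open N

-- Addition, specified by defining equalities that pattern-match on S.
def add : N → N → N
  | zero, b => b
  | S a,  b => S (add a b)

-- The defining equalities hold by computation, e.g. add (S zero) b = S b.
example (b : N) : add (S zero) b = S b := rfl
```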

So, objects and types and these relations are used to express formulae in the theory. The following styles of judgements are used to create new objects, types and relations from existing ones:

  • [math]\displaystyle{ \Gamma\vdash \sigma\ \mathsf{Type} }[/math] means that σ is a well-formed type in the context Γ.
  • [math]\displaystyle{ \Gamma\vdash t \mathbin{:} \sigma }[/math] means that t is a well-formed term of type σ in the context Γ.
  • [math]\displaystyle{ \Gamma\vdash \sigma \equiv \tau }[/math] means that σ and τ are equal types in the context Γ.
  • [math]\displaystyle{ \Gamma\vdash t \equiv u \mathbin{:} \sigma }[/math] means that t and u are judgementally equal terms of type σ in the context Γ.
  • [math]\displaystyle{ \vdash \Gamma\ \mathsf{Context} }[/math] means that Γ is a well-formed context of typing assumptions.

By convention, there is a type that represents all other types. It is called [math]\displaystyle{ \mathcal{U} }[/math] (or [math]\displaystyle{ \operatorname{Set} }[/math]). Since [math]\displaystyle{ \mathcal{U} }[/math] is a type, the members of it are objects. There is a dependent type [math]\displaystyle{ \operatorname{El} }[/math] that maps each object to its corresponding type. In most texts [math]\displaystyle{ \operatorname{El} }[/math] is never written. From the context of the statement, a reader can almost always tell whether [math]\displaystyle{ A }[/math] refers to a type, or whether it refers to the object in [math]\displaystyle{ \mathcal{U} }[/math] that corresponds to the type.

This is the complete foundation of the theory. Everything else is derived.

To implement logic, each proposition is given its own type. The objects in those types represent the different possible ways to prove the proposition. If there is no proof for the proposition, then the type has no objects in it. Operators like "and" and "or" that work on propositions introduce new types and new objects. So [math]\displaystyle{ A \times B }[/math] is a type that depends on the type [math]\displaystyle{ A }[/math] and the type [math]\displaystyle{ B }[/math]. The objects in that dependent type are defined to exist for every pair of objects in [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math]. If [math]\displaystyle{ A }[/math] or [math]\displaystyle{ B }[/math] has no proof and is an empty type, then the new type representing [math]\displaystyle{ A \times B }[/math] is also empty.

This can be done for other types (booleans, natural numbers, etc.) and their operators.
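
A Lean 4 sketch of this reading of the connectives (Lean's And is exactly such a pair type on propositions):

```lean
variable (A B : Prop)

-- A proof of "A and B" is a pair of a proof of A and a proof of B.
example (a : A) (b : B) : A ∧ B := ⟨a, b⟩

-- The projections recover the two component proofs.
example (h : A ∧ B) : A := h.left

-- "Or" has two introduction forms, one per disjunct; if neither side has a
-- proof, the disjunction type is empty as well.
example (a : A) : A ∨ B := Or.inl a
```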

Categorical models of type theory

Using the language of category theory, R. A. G. Seely introduced the notion of a locally cartesian closed category (LCCC) as the basic model of type theory. This has been refined by Hofmann and Dybjer to Categories with Families or Categories with Attributes based on earlier work by Cartmell.[1]

A category with families is a category C of contexts (in which the objects are contexts, and the context morphisms are substitutions), together with a functor T : C^op → Fam(Set).

Fam(Set) is the category of families of sets, in which objects are pairs [math]\displaystyle{ (A,B) }[/math] of an "index set" A and a function B : X → A, and morphisms are pairs of functions f : A → A' and g : X → X', such that B' ∘ g = f ∘ B; in other words, g maps the fibre of B over each index a into the fibre of B' over f(a).

The functor T assigns to a context G a set [math]\displaystyle{ Ty(G) }[/math] of types, and for each [math]\displaystyle{ A : Ty(G) }[/math], a set [math]\displaystyle{ Tm(G,A) }[/math] of terms. The axioms for a functor require that these play harmoniously with substitution. Substitution is usually written in the form Af or af, where A is a type in [math]\displaystyle{ Ty(G) }[/math] and a is a term in [math]\displaystyle{ Tm(G,A) }[/math], and f is a substitution from D to G. Here [math]\displaystyle{ Af : Ty(D) }[/math] and [math]\displaystyle{ af : Tm(D,Af) }[/math].

The category C must contain a terminal object (the empty context), and a final object for a form of product called comprehension, or context extension, in which the right element is a type in the context of the left element. If G is a context, and [math]\displaystyle{ A : Ty(G) }[/math], then there should be an object [math]\displaystyle{ (G,A) }[/math] final among contexts D with mappings p : DG, q : Tm(D,Ap).
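
As a rough illustration of the data involved (omitting all coherence equations), a category with families can be sketched in Lean 4 as a record of operations; the field names below are this sketch's own, not a standard API:

```lean
-- Contexts, substitutions, types-in-context, terms, the substitution action,
-- a terminal (empty) context and context extension, with no laws imposed.
structure CwFData where
  Con   : Type                                   -- objects: contexts
  Sub   : Con → Con → Type                       -- morphisms: substitutions
  Ty    : Con → Type                             -- Ty(G), the types in context G
  Tm    : (Γ : Con) → Ty Γ → Type                -- Tm(G, A), the terms of A
  subTy : {Δ Γ : Con} → Ty Γ → Sub Δ Γ → Ty Δ    -- A and f yield Af
  subTm : {Δ Γ : Con} → {A : Ty Γ} →
          Tm Γ A → (f : Sub Δ Γ) → Tm Δ (subTy A f)   -- a and f yield af
  empty : Con                                    -- the terminal (empty) context
  ext   : (Γ : Con) → Ty Γ → Con                 -- context extension (G, A)
```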

A logical framework, such as Martin-Löf's, takes the form of closure conditions on the context-dependent sets of types and terms: that there should be a type called Set, and for each set a type, that the types should be closed under forms of dependent sum and product, and so forth.

A theory such as that of predicative set theory expresses closure conditions on the types of sets and their elements: that they should be closed under operations that reflect dependent sum and product, and under various forms of inductive definition.

Extensional versus intensional

A fundamental distinction is extensional vs intensional type theory. In extensional type theory, definitional (i.e., computational) equality is not distinguished from propositional equality, which requires proof. As a consequence, type checking becomes undecidable in extensional type theory because programs in the theory might not terminate. For example, such a theory allows one to give a type to the Y-combinator; a detailed example of this can be found in Nordström and Petersson, Programming in Martin-Löf's Type Theory.[2] However, this does not prevent extensional type theory from being a basis for a practical tool; for example, Nuprl is based on extensional type theory.

In contrast, in intensional type theory type checking is decidable, but the representation of standard mathematical concepts is somewhat more cumbersome, since intensional reasoning requires using setoids or similar constructions. There are many common mathematical objects that are hard to work with or cannot be represented without this, for example, integers, rational numbers, and real numbers. Integers and rational numbers can be represented without setoids, but this representation is difficult to work with. Cauchy real numbers cannot be represented without setoids.[3][full citation needed]

Homotopy type theory works on resolving this problem. It allows one to define higher inductive types, which not only define first-order constructors (values or points) but also higher-order constructors, i.e. equalities between elements (paths), equalities between equalities (homotopies), ad infinitum.

Implementations of type theory

Different forms of type theory have been implemented as the formal systems underlying a number of proof assistants. While many are based on Per Martin-Löf's ideas, many have added features, more axioms, or a different philosophical background. For instance, the Nuprl system is based on computational type theory[4] and Coq is based on the calculus of (co)inductive constructions. Dependent types also feature in the design of programming languages such as ATS, Cayenne, Epigram, Agda,[5] and Idris.[6]

Martin-Löf type theories

Per Martin-Löf constructed several type theories that were published at various times, some of them much later than when the preprints with their description became accessible to the specialists (among others Jean-Yves Girard and Giovanni Sambin). The list below attempts to cover all the theories that have been described in printed form and to sketch the key features that distinguish them from each other. All of these theories had dependent products, dependent sums, disjoint unions, finite types and natural numbers. All the theories had the same reduction rules, which did not include η-reduction either for dependent products or for dependent sums, except for MLTT79, where the η-reduction for dependent products is added.

MLTT71 was the first type theory created by Per Martin-Löf. It appeared in a preprint in 1971. It had one universe, but this universe contained a name for itself, i.e. it was a type theory with what is today called "Type in Type". Jean-Yves Girard showed that this system was inconsistent, and the preprint was never published.

MLTT72 was presented in a 1972 preprint that has since been published.[7] That theory had one universe V and no identity types (=-types). The universe was "predicative" in the sense that the dependent product of a family of objects from V over an object that was not in V, such as, for example, V itself, was not assumed to be in V. The universe was à la Russell's Principia Mathematica, i.e., one would write directly "T∈V" and "t∈T" (Martin-Löf uses the sign "∈" instead of the modern ":") without an additional constructor such as "El".

MLTT73 was the first definition of a type theory that Per Martin-Löf published (it was presented at the Logic Colloquium '73 and published in 1975[8]). There are identity types, which he describes as "propositions", but since no real distinction between propositions and the rest of the types is introduced, the meaning of this is unclear. There is what later acquired the name of the J-eliminator, though not yet under that name (see pp. 94–95). There is in this theory an infinite sequence of universes V0, ..., Vn, ... . The universes are predicative, à la Russell and non-cumulative. In fact, Corollary 3.10 on p. 115 says that if A∈Vm and B∈Vn are such that A and B are convertible, then m = n. This means, for example, that it would be difficult to formulate the univalence axiom in this theory: there are contractible types in each of the Vi, but it is unclear how to declare them to be equal, since there are no identity types connecting Vi and Vj for i ≠ j.

MLTT79 was presented in 1979 and published in 1982.[9] In this paper, Martin-Löf introduced the four basic types of judgement for the dependent type theory that has since become fundamental in the study of the meta-theory of such systems. He also introduced contexts as a separate concept in it (see p. 161). There are identity types with the J-eliminator (which already appeared in MLTT73 but did not have this name there) but also with the rule that makes the theory "extensional" (p. 169). There are W-types. There is an infinite sequence of predicative universes that are cumulative.

Bibliopolis: there is a discussion of a type theory in the Bibliopolis book from 1984,[10] but it is somewhat open-ended and does not seem to represent a particular set of choices, and so there is no specific type theory associated with it.

Notes

  1. Clairambault, Pierre; Dybjer, Peter (2014). "The biequivalence of locally cartesian closed categories and Martin-Löf type theories" (in en). Mathematical Structures in Computer Science 24 (6). doi:10.1017/S0960129513000881. ISSN 0960-1295. https://www.cambridge.org/core/journals/mathematical-structures-in-computer-science/article/biequivalence-of-locally-cartesian-closed-categories-and-martinlof-type-theories/6ECB295B1246A85D5DD92E5F38428D99. 
  2. Bengt Nordström; Kent Petersson; Jan M. Smith (1990). Programming in Martin-Löf's Type Theory. Oxford University Press, p. 90.
  3. Altenkirch, Thorsten, Thomas Anberrée, and Nuo Li. "Definable Quotients in Type Theory".
  4. Allen, S.F.; Bickford, M.; Constable, R.L.; Eaton, R.; Kreitz, C.; Lorigo, L.; Moran, E. (2006). "Innovations in computational type theory using Nuprl". Journal of Applied Logic 4 (4): 428–469. doi:10.1016/j.jal.2005.10.005. 
  5. Norell, Ulf (2009). "Dependently typed programming in Agda". Proceedings of the 4th international workshop on Types in language design and implementation. TLDI '09. New York, NY, USA: ACM. pp. 1–2. doi:10.1145/1481861.1481862. ISBN 9781605584201. 
  6. Brady, Edwin (2013). "Idris, a general-purpose dependently typed programming language: Design and implementation". Journal of Functional Programming 23 (5): 552–593. doi:10.1017/S095679681300018X. ISSN 0956-7968. https://www.cambridge.org/core/journals/journal-of-functional-programming/article/idris-a-generalpurpose-dependently-typed-programming-language-design-and-implementation/418409138B4452969AC0736DB0A2C238. 
  7. Per Martin-Löf, An intuitionistic theory of types, Twenty-five years of constructive type theory (Venice, 1995), Oxford Logic Guides, v. 36, pp. 127–172, Oxford Univ. Press, New York, 1998.
  8. Martin-Löf, Per (1975). "An intuitionistic theory of types: predicative part". Logic Colloquium '73 (Bristol, 1973). 80. Amsterdam: North-Holland. pp. 73–118. 
  9. Martin-Löf, Per (1982). "Constructive mathematics and computer programming". Logic, methodology and philosophy of science, VI (Hannover, 1979). 104. Amsterdam: North-Holland. pp. 153–175. 
  10. Per Martin-Löf, "Intuitionistic type theory", Studies in Proof Theory (lecture notes by Giovanni Sambin), vol. 1, pp. iv+91, 1984
