Revision theory
Revision theory is a subfield of philosophical logic. It provides a general theory of definitions, covering circular and interdependent concepts as well as ordinary ones. A circular definition is one in which the concept being defined occurs in the statement defining it—for example, defining a G as being blue and to the left of a G. Revision theory supplies formal semantics for defined expressions, along with formal proof systems for studying the logic of circular expressions. Definitions are important in philosophy and logic. Although circular definitions have been regarded as logically incorrect or incoherent, revision theory demonstrates that they are meaningful and can be studied with mathematical and philosophical logic. It has been used to provide circular analyses of philosophical and logical concepts.
History
Revision theory is a generalization of the revision theories of truth developed by Anil Gupta, Hans Herzberger, and Nuel Belnap.[1] In the revision theories of Gupta and Herzberger, revision is supposed to reflect intuitive evaluations of sentences that use the truth predicate. Some sentences are stable in their evaluations, such as the truth-teller sentence,
- The truth-teller is true.
Assuming the truth-teller is true, it is true, and assuming that it is false, it is false. Neither status will change. On the other hand, some sentences oscillate, such as the liar,
- The liar sentence is not true.
On the assumption that the liar is true, one can show that it is false, and on the assumption that it is false, one can show that it is true. This instability is reflected in revision sequences for the liar.
The generalization to circular definitions was developed by Gupta, in collaboration with Belnap. Their book, The Revision Theory of Truth, presents an in-depth development of the theory of circular definitions, as well as an overview and critical discussion of philosophical views on truth and the relation between truth and definition.
Philosophical background
The philosophical background of revision theory is developed by Gupta and Belnap.[2] Other philosophers, such as Aladdin Yaqūb, have developed philosophical interpretations of revision theory in the context of theories of truth, but not in the general context of circular definitions.[3]
Gupta and Belnap maintain that circular concepts are meaningful and logically acceptable. Circular definitions are formally tractable, as demonstrated by the formal semantics of revision theory. As Gupta and Belnap put it, "the moral we draw from the paradoxes is that the domain of the meaningful is more extensive than it appears to be, that certain seemingly meaningless concepts are in fact meaningful."[4]
The meaning of a circular predicate is not an extension, as is often assigned to non-circular predicates. Its meaning, rather, is a rule of revision that determines how to generate a new hypothetical extension given an initial one. These new extensions are at least as good as the originals, in the sense that, given one extension, the new extension contains exactly the things that satisfy the definiens for a particular circular predicate. In general, there is no unique extension on which revision will settle.[5]
Revision theory offers an alternative to the standard theory of definitions. The standard theory maintains that good definitions have two features. First, defined symbols can always be eliminated, replaced by what defines them. Second, definitions should be conservative in the sense that adding a definition should not result in new consequences in the original language. Revision theory rejects the first but maintains the second, as demonstrated for both of the strong senses of validity presented below.
The logician Alfred Tarski presented two criteria for evaluating definitions as analyses of concepts: formal correctness and material adequacy. The criterion of formal correctness states that in a definition, the definiendum must not occur in the definiens. The criterion of material adequacy says that the definition must be faithful to the concept being analyzed. Gupta and Belnap recommend siding with material adequacy in cases in which the two criteria conflict.[6] To determine whether a circular definition provides a good analysis of a concept requires evaluating the material adequacy of the definition. Some circular definitions will be good analyses, while some will not. Either way, formal correctness, in Tarski's sense, will be violated.
Semantics for circular predicates
The central semantic idea of revision theory is that a definition, such as that of being a [math]\displaystyle{ G }[/math], provides a rule of revision that tells one what the new extension for the definiendum [math]\displaystyle{ G }[/math] should be, given a hypothetical extension of the definiendum and information concerning the undefined expressions. Repeated application of a rule of revision generates sequences of hypotheses, which can be used to define logics of circular concepts. In work on revision theory, it is common to use the symbol, [math]\displaystyle{ =_{df} }[/math], to indicate a definition, with the left-hand side being the definiendum and the right-hand side the definiens. The example
- Being a [math]\displaystyle{ G }[/math] is defined as being both blue and to the left of a [math]\displaystyle{ G }[/math]
can then be written as
- Being a [math]\displaystyle{ G=_{df} }[/math] being both blue and to the left of a [math]\displaystyle{ G }[/math].
Given a hypothesis about the extension of [math]\displaystyle{ G }[/math], one can obtain a new extension for [math]\displaystyle{ G }[/math] by appealing to the meaning of the undefined expressions in the definition, namely blue and to the left of.
We begin with a ground language, [math]\displaystyle{ L }[/math], that is interpreted via a classical ground model [math]\displaystyle{ M }[/math], which is a pair of a domain [math]\displaystyle{ D }[/math] and an interpretation function [math]\displaystyle{ I }[/math].[7] Suppose that the set of definitions [math]\displaystyle{ \mathcal{D} }[/math] is the following,
- [math]\displaystyle{ \begin{align} G_1\overline{x} & =_{Df} A_{G_1}(\overline{x}) \\ & {}\,\,\,\vdots \\ G_n\overline{x} & =_{Df} A_{G_n}(\overline{x}) \\ & {}\,\,\,\vdots \end{align} }[/math]
where each [math]\displaystyle{ A_{G_i} }[/math] is a formula that may contain any of the definienda [math]\displaystyle{ G_j }[/math], including [math]\displaystyle{ G_i }[/math] itself. It is required that in the definitions, only the displayed variables, [math]\displaystyle{ \overline{x} }[/math], are free in the definientia, the formulas [math]\displaystyle{ A_{G_i} }[/math]. The language is expanded with these new predicates, [math]\displaystyle{ G_1,\ldots,G_n,\ldots }[/math], to form [math]\displaystyle{ L }[/math]+. When the set [math]\displaystyle{ \mathcal{D} }[/math] contains few defined predicates, it is common to use the notation, [math]\displaystyle{ G\overline{x}=_{Df} A(\overline{x},G) }[/math] to emphasize that [math]\displaystyle{ A }[/math] may contain [math]\displaystyle{ G }[/math].
A hypothesis [math]\displaystyle{ h }[/math] is a function from the definienda of [math]\displaystyle{ \mathcal{D} }[/math] to sets of tuples drawn from the domain [math]\displaystyle{ D }[/math]. The model [math]\displaystyle{ M+h }[/math] is just like the model [math]\displaystyle{ M }[/math] except that [math]\displaystyle{ h }[/math] interprets each definiendum according to the following biconditional, the left-hand side of which is read as “[math]\displaystyle{ G_i(\overline{t}) }[/math] is true in [math]\displaystyle{ M+h }[/math].”
- [math]\displaystyle{ M+h\models G_i(\overline{t}) \text{ iff } I(\overline{t})\in h(G_i) }[/math]
The set [math]\displaystyle{ \mathcal{D} }[/math] of definitions yields a rule of revision, or revision operator, [math]\displaystyle{ \delta_{M, \mathcal{D}} }[/math]. Revision operators obey the following equivalence for each definiendum, [math]\displaystyle{ G }[/math], in [math]\displaystyle{ \mathcal{D} }[/math].
- [math]\displaystyle{ M+\delta_{M, \mathcal{D}}(h) \models G(\overline{t}) \text{ iff } M+h\models A_G(\overline{t}) }[/math]
A tuple will satisfy a definiendum [math]\displaystyle{ G }[/math] after revision just in case it satisfies the definiens for [math]\displaystyle{ G }[/math], namely [math]\displaystyle{ A_G }[/math], prior to revision. This is to say that the tuples that satisfy [math]\displaystyle{ A_G }[/math] according to a hypothesis will be exactly those that satisfy [math]\displaystyle{ G }[/math] according to the revision of that hypothesis.
The classical connectives are evaluated in the usual, recursive way in [math]\displaystyle{ M+h }[/math]. Only the evaluation of a defined predicate appeals to the hypotheses.
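To make the revision step concrete, the following Python sketch is an illustration of ours (the dictionary encoding of hypotheses and the function name revise are assumptions, not part of the standard presentation). A hypothesis assigns each definiendum a set of argument tuples, and one application of the rule of revision collects, for each definiendum, exactly the tuples that satisfy its definiens under the old hypothesis.

```python
from itertools import product

def revise(domain, definientia, arities, hypothesis):
    """One application of the revision operator for a set of definitions.

    domain       -- finite collection of objects of the ground model
    definientia  -- dict mapping each definiendum name to a function
                    (args, hypothesis) -> bool that evaluates the definiens
                    A_G at the argument tuple, consulting `hypothesis`
                    wherever a defined predicate occurs
    arities      -- dict mapping each definiendum name to its arity
    hypothesis   -- dict mapping each definiendum name to a set of tuples
    """
    return {
        name: {args
               for args in product(domain, repeat=arities[name])
               if definiens(args, hypothesis)}
        for name, definiens in definientia.items()
    }
```

Iterating such a function from an initial hypothesis produces the revision sequences studied next.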
Sequences
Revision sequences are sequences of hypotheses satisfying extra conditions.[8] We will focus here on sequences that are [math]\displaystyle{ \omega }[/math]-long, since transfinite revision sequences require the additional specification of what to do at limit stages.
Let [math]\displaystyle{ \mathcal{S} }[/math] be a sequence of hypotheses, and let [math]\displaystyle{ \mathcal{S}_{\alpha} }[/math] be the [math]\displaystyle{ \alpha }[/math]-th hypothesis in [math]\displaystyle{ \mathcal{S} }[/math]. An [math]\displaystyle{ \omega }[/math]-long sequence [math]\displaystyle{ \mathcal{S} }[/math] of hypotheses is a revision sequence just in case for all [math]\displaystyle{ n }[/math],
- [math]\displaystyle{ \mathcal{S}_{n+1}=\delta_{M, \mathcal{D}}(\mathcal{S}_{n}). }[/math]
Recursively define iteration as
- [math]\displaystyle{ \delta_{M, \mathcal{D}}^0(h)=h }[/math] and
- [math]\displaystyle{ \delta_{M, \mathcal{D}}^{n+1}(h)=\delta_{M, \mathcal{D}}^{n}(\delta_{M, \mathcal{D}}(h)). }[/math]
The [math]\displaystyle{ \omega }[/math]-long revision sequence starting from [math]\displaystyle{ h }[/math] can be written as follows.
- [math]\displaystyle{ h, \delta_{M, \mathcal{D}}(h), \delta_{M, \mathcal{D}}^2(h), \ldots }[/math]
One sense of validity, [math]\displaystyle{ S_0 }[/math] validity, can be defined as follows. A sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S_0 }[/math] in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] iff there exists an [math]\displaystyle{ n }[/math] such that for all [math]\displaystyle{ h }[/math] and for all [math]\displaystyle{ m\geq n }[/math], [math]\displaystyle{ M+\delta_{M, \mathcal{D}}^{m}(h)\models A }[/math]. A sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S_0 }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] just in case it is valid in [math]\displaystyle{ S_0 }[/math] in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] for all classical ground models [math]\displaystyle{ M }[/math].
Validity in [math]\displaystyle{ S_0 }[/math] can be recast in terms of stability in [math]\displaystyle{ \omega }[/math]-long sequences. A sentence [math]\displaystyle{ A }[/math] is stably true in a revision sequence just in case there is an [math]\displaystyle{ {\alpha} }[/math] such that for all [math]\displaystyle{ \beta\geq\alpha }[/math], [math]\displaystyle{ M+{\mathcal{S}_\beta}\models A }[/math]. A sentence [math]\displaystyle{ A }[/math] is stably false in a revision sequence just in case there is an [math]\displaystyle{ {\alpha} }[/math] such that for all [math]\displaystyle{ \beta\geq\alpha }[/math], [math]\displaystyle{ M+{\mathcal{S}_\beta}\not\models A }[/math]. In these terms, a sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S_0 }[/math] in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] just in case [math]\displaystyle{ A }[/math] is stably true in all [math]\displaystyle{ \omega }[/math]-long revision sequences on [math]\displaystyle{ M }[/math].
Examples
For the first example, let [math]\displaystyle{ \mathcal{D}_1 }[/math] be [math]\displaystyle{ Gx=_{Df} (x=a\ \&\ \sim Gx) \lor (x=b\ \&\ Gb). }[/math] Let the domain of the ground model [math]\displaystyle{ M }[/math] be {a, b}, and let [math]\displaystyle{ I(a)=a }[/math] and [math]\displaystyle{ I(b)=b }[/math]. There are then four possible hypotheses for [math]\displaystyle{ M }[/math]: [math]\displaystyle{ \emptyset }[/math], {a}, {b}, {a, b}. The first few steps of the revision sequences starting from those hypotheses are illustrated by the following table.
| stage 0 | stage 1 | stage 2 | stage 3 |
|---|---|---|---|
| [math]\displaystyle{ \emptyset }[/math] | {a} | [math]\displaystyle{ \emptyset }[/math] | {a} |
| {a} | [math]\displaystyle{ \emptyset }[/math] | {a} | [math]\displaystyle{ \emptyset }[/math] |
| {b} | {a, b} | {b} | {a, b} |
| {a, b} | {b} | {a, b} | {b} |
As can be seen in the table, [math]\displaystyle{ a }[/math] goes in and out of the extension of [math]\displaystyle{ G }[/math]; it never stabilizes. On the other hand, [math]\displaystyle{ b }[/math] either stays in or stays out: it is stable, but whether it is stably in or stably out of the extension depends on the initial hypothesis.
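The revision table can be generated mechanically. The following Python sketch is an illustration only (the encoding of hypotheses as sets and the function names are ours, not part of the formal theory); it iterates the rule of revision for [math]\displaystyle{ \mathcal{D}_1 }[/math] over the four hypotheses and reproduces the rows of the table above.

```python
from itertools import chain, combinations

DOMAIN = ['a', 'b']

def definiens_G(x, ext_G):
    # Gx =df (x = a & ~Gx) or (x = b & Gb)
    return (x == 'a' and x not in ext_G) or (x == 'b' and 'b' in ext_G)

def revise(ext_G):
    """One revision step: the new extension of G collects exactly the
    objects that satisfy the definiens under the old extension."""
    return {x for x in DOMAIN if definiens_G(x, ext_G)}

def all_hypotheses(domain):
    """Every subset of the domain is a hypothesis for the unary G."""
    return [set(c) for c in chain.from_iterable(
        combinations(domain, r) for r in range(len(domain) + 1))]

for h in all_hypotheses(DOMAIN):
    row, ext = [], h
    for _stage in range(4):          # stages 0 through 3, as in the table
        row.append(sorted(ext))
        ext = revise(ext)
    print(row)
```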
Next, let [math]\displaystyle{ \mathcal{D}_2 }[/math] be [math]\displaystyle{ Hx=_{Df} Hx\lor\sim Hx. }[/math] As shown in the following table, all hypotheses for the ground model of the previous example are revised to the set {a, b}.
| stage 0 | stage 1 | stage 2 | stage 3 |
|---|---|---|---|
| [math]\displaystyle{ \emptyset }[/math] | {a, b} | {a, b} | {a, b} |
| {a} | {a, b} | {a, b} | {a, b} |
| {b} | {a, b} | {a, b} | {a, b} |
| {a, b} | {a, b} | {a, b} | {a, b} |
For a slightly more complex revision pattern, let [math]\displaystyle{ {L} }[/math] contain [math]\displaystyle{ \lt }[/math] and all the numerals, [math]\displaystyle{ \overline{k} }[/math], and let the ground model be [math]\displaystyle{ \mathbb{N} }[/math], whose domain is the natural numbers, [math]\displaystyle{ \omega }[/math], with interpretation [math]\displaystyle{ I }[/math] such that [math]\displaystyle{ I(\overline{k})=k }[/math] for all numerals and [math]\displaystyle{ I(\lt ) }[/math] is the usual ordering on natural numbers. Let [math]\displaystyle{ \mathcal{D}_3 }[/math] be [math]\displaystyle{ Jx=_{Df} \forall y(y\lt x\supset Jy). }[/math] Let the initial hypothesis [math]\displaystyle{ h }[/math] be [math]\displaystyle{ \emptyset }[/math]. In this case, the sequence of extensions builds up stage by stage.
- [math]\displaystyle{ \varnothing,\ \{0\},\ \{0,1\},\ \{0,1,2\},\ \ldots }[/math]
Although for every [math]\displaystyle{ n }[/math], [math]\displaystyle{ J\overline{n} }[/math] is valid in [math]\displaystyle{ \mathbb{N} }[/math], [math]\displaystyle{ \forall x Jx }[/math] is not valid in [math]\displaystyle{ \mathbb{N} }[/math].
Suppose the initial hypothesis contains 0, 2, and all the odd numbers. After one revision, the extension of [math]\displaystyle{ J }[/math] will be {0, 1, 2, 3, 4}. Subsequent revisions will build up the extension as with the previous example. More generally, if the extension of [math]\displaystyle{ J }[/math] is not all of [math]\displaystyle{ \mathbb{N} }[/math], then one revision will cut the extension of [math]\displaystyle{ J }[/math] down to a possibly empty initial segment of the natural numbers and subsequent revisions will build it back up.
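This behaviour can be checked on a finite initial segment of the natural numbers. The sketch below is only an illustration: restricting the domain to {0, …, 7} is an assumption made so that the computation terminates, and it is faithful to the official definition over [math]\displaystyle{ \mathbb{N} }[/math] only for the small numbers and early stages displayed.

```python
N = 8
DOMAIN = range(N)          # a finite stand-in for the natural numbers

def revise(ext_J):
    # Jx =df for all y (y < x -> Jy)
    return {x for x in DOMAIN if all(y in ext_J for y in range(x))}

# Starting from the empty hypothesis, the extension builds up stage by stage:
# {}, {0}, {0, 1}, {0, 1, 2}, ...
ext = set()
for stage in range(5):
    print(stage, sorted(ext))
    ext = revise(ext)

# Starting from a hypothesis containing 0, 2, and the odd numbers (below N),
# one revision cuts the extension down to the initial segment {0, 1, 2, 3, 4},
# and later revisions build it back up.
ext = {0, 2} | {x for x in DOMAIN if x % 2 == 1}
for stage in range(5):
    print(stage, sorted(ext))
    ext = revise(ext)
```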
Proof system
There is a Fitch-style natural deduction proof system, [math]\displaystyle{ C_0 }[/math], for circular definitions.[9] The system uses indexed formulas, [math]\displaystyle{ {A}^{i} }[/math], where [math]\displaystyle{ i }[/math] can be any integer. One can think of the indices as representing relative position in a revision sequence. The premises and conclusions of the rules for the classical connectives all have the same index. For example, here are the conjunction and negation introduction rules.
- [math]\displaystyle{ \& }[/math]In: from [math]\displaystyle{ B^i }[/math] and [math]\displaystyle{ C^i }[/math], one may infer [math]\displaystyle{ (B\&C)^i }[/math].
- [math]\displaystyle{ \sim }[/math]In: from a subproof in which [math]\displaystyle{ B^{i} }[/math] is assumed and [math]\displaystyle{ \bot^{i} }[/math] is derived, one may infer [math]\displaystyle{ \sim B^{i} }[/math].
For each definition, [math]\displaystyle{ G\overline{x}=_{Df} A_G(\overline{x}) }[/math], in [math]\displaystyle{ \mathcal{D} }[/math], there is a pair of rules.
- DfIn: from [math]\displaystyle{ A_{G}(\overline{t})^{i} }[/math], one may infer [math]\displaystyle{ G(\overline{t})^{i+1} }[/math].
- DfElim: from [math]\displaystyle{ G(\overline{t})^{i+1} }[/math], one may infer [math]\displaystyle{ A_{G}(\overline{t})^{i} }[/math].
In these rules, it is assumed that [math]\displaystyle{ \overline{t} }[/math] are free for [math]\displaystyle{ \overline{x} }[/math] in [math]\displaystyle{ A_G }[/math].
Finally, for formulas [math]\displaystyle{ B }[/math] of [math]\displaystyle{ {L} }[/math], there is one more rule, the index shift rule.
- IS: from [math]\displaystyle{ B^{i} }[/math], one may infer [math]\displaystyle{ B^{j} }[/math].
In this rule, [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] can be any distinct indices. This rule reflects the fact that formulas from the ground language do not change their interpretation throughout the revision process.
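As a brief illustration (the particular derivation is an example of ours, using only the rules above), consider the definition [math]\displaystyle{ Hx=_{Df} Hx\lor\sim Hx }[/math] from the examples. The formula [math]\displaystyle{ (Ha\lor\sim Ha)^{0} }[/math] is derivable by the classical rules alone, as an instance of excluded middle at index 0; applying DfIn to it yields [math]\displaystyle{ H(a)^{1} }[/math]. This mirrors the semantic fact that every object is in the extension of [math]\displaystyle{ H }[/math] after a single revision.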
The system [math]\displaystyle{ C_0 }[/math] is sound and complete with respect to [math]\displaystyle{ S_0 }[/math] validity, meaning a sentence is valid in [math]\displaystyle{ S_0 }[/math] just in case it is derivable in [math]\displaystyle{ C_0 }[/math].
Riccardo Bruni has also developed a Hilbert-style axiom system and a sequent system that are both sound and complete with respect to [math]\displaystyle{ S_0 }[/math] validity.[10]
Transfinite revision
For some definitions, [math]\displaystyle{ S_0 }[/math] validity is not strong enough.[11] For example, in definition [math]\displaystyle{ \mathcal{D}_3 }[/math], even though every number is eventually stably in the extension of [math]\displaystyle{ J }[/math], the universally quantified sentence [math]\displaystyle{ \forall x Jx }[/math] is not valid. The reason is that for any given sentence to be valid, it must stabilize to true after finitely many revisions. On the other hand, [math]\displaystyle{ \forall x Jx }[/math] needs infinitely many revisions, unless the initial hypothesis already assigns all the natural numbers as the extension of [math]\displaystyle{ J }[/math].
Natural strengthenings of [math]\displaystyle{ S_0 }[/math] validity, and alternatives to it, use transfinitely long revision sequences. Let [math]\displaystyle{ On }[/math] be the class of all ordinals. The definitions will focus on sequences of hypotheses that are [math]\displaystyle{ On }[/math]-long.
Suppose [math]\displaystyle{ \mathcal{S} }[/math] is an [math]\displaystyle{ On }[/math]-long sequence of hypotheses. A tuple [math]\displaystyle{ \overline{d} }[/math] is stably in the extension of a defined predicate [math]\displaystyle{ G }[/math] at a limit ordinal [math]\displaystyle{ \beta }[/math] in a sequence [math]\displaystyle{ \mathcal{S} }[/math] just in case there is an [math]\displaystyle{ \alpha\lt \beta }[/math] such that for all [math]\displaystyle{ \gamma }[/math] with [math]\displaystyle{ \alpha\leq \gamma\lt \beta }[/math], [math]\displaystyle{ \overline{d}\in \mathcal{S}_\gamma(G) }[/math]. Similarly, a tuple [math]\displaystyle{ \overline{d} }[/math] is stably out of the extension of [math]\displaystyle{ G }[/math] at a limit ordinal [math]\displaystyle{ \beta }[/math] just in case there is a stage [math]\displaystyle{ \alpha\lt \beta }[/math] such that for all [math]\displaystyle{ \gamma }[/math] with [math]\displaystyle{ \alpha\leq\gamma\lt \beta }[/math], [math]\displaystyle{ \overline{d}\not\in\mathcal{S}_\gamma(G) }[/math]. Otherwise [math]\displaystyle{ \overline{d} }[/math] is unstable at [math]\displaystyle{ \beta }[/math] in [math]\displaystyle{ \mathcal{S} }[/math]. Informally, a tuple is stably in an extension at a limit just in case there is a stage after which the tuple remains in the extension up to the limit, and a tuple is stably out just in case there is a stage after which it remains out up to the limit.
A hypothesis [math]\displaystyle{ h }[/math] coheres with [math]\displaystyle{ \mathcal{S} }[/math] at a limit ordinal [math]\displaystyle{ \beta }[/math] iff for all tuples [math]\displaystyle{ \overline{d} }[/math], if [math]\displaystyle{ \overline{d} }[/math] is stably in [stably out of] the extension of [math]\displaystyle{ G }[/math] at [math]\displaystyle{ \beta }[/math] in [math]\displaystyle{ \mathcal{S} }[/math], then [math]\displaystyle{ \overline{d}\in[\not\in] h(G) }[/math].
An [math]\displaystyle{ On }[/math]-long sequence [math]\displaystyle{ \mathcal{S} }[/math] of hypotheses is a revision sequence iff for all [math]\displaystyle{ \alpha }[/math],
- if [math]\displaystyle{ \alpha=\beta+1 }[/math], then [math]\displaystyle{ \mathcal{S}_{\alpha}=\delta_{M, \mathcal{D}}(\mathcal{S}_\beta) }[/math], and
- if [math]\displaystyle{ \alpha }[/math] is a limit, then [math]\displaystyle{ \mathcal{S}_{\alpha} }[/math] coheres with [math]\displaystyle{ \mathcal{S} }[/math] at [math]\displaystyle{ \alpha }[/math].
Just as with the [math]\displaystyle{ \omega }[/math] sequences, the successor stages of the sequence are generated by the revision operator. At limit stages, however, the only constraint is that the limit hypothesis cohere with what came before. The unstable elements are set according to a limit rule, the details of which are left open by the set of definitions.
Limit rules can be categorized into two classes, constant and non-constant, depending on whether they do different things at different limit stages. A constant limit rule does the same thing to unstable elements at each limit. One particular constant limit rule, the Herzberger rule, excludes all unstable elements from extensions. According to another constant rule, the Gupta rule, unstable elements are included in extensions just in case they were in [math]\displaystyle{ \mathcal{S}_0 }[/math]. Non-constant limit rules vary the treatment of unstable elements at limits.
Two senses of validity can be defined using [math]\displaystyle{ On }[/math]-long sequences. The first, [math]\displaystyle{ S^{*} }[/math] validity, is defined in terms of stability. A sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S^{*} }[/math] in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] iff for all [math]\displaystyle{ On }[/math]-long revision sequences [math]\displaystyle{ \mathcal{S} }[/math], there is a stage [math]\displaystyle{ \alpha }[/math] such that [math]\displaystyle{ A }[/math] is stably true in [math]\displaystyle{ \mathcal{S} }[/math] after stage [math]\displaystyle{ \alpha }[/math]. A sentence [math]\displaystyle{ A }[/math] is [math]\displaystyle{ S^{*} }[/math] valid on [math]\displaystyle{ \mathcal{D} }[/math] just in case for all classical ground models [math]\displaystyle{ M }[/math], [math]\displaystyle{ A }[/math] is [math]\displaystyle{ S^{*} }[/math] valid in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math].
The second sense of validity, [math]\displaystyle{ S^{\#} }[/math] validity, uses near stability rather than stability. A sentence [math]\displaystyle{ {A} }[/math] is nearly stably true in a sequence [math]\displaystyle{ \mathcal{S} }[/math] iff there is an [math]\displaystyle{ \alpha }[/math] such that for all [math]\displaystyle{ \beta\geq\alpha }[/math], there is a natural number [math]\displaystyle{ n }[/math] such that for all [math]\displaystyle{ m\geq n }[/math], [math]\displaystyle{ M+\delta_{M, \mathcal{D}}^{m}(\mathcal{S}_{\beta})\models A. }[/math] A sentence [math]\displaystyle{ {A} }[/math] is nearly stably false in a sequence [math]\displaystyle{ \mathcal{S} }[/math] iff there is an [math]\displaystyle{ \alpha }[/math] such that for all [math]\displaystyle{ \beta\geq\alpha }[/math], there is a natural number [math]\displaystyle{ n }[/math] such that for all [math]\displaystyle{ m\geq n }[/math], [math]\displaystyle{ M+\delta_{M, \mathcal{D}}^{m}(\mathcal{S}_{\beta})\not\models A. }[/math] A nearly stable sentence may have finitely long periods of instability following limits, after which it settles down until the next limit.
A sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S^{\#} }[/math] in [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] iff for all [math]\displaystyle{ On }[/math]-long revision sequences [math]\displaystyle{ \mathcal{S} }[/math], there is a stage [math]\displaystyle{ \alpha }[/math] such that [math]\displaystyle{ A }[/math] is nearly stably true in [math]\displaystyle{ \mathcal{S} }[/math] after stage [math]\displaystyle{ \alpha }[/math]. A sentence [math]\displaystyle{ A }[/math] is valid in [math]\displaystyle{ S^{\#} }[/math] on [math]\displaystyle{ \mathcal{D} }[/math] just in case it is valid in [math]\displaystyle{ S^{\#} }[/math] in all classical ground models [math]\displaystyle{ M }[/math] on [math]\displaystyle{ \mathcal{D} }[/math].
If a sentence is valid in [math]\displaystyle{ S^{*} }[/math], then it is valid in [math]\displaystyle{ S^{\#} }[/math], but not conversely; the semantical laws for truth, discussed below, furnish sentences that are valid in [math]\displaystyle{ S^{\#} }[/math] but not in [math]\displaystyle{ S^{*} }[/math]. The example of [math]\displaystyle{ \mathcal{D}_3 }[/math] shows that [math]\displaystyle{ S^{\#} }[/math] validity in a model can also outstrip [math]\displaystyle{ S_0 }[/math] validity: the sentence [math]\displaystyle{ \forall x Jx }[/math] is not valid in [math]\displaystyle{ \mathbb{N} }[/math] in [math]\displaystyle{ S_0 }[/math], but it is valid in [math]\displaystyle{ S^{\#} }[/math].
An attraction of [math]\displaystyle{ S^{\#} }[/math] validity is that it generates a simpler logic than [math]\displaystyle{ S^{*} }[/math]. The proof system [math]\displaystyle{ C_0 }[/math] is sound for [math]\displaystyle{ S^{\#} }[/math], but it is not, in general, complete. In light of the completeness of [math]\displaystyle{ C_0 }[/math], if a sentence is valid in [math]\displaystyle{ S_0 }[/math], then it is valid in [math]\displaystyle{ S^{\#} }[/math], but the converse does not hold in general. Validity in [math]\displaystyle{ S_0 }[/math] and in [math]\displaystyle{ S^{*} }[/math] are, in general, incomparable. Consequently, [math]\displaystyle{ C_0 }[/math] is not sound for [math]\displaystyle{ S^{*} }[/math].
Finite definitions
Although [math]\displaystyle{ S^{\#} }[/math] validity outstrips [math]\displaystyle{ S_0 }[/math] validity in general, there is a special case in which the two coincide: finite definitions. Loosely speaking, a definition is finite if all revision sequences stop producing new hypotheses after a finite number of revisions. To put it more precisely, a hypothesis [math]\displaystyle{ h }[/math] is reflexive just in case there is an [math]\displaystyle{ n\gt 0 }[/math] such that [math]\displaystyle{ h=\delta_{M, \mathcal{D}}^{n}(h) }[/math]. A definition is finite iff for all models [math]\displaystyle{ M }[/math] and all hypotheses [math]\displaystyle{ h }[/math], there is a natural number [math]\displaystyle{ n }[/math] such that [math]\displaystyle{ \delta_{M, \mathcal{D}}^{n}(h) }[/math] is reflexive. Gupta showed that if [math]\displaystyle{ \mathcal{D} }[/math] is finite, then [math]\displaystyle{ S^{\#} }[/math] validity and [math]\displaystyle{ S_0 }[/math] validity coincide.
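On a finite ground model every revision sequence must eventually enter a cycle, so some revision of each hypothesis is reflexive; this is only a sanity check, not a test of finiteness proper, which quantifies over all models. The sketch below (an illustration of ours, using the definition [math]\displaystyle{ \mathcal{D}_1 }[/math] from the examples) finds, for each hypothesis, a reflexive revision of it and the period with which it recurs.

```python
def revise(ext):
    # D1 from the examples: Gx =df (x = a & ~Gx) or (x = b & Gb)
    return frozenset(x for x in ('a', 'b')
                     if (x == 'a' and x not in ext) or (x == 'b' and 'b' in ext))

def first_reflexive(h, max_steps=100):
    """Return (k, n) such that delta^k(h) is reflexive: it recurs after a
    further n > 0 revisions.  On a finite model such k and n always exist,
    because there are only finitely many hypotheses."""
    seen, current = {}, h
    for step in range(max_steps):
        if current in seen:
            return seen[current], step - seen[current]
        seen[current] = step
        current = revise(current)
    raise RuntimeError("no cycle found within max_steps")

for h in [frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})]:
    k, n = first_reflexive(h)
    print(sorted(h), '-> reflexive after', k, 'revisions, period', n)
```

For [math]\displaystyle{ \mathcal{D}_1 }[/math] every hypothesis is already reflexive with period 2, matching the oscillation visible in the first table above.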
There is no known syntactic characterization of the set of finite definitions, and finite definitions are not closed under standard logical operations, such as conjunction and disjunction. Maricarmen Martinez has identified some syntactic features under which the set of finite definitions is closed.[12] She has shown that if [math]\displaystyle{ {L} }[/math] contains only unary predicates, apart from identity, contains no function symbols, and the definienda of [math]\displaystyle{ \mathcal{D} }[/math] are all unary, then [math]\displaystyle{ \mathcal{D} }[/math] is finite.
While many standard logical operations do not preserve finiteness, it is preserved by the operation of self-composition.[13] For a definition [math]\displaystyle{ G\overline{x}=_{Df} A(\overline{x},G) }[/math], define self-composition recursively as follows.
- [math]\displaystyle{ A^0(\overline{x},G)= G\overline{x} }[/math] and
- [math]\displaystyle{ A^{n+1}(\overline{x},G)= A^{n}(\overline{x},G)[A(\overline{t},G)/G\overline{t}] }[/math].
The latter says that [math]\displaystyle{ A^{n+1} }[/math] is obtained by replacing all instances of [math]\displaystyle{ G\overline{t} }[/math] in [math]\displaystyle{ A^n }[/math] with [math]\displaystyle{ A(\overline{t},G) }[/math]. If [math]\displaystyle{ \mathcal{D} }[/math] is a finite definition and [math]\displaystyle{ \mathcal{D}^n }[/math] is the result of replacing each definiens [math]\displaystyle{ B }[/math] in [math]\displaystyle{ \mathcal{D} }[/math] with [math]\displaystyle{ B^n }[/math], then [math]\displaystyle{ \mathcal{D}^n }[/math] is a finite definition as well.
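For instance, with the definition [math]\displaystyle{ Hx=_{Df} Hx\lor\sim Hx }[/math] from the examples above, self-composition yields
- [math]\displaystyle{ A^1(x,H)= Hx\lor\sim Hx }[/math] and
- [math]\displaystyle{ A^2(x,H)= (Hx\lor\sim Hx)\lor\sim(Hx\lor\sim Hx) }[/math],
each obtained from the previous formula by substituting the definiens for every occurrence of [math]\displaystyle{ H }[/math].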
Notable formal features
Revision theory distinguishes material equivalence from definitional equivalence.[14] The sets of definitions use the latter. In general, definitional equivalence is not the same as material equivalence. Given a definition
- [math]\displaystyle{ Gx=_{Df} A(x,G), }[/math]
its material counterpart,
- [math]\displaystyle{ \forall x(Gx\equiv A(x,G)), }[/math]
will not, in general, be valid.[15] The definition
- [math]\displaystyle{ Gx=_{Df} \sim Gx }[/math]
illustrates the invalidity. Its definiens and definiendum will not have the same truth value after any revision, so the material biconditional will not be valid. For some definitions, the material counterparts of the defining clauses are valid. For example, if the definientia of [math]\displaystyle{ \mathcal{D} }[/math] contain only symbols from the ground language, then the material counterparts will be valid.
The definitions given above are for the classical scheme. The definitions can be adjusted to work with any semantic scheme.[16] This includes three-valued schemes, such as Strong Kleene, with exclusion negation, whose truth table is the following.
| | [math]\displaystyle{ \lnot }[/math] |
|---|---|
| [math]\displaystyle{ \textbf{t} }[/math] | [math]\displaystyle{ \textbf{f} }[/math] |
| [math]\displaystyle{ \textbf{n} }[/math] | [math]\displaystyle{ \textbf{f} }[/math] |
| [math]\displaystyle{ \textbf{f} }[/math] | [math]\displaystyle{ \textbf{t} }[/math] |
Notably, many approaches to truth, such as Saul Kripke’s Strong Kleene theory, cannot be used with exclusion negation in the language.
Revision theory, while in some respects similar to the theory of inductive definitions, differs in several ways.[17] Most importantly, revision need not be monotonic, which is to say that extensions at later stages need not be supersets of extensions at earlier stages, as illustrated by the first example above. Relatedly, revision theory does not postulate any restrictions on the syntactic form of definitions. Inductive definitions require their definientia to be positive, in the sense that definienda can only appear in definientia under an even number of negations. (This assumes that negation, conjunction, disjunction, and the universal quantifier are the primitive logical connectives, and the remaining classical connectives are simply defined symbols.) The definition
- [math]\displaystyle{ Gx =_{Df} (x \text{ is even }\&\ Gx) \vee (x\text{ is odd }\&\ \sim Gx) }[/math]
is acceptable in revision theory, although not in the theory of inductive definitions.
Inductive definitions are semantically interpreted via fixed points, hypotheses [math]\displaystyle{ h }[/math] for which [math]\displaystyle{ h=\delta_{M, \mathcal{D}}(h) }[/math]. In general, revision sequences will not reach fixed points. If the definientia of [math]\displaystyle{ \mathcal{D} }[/math] are all positive, then revision sequences will reach fixed points, as long as the initial hypothesis has the feature that [math]\displaystyle{ h(G)\subseteq \delta_{M, \mathcal{D}}(h)(G) }[/math], for each [math]\displaystyle{ G }[/math]. In particular, given such a [math]\displaystyle{ \mathcal{D} }[/math], if the initial hypothesis assigns the empty extension to all definienda, then the revision sequence will reach the minimal fixed point.
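The contrast can be made concrete with a positive definition. The sketch below is an illustration of ours (the particular graph-reachability definition, the finite domain, and the function names are assumptions); starting from the empty hypothesis, revision increases monotonically and halts at the minimal fixed point, exactly as an inductive definition would.

```python
# A positive definition on a small graph:
#   Gx =df (x = 0) or (there is a y with Edge(y, x) and Gy),
# i.e. G holds of the nodes reachable from node 0.
DOMAIN = {0, 1, 2, 3, 4}
EDGES = {(0, 1), (1, 2), (3, 4)}       # ground-model interpretation of Edge

def revise(ext_G):
    return {x for x in DOMAIN
            if x == 0 or any((y, x) in EDGES and y in ext_G for y in DOMAIN)}

ext = set()                             # the empty initial hypothesis
while True:
    print(sorted(ext))
    new = revise(ext)
    if new == ext:                      # a fixed point: h = delta(h)
        break
    ext = new
```

Here the extensions grow as [math]\displaystyle{ \varnothing, \{0\}, \{0,1\}, \{0,1,2\} }[/math] and then stop, and [math]\displaystyle{ \{0,1,2\} }[/math] is the minimal fixed point.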
The sets of valid sentences on some definitions can be highly complex, in particular [math]\displaystyle{ \Pi^1_2 }[/math]. This was shown by Philip Kremer and Aldo Antonelli.[18] There is, consequently, no proof system for [math]\displaystyle{ S^{\#} }[/math] validity.
Truth
The most famous application of revision theory is to the theory of truth, as developed in Gupta and Belnap (1993), for example. The circular definition of truth is the set of all the Tarski biconditionals, ‘[math]\displaystyle{ A }[/math]’ is true iff [math]\displaystyle{ A }[/math], where ‘iff’ is understood as definitional equivalence, [math]\displaystyle{ =_{Df} }[/math], rather than material equivalence. Each Tarski biconditional provides a partial definition of the concept of truth. The concept of truth is circular because some Tarski biconditionals use an ineliminable instance of ‘is true’ in their definiens. For example, suppose that [math]\displaystyle{ b }[/math] is the name of a truth-teller sentence, [math]\displaystyle{ b }[/math] is true. This sentence has as its Tarski biconditional: [math]\displaystyle{ b }[/math] is true iff [math]\displaystyle{ b }[/math] is true. The truth predicate on the right cannot be eliminated. This example depends on there being a truth-teller in the language. This and other examples show that truth, defined by the Tarski biconditionals, is a circular concept.
Some languages, such as the language of arithmetic, will have vicious self-reference. The liar and other pathological sentences are guaranteed to be in the language with truth. Other languages with truth can be defined that lack vicious self-reference.[19] In such a language, any revision sequence [math]\displaystyle{ {S} }[/math] for truth is bound to reach a stage where [math]\displaystyle{ {S}_{\alpha}={S}_{\alpha+1} }[/math], so the truth predicate behaves like a non-circular predicate.[20] The result is that, in such languages, truth has a stable extension that is defined over all sentences of the language. This is in contrast to many other theories of truth, for example the minimal Strong Kleene and minimal supervaluational theories. The extension and anti-extension of the truth predicate in these theories will not exhaust the set of sentences of the language.
The difference between [math]\displaystyle{ S^{\#} }[/math] and [math]\displaystyle{ S^{*} }[/math] is important when considering revision theories of truth. Part of the difference comes across in the semantical laws, which are the following equivalences, where T is a truth predicate.[21]
- [math]\displaystyle{ \forall A(T(\ulcorner\sim A\urcorner)\equiv \sim T(\ulcorner A\urcorner)) }[/math]
- [math]\displaystyle{ \forall A, B(T(\ulcorner{A\& B}\urcorner)\equiv T(\ulcorner{A}\urcorner)\& T(\ulcorner{B}\urcorner)) }[/math]
- [math]\displaystyle{ \forall A, B(T(\ulcorner{A\lor B}\urcorner)\equiv T(\ulcorner{A}\urcorner)\lor T(\ulcorner{B}\urcorner)) }[/math]
- [math]\displaystyle{ \forall A(T(\ulcorner\forall x A\urcorner)\equiv \forall t T(\ulcorner A[x/t]\urcorner)) }[/math]
These are all valid in [math]\displaystyle{ S^{\#} }[/math], although the last is valid only when the domain is countable and every element is named. In [math]\displaystyle{ S^{*} }[/math], however, none are valid. One can see why the negation law fails by considering the liar, [math]\displaystyle{ a=\ulcorner{\sim Ta}\urcorner }[/math]. The liar and all finite iterations of the truth predicate to it are unstable, so one can set [math]\displaystyle{ T\ulcorner{Ta}\urcorner }[/math] and [math]\displaystyle{ T\ulcorner{\sim Ta}\urcorner }[/math] to have the same truth value at some limits, which results in [math]\displaystyle{ \sim T\ulcorner{Ta}\urcorner }[/math] and [math]\displaystyle{ T\ulcorner{\sim Ta}\urcorner }[/math] having different truth values. This is corrected after revision, but the negation law will not be stably true. It is a consequence of a theorem of Vann McGee that the revision theory of truth in [math]\displaystyle{ S^{\#} }[/math] is [math]\displaystyle{ \omega }[/math]-inconsistent.[22] The [math]\displaystyle{ S^{*} }[/math] theory is not [math]\displaystyle{ \omega }[/math]-inconsistent.
There is an axiomatic theory of truth that is related to the [math]\displaystyle{ S^{\#} }[/math] theory in the language of arithmetic with truth. The Friedman-Sheard theory (FS) is obtained by adding to the usual axioms of Peano arithmetic
- the axiom [math]\displaystyle{ \forall s,t(T(\ulcorner{s=t}\urcorner)\equiv s=t) }[/math],
- the semantical laws,
- the induction axioms with the truth predicate, and
- the two rules
- if [math]\displaystyle{ \vdash A }[/math], then [math]\displaystyle{ \vdash T(\ulcorner A\urcorner) }[/math], and
- if [math]\displaystyle{ \vdash T(\ulcorner A\urcorner) }[/math], then [math]\displaystyle{ \vdash A }[/math].[23]
By McGee's theorem, this theory is [math]\displaystyle{ \omega }[/math]-inconsistent. FS does not, however, have as theorems any false purely arithmetical sentences.[24] FS has as a theorem global reflection for Peano arithmetic,
- [math]\displaystyle{ \forall x((\mathrm{Sent}(x)\ \&\ \mathrm{Bew}_{PA}(x))\supset Tx), }[/math]
where [math]\displaystyle{ \mathrm{Bew}_{PA} }[/math] is a provability predicate for Peano arithmetic and [math]\displaystyle{ \mathrm{Sent} }[/math] is a predicate true of all and only sentences of the language with truth. Consequently, it is a theorem of FS that Peano arithmetic is consistent.
FS is a subtheory of the theory of truth for arithmetic, the set of sentences valid in [math]\displaystyle{ S^{\#} }[/math]. A standard way to show that FS is consistent is to use an [math]\displaystyle{ \omega }[/math]-long revision sequence.[25] There has been some work done on axiomatizing the [math]\displaystyle{ S^{*} }[/math] theory of truth for arithmetic.[26]
Other applications
Revision theory has been used to study circular concepts apart from truth and to provide alternative analyses of concepts, such as rationality.
A non-well-founded set theory is a set theory that postulates the existence of a non-well-founded set, which is a set [math]\displaystyle{ x }[/math] that has an infinite descending chain along the membership relation,
- [math]\displaystyle{ \cdots x_{n+1}\in x_n\in \cdots \in x_1\in x. }[/math]
Antonelli has used revision theory to construct models of non-well-founded set theory.[27] One example is a set theory that postulates a set whose sole member is itself, [math]\displaystyle{ x=\{x\} }[/math].
Infinite-time Turing machines are models of computation that permit computations to go on for infinitely many steps. They generalize standard Turing machines used in the theory of computability. Benedikt Löwe has shown that there are close connections between computations of infinite-time Turing machines and revision processes.[28]
Rational choice in game theory has been analyzed as a circular concept. André Chapuis has argued that the reasoning agents use in rational choice exhibits an interdependence characteristic of circular concepts.[29]
Revision theory can be adapted to model other sorts of phenomena. For example, vagueness has been analyzed in revision-theoretic terms by Conrad Asmus.[30] To model a vague predicate on this approach, one specifies pairs of similar objects and which objects are non-borderline cases, and so are unrevisable. The borderline objects change their status with respect to a predicate depending on the status of the objects to which they are similar.
Revision theory has been used by Gupta to explicate the logical contribution of experience to one's beliefs.[31] According to this view, the contribution of experience is represented by a rule of revision that takes as input an agent's view, that is, the agent's concepts and beliefs, and yields as output perceptual judgments. These judgments can then be used to update the agent's view.
References
- ↑ See, respectively, Gupta (1982), Herzberger (1982), and Belnap (1982).
- ↑ Gupta and Belnap (1993)
- ↑ Yaqūb (1993)
- ↑ Gupta and Belnap (1993, 278)
- ↑ This point is discussed further by Gupta and Belnap (1993, 121), Shapiro (2006), and Gupta (2011, 160-161).
- ↑ Gupta and Belnap (1993, 277)
- ↑ This section is based on Gupta and Belnap (1993).
- ↑ This section is based on Gupta and Belnap (1993) and Kremer (2014).
- ↑ A presentation of [math]\displaystyle{ C_0 }[/math] can be found in chapter 5 of Gupta and Belnap (1993).
- ↑ Bruni (2013)
- ↑ The definitions of this section are taken from Gupta and Belnap (1993).
- ↑ Martinez (2001)
- ↑ This was shown by Gupta (2006b).
- ↑ This point is noted by Gupta and Belnap (1993).
- ↑ One can extend revision theory with a unary operator so that the definitional equivalence will be reflected in the object languages by a valid equivalence, [math]\displaystyle{ \forall x(Gx\equiv \Box A(x,G)) }[/math]. This was shown by Standefer (2015).
- ↑ See Gupta and Belnap (1993) for this point.
- ↑ This is shown by Gupta and Belnap (1993).
- ↑ See Kremer (1993) and Antonelli (1994a), respectively.
- ↑ See Gupta (1982) for an example.
- ↑ Gupta and Belnap (1993, 202-205)
- ↑ The corner quotes are used to indicate a generic naming device, e.g. quotation names or Gödel numbering.
- ↑ McGee (1985)
- ↑ The original presentation of FS used different axioms and rules. See Halbach (2011) for more details.
- ↑ Halbach (2011, 173)
- ↑ Halbach (2011, §14.1)
- ↑ Horsten et al. (2012)
- ↑ Antonelli (1994b)
- ↑ Löwe (2001)
- ↑ Chapuis (2003)
- ↑ Asmus (2013)
- ↑ Gupta (2006a)
- Antonelli, A. (1994a). The complexity of revision. Notre Dame Journal of Formal Logic, 35(1):67–72.
- Antonelli, A. (1994b). Non-well-founded sets via revision rules. Journal of Philosophical Logic, 23(6):633–679.
- Asmus, C. M. (2013). Vagueness and revision sequences. Synthese, 190(6):953–974.
- Belnap, N. (1982). Gupta's rule of revision theory of truth. Journal of Philosophical Logic, 11(1):103–116.
- Bruni, R. (2013). Analytic calculi for circular concepts by finite revision. Studia Logica, 101(5):915–932.
- Chapuis, A. (2003). An application of circular definitions: Rational decision. In Löwe, B., Räsch, T., and Malzkorn, W., editors, Foundations of the Formal Sciences II, pages 47–54. Kluwer.
- Gupta, A. (1982). Truth and paradox. Journal of Philosophical Logic, 11(1). A revised version, with a brief postscript, is reprinted in Martin (1984).
- Gupta, A. (2006a). Empiricism and Experience. Oxford University Press.
- Gupta, A. (2006b). Finite circular definitions. In Bolander, T., Hendricks, V. F., and Andersen, S. A., editors, Self-Reference, pages 79–93. CSLI Publications.
- Gupta, A. (2011). Truth, Meaning, Experience. Oxford University Press.
- Gupta, A. and Belnap, N. (1993). The Revision Theory of Truth. MIT Press.
- Halbach, V. (2011). Axiomatic Theories of Truth. Cambridge University Press.
- Herzberger, H. G. (1982). Notes on naive semantics. Journal of Philosophical Logic, 11(1):61–102. Reprinted in Martin (1984).
- Horsten, L., Leigh, G. E., Leitgeb, H., and Welch, P. (2012). Revision revisited. Review of Symbolic Logic, 5(4):642–665.
- Kremer, P. (1993). The Gupta-Belnap systems [math]\displaystyle{ S^{\#} }[/math] and [math]\displaystyle{ S^{*} }[/math] are not axiomatisable. Notre Dame Journal of Formal Logic, 34(4):583–596.
- Löwe, B. (2001). Revision sequences and computers with an infinite amount of time. Journal of Logic and Computation, 11(1):25–40. doi:10.1093/logcom/11.1.25.
- Martin, R. L., editor (1984). Recent Essays on Truth and the Liar Paradox. Oxford University Press.
- Martinez, M. (2001). Some closure properties of finite definitions. Studia Logica, 68(1):43–68.
- McGee, V. (1985). How truthlike can a predicate be? A negative result. Journal of Philosophical Logic, 14(4):399–410.
- Shapiro, L. (2006). The rationale behind revision-rule semantics. Philosophical Studies, 129(3):477–515.
- Standefer, S. (2015). Solovay-type theorems for circular definitions. Review of Symbolic Logic, pages 1–21. Forthcoming.
- Yaqūb, A. M. (1993). The Liar Speaks the Truth: A Defense of the Revision Theory of Truth. Oxford University Press.
External links
- Kremer, P. (2014) The Revision Theory of Truth. In Zalta, E. N., editor, The Stanford Encyclopedia of Philosophy. Summer 2014 edition.
Original source: https://en.wikipedia.org/wiki/Revision_theory.