Forcing (mathematics)
In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. Intuitively, forcing can be thought of as a technique to expand the set theoretical universe [math]\displaystyle{ V }[/math] to a larger universe [math]\displaystyle{ V[G] }[/math] by introducing a new "generic" object [math]\displaystyle{ G }[/math].
Forcing was first used by Paul Cohen in 1963, to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. It has been considerably reworked and simplified in the following years, and has since served as a powerful technique, both in set theory and in areas of mathematical logic such as recursion theory. Descriptive set theory uses the notions of forcing from both recursion theory and set theory. Forcing has also been used in model theory, but it is common in model theory to define genericity directly without mention of forcing.
Intuition
Forcing is usually used to construct an expanded universe that satisfies some desired property. For example, the expanded universe might contain many new real numbers (at least [math]\displaystyle{ \aleph_2 }[/math] of them), identified with subsets of the set [math]\displaystyle{ \mathbb{N} }[/math] of natural numbers, that were not there in the old universe, and thereby violate the continuum hypothesis.
In order to intuitively justify such an expansion, it is best to think of the "old universe" as a model [math]\displaystyle{ M }[/math] of set theory, which is itself a set in the "real universe" [math]\displaystyle{ V }[/math]. By the Löwenheim–Skolem theorem, [math]\displaystyle{ M }[/math] can be chosen to be a "bare bones" model that is externally countable, which guarantees that there will be many subsets (in [math]\displaystyle{ V }[/math]) of [math]\displaystyle{ \mathbb{N} }[/math] that are not in [math]\displaystyle{ M }[/math]. Specifically, there is an ordinal [math]\displaystyle{ \aleph_2^M }[/math] that "plays the role of the cardinal [math]\displaystyle{ \aleph_2 }[/math]" in [math]\displaystyle{ M }[/math], but is actually countable in [math]\displaystyle{ V }[/math]. Working in [math]\displaystyle{ V }[/math], it should be easy to find one distinct subset of [math]\displaystyle{ \mathbb{N} }[/math] for each element of [math]\displaystyle{ \aleph_2^M }[/math]. (For simplicity, this family of subsets can be characterized with a single subset [math]\displaystyle{ X \subseteq \aleph_2^M \times \mathbb{N} }[/math].)
However, in some sense, it may be desirable to "construct the expanded model [math]\displaystyle{ M[X] }[/math] within [math]\displaystyle{ M }[/math]". This would help ensure that [math]\displaystyle{ M[X] }[/math] "resembles" [math]\displaystyle{ M }[/math] in certain aspects, such as [math]\displaystyle{ \aleph_2^{M[X]} }[/math] being the same as [math]\displaystyle{ \aleph_2^M }[/math] (more generally, that cardinal collapse does not occur), and allow fine control over the properties of [math]\displaystyle{ M[X] }[/math]. More precisely, every member of [math]\displaystyle{ M[X] }[/math] should be given a (non-unique) name in [math]\displaystyle{ M }[/math]. The name can be thought of as an expression in terms of [math]\displaystyle{ X }[/math], just like in a simple field extension [math]\displaystyle{ L = K(\theta) }[/math] every element of [math]\displaystyle{ L }[/math] can be expressed in terms of [math]\displaystyle{ \theta }[/math]. A major component of forcing is manipulating those names within [math]\displaystyle{ M }[/math], so sometimes it may help to directly think of [math]\displaystyle{ M }[/math] as "the universe", knowing that the theory of forcing guarantees that [math]\displaystyle{ M[X] }[/math] will correspond to an actual model.
A subtle point of forcing is that, if [math]\displaystyle{ X }[/math] is taken to be an arbitrary "missing subset" of some set in [math]\displaystyle{ M }[/math], then the [math]\displaystyle{ M[X] }[/math] constructed "within [math]\displaystyle{ M }[/math]" may not even be a model. This is because [math]\displaystyle{ X }[/math] may encode "special" information about [math]\displaystyle{ M }[/math] that is invisible within [math]\displaystyle{ M }[/math] (e.g. the countability of [math]\displaystyle{ M }[/math]), and thus prove the existence of sets that are "too complex for [math]\displaystyle{ M }[/math] to describe".[1] [2]
Forcing avoids such problems by requiring the newly introduced set [math]\displaystyle{ X }[/math] to be a generic set relative to [math]\displaystyle{ M }[/math].[1] Some statements are "forced" to hold for any generic [math]\displaystyle{ X }[/math]: For example, a generic [math]\displaystyle{ X }[/math] is "forced" to be infinite. Furthermore, any property (describable in [math]\displaystyle{ M }[/math]) of a generic set is "forced" to hold under some forcing condition. The concept of "forcing" can be defined within [math]\displaystyle{ M }[/math], and it gives [math]\displaystyle{ M }[/math] enough reasoning power to prove that [math]\displaystyle{ M[X] }[/math] is indeed a model that satisfies the desired properties.
Cohen's original technique, now called ramified forcing, is slightly different from the unramified forcing expounded here. Forcing is also equivalent to the method of Boolean-valued models, which some feel is conceptually more natural and intuitive, but usually much more difficult to apply.[3]
The role of the model
In order for the above approach to work smoothly, [math]\displaystyle{ M }[/math] must in fact be a standard transitive model in [math]\displaystyle{ V }[/math], so that membership and other elementary notions can be handled intuitively in both [math]\displaystyle{ M }[/math] and [math]\displaystyle{ V }[/math]. A standard transitive model can be obtained from any standard model through the Mostowski collapse lemma, but the existence of any standard model of [math]\displaystyle{ \mathsf{ZFC} }[/math] (or any variant thereof) is in itself a stronger assumption than the consistency of [math]\displaystyle{ \mathsf{ZFC} }[/math].
To get around this issue, a standard technique is to let [math]\displaystyle{ M }[/math] be a standard transitive model of an arbitrary finite subset of [math]\displaystyle{ \mathsf{ZFC} }[/math] (any axiomatization of [math]\displaystyle{ \mathsf{ZFC} }[/math] has at least one axiom schema, and thus an infinite number of axioms), the existence of which is guaranteed by the reflection principle. As the goal of a forcing argument is to prove consistency results, this is enough since any inconsistency in a theory must manifest with a derivation of a finite length, and thus involve only a finite number of axioms.
Forcing conditions and forcing posets
Each forcing condition can be regarded as a finite piece of information regarding the object [math]\displaystyle{ X }[/math] adjoined to the model. There are many different ways of providing information about an object, which give rise to different forcing notions. A general approach to formalizing forcing notions is to regard forcing conditions as abstract objects with a poset structure.
A forcing poset is an ordered triple, [math]\displaystyle{ (\mathbb{P},\leq,\mathbf{1}) }[/math], where [math]\displaystyle{ \leq }[/math] is a preorder on [math]\displaystyle{ \mathbb{P} }[/math], and [math]\displaystyle{ \mathbf{1} }[/math] is the largest element. Members of [math]\displaystyle{ \mathbb{P} }[/math] are the forcing conditions (or just conditions). The order relation [math]\displaystyle{ p \leq q }[/math] means "[math]\displaystyle{ p }[/math] is stronger than [math]\displaystyle{ q }[/math]". (Intuitively, the "smaller" condition provides "more" information, just as the smaller interval [math]\displaystyle{ [3.1415926,3.1415927] }[/math] provides more information about the number π than the interval [math]\displaystyle{ [3.1,3.2] }[/math] does.) Furthermore, the preorder [math]\displaystyle{ \leq }[/math] must be atomless, meaning that it must satisfy the splitting condition:
- For each [math]\displaystyle{ p \in \mathbb{P} }[/math], there are [math]\displaystyle{ q,r \in \mathbb{P} }[/math] such that [math]\displaystyle{ q,r \leq p }[/math], with no [math]\displaystyle{ s \in \mathbb{P} }[/math] such that [math]\displaystyle{ s \leq q,r }[/math].
In other words, it must be possible to strengthen any forcing condition [math]\displaystyle{ p }[/math] in at least two incompatible directions. Intuitively, this is because [math]\displaystyle{ p }[/math] is only a finite piece of information, whereas an infinite piece of information is needed to determine [math]\displaystyle{ X }[/math].
There are various conventions in use. Some authors require [math]\displaystyle{ \leq }[/math] to also be antisymmetric, so that the relation is a partial order. Some use the term partial order anyway, conflicting with standard terminology, while some use the term preorder. The largest element can be dispensed with. The reverse ordering is also used, most notably by Saharon Shelah and his co-authors.
Examples
Let [math]\displaystyle{ S }[/math] be any infinite set (such as [math]\displaystyle{ \mathbb{N} }[/math]), and let the generic object in question be a new subset [math]\displaystyle{ X \subseteq S }[/math]. In Cohen's original formulation of forcing, each forcing condition is a finite set of sentences, either of the form [math]\displaystyle{ a \in X }[/math] or [math]\displaystyle{ a \notin X }[/math], that are self-consistent (i.e. [math]\displaystyle{ a \in X }[/math] and [math]\displaystyle{ a \notin X }[/math] for the same value of [math]\displaystyle{ a }[/math] do not appear in the same condition). This forcing notion is usually called Cohen forcing.
The forcing poset for Cohen forcing can be formally written as [math]\displaystyle{ (\operatorname{Fin}(S,2),\supseteq,0) }[/math], the finite partial functions from [math]\displaystyle{ S }[/math] to [math]\displaystyle{ 2 ~ \stackrel{\text{df}}{=} ~ \{ 0,1 \} }[/math] under reverse inclusion. Cohen forcing satisfies the splitting condition because given any condition [math]\displaystyle{ p }[/math], one can always find an element [math]\displaystyle{ a \in S }[/math] not mentioned in [math]\displaystyle{ p }[/math], and add either the sentence [math]\displaystyle{ a \in X }[/math] or [math]\displaystyle{ a \notin X }[/math] to [math]\displaystyle{ p }[/math] to get two new forcing conditions, incompatible with each other.
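As a concrete illustration, the following sketch (Python, with a small finite stand-in for [math]\displaystyle{ S }[/math]; the names CONDITIONS, leq, incompatible and split are illustrative, not standard notation) encodes Cohen conditions as finite partial functions and exhibits the splitting condition.

```python
from itertools import combinations, product

S = range(4)  # a small finite stand-in for the infinite set S

# All finite partial functions S -> {0,1}, each encoded as a frozenset of (element, bit) pairs.
CONDITIONS = [frozenset(zip(dom, vals))
              for r in range(len(S) + 1)
              for dom in combinations(S, r)
              for vals in product((0, 1), repeat=r)]

def leq(p, q):
    """p <= q in the forcing order: p is stronger than q, i.e. p extends q."""
    return q <= p  # reverse inclusion of graphs

def incompatible(p, q):
    """p and q assign different values to some common element of S."""
    return any(dict(p).get(a, b) != b for (a, b) in q)

def split(p):
    """Witness the splitting condition: two incompatible strengthenings of p."""
    a = next(x for x in S if x not in dict(p))  # some element not mentioned in p
    q, r = p | {(a, 0)}, p | {(a, 1)}
    assert leq(q, p) and leq(r, p) and incompatible(q, r)
    return q, r

print(split(frozenset({(0, 1)})))  # two incompatible extensions of the condition {0 |-> 1}
```

Here two conditions are compatible exactly when their union is again a function, which matches the description of Cohen conditions as consistent finite sets of sentences.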
Another instructive example of a forcing poset is [math]\displaystyle{ (\operatorname{Bor}(I),\subseteq,I) }[/math], where [math]\displaystyle{ I = [0,1] }[/math] and [math]\displaystyle{ \operatorname{Bor}(I) }[/math] is the collection of Borel subsets of [math]\displaystyle{ I }[/math] having non-zero Lebesgue measure. The generic object associated with this forcing poset is a random real number [math]\displaystyle{ r \in [0, 1] }[/math]. It can be shown that [math]\displaystyle{ r }[/math] falls in every Borel subset of [math]\displaystyle{ [0, 1] }[/math] with measure 1, provided that the Borel subset is "described" in the original unexpanded universe (this can be formalized with the concept of Borel codes). Each forcing condition can be regarded as a random event with probability equal to its measure. Due to the ready intuition this example can provide, probabilistic language is sometimes used with other forcing posets as well.
Generic filters
Even though each individual forcing condition [math]\displaystyle{ p }[/math] cannot fully determine the generic object [math]\displaystyle{ X }[/math], the set [math]\displaystyle{ G \subseteq \mathbb{P} }[/math] of all true forcing conditions does determine [math]\displaystyle{ X }[/math]. In fact, without loss of generality, [math]\displaystyle{ G }[/math] is commonly considered to be the generic object adjoined to [math]\displaystyle{ M }[/math], so the expanded model is called [math]\displaystyle{ M[G] }[/math]. It is usually easy enough to show that the originally desired object [math]\displaystyle{ X }[/math] is indeed in the model [math]\displaystyle{ M[G] }[/math].
Under this convention, the concept of "generic object" can be described in a general way. Specifically, the set [math]\displaystyle{ G }[/math] should be a generic filter on [math]\displaystyle{ \mathbb{P} }[/math] relative to [math]\displaystyle{ M }[/math]. The "filter" conditions capture the idea that [math]\displaystyle{ G }[/math] is the set of all forcing conditions that are "true" of the generic object:
- [math]\displaystyle{ G \subseteq \mathbb{P}; }[/math]
- [math]\displaystyle{ \mathbf{1} \in G; }[/math]
- if [math]\displaystyle{ p \geq q \in G }[/math], then [math]\displaystyle{ p \in G; }[/math]
- if [math]\displaystyle{ p,q \in G }[/math], then there exists an [math]\displaystyle{ r \in G }[/math] such that [math]\displaystyle{ r \leq p,q. }[/math]
For [math]\displaystyle{ G }[/math] to be "generic relative to [math]\displaystyle{ M }[/math]" means:
- If [math]\displaystyle{ D \in M }[/math] is a "dense" subset of [math]\displaystyle{ \mathbb{P} }[/math] (that is, for each [math]\displaystyle{ p \in \mathbb{P} }[/math], there exists a [math]\displaystyle{ q \in D }[/math] such that [math]\displaystyle{ q \leq p }[/math]), then [math]\displaystyle{ G \cap D \neq \varnothing }[/math].
Given that [math]\displaystyle{ M }[/math] is a countable model, the existence of a generic filter [math]\displaystyle{ G }[/math] follows from the Rasiowa–Sikorski lemma. In fact, slightly more is true: Given a condition [math]\displaystyle{ p \in \mathbb{P} }[/math], one can find a generic filter [math]\displaystyle{ G }[/math] such that [math]\displaystyle{ p \in G }[/math]. Due to the splitting condition on [math]\displaystyle{ \mathbb{P} }[/math], if [math]\displaystyle{ G }[/math] is a filter, then [math]\displaystyle{ \mathbb{P} \setminus G }[/math] is dense. If [math]\displaystyle{ G \in M }[/math], then [math]\displaystyle{ \mathbb{P} \setminus G \in M }[/math] because [math]\displaystyle{ M }[/math] is a model of [math]\displaystyle{ \mathsf{ZFC} }[/math]. For this reason, a generic filter is never in [math]\displaystyle{ M }[/math].
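The idea behind the Rasiowa–Sikorski construction can be sketched directly (Python; the helper names are hypothetical): starting from a condition, repeatedly strengthen it into the next dense set from a given list, then take the upward closure of the resulting chain. Over a countable [math]\displaystyle{ M }[/math] one enumerates all dense sets lying in [math]\displaystyle{ M }[/math] this way; the sketch only handles a finite list, which is enough to show the mechanism.

```python
def descending_chain(start, dense_sets, leq):
    """Build start = p_0 >= p_1 >= ... with p_(i+1) taken from the i-th dense set."""
    chain = [start]
    for D in dense_sets:
        p = chain[-1]
        # density of D: some condition in D strengthens p
        chain.append(next(d for d in D if leq(d, p)))
    return chain

def generated_filter(chain, conditions, leq):
    """Upward closure of the chain: all conditions weaker than some link of the chain."""
    return {p for p in conditions if any(leq(q, p) for q in chain)}
```

With the Cohen conditions from the previous sketch and the dense sets [math]\displaystyle{ D_n = \{ p \mid n \in \operatorname{Dom}(p) \} }[/math], the resulting filter already decides every listed [math]\displaystyle{ n }[/math]; genuine genericity additionally requires meeting every dense set lying in [math]\displaystyle{ M }[/math], which is where the countability of [math]\displaystyle{ M }[/math] is used.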
P-names and interpretations
Associated with a forcing poset [math]\displaystyle{ \mathbb{P} }[/math] is the class [math]\displaystyle{ V^{(\mathbb{P})} }[/math] of [math]\displaystyle{ \mathbb{P} }[/math]-names. A [math]\displaystyle{ \mathbb{P} }[/math]-name is a set [math]\displaystyle{ A }[/math] of the form
- [math]\displaystyle{ A \subseteq \{ (u,p) \mid u ~ \text{is a} ~ \mathbb{P} \text{-name and} ~ p \in \mathbb{P} \}. }[/math]
Given any filter [math]\displaystyle{ G }[/math] on [math]\displaystyle{ \mathbb{P} }[/math], the interpretation or valuation map from [math]\displaystyle{ \mathbb{P} }[/math]-names is given by
- [math]\displaystyle{ \operatorname{val}(u,G) = \{ \operatorname{val}(v,G) \mid \exists p \in G: ~ (v,p) \in u \}. }[/math]
The [math]\displaystyle{ \mathbb{P} }[/math]-names are, in fact, an expansion of the universe. Given [math]\displaystyle{ x \in V }[/math], one defines [math]\displaystyle{ \check{x} }[/math] to be the [math]\displaystyle{ \mathbb{P} }[/math]-name
- [math]\displaystyle{ \check{x} = \{ (\check{y},\mathbf{1}) \mid y \in x \}. }[/math]
Since [math]\displaystyle{ \mathbf{1} \in G }[/math], it follows that [math]\displaystyle{ \operatorname{val}(\check{x},G) = x }[/math]. In a sense, [math]\displaystyle{ \check{x} }[/math] is a "name for [math]\displaystyle{ x }[/math]" that does not depend on the specific choice of [math]\displaystyle{ G }[/math].
This also allows defining a "name for [math]\displaystyle{ G }[/math]" without explicitly referring to [math]\displaystyle{ G }[/math]:
- [math]\displaystyle{ \underline{G} = \{ (\check{p},p) \mid p \in \mathbb{P} \} }[/math]
so that [math]\displaystyle{ \operatorname{val}(\underline{G},G) = \{\operatorname{val}(\check p, G) \mid p \in G\} = G }[/math].
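The interpretation map and the check-names can be written down directly. The following sketch (Python, with names encoded as frozensets of (name, condition) pairs and a filter as a set of conditions; the encoding is illustrative) mirrors the two displayed definitions.

```python
ONE = frozenset()  # the weakest condition, e.g. the empty partial function in Fin(S, 2)

def val(u, G):
    """val(u, G) = { val(v, G) : (v, p) in u for some p in G }."""
    return frozenset(val(v, G) for (v, p) in u if p in G)

def check(x):
    """The canonical name for a hereditarily finite set x (encoded with frozensets)."""
    return frozenset((check(y), ONE) for y in x)

# Since ONE belongs to every filter, val(check(x), G) recovers x:
x = frozenset({frozenset()})          # the von Neumann ordinal 1, i.e. {0}
G = frozenset({ONE})                  # any filter containing ONE will do here
assert val(check(x), G) == x
```

Once conditions are themselves encoded as sets, the name [math]\displaystyle{ \underline{G} = \{ (\check{p},p) \mid p \in \mathbb{P} \} }[/math] can be evaluated with the same val.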
Rigorous definitions
The concepts of [math]\displaystyle{ \mathbb{P} }[/math]-names, interpretations, and [math]\displaystyle{ \check{x} }[/math] are actually defined by transfinite recursion. With [math]\displaystyle{ \varnothing }[/math] the empty set, [math]\displaystyle{ \alpha + 1 }[/math] the successor ordinal to ordinal [math]\displaystyle{ \alpha }[/math], [math]\displaystyle{ \mathcal{P} }[/math] the power-set operator, and [math]\displaystyle{ \lambda }[/math] a limit ordinal, define the following hierarchy:
- [math]\displaystyle{ \begin{align} \operatorname{Name}(\varnothing) & = \varnothing, \\ \operatorname{Name}(\alpha + 1) & = \mathcal{P}(\operatorname{Name}(\alpha) \times \mathbb{P}), \\ \operatorname{Name}(\lambda) & = \bigcup \{ \operatorname{Name}(\alpha) \mid \alpha \lt \lambda \}. \end{align} }[/math]
Then the class of [math]\displaystyle{ \mathbb{P} }[/math]-names is defined as
- [math]\displaystyle{ V^{(\mathbb{P})} = \bigcup \{ \operatorname{Name}(\alpha) ~|~ \alpha ~ \text{is an ordinal} \}. }[/math]
The interpretation map and the map [math]\displaystyle{ x \mapsto \check{x} }[/math] can similarly be defined with a hierarchical construction.
Forcing
Given a generic filter [math]\displaystyle{ G \subseteq \mathbb{P} }[/math], one proceeds as follows. The subclass of [math]\displaystyle{ \mathbb{P} }[/math]-names in [math]\displaystyle{ M }[/math] is denoted [math]\displaystyle{ M^{(\mathbb{P})} }[/math]. Let
- [math]\displaystyle{ M[G] = \left\{ \operatorname{val}(u,G) ~ \Big| ~ u \in M^{(\mathbb{P})} \right\}. }[/math]
To reduce the study of the set theory of [math]\displaystyle{ M[G] }[/math] to that of [math]\displaystyle{ M }[/math], one works with the "forcing language", which is built up like ordinary first-order logic, with membership as the binary relation and all the [math]\displaystyle{ \mathbb{P} }[/math]-names as constants.
Define [math]\displaystyle{ p \Vdash_{M,\mathbb{P}} \varphi(u_1,\ldots,u_n) }[/math] (to be read as "[math]\displaystyle{ p }[/math] forces [math]\displaystyle{ \varphi }[/math] in the model [math]\displaystyle{ M }[/math] with poset [math]\displaystyle{ \mathbb{P} }[/math]"), where [math]\displaystyle{ p }[/math] is a condition, [math]\displaystyle{ \varphi }[/math] is a formula in the forcing language, and the [math]\displaystyle{ u_{i} }[/math]'s are [math]\displaystyle{ \mathbb{P} }[/math]-names, to mean that if [math]\displaystyle{ G }[/math] is a generic filter containing [math]\displaystyle{ p }[/math], then [math]\displaystyle{ M[G] \models \varphi(\operatorname{val}(u_1,G),\ldots,\operatorname{val}(u_{n},G)) }[/math]. The special case [math]\displaystyle{ \mathbf{1} \Vdash_{M,\mathbb{P}} \varphi }[/math] is often written as "[math]\displaystyle{ \mathbb{P} \Vdash_{M,\mathbb{P}} \varphi }[/math]" or simply "[math]\displaystyle{ \Vdash_{M,\mathbb{P}} \varphi }[/math]". Such statements are true in [math]\displaystyle{ M[G] }[/math], no matter what [math]\displaystyle{ G }[/math] is.
What is important is that this external definition of the forcing relation [math]\displaystyle{ p \Vdash_{M,\mathbb{P}} \varphi }[/math] is equivalent to an internal definition within [math]\displaystyle{ M }[/math], defined by transfinite induction (specifically [math]\displaystyle{ \in }[/math]-induction) over the [math]\displaystyle{ \mathbb{P} }[/math]-names on instances of [math]\displaystyle{ u \in v }[/math] and [math]\displaystyle{ u = v }[/math], and then by ordinary induction over the complexity of formulae. This has the effect that all the properties of [math]\displaystyle{ M[G] }[/math] are really properties of [math]\displaystyle{ M }[/math], and the verification of [math]\displaystyle{ \mathsf{ZFC} }[/math] in [math]\displaystyle{ M[G] }[/math] becomes straightforward. This is usually summarized as the following three key properties:
- Truth: [math]\displaystyle{ M[G] \models \varphi(\operatorname{val}(u_1,G),\ldots,\operatorname{val}(u_n,G)) }[/math] if and only if it is forced by [math]\displaystyle{ G }[/math], that is, for some condition [math]\displaystyle{ p \in G }[/math], we have [math]\displaystyle{ p \Vdash_{M,\mathbb{P}} \varphi(u_1,\ldots,u_n) }[/math].
- Definability: The statement "[math]\displaystyle{ p \Vdash_{M,\mathbb{P}} \varphi(u_1,\ldots,u_n) }[/math]" is definable in [math]\displaystyle{ M }[/math].
- Coherence: [math]\displaystyle{ p \Vdash_{M,\mathbb{P}} \varphi(u_1,\ldots,u_n) \land q \leq p \implies q \Vdash_{M,\mathbb{P}} \varphi(u_1,\ldots,u_n) }[/math].
Internal definition
There are many different but equivalent ways to define the forcing relation [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math] in [math]\displaystyle{ M }[/math].[4] One way to simplify the definition is to first define a modified forcing relation [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] that is strictly stronger than [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math]. The modified relation [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] still satisfies the three key properties of forcing, but [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* \varphi }[/math] and [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* \varphi' }[/math] are not necessarily equivalent even if the first-order formulae [math]\displaystyle{ \varphi }[/math] and [math]\displaystyle{ \varphi' }[/math] are equivalent. The unmodified forcing relation can then be defined as [math]\displaystyle{ p\Vdash_{M,\mathbb P} \varphi \iff p\Vdash_{M,\mathbb P}^* \neg \neg \varphi. }[/math] In fact, Cohen's original concept of forcing is essentially [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] rather than [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math].[3]
The modified forcing relation [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] can be defined recursively as follows:
- [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* u \in v }[/math] means [math]\displaystyle{ (\exists (w, q) \in v) (q \ge p \wedge p \Vdash_{M,\mathbb{P}}^* w = u). }[/math]
- [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* u \ne v }[/math] means [math]\displaystyle{ (\exists (w, q) \in v) (q \ge p \wedge p \Vdash_{M,\mathbb{P}}^* w \notin u) \vee (\exists (w, q) \in u) (q \ge p \wedge p \Vdash_{M,\mathbb{P}}^* w \notin v). }[/math]
- [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* \neg \varphi }[/math] means [math]\displaystyle{ \neg (\exists q \le p) (q \Vdash_{M,\mathbb{P}}^* \varphi). }[/math]
- [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* (\varphi \vee \psi) }[/math] means [math]\displaystyle{ (p \Vdash_{M,\mathbb{P}}^* \varphi) \vee (p \Vdash_{M,\mathbb{P}}^* \psi). }[/math]
- [math]\displaystyle{ p \Vdash_{M,\mathbb{P}}^* \exists x\, \varphi(x) }[/math] means [math]\displaystyle{ (\exists u \in M^{(\mathbb{P})}) (p \Vdash_{M,\mathbb{P}}^* \varphi(u)). }[/math]
Other symbols of the forcing language can be defined in terms of these symbols: For example, [math]\displaystyle{ u = v }[/math] means [math]\displaystyle{ \neg (u \ne v) }[/math], [math]\displaystyle{ \forall x\, \varphi(x) }[/math] means [math]\displaystyle{ \neg \exists x\, \neg \varphi(x) }[/math], etc. Cases 1 and 2 depend on each other and on case 3, but the recursion always refers to [math]\displaystyle{ \mathbb{P} }[/math]-names with lesser ranks, so transfinite induction allows the definition to go through.
By construction, [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] (and thus [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math]) automatically satisfies Definability. The proof that [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math] also satisfies Truth and Coherence is by inductively inspecting each of the five cases above. Cases 4 and 5 are trivial (thanks to the choice of [math]\displaystyle{ \vee }[/math] and [math]\displaystyle{ \exists }[/math] as the elementary symbols[5]), cases 1 and 2 rely only on the assumption that [math]\displaystyle{ G }[/math] is a filter, and only case 3 requires [math]\displaystyle{ G }[/math] to be a generic filter.[3]
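Because the recursion bottoms out on names of lower rank, the definition above can be executed literally in a tiny toy setting. The sketch below (Python; a hypothetical three-element Cohen poset over [math]\displaystyle{ S = \{0\} }[/math], names as frozensets of (name, condition) pairs) implements the atomic cases 1 and 2 together with the negation case 3, which also yields [math]\displaystyle{ \notin }[/math] and [math]\displaystyle{ = }[/math]; the disjunction and quantifier cases are omitted, since the latter ranges over all names.

```python
ONE = frozenset()                                              # weakest condition: empty function
CONDITIONS = [ONE, frozenset({(0, 0)}), frozenset({(0, 1)})]   # the poset Fin({0}, 2)

def leq(p, q):                 # p is stronger than q
    return q <= p

def forces_in(p, u, v):        # case 1: p ||-* u in v
    return any(leq(p, q) and forces_eq(p, w, u) for (w, q) in v)

def forces_notin(p, u, v):     # case 3 applied to case 1: p ||-* not (u in v)
    return not any(leq(q, p) and forces_in(q, u, v) for q in CONDITIONS)

def forces_neq(p, u, v):       # case 2: p ||-* u != v
    return (any(leq(p, q) and forces_notin(p, w, u) for (w, q) in v) or
            any(leq(p, q) and forces_notin(p, w, v) for (w, q) in u))

def forces_eq(p, u, v):        # case 3 applied to case 2: p ||-* u = v
    return not any(leq(q, p) and forces_neq(q, u, v) for q in CONDITIONS)

check0 = frozenset()                                # name for the empty set
X = frozenset({(check0, frozenset({(0, 1)}))})      # a name for the generic subset of {0}
                                                    # (compare the Cohen forcing section below)

assert forces_in(frozenset({(0, 1)}), check0, X)     # the condition 0 |-> 1 forces 0 in X
assert forces_notin(frozenset({(0, 0)}), check0, X)  # the condition 0 |-> 0 forces 0 not in X
assert not forces_in(ONE, check0, X) and not forces_notin(ONE, check0, X)  # 1 decides neither
```

As the last assertion shows, the weakest condition decides neither the membership statement nor its negation; only stronger conditions do, which is exactly the behaviour described by the Truth property.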
Formally, an internal definition of the forcing relation (such as the one presented above) is actually a transformation of an arbitrary formula [math]\displaystyle{ \varphi(x_1,\dots,x_n) }[/math] to another formula [math]\displaystyle{ p\Vdash_{\mathbb{P}}\varphi(u_1,\dots,u_n) }[/math] where [math]\displaystyle{ p }[/math] and [math]\displaystyle{ \mathbb{P} }[/math] are additional variables. The model [math]\displaystyle{ M }[/math] does not explicitly appear in the transformation (note that within [math]\displaystyle{ M }[/math], [math]\displaystyle{ u \in M^{(\mathbb{P})} }[/math] just means "[math]\displaystyle{ u }[/math] is a [math]\displaystyle{ \mathbb{P} }[/math]-name"), and indeed one may take this transformation as a "syntactic" definition of the forcing relation in the universe [math]\displaystyle{ V }[/math] of all sets regardless of any countable transitive model. However, if one wants to force over some countable transitive model [math]\displaystyle{ M }[/math], then the latter formula should be interpreted under [math]\displaystyle{ M }[/math] (i.e. with all quantifiers ranging only over [math]\displaystyle{ M }[/math]), in which case it is equivalent to the external "semantic" definition of [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math] described at the top of this section:
- For any formula [math]\displaystyle{ \varphi(x_1,\dots,x_n) }[/math] there is a theorem [math]\displaystyle{ T }[/math] of the theory [math]\displaystyle{ \mathsf{ZFC} }[/math] (for example, a conjunction of a finite number of axioms) such that for any countable transitive model [math]\displaystyle{ M }[/math] with [math]\displaystyle{ M\models T }[/math], any atomless partial order [math]\displaystyle{ \mathbb{P}\in M }[/math], and any [math]\displaystyle{ \mathbb{P} }[/math]-generic filter [math]\displaystyle{ G }[/math] over [math]\displaystyle{ M }[/math], it holds that [math]\displaystyle{ (\forall a_1,\ldots,a_n\in M^{\mathbb{P}})(\forall p \in\mathbb{P})(p\Vdash_{M,\mathbb{P}} \varphi(a_1,\dots,a_n) \,\Leftrightarrow \, M\models p \Vdash_{\mathbb{P}}\varphi(a_1, \dots, a_n)). }[/math]
This is the sense in which the forcing relation is indeed "definable in [math]\displaystyle{ M }[/math]".
Consistency
The discussion above can be summarized by the fundamental consistency result that, given a forcing poset [math]\displaystyle{ \mathbb{P} }[/math], we may assume the existence of a generic filter [math]\displaystyle{ G }[/math], not belonging to the universe [math]\displaystyle{ V }[/math], such that [math]\displaystyle{ V[G] }[/math] is again a set-theoretic universe that models [math]\displaystyle{ \mathsf{ZFC} }[/math]. Furthermore, all truths in [math]\displaystyle{ V[G] }[/math] may be reduced to truths in [math]\displaystyle{ V }[/math] involving the forcing relation.
Both styles, adjoining [math]\displaystyle{ G }[/math] to either a countable transitive model [math]\displaystyle{ M }[/math] or the whole universe [math]\displaystyle{ V }[/math], are commonly used. Less commonly seen is the approach using the "internal" definition of forcing, in which no mention of set or class models is made. This was Cohen's original method, and in one elaboration, it becomes the method of Boolean-valued analysis.
Cohen forcing
The simplest nontrivial forcing poset is [math]\displaystyle{ (\operatorname{Fin}(\omega,2),\supseteq,0) }[/math], the finite partial functions from [math]\displaystyle{ \omega }[/math] to [math]\displaystyle{ 2 ~ \stackrel{\text{df}}{=} ~ \{ 0,1 \} }[/math] under reverse inclusion. That is, a condition [math]\displaystyle{ p }[/math] is essentially two disjoint finite subsets [math]\displaystyle{ {p^{-1}}[1] }[/math] and [math]\displaystyle{ {p^{-1}}[0] }[/math] of [math]\displaystyle{ \omega }[/math], to be thought of as the "yes" and "no" parts of [math]\displaystyle{ p }[/math], with no information provided on values outside the domain of [math]\displaystyle{ p }[/math]. "[math]\displaystyle{ q }[/math] is stronger than [math]\displaystyle{ p }[/math]" means that [math]\displaystyle{ q \supseteq p }[/math], in other words, the "yes" and "no" parts of [math]\displaystyle{ q }[/math] are supersets of the "yes" and "no" parts of [math]\displaystyle{ p }[/math], and in that sense, provide more information.
Let [math]\displaystyle{ G }[/math] be a generic filter for this poset. If [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are both in [math]\displaystyle{ G }[/math], then [math]\displaystyle{ p \cup q }[/math] is a condition because [math]\displaystyle{ G }[/math] is a filter. This means that [math]\displaystyle{ g = \bigcup G }[/math] is a well-defined partial function from [math]\displaystyle{ \omega }[/math] to [math]\displaystyle{ 2 }[/math] because any two conditions in [math]\displaystyle{ G }[/math] agree on their common domain.
In fact, [math]\displaystyle{ g }[/math] is a total function. Given [math]\displaystyle{ n \in \omega }[/math], let [math]\displaystyle{ D_{n} = \{ p \mid p(n) ~ \text{is defined} \} }[/math]. Then [math]\displaystyle{ D_{n} }[/math] is dense. (Given any [math]\displaystyle{ p }[/math], if [math]\displaystyle{ n }[/math] is not in [math]\displaystyle{ p }[/math]'s domain, adjoin a value for [math]\displaystyle{ n }[/math]—the result is in [math]\displaystyle{ D_{n} }[/math].) A condition [math]\displaystyle{ p \in G \cap D_{n} }[/math] has [math]\displaystyle{ n }[/math] in its domain, and since [math]\displaystyle{ p \subseteq g }[/math], we find that [math]\displaystyle{ g(n) }[/math] is defined.
Let [math]\displaystyle{ X = {g^{-1}}[1] }[/math], the set of all "yes" members of the generic conditions. It is possible to give a name for [math]\displaystyle{ X }[/math] directly. Let
- [math]\displaystyle{ \underline{X} = \left \{ \left (\check{n},p \right ) \mid p(n) = 1 \right \}. }[/math]
Then [math]\displaystyle{ \operatorname{val}(\underline{X},G) = X. }[/math] Now suppose that [math]\displaystyle{ A \subseteq \omega }[/math] in [math]\displaystyle{ V }[/math]. We claim that [math]\displaystyle{ X \neq A }[/math]. Let
- [math]\displaystyle{ D_{A} = \{ p \mid (\exists n)(n \in \operatorname{Dom}(p) \land (p(n) = 1 \iff n \notin A)) \}. }[/math]
Then [math]\displaystyle{ D_A }[/math] is dense. (Given any [math]\displaystyle{ p }[/math], find [math]\displaystyle{ n }[/math] that is not in its domain, and adjoin a value for [math]\displaystyle{ n }[/math] contrary to the status of "[math]\displaystyle{ n \in A }[/math]".) Then any [math]\displaystyle{ p \in G \cap D_A }[/math] witnesses [math]\displaystyle{ X \neq A }[/math]. To summarize, [math]\displaystyle{ X }[/math] is a "new" subset of [math]\displaystyle{ \omega }[/math], necessarily infinite.
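The density argument for [math]\displaystyle{ D_A }[/math] is completely effective; a short sketch of the extension step (Python, hypothetical helper name, conditions as dicts) reads as follows.

```python
def extend_into_D_A(p, A, universe=range(1000)):
    """Given a Cohen condition p (a finite partial function, as a dict) and a set A,
    return a stronger condition that disagrees with A at some point of its domain."""
    n = next(k for k in universe if k not in p)   # p is finite, so some k is unmentioned
    q = dict(p)
    q[n] = 0 if n in A else 1                     # decide n contrary to "n in A"
    return q

print(extend_into_D_A({0: 1, 3: 0}, {1, 2, 3}))   # e.g. {0: 1, 3: 0, 1: 0}
```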
Replacing [math]\displaystyle{ \omega }[/math] with [math]\displaystyle{ \omega \times \omega_{2} }[/math], that is, considering instead finite partial functions whose inputs are of the form [math]\displaystyle{ (n,\alpha) }[/math], with [math]\displaystyle{ n \lt \omega }[/math] and [math]\displaystyle{ \alpha \lt \omega_{2} }[/math], and whose outputs are [math]\displaystyle{ 0 }[/math] or [math]\displaystyle{ 1 }[/math], one gets [math]\displaystyle{ \omega_{2} }[/math] new subsets of [math]\displaystyle{ \omega }[/math]. They are all distinct, by a density argument: Given [math]\displaystyle{ \alpha \lt \beta \lt \omega_{2} }[/math], let
- [math]\displaystyle{ D_{\alpha,\beta} = \{ p \mid (\exists n)(p(n,\alpha) \neq p(n,\beta)) \}, }[/math]
then each [math]\displaystyle{ D_{\alpha,\beta} }[/math] is dense, and a condition in [math]\displaystyle{ G \cap D_{\alpha,\beta} }[/math] witnesses that the [math]\displaystyle{ \alpha }[/math]th new set disagrees somewhere with the [math]\displaystyle{ \beta }[/math]th new set.
This is not yet the falsification of the continuum hypothesis. One must prove that no new maps have been introduced which map [math]\displaystyle{ \omega }[/math] onto [math]\displaystyle{ \omega_{1} }[/math], or [math]\displaystyle{ \omega_{1} }[/math] onto [math]\displaystyle{ \omega_{2} }[/math]. For example, if one considers instead [math]\displaystyle{ \operatorname{Fin}(\omega,\omega_{1}) }[/math], finite partial functions from [math]\displaystyle{ \omega }[/math] to [math]\displaystyle{ \omega_{1} }[/math], the first uncountable ordinal, one gets in [math]\displaystyle{ V[G] }[/math] a bijection from [math]\displaystyle{ \omega }[/math] to [math]\displaystyle{ \omega_{1} }[/math]. In other words, [math]\displaystyle{ \omega_{1} }[/math] has collapsed, and in the forcing extension, is a countable ordinal.
The last step in showing the independence of the continuum hypothesis, then, is to show that Cohen forcing does not collapse cardinals. For this, a sufficient combinatorial property is that all of the antichains of the forcing poset are countable.
The countable chain condition
A (strong) antichain [math]\displaystyle{ A }[/math] of [math]\displaystyle{ \mathbb{P} }[/math] is a subset such that if [math]\displaystyle{ p,q \in A }[/math] and [math]\displaystyle{ p \ne q }[/math], then [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are incompatible (written [math]\displaystyle{ p \perp q }[/math]), meaning there is no [math]\displaystyle{ r }[/math] in [math]\displaystyle{ \mathbb{P} }[/math] such that [math]\displaystyle{ r \leq p }[/math] and [math]\displaystyle{ r \leq q }[/math]. In the example on Borel sets, incompatibility means that [math]\displaystyle{ p \cap q }[/math] has zero measure. In the example on finite partial functions, incompatibility means that [math]\displaystyle{ p \cup q }[/math] is not a function, in other words, [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] assign different values to some domain input.
[math]\displaystyle{ \mathbb{P} }[/math] satisfies the countable chain condition (c.c.c.) if and only if every antichain in [math]\displaystyle{ \mathbb{P} }[/math] is countable. (The name, which is obviously inappropriate, is a holdover from older terminology. Some mathematicians write "c.a.c." for "countable antichain condition".)
It is easy to see that [math]\displaystyle{ \operatorname{Bor}(I) }[/math] satisfies the c.c.c. because the measures add up to at most [math]\displaystyle{ 1 }[/math]. Also, [math]\displaystyle{ \operatorname{Fin}(E,2) }[/math] satisfies the c.c.c., but the proof is more difficult.
Given an uncountable subfamily [math]\displaystyle{ W \subseteq \operatorname{Fin}(E,2) }[/math], shrink [math]\displaystyle{ W }[/math] to an uncountable subfamily [math]\displaystyle{ W_{0} }[/math] of sets of size [math]\displaystyle{ n }[/math], for some [math]\displaystyle{ n \lt \omega }[/math]. If there are [math]\displaystyle{ e_{1} }[/math] and [math]\displaystyle{ b_{1} }[/math] such that [math]\displaystyle{ p(e_{1}) = b_{1} }[/math] for uncountably many [math]\displaystyle{ p \in W_{0} }[/math], shrink to this uncountable subfamily [math]\displaystyle{ W_{1} }[/math] and repeat, getting a finite set [math]\displaystyle{ \{ (e_{1},b_{1}),\ldots,(e_{k},b_{k}) \} }[/math] and an uncountable family [math]\displaystyle{ W_{k} }[/math] of conditions of size [math]\displaystyle{ n - k }[/math] (with the common pairs removed) such that every [math]\displaystyle{ e }[/math] is in [math]\displaystyle{ \operatorname{Dom}(p) }[/math] for at most countably many [math]\displaystyle{ p \in W_{k} }[/math]. Now, pick an arbitrary [math]\displaystyle{ p \in W_{k} }[/math], and pick from [math]\displaystyle{ W_{k} }[/math] any [math]\displaystyle{ q }[/math] that is not one of the countably many members that have a domain member in common with [math]\displaystyle{ p }[/math]. Then [math]\displaystyle{ p \cup \{ (e_{1},b_{1}),\ldots,(e_{k},b_{k}) \} }[/math] and [math]\displaystyle{ q \cup \{ (e_{1},b_{1}),\ldots,(e_{k},b_{k}) \} }[/math] are compatible conditions, so [math]\displaystyle{ W }[/math] is not an antichain. In other words, [math]\displaystyle{ \operatorname{Fin}(E,2) }[/math]-antichains are countable.[citation needed]
The importance of antichains in forcing is that for most purposes, dense sets and maximal antichains are equivalent. A maximal antichain [math]\displaystyle{ A }[/math] is one that cannot be extended to a larger antichain. This means that every element [math]\displaystyle{ p \in \mathbb{P} }[/math] is compatible with some member of [math]\displaystyle{ A }[/math]. The existence of a maximal antichain follows from Zorn's Lemma. Given a maximal antichain [math]\displaystyle{ A }[/math], let
- [math]\displaystyle{ D = \left \{ p \in \mathbb{P} \mid (\exists q \in A)(p \leq q) \right \}. }[/math]
Then [math]\displaystyle{ D }[/math] is dense, and [math]\displaystyle{ G \cap D \neq \varnothing }[/math] if and only if [math]\displaystyle{ G \cap A \neq \varnothing }[/math]. Conversely, given a dense set [math]\displaystyle{ D }[/math], Zorn's Lemma shows that there exists a maximal antichain [math]\displaystyle{ A \subseteq D }[/math], and then [math]\displaystyle{ G \cap D \neq \varnothing }[/math] if and only if [math]\displaystyle{ G \cap A \neq \varnothing }[/math].
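This equivalence can be checked mechanically on a finite example. The sketch below (Python, illustrative names, with Cohen conditions again encoded as frozensets of pairs) verifies that the conditions lying below a maximal antichain form a dense set.

```python
from itertools import combinations, product

# The Cohen poset Fin({0,1}, 2): finite partial functions {0,1} -> {0,1}.
CONDITIONS = [frozenset(zip(dom, vals))
              for r in range(3)
              for dom in combinations((0, 1), r)
              for vals in product((0, 1), repeat=r)]

def leq(p, q):                                         # p is stronger than q
    return q <= p

A = [frozenset({(0, 0)}), frozenset({(0, 1)})]         # a maximal antichain: "decide 0"
D = [p for p in CONDITIONS if any(leq(p, q) for q in A)]   # conditions below some member of A

# D is dense: every condition has a strengthening in D, and a filter meets D iff it meets A.
assert all(any(leq(q, p) for q in D) for p in CONDITIONS)
```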
Assume that [math]\displaystyle{ \mathbb{P} }[/math] satisfies the c.c.c. Given [math]\displaystyle{ x,y \in V }[/math], with [math]\displaystyle{ f: x \to y }[/math] a function in [math]\displaystyle{ V[G] }[/math], one can approximate [math]\displaystyle{ f }[/math] inside [math]\displaystyle{ V }[/math] as follows. Let [math]\displaystyle{ u }[/math] be a name for [math]\displaystyle{ f }[/math] (by the definition of [math]\displaystyle{ V[G] }[/math]) and let [math]\displaystyle{ p }[/math] be a condition that forces [math]\displaystyle{ u }[/math] to be a function from [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math]. Define a function [math]\displaystyle{ F: x \to \mathcal{P}(y) }[/math], by
- [math]\displaystyle{ F(a) \stackrel{\text{df}}{=} \left\{ b \mid (\exists q \in \mathbb{P}) \left[ (q \leq p) \land \left( q \Vdash u(\check{a}) = \check{b} \right) \right] \right\}. }[/math]
By the definability of forcing, this definition makes sense within [math]\displaystyle{ V }[/math]. By the coherence of forcing, different values [math]\displaystyle{ b \in F(a) }[/math] must be forced by incompatible conditions [math]\displaystyle{ q \leq p }[/math]: if two compatible conditions forced [math]\displaystyle{ u(\check{a}) }[/math] to equal two different values, a common strengthening would force both, contradicting that [math]\displaystyle{ p }[/math] forces [math]\displaystyle{ u }[/math] to be a function. These conditions therefore form an antichain, and by the c.c.c., [math]\displaystyle{ F(a) }[/math] is countable.
In summary, [math]\displaystyle{ f }[/math] is unknown in [math]\displaystyle{ V }[/math] as it depends on [math]\displaystyle{ G }[/math], but it is not wildly unknown for a c.c.c.-forcing. One can identify a countable set of guesses for what the value of [math]\displaystyle{ f }[/math] is at any input, independent of [math]\displaystyle{ G }[/math].
This has the following very important consequence. If in [math]\displaystyle{ V[G] }[/math], [math]\displaystyle{ f: \alpha \to \beta }[/math] is a surjection from one infinite ordinal onto another, then there is a surjection [math]\displaystyle{ g: \omega \times \alpha \to \beta }[/math] in [math]\displaystyle{ V }[/math], and consequently, a surjection [math]\displaystyle{ h: \alpha \to \beta }[/math] in [math]\displaystyle{ V }[/math]. In particular, cardinals cannot collapse. The conclusion is that [math]\displaystyle{ 2^{\aleph_{0}} \geq \aleph_{2} }[/math] in [math]\displaystyle{ V[G] }[/math].
Easton forcing
The exact value of the continuum in the above Cohen model, and variants like [math]\displaystyle{ \operatorname{Fin}(\omega \times \kappa,2) }[/math] for cardinals [math]\displaystyle{ \kappa }[/math] in general, was worked out by Robert M. Solovay, who also worked out how to violate [math]\displaystyle{ \mathsf{GCH} }[/math] (the generalized continuum hypothesis) at finitely many regular cardinals. For example, in the above Cohen model, if [math]\displaystyle{ \mathsf{CH} }[/math] holds in [math]\displaystyle{ V }[/math], then [math]\displaystyle{ 2^{\aleph_{0}} = \aleph_{2} }[/math] holds in [math]\displaystyle{ V[G] }[/math].
William B. Easton worked out the proper class version of violating the [math]\displaystyle{ \mathsf{GCH} }[/math] for regular cardinals, basically showing that the known restrictions, (monotonicity, Cantor's Theorem and König's Theorem), were the only [math]\displaystyle{ \mathsf{ZFC} }[/math]-provable restrictions (see Easton's Theorem).
Easton's work was notable in that it involved forcing with a proper class of conditions. In general, the method of forcing with a proper class of conditions fails to give a model of [math]\displaystyle{ \mathsf{ZFC} }[/math]. For example, forcing with [math]\displaystyle{ \operatorname{Fin}(\omega \times \mathbf{On},2) }[/math], where [math]\displaystyle{ \mathbf{On} }[/math] is the proper class of all ordinals, makes the continuum a proper class. On the other hand, forcing with [math]\displaystyle{ \operatorname{Fin}(\omega,\mathbf{On}) }[/math] introduces a countable enumeration of the ordinals. In both cases, the resulting [math]\displaystyle{ V[G] }[/math] is visibly not a model of [math]\displaystyle{ \mathsf{ZFC} }[/math].
At one time, it was thought that more sophisticated forcing would also allow an arbitrary variation in the powers of singular cardinals. However, this has turned out to be a difficult, subtle and even surprising problem, with several more restrictions provable in [math]\displaystyle{ \mathsf{ZFC} }[/math] and with the forcing models depending on the consistency of various large-cardinal properties. Many open problems remain.
Random reals
Random forcing can be defined as forcing over the set [math]\displaystyle{ P }[/math] of all compact subsets of [math]\displaystyle{ [0,1] }[/math] of positive measure, ordered by the relation [math]\displaystyle{ \subseteq }[/math] (a smaller set in the sense of inclusion is a smaller set in the ordering, and represents a condition with more information). There are two important types of dense sets:
- For any positive integer [math]\displaystyle{ n }[/math], the set [math]\displaystyle{ D_n= \left \{p\in P : \operatorname{diam}(p)\lt \frac 1n \right \} }[/math] is dense, where [math]\displaystyle{ \operatorname{diam}(p) }[/math] is the diameter of the set [math]\displaystyle{ p }[/math].
- For any Borel subset [math]\displaystyle{ B \subseteq [0,1] }[/math] of measure 1, the set [math]\displaystyle{ D_B=\{p\in P : p\subseteq B\} }[/math] is dense.
For any filter [math]\displaystyle{ G }[/math] and any finitely many elements [math]\displaystyle{ p_1,\ldots,p_n\in G }[/math], there is a [math]\displaystyle{ q\in G }[/math] such that [math]\displaystyle{ q\leq p_1,\ldots,p_n }[/math]. For this ordering, this means that any filter is a set of compact sets with the finite intersection property, so the intersection of all elements of any filter is nonempty. If [math]\displaystyle{ G }[/math] is a filter intersecting the dense set [math]\displaystyle{ D_n }[/math] for every positive integer [math]\displaystyle{ n }[/math], then [math]\displaystyle{ G }[/math] contains conditions of arbitrarily small positive diameter. Therefore, the intersection of all conditions from [math]\displaystyle{ G }[/math] has diameter 0. But the only nonempty sets of diameter 0 are singletons, so there is exactly one real number [math]\displaystyle{ r_G }[/math] such that [math]\displaystyle{ r_G\in\bigcap G }[/math].
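The way the dense sets [math]\displaystyle{ D_n }[/math] pin down a single real can be imitated in a small sketch (Python; conditions are restricted to closed rational intervals of positive length, which are among the compact positive-measure sets, and the "measure-one sets" to be respected are simply the complements of finitely many listed points; all names are illustrative, and no claim of genericity is made).

```python
from fractions import Fraction

def shrinking_chain(avoid, steps):
    """A descending chain of closed rational intervals in [0,1]: after n steps the
    diameter is 3**(-n) and the n-th listed point has been excluded, so the chain's
    intersection is a single real missing every listed point."""
    lo, hi = Fraction(0), Fraction(1)
    chain = [(lo, hi)]
    for a in avoid[:steps]:
        third = (hi - lo) / 3
        # one of the two outer thirds must miss the point a (they are disjoint)
        lo, hi = next(iv for iv in [(lo, lo + third), (hi - third, hi)]
                      if not (iv[0] <= a <= iv[1]))
        chain.append((lo, hi))
    return chain

chain = shrinking_chain([Fraction(k, 7) for k in range(8)], steps=8)
print(chain[-1])   # a tiny interval locating the "random-looking" real
```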
Let [math]\displaystyle{ B\subseteq[0,1] }[/math] be any Borel set of measure 1. If [math]\displaystyle{ G }[/math] intersects [math]\displaystyle{ D_B }[/math], then [math]\displaystyle{ r_G\in B }[/math].
However, a generic filter over a countable transitive model [math]\displaystyle{ V }[/math] is not in [math]\displaystyle{ V }[/math], and the real [math]\displaystyle{ r_G }[/math] defined by [math]\displaystyle{ G }[/math] is provably not an element of [math]\displaystyle{ V }[/math]. The problem is that if [math]\displaystyle{ p\in P }[/math], then [math]\displaystyle{ V\models }[/math] "[math]\displaystyle{ p }[/math] is compact", but from the viewpoint of some larger universe [math]\displaystyle{ U\supset V }[/math], [math]\displaystyle{ p }[/math] can fail to be compact, and the intersection of all conditions from the generic filter [math]\displaystyle{ G }[/math] may actually be empty. For this reason, one considers the set [math]\displaystyle{ C=\{\bar p : p \in G \} }[/math] of topological closures of the conditions from [math]\displaystyle{ G }[/math], with the closures taken in [math]\displaystyle{ U }[/math]. Because [math]\displaystyle{ \bar p\supseteq p }[/math] and [math]\displaystyle{ G }[/math] has the finite intersection property, the set [math]\displaystyle{ C }[/math] also has the finite intersection property. The elements of [math]\displaystyle{ C }[/math] are closed and bounded subsets of [math]\displaystyle{ [0,1] }[/math] in [math]\displaystyle{ U }[/math], hence compact there. Therefore, [math]\displaystyle{ C }[/math] is a family of compact sets with the finite intersection property and thus has nonempty intersection. Since [math]\displaystyle{ \operatorname{diam}(\bar p) = \operatorname{diam}(p) }[/math], the set [math]\displaystyle{ C }[/math] has elements of arbitrarily small diameter, so there is exactly one real that belongs to all members of [math]\displaystyle{ C }[/math]. The generic filter [math]\displaystyle{ G }[/math] can be reconstructed from [math]\displaystyle{ r_G }[/math] as [math]\displaystyle{ G=\{p\in P : r_G\in\bar p\} }[/math].
If [math]\displaystyle{ a }[/math] is a name for [math]\displaystyle{ r_G }[/math], and [math]\displaystyle{ B\in V }[/math] is such that [math]\displaystyle{ V\models }[/math] "[math]\displaystyle{ B }[/math] is a Borel set of measure 1", then
- [math]\displaystyle{ V[G]\models \left (p\Vdash_{\mathbb{P}}a\in\check{B} \right ) }[/math]
holds for some [math]\displaystyle{ p\in G }[/math]. There is a name [math]\displaystyle{ a }[/math] such that for any generic filter [math]\displaystyle{ G }[/math],
- [math]\displaystyle{ \operatorname{val}(a,G)\in\bigcup_{p\in G}\bar p. }[/math]
For such a name,
- [math]\displaystyle{ V[G]\models \left (p\Vdash_{\mathbb{P}}a\in\check{B} \right ) }[/math]
holds for any condition [math]\displaystyle{ p }[/math].
Every Borel set can, non-uniquely, be built up, starting from intervals with rational endpoints and applying the operations of complement and countable unions, a countable number of times. The record of such a construction is called a Borel code. Given a Borel set [math]\displaystyle{ B }[/math] in [math]\displaystyle{ V }[/math], one recovers a Borel code, and then applies the same construction sequence in [math]\displaystyle{ V[G] }[/math], getting a Borel set [math]\displaystyle{ B^* }[/math]. It can be proven that one gets the same set independent of the construction of [math]\displaystyle{ B }[/math], and that basic properties are preserved. For example, if [math]\displaystyle{ B \subseteq C }[/math], then [math]\displaystyle{ B^* \subseteq C^* }[/math]. If [math]\displaystyle{ B }[/math] has measure zero, then [math]\displaystyle{ B^* }[/math] has measure zero. This mapping [math]\displaystyle{ B\mapsto B^* }[/math] is injective.
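A Borel code is just a construction recipe, so it can be represented as a small recursive data type. The following sketch (Python, illustrative constructors; the countable unions are truncated to finite tuples purely so the example runs) shows the point that the same code can be "re-executed" in any universe by running the same recursion on its points.

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import Tuple, Union

@dataclass(frozen=True)
class Interval:                       # basic set: a closed interval with rational endpoints
    lo: Fraction
    hi: Fraction

@dataclass(frozen=True)
class Complement:                     # complement (relative to [0, 1]) of a coded set
    code: "BorelCode"

@dataclass(frozen=True)
class UnionOf:                        # stands in for a countable union of coded sets
    codes: Tuple["BorelCode", ...]

BorelCode = Union[Interval, Complement, UnionOf]

def contains(code: BorelCode, x) -> bool:
    """Membership of a point in the set described by a code: running this same
    recursion on the points of a larger universe yields the reinterpretation B*."""
    if isinstance(code, Interval):
        return code.lo <= x <= code.hi
    if isinstance(code, Complement):
        return not contains(code.code, x)
    return any(contains(c, x) for c in code.codes)

# Example: the complement of a small interval around 1/2, a set of measure close to 1.
B = Complement(Interval(Fraction(49, 100), Fraction(51, 100)))
print(contains(B, Fraction(1, 3)), contains(B, Fraction(1, 2)))   # True False
```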
For any set [math]\displaystyle{ B\subseteq[0,1] }[/math] such that [math]\displaystyle{ B\in V }[/math] and [math]\displaystyle{ V\models }[/math]"[math]\displaystyle{ B }[/math] is a Borel set of measure 1", it holds that [math]\displaystyle{ r\in B^* }[/math].
This means that [math]\displaystyle{ r }[/math] is an "infinite random sequence of 0s and 1s" from the viewpoint of [math]\displaystyle{ V }[/math], in the sense that it satisfies all statistical tests from the ground model [math]\displaystyle{ V }[/math].
So given [math]\displaystyle{ r }[/math], a random real, one can show that
- [math]\displaystyle{ G = \left \{ B ~ (\text{in } V) \mid r \in B^* ~ (\text{in } V[G]) \right \}. }[/math]
Because of the mutual inter-definability between [math]\displaystyle{ r }[/math] and [math]\displaystyle{ G }[/math], one generally writes [math]\displaystyle{ V[r] }[/math] for [math]\displaystyle{ V[G] }[/math].
A different interpretation of reals in [math]\displaystyle{ V[G] }[/math] was provided by Dana Scott. Rational numbers in [math]\displaystyle{ V[G] }[/math] have names that correspond to countably-many distinct rational values assigned to a maximal antichain of Borel sets – in other words, a certain rational-valued function on [math]\displaystyle{ I = [0,1] }[/math]. Real numbers in [math]\displaystyle{ V[G] }[/math] then correspond to Dedekind cuts of such functions, that is, measurable functions.
Boolean-valued models
Perhaps more clearly, the method can be explained in terms of Boolean-valued models. In these, any statement is assigned a truth value from some complete atomless Boolean algebra, rather than just a true/false value. Then an ultrafilter is picked in this Boolean algebra, which assigns values true/false to statements of our theory. The point is that the resulting theory has a model that contains this ultrafilter, which can be understood as a new model obtained by extending the old one with this ultrafilter. By picking a Boolean-valued model in an appropriate way, we can get a model that has the desired property. In it, only statements that must be true (are "forced" to be true) will be true, in a sense (since it has this extension/minimality property).
Meta-mathematical explanation
In forcing, we usually seek to show that some sentence is consistent with [math]\displaystyle{ \mathsf{ZFC} }[/math] (or optionally some extension of [math]\displaystyle{ \mathsf{ZFC} }[/math]). One way to interpret the argument is to assume that [math]\displaystyle{ \mathsf{ZFC} }[/math] is consistent and then prove that [math]\displaystyle{ \mathsf{ZFC} }[/math] combined with the new sentence is also consistent.
Each "condition" is a finite piece of information – the idea is that only finite pieces are relevant for consistency, since, by the compactness theorem, a theory is satisfiable if and only if every finite subset of its axioms is satisfiable. Then we can pick an infinite set of consistent conditions to extend our model. Therefore, assuming the consistency of [math]\displaystyle{ \mathsf{ZFC} }[/math], we prove the consistency of [math]\displaystyle{ \mathsf{ZFC} }[/math] extended by this infinite set.
Logical explanation
By Gödel's second incompleteness theorem, one cannot prove the consistency of any sufficiently strong formal theory, such as [math]\displaystyle{ \mathsf{ZFC} }[/math], using only the axioms of the theory itself, unless the theory is inconsistent. Consequently, mathematicians do not attempt to prove the consistency of [math]\displaystyle{ \mathsf{ZFC} }[/math] using only the axioms of [math]\displaystyle{ \mathsf{ZFC} }[/math], or to prove that [math]\displaystyle{ \mathsf{ZFC} + H }[/math] is consistent for any hypothesis [math]\displaystyle{ H }[/math] using only [math]\displaystyle{ \mathsf{ZFC} + H }[/math]. For this reason, the aim of a consistency proof is to prove the consistency of [math]\displaystyle{ \mathsf{ZFC} + H }[/math] relative to the consistency of [math]\displaystyle{ \mathsf{ZFC} }[/math]. Such problems are known as problems of relative consistency, one of which proves
- [math]\displaystyle{ \mathsf{ZFC} \vdash \operatorname{Con}(\mathsf{ZFC}) \rightarrow \operatorname{Con}(\mathsf{ZFC} + H). }[/math] (*)
The general schema of relative consistency proofs follows. As any proof is finite, it uses only a finite number of axioms:
- [math]\displaystyle{ \mathsf{ZFC} + \lnot \operatorname{Con}(\mathsf{ZFC} + H) \vdash (\exists T)(\operatorname{Fin}(T) \land T \subseteq \mathsf{ZFC} \land (T \vdash \lnot H)). }[/math]
For any given proof, [math]\displaystyle{ \mathsf{ZFC} }[/math] can verify the validity of this proof. This is provable by induction on the length of the proof.
- [math]\displaystyle{ \mathsf{ZFC} \vdash (\forall T)((T \vdash \lnot H) \rightarrow (\mathsf{ZFC} \vdash (T \vdash \lnot H))). }[/math]
Combining these two results yields
- [math]\displaystyle{ \mathsf{ZFC} + \lnot \operatorname{Con}(\mathsf{ZFC} + H) \vdash (\exists T)(\operatorname{Fin}(T) \land T \subseteq \mathsf{ZFC} \land (\mathsf{ZFC} \vdash (T \vdash \lnot H))). }[/math]
By proving the following
- [math]\displaystyle{ \mathsf{ZFC} \vdash (\forall T)(\operatorname{Fin}(T) \land T \subseteq \mathsf{ZFC} \rightarrow (\mathsf{ZFC} \vdash \operatorname{Con}(T + H))), }[/math] (**)
it can be concluded that
- [math]\displaystyle{ \mathsf{ZFC} + \lnot \operatorname{Con}(\mathsf{ZFC} + H) \vdash (\exists T)(\operatorname{Fin}(T) \land T \subseteq \mathsf{ZFC} \land (\mathsf{ZFC} \vdash (T \vdash \lnot H)) \land (\mathsf{ZFC} \vdash \operatorname{Con}(T + H))), }[/math]
which is equivalent to
- [math]\displaystyle{ \mathsf{ZFC} + \lnot \operatorname{Con}(\mathsf{ZFC} + H) \vdash \lnot \operatorname{Con}(\mathsf{ZFC}), }[/math]
which gives (*). The core of the relative consistency proof is proving (**). A [math]\displaystyle{ \mathsf{ZFC} }[/math] proof of [math]\displaystyle{ \operatorname{Con}(T + H) }[/math] can be constructed for any given finite subset [math]\displaystyle{ T }[/math] of the [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms (using only the means of [math]\displaystyle{ \mathsf{ZFC} }[/math] itself, of course). There is, of course, no single proof of [math]\displaystyle{ \operatorname{Con}(T + H) }[/math] that works uniformly for all [math]\displaystyle{ T }[/math].
In [math]\displaystyle{ \mathsf{ZFC} }[/math], it is provable that for any condition [math]\displaystyle{ p }[/math], the set of formulas (evaluated by names) forced by [math]\displaystyle{ p }[/math] is deductively closed. Furthermore, for any [math]\displaystyle{ \mathsf{ZFC} }[/math] axiom, [math]\displaystyle{ \mathsf{ZFC} }[/math] proves that this axiom is forced by [math]\displaystyle{ \mathbf{1} }[/math]. Then it suffices to prove that there is at least one condition that forces [math]\displaystyle{ H }[/math].
In the case of Boolean-valued forcing, the procedure is similar: proving that the Boolean value of [math]\displaystyle{ H }[/math] is not [math]\displaystyle{ \mathbf{0} }[/math].
Another approach uses the Reflection Theorem. For any given finite set of [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms, there is a [math]\displaystyle{ \mathsf{ZFC} }[/math] proof that this set of axioms has a countable transitive model. For any given finite set [math]\displaystyle{ T }[/math] of [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms, there is a finite set [math]\displaystyle{ T' }[/math] of [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms such that [math]\displaystyle{ \mathsf{ZFC} }[/math] proves that if a countable transitive model [math]\displaystyle{ M }[/math] satisfies [math]\displaystyle{ T' }[/math], then [math]\displaystyle{ M[G] }[/math] satisfies [math]\displaystyle{ T }[/math]. One then proves that there is a finite set [math]\displaystyle{ T'' }[/math] of [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms such that if a countable transitive model [math]\displaystyle{ M }[/math] satisfies [math]\displaystyle{ T'' }[/math], then [math]\displaystyle{ M[G] }[/math] satisfies the hypothesis [math]\displaystyle{ H }[/math]. Then, for any given finite set [math]\displaystyle{ T }[/math] of [math]\displaystyle{ \mathsf{ZFC} }[/math] axioms, [math]\displaystyle{ \mathsf{ZFC} }[/math] proves [math]\displaystyle{ \operatorname{Con}(T + H) }[/math].
Sometimes in (**), a stronger theory [math]\displaystyle{ S }[/math] than [math]\displaystyle{ \mathsf{ZFC} }[/math] is used for proving [math]\displaystyle{ \operatorname{Con}(T + H) }[/math]. Then we have proof of the consistency of [math]\displaystyle{ \mathsf{ZFC} + H }[/math] relative to the consistency of [math]\displaystyle{ S }[/math]. Note that [math]\displaystyle{ \mathsf{ZFC} \vdash \operatorname{Con}(\mathsf{ZFC}) \leftrightarrow \operatorname{Con}(\mathsf{ZFL}) }[/math], where [math]\displaystyle{ \mathsf{ZFL} }[/math] is [math]\displaystyle{ \mathsf{ZF} + V = L }[/math] (the axiom of constructibility).
Notes
- 1. Cohen 2008, p. 111.
- 2. As a concrete example, note that [math]\displaystyle{ \alpha_0 }[/math], the order type of all ordinals in [math]\displaystyle{ M }[/math], is a countable ordinal (in [math]\displaystyle{ V }[/math]) that is not in [math]\displaystyle{ M }[/math]. If [math]\displaystyle{ X }[/math] is taken to be a well-ordering of [math]\displaystyle{ \mathbb{N} }[/math] of order type [math]\displaystyle{ \alpha_0 }[/math] (as a relation over [math]\displaystyle{ \mathbb{N} }[/math], i.e. a subset of [math]\displaystyle{ \mathbb{N} \times \mathbb{N} }[/math]), then any [math]\displaystyle{ \mathsf{ZFC} }[/math] universe containing [math]\displaystyle{ X }[/math] must also contain [math]\displaystyle{ \alpha_0 }[/math] (thanks to the axiom of replacement).[1] (Such a universe would also not resemble [math]\displaystyle{ M }[/math] in the sense that it would collapse all infinite cardinals of [math]\displaystyle{ M }[/math].)
- 3. Shoenfield 1971.
- 4. Kunen 1980.
- 5. Notably, if defining [math]\displaystyle{ \Vdash_{M,\mathbb{P}} }[/math] directly instead of [math]\displaystyle{ \Vdash_{M,\mathbb{P}}^* }[/math], one would need to replace the [math]\displaystyle{ \vee }[/math] with [math]\displaystyle{ \wedge }[/math] in case 4 and [math]\displaystyle{ \exists }[/math] with [math]\displaystyle{ \forall }[/math] in case 5 (in addition to making cases 1 and 2 more complicated) to make this internal definition agree with the external definition. However, then when trying to prove Truth inductively, case 4 will require the fact that [math]\displaystyle{ G }[/math], as a filter, is downward directed, and case 5 will outright break down.
References
- Bell, John Lane (1985). Boolean-Valued Models and Independence Proofs in Set Theory. Oxford: Oxford University Press. ISBN 9780198532415.
- Cohen, Paul Joseph (2008). Set theory and the continuum hypothesis. Mineola, New York: Dover Publications. pp. 151. ISBN 978-0-486-46921-8.
- Hazewinkel, Michiel, ed. (2001), "Forcing Method", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4, https://www.encyclopediaofmath.org/index.php?title=Main_Page
- Jech, Thomas J. (2013). Set Theory: The Third Millennium Edition. Springer Verlag. ISBN 9783642078996.
- Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland Publishing Company. ISBN 978-0-444-85401-8.
- Shoenfield, J. R. (1971). "Unramified forcing". Axiomatic Set Theory. Proc. Sympos. Pure Math. XIII, Part I. Providence, R.I.: Amer. Math. Soc. pp. 357–381.
Bibliography
- Chow, Timothy (2008). "A Beginner's Guide to Forcing". arXiv:0712.1320v2.
A good introduction to the concepts of forcing that avoids a lot of technical detail. It includes a section on Boolean-valued models.
- Cohen, Paul Joseph (December 1963). "The independence of the continuum hypothesis". Proceedings of the National Academy of Sciences of the United States of America 50 (6): 1143–1148. doi:10.1073/pnas.50.6.1143. PMID 16578557. Bibcode: 1963PNAS...50.1143C.
- Cohen, Paul Joseph (January 1964). "The independence of the continuum hypothesis, II". Proceedings of the National Academy of Sciences of the United States of America 51 (1): 105–110. doi:10.1073/pnas.51.1.105. PMID 16591132. Bibcode: 1964PNAS...51..105C.
- Cohen, Paul Joseph (2002). "The Discovery of Forcing". Rocky Mountain J. Math. 32 (4): 1071–1100. doi:10.1216/rmjm/1181070010. "A historical lecture about how he developed his independence proof.".
- Easwaran, Kenny (2007). "A Cheerful Introduction to Forcing and the Continuum Hypothesis". arXiv:0712.2279 [math.LO].
The article is also aimed at the beginner but includes more technical details than Chow (2008).
- Gunther, Emmanuel; Pagano, Miguel; Sánchez Terraf, Pedro; Steinberg, Matías (May 2020). "Formalization of Forcing in Isabelle/ZF". Archive of Formal Proofs. https://www.isa-afp.org/entries/Forcing.html. Retrieved 20 August 2023.
- Kanamori, Akihiro (2007). "Set theory from Cantor to Cohen". http://math.bu.edu/people/aki/16.pdf.
- Weaver, Nik (2014). Forcing for Mathematicians. World Scientific Publishing Co.. pp. 153. doi:10.1142/8962. ISBN 978-9814566001. https://www.worldscientific.com/worldscibooks/10.1142/8962. "Written for mathematicians who want to learn the basic machinery of forcing. No background in logic is assumed, beyond the facility with formal syntax which should be second nature to any well-trained mathematician."
- Weisstein, Eric W.. "Forcing". http://mathworld.wolfram.com/Forcing.html.