Pure inductive logic

Pure inductive logic (PIL) is the area of mathematical logic concerned with the philosophical and mathematical foundations of probabilistic inductive reasoning. It combines classical predicate logic and probability theory (Bayesian inference). Probability values are assigned to sentences of a first-order relational language to represent degrees of belief that should be held by a rational agent. Conditional probability values represent degrees of belief based on the assumption of some received evidence.

PIL studies prior probability functions on the set of sentences and assesses their rationality through principles that such functions should arguably satisfy. Each principle requires the function to assign probability values, and conditional probability values, to sentences in a way that is rational in some respect. Not all desirable principles of PIL are compatible with one another, so no prior probability function satisfies them all; some prior probability functions are nevertheless distinguished by satisfying an important collection of principles.

History

Inductive logic started to take a clearer shape in the early 20th century in the work of William Ernest Johnson and John Maynard Keynes, and was further developed by Rudolf Carnap. Carnap introduced the distinction between pure and applied inductive logic,[1] and the modern Pure Inductive Logic evolves along the lines of the pure, uninterpreted approach envisaged by Carnap.

Framework

General case

In its basic form, PIL uses first-order logic without equality, with the usual connectives [math]\displaystyle{ \wedge, \vee, \neg, \to }[/math] (and, or, not and implies respectively), quantifiers [math]\displaystyle{ \exists, \forall }[/math], finitely many predicate (relation) symbols, and countably many constant symbols [math]\displaystyle{ a_1, a_2, a_3, \ldots \, }[/math].

There are no function symbols. The predicate symbols can be unary, binary or of higher arities. The finite set of predicate symbols may vary while the rest of the language is fixed. It is a convention to refer to the language as [math]\displaystyle{ L }[/math] and write

[math]\displaystyle{ L = \{R_1, R_2, \ldots, R_q\} }[/math]

where the [math]\displaystyle{ R_i }[/math] list the predicate symbols. The set of all sentences is denoted [math]\displaystyle{ SL }[/math]. When a sentence is written with a list of constants displayed, as in [math]\displaystyle{ \theta(a_1, \ldots, a_m) }[/math], it is assumed that the list includes at least all the constants appearing in the sentence. [math]\displaystyle{ {\cal T}L }[/math] is the set of structures for [math]\displaystyle{ L }[/math] with universe [math]\displaystyle{ \{a_1, a_2, a_3, \ldots\} }[/math] and with each constant symbol [math]\displaystyle{ a_i }[/math] interpreted as itself.

A probability function for sentences of [math]\displaystyle{ L }[/math] is a function [math]\displaystyle{ w }[/math] with domain [math]\displaystyle{ SL }[/math] and values in the unit interval [math]\displaystyle{ [0,1] }[/math] satisfying the following conditions:

– any logically valid sentence [math]\displaystyle{ \theta }[/math] has probability [math]\displaystyle{ 1\!:\, }[/math] [math]\displaystyle{ w(\theta)=1 }[/math]
– if sentences [math]\displaystyle{ \theta }[/math] and [math]\displaystyle{ \phi }[/math] are mutually exclusive then [math]\displaystyle{ w(\theta \vee \phi)= w(\theta) + w(\phi) }[/math]
– for a formula [math]\displaystyle{ \psi(x) }[/math] with one free variable the probability of [math]\displaystyle{ \exists x \, \psi(x) }[/math] is the limit of probabilities of [math]\displaystyle{ \psi(a_1) \vee \psi(a_2) \vee \ldots \vee \psi(a_n) }[/math] as [math]\displaystyle{ n }[/math] tends to [math]\displaystyle{ \infty }[/math].

This last condition, which goes beyond the standard (finitely additive) Kolmogorov axioms, is referred to as Gaifman's Axiom; it is intended to capture the idea that the [math]\displaystyle{ a_i }[/math] exhaust the universe.
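
For illustration, consider a unary language with a single predicate [math]\displaystyle{ R }[/math] and (as an assumption for this sketch) a probability function under which the sentences [math]\displaystyle{ R(a_i) }[/math] are independent, each with probability [math]\displaystyle{ 1/2 }[/math] (the function [math]\displaystyle{ c_\infty }[/math] defined below has this property). Gaifman's Axiom then fixes the value of [math]\displaystyle{ \exists x \, R(x) }[/math] as the limit of the values of the finite disjunctions:

    # A minimal sketch (assumed setting: one unary predicate R, the R(a_i)
    # independent with probability 1/2, as under c_infinity below).
    def w_disjunction(n, p=0.5):
        """w(R(a_1) v R(a_2) v ... v R(a_n)) under the independence assumption."""
        return 1 - (1 - p) ** n

    for n in (1, 5, 10, 50):
        print(n, w_disjunction(n))    # 0.5, 0.96875, ... tending to 1 = w(Exists x R(x))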

For a probability function [math]\displaystyle{ w }[/math] and a sentence [math]\displaystyle{ \phi }[/math] with [math]\displaystyle{ w(\phi)\gt 0 }[/math], the corresponding conditional probability function [math]\displaystyle{ w(\,. |\, \phi) }[/math] is defined by

[math]\displaystyle{ w(\theta \mid \phi) = \frac{w(\theta \wedge \phi)}{w(\phi)} \quad\ (\theta \in SL). }[/math]

Unlike belief functions in many-valued logics, the probability value of a compound sentence is not determined by the probability values of its components. Probability does, however, respect the classical semantics: logically equivalent sentences must be given the same probability. Hence logically equivalent sentences are often identified.

A state description for a finite set of constants is a conjunction of instantiated predicates and negations of instantiated predicates, using exclusively these constants, such that for every atomic sentence that can be formed from the predicates of [math]\displaystyle{ L }[/math] and these constants, either it or its negation (but not both) appears in the conjunction.

Any probability function is uniquely determined by its values on state descriptions. Conversely, to define a probability function it suffices to specify non-negative values for all state descriptions for [math]\displaystyle{ a_1, \ldots,a_n }[/math] (for all [math]\displaystyle{ n }[/math]) in such a way that the values of all state descriptions for [math]\displaystyle{ a_1, \ldots,a_n, a_{n+1} }[/math] extending a given state description for [math]\displaystyle{ a_1, \ldots,a_n }[/math] sum to the value of the state description they all extend, with the convention that the (unique) state description for no constants is a tautology and has value [math]\displaystyle{ 1 }[/math].

If [math]\displaystyle{ \Theta }[/math] is a state description for a set of constants including [math]\displaystyle{ a_i,a_j }[/math] then [math]\displaystyle{ a_i,a_j }[/math] are said to be indistinguishable in [math]\displaystyle{ \Theta }[/math], written [math]\displaystyle{ a_i \sim_\Theta a_j }[/math], just when, upon adding equality to the language (and the axioms of equality to the logic), the sentence [math]\displaystyle{ \Theta \wedge a_i=a_j }[/math] is consistent. [math]\displaystyle{ \,\sim_\Theta }[/math] is an equivalence relation.
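
For a language with a single binary relation symbol [math]\displaystyle{ R }[/math] (an illustrative choice), a state description for [math]\displaystyle{ a_1, \ldots, a_n }[/math] amounts to a choice of truth value for each [math]\displaystyle{ R(a_s,a_t) }[/math], and indistinguishability can then be tested directly: two constants are indistinguishable exactly when identifying them produces no clash. A minimal sketch, with the state description encoded as a hypothetical Boolean matrix M:

    def indistinguishable(M, i, j):
        """For a state description over a single binary relation R, given as a Boolean
        matrix M with M[s][t] the truth value assigned to R(a_s, a_t), decide whether
        a_i and a_j are indistinguishable, i.e. whether the state description together
        with a_i = a_j (and the equality axioms) is consistent."""
        n = len(M)
        return all(M[i][k] == M[j][k] and M[k][i] == M[k][j] for k in range(n))

    # a_1 and a_2 are indistinguishable here, a_1 and a_3 are not:
    M = [[True,  True,  False],
         [True,  True,  False],
         [False, False, True ]]
    print(indistinguishable(M, 0, 1), indistinguishable(M, 0, 2))   # True False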

Unary case

In the special case of Unary PIL, all the predicates [math]\displaystyle{ R_1, \ldots, R_q }[/math] are unary. Formulae of the form

[math]\displaystyle{ \beta(x) = \pm R_1(x)\wedge \pm R_2(x) \wedge \ldots \wedge \pm R_q(x) }[/math]

where [math]\displaystyle{ \pm R }[/math] stands for one of [math]\displaystyle{ R }[/math], [math]\displaystyle{ \neg R }[/math], are called atoms. It is assumed that they are listed in some fixed order as [math]\displaystyle{ \beta_1, \beta_2,\ldots, \beta_{2^q} }[/math].

A state description specifies an atom for each constant involved in it, and it can be written as a conjunction of these atoms instantiated by the corresponding constants. Two constants are indistinguishable in the state description if it specifies the same atom for both of them.
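
A minimal sketch (with an illustrative choice of [math]\displaystyle{ q = 2 }[/math]) that enumerates the atoms and the unary state descriptions and checks, for the "independence" values [math]\displaystyle{ 2^{-nq} }[/math] used by the function [math]\displaystyle{ c_\infty }[/math] below, the summation condition on extending state descriptions stated in the previous subsection:

    from itertools import product

    q = 2                                            # illustrative number of unary predicates
    ATOMS = list(product([True, False], repeat=q))   # the 2^q atoms, as patterns of +/- R_i

    def state_descriptions(n):
        """All state descriptions for a_1, ..., a_n, each recorded as the tuple of
        (indices of) the atoms it specifies for the constants."""
        return list(product(range(len(ATOMS)), repeat=n))

    def w_independence(theta):
        """The 'independence' value 2^(-nq) of a state description for n constants
        (this is the function c_infinity of the Carnap continuum below)."""
        return 2.0 ** (-len(theta) * q)

    # The values of the state descriptions for a_1,...,a_n,a_{n+1} extending a given
    # state description for a_1,...,a_n sum to the value of the one they extend:
    for psi in state_descriptions(2):
        extensions = [psi + (j,) for j in range(len(ATOMS))]
        assert abs(sum(map(w_independence, extensions)) - w_independence(psi)) < 1e-12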

Central question

Assume a rational agent inhabits a structure in [math]\displaystyle{ {\cal T}L }[/math] but knows nothing about which one it is. What probability function [math]\displaystyle{ w }[/math] should s/he adopt when [math]\displaystyle{ w(\theta) }[/math] is to represent his/her degree of belief that a sentence [math]\displaystyle{ \theta }[/math] is true in this ambient structure?

Rational principles

General rational principles

The following principles have been proposed as desirable properties of a rational prior probability function [math]\displaystyle{ w }[/math] for [math]\displaystyle{ L }[/math].

The constant exchangeability principle, Ex. The probability of a sentence [math]\displaystyle{ \theta(a_1,a_2, \ldots, a_m) }[/math] does not change when the [math]\displaystyle{ a_1, a_2, \ldots, a_m }[/math] in it are replaced by any other [math]\displaystyle{ m }[/math]-tuple of (distinct) constants.

The principle of predicate exchangeability, Px. If [math]\displaystyle{ R,R' }[/math] are predicates of the same arity then for a sentence [math]\displaystyle{ \theta }[/math],

[math]\displaystyle{ w(\theta)=w(\theta') }[/math]

where [math]\displaystyle{ \theta' }[/math] is the result of simultaneously replacing [math]\displaystyle{ R }[/math] by [math]\displaystyle{ R' }[/math] and [math]\displaystyle{ R' }[/math] by [math]\displaystyle{ R }[/math] throughout [math]\displaystyle{ \theta }[/math].

The strong negation principle, SN. For a predicate [math]\displaystyle{ R }[/math] and sentence [math]\displaystyle{ \theta }[/math],

[math]\displaystyle{ w(\theta)=w(\theta') }[/math]

where [math]\displaystyle{ \theta' }[/math] is the result of simultaneously replacing [math]\displaystyle{ R }[/math] by [math]\displaystyle{ \neg R }[/math] and [math]\displaystyle{ \neg R }[/math] by [math]\displaystyle{ R }[/math] throughout [math]\displaystyle{ \theta }[/math].

The principle of regularity, Reg. If a quantifier-free sentence [math]\displaystyle{ \theta }[/math] is satisfiable then [math]\displaystyle{ w(\theta) \gt 0 }[/math].

The principle of super regularity (universal certainty), SReg. If a sentence [math]\displaystyle{ \theta }[/math] is satisfiable then [math]\displaystyle{ w(\theta) \gt 0 }[/math].

The constant irrelevance principle, IP. If sentences [math]\displaystyle{ \theta, \phi }[/math] have no constants in common then [math]\displaystyle{ w(\theta \wedge \phi) = w(\theta) \cdot w(\phi) }[/math].

The weak irrelevance principle, WIP. If sentences [math]\displaystyle{ \theta, \phi }[/math] have no constants nor predicates in common then [math]\displaystyle{ w(\theta \wedge \phi) = w(\theta) \cdot w(\phi) }[/math].

Language invariance principle, Li. There is a family of probability functions [math]\displaystyle{ w^{J} }[/math], one on each language [math]\displaystyle{ J }[/math], all satisfying Px and Ex, and such that [math]\displaystyle{ w^L=w }[/math] and if all predicates of [math]\displaystyle{ J }[/math] belong also to [math]\displaystyle{ K }[/math] then [math]\displaystyle{ w^J }[/math] and [math]\displaystyle{ w^K }[/math] agree on sentences of [math]\displaystyle{ J }[/math].

The (strong) counterpart principle, CP. If [math]\displaystyle{ \theta, \theta' }[/math] are sentences such that [math]\displaystyle{ \theta' }[/math] is the result of replacing some constant/relation symbols in [math]\displaystyle{ \theta }[/math] by new constant/relation symbols of the same arity not occurring in [math]\displaystyle{ \theta }[/math] then

[math]\displaystyle{ w(\theta \mid \theta') \geq w(\theta). }[/math]

(SCP) If moreover [math]\displaystyle{ \theta'' }[/math] is the result of replacing the same and possibly also additional constant/relation symbols in [math]\displaystyle{ \theta }[/math] by new constant/relation symbols of the same arity not occurring in [math]\displaystyle{ \theta }[/math] then

[math]\displaystyle{ w(\theta \mid \theta') \geq w(\theta \mid \theta'') \geq w(\theta). }[/math]

The Invariance Principle, INV. If [math]\displaystyle{ F }[/math] is an isomorphism of the Lindenbaum-Tarski algebra of sentences of [math]\displaystyle{ L }[/math] supported by some permutation [math]\displaystyle{ \mu }[/math] of [math]\displaystyle{ {\cal T} L }[/math] in the sense that for sentences [math]\displaystyle{ \theta, \phi }[/math],

[math]\displaystyle{ F([\theta]) = [\phi]~ }[/math] just when [math]\displaystyle{ ~ M \models \theta \Longleftrightarrow \mu(M) \models \phi }[/math]

then [math]\displaystyle{ w(\theta) = w(\phi) }[/math].

The Permutation Invariance Principle, PIP. As INV except that [math]\displaystyle{ F }[/math] is additionally required to map (equivalence classes of) state descriptions to (equivalence classes of) state descriptions.

The Spectrum Exchangeability Principle, Sx. The probability [math]\displaystyle{ w(\Theta) }[/math] of a state description [math]\displaystyle{ \Theta }[/math] depends only on the spectrum of [math]\displaystyle{ \Theta }[/math], that is, on the multiset of sizes of equivalence classes with respect to the equivalence relation [math]\displaystyle{ \sim_\Theta }[/math].

Li with Sx. As the Language Invariance Principle but all the probability functions in the family also satisfy Spectrum Exchangeability.

The Principle of Induction, PI. Let [math]\displaystyle{ \Theta }[/math] be a state description and [math]\displaystyle{ a_k }[/math] a constant not appearing in [math]\displaystyle{ \Theta }[/math]. Let [math]\displaystyle{ \Phi }[/math], [math]\displaystyle{ \Psi }[/math] be state descriptions extending [math]\displaystyle{ \Theta }[/math] to include (just) [math]\displaystyle{ a_k }[/math]. If [math]\displaystyle{ a_k }[/math] is [math]\displaystyle{ \sim_\Phi }[/math]-equivalent to at least one constant, and to at least as many constants as it is [math]\displaystyle{ \sim_\Psi }[/math]-equivalent to, then [math]\displaystyle{ w(\Phi\mid \Theta) \geq w(\Psi \mid \Theta) }[/math].
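
As an illustration of the spectrum appearing in Sx and PI: in the unary case, where (as noted above) constants are indistinguishable exactly when the state description specifies the same atom for them, the spectrum is simply the multiset of multiplicities of the atoms. A minimal sketch, with state descriptions encoded as sequences of atom indices (an illustrative encoding):

    from collections import Counter

    def spectrum(theta):
        """Spectrum of a unary state description theta (a sequence of atom indices):
        the multiset of sizes of the equivalence classes of ~_Theta, which in the
        unary case are the blocks of constants receiving the same atom."""
        return sorted(Counter(theta).values())

    # Sx demands equal probability for state descriptions with the same spectrum;
    # these two both have spectrum [1, 2]:
    print(spectrum([0, 0, 1]), spectrum([2, 3, 3]))   # [1, 2] [1, 2]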

Further rational principles for unary PIL

The Principle of Instantial Relevance, PIR. For a sentence [math]\displaystyle{ \theta }[/math], atom [math]\displaystyle{ \beta }[/math] and constants [math]\displaystyle{ a_k,a_m }[/math] not appearing in [math]\displaystyle{ \theta }[/math],

[math]\displaystyle{ w(\beta(a_k) \mid \beta(a_m) \wedge \theta) \geq w(\beta(a_k) \mid \theta) }[/math].

The Generalized Principle of Instantial Relevance, GPIR. For quantifier-free sentences [math]\displaystyle{ \psi(a_k), \phi(a_m), \theta }[/math] with constants [math]\displaystyle{ a_k,a_m }[/math] not appearing in [math]\displaystyle{ \theta }[/math], if [math]\displaystyle{ \psi(x) \models \phi(x) }[/math] then

[math]\displaystyle{ w( \psi(a_{k}) \mid \phi(a_{m}) \wedge \theta) \geq w( \psi(a_{k}) \mid \theta). }[/math]

Johnson Sufficientness Principle, JSP. For a state description [math]\displaystyle{ \Theta }[/math] for [math]\displaystyle{ n }[/math] constants, atom [math]\displaystyle{ \beta }[/math] and constant [math]\displaystyle{ a_k }[/math] not appearing in [math]\displaystyle{ \Theta }[/math], the probability

[math]\displaystyle{ w(\beta(a_k)\mid \Theta) }[/math]

depends only on [math]\displaystyle{ n }[/math] and on the number of constants for which [math]\displaystyle{ \Theta }[/math] specifies [math]\displaystyle{ \beta }[/math].

The Principle of Atom Exchangeability, Ax. If [math]\displaystyle{ \tau }[/math] is a permutation of [math]\displaystyle{ \{1,2, \ldots, 2^q\} }[/math] and [math]\displaystyle{ \Theta }[/math] is a state description expressed as a conjunction of instantiated atoms then [math]\displaystyle{ w(\Theta)=w(\Theta') }[/math] where [math]\displaystyle{ \Theta' }[/math] obtains from [math]\displaystyle{ \Theta }[/math] upon replacing each [math]\displaystyle{ \beta_i }[/math] by [math]\displaystyle{ \beta_{\tau(i)} }[/math].

Reichenbach's Axiom, RA. Let [math]\displaystyle{ \beta_{h_i} }[/math] for [math]\displaystyle{ i=1,2,3,\ldots }[/math] be an infinite sequence of atoms and [math]\displaystyle{ \beta }[/math] an atom. Then as [math]\displaystyle{ n }[/math] tends to [math]\displaystyle{ \infty }[/math], the difference between the conditional probability

[math]\displaystyle{ w(\beta(a_{n+1}) \mid \beta_{h_1}(a_1) \wedge \beta_{h_2}(a_2) \wedge \ldots \wedge \beta_{h_n}(a_n)) }[/math]

and the proportion of occurrences of [math]\displaystyle{ \beta }[/math] amongst the [math]\displaystyle{ \beta_{h_1}, \beta_{h_2}, \ldots ,\beta_{h_n} }[/math] tends to [math]\displaystyle{ 0 }[/math].

Principle of Induction for Unary languages, UPI. For a state description [math]\displaystyle{ \Theta }[/math], atoms [math]\displaystyle{ \beta_i, \beta_j }[/math] and constant [math]\displaystyle{ a_k }[/math] not appearing in [math]\displaystyle{ \Theta }[/math], if [math]\displaystyle{ \Theta }[/math] specifies [math]\displaystyle{ \beta_i }[/math] for at least as many constants as [math]\displaystyle{ \beta_j }[/math] then

[math]\displaystyle{ w(\beta_i(a_k)\mid \Theta) \geq w(\beta_j(a_k)\mid \Theta). }[/math]

Recovery. For every state description [math]\displaystyle{ \Psi(a_1,a_2, \ldots, a_n) }[/math] there is another state description [math]\displaystyle{ \Phi(a_{n+1}, a_{n+2}, \ldots, a_{h}) }[/math] such that [math]\displaystyle{ w(\Phi \wedge \Psi) \neq 0 }[/math] and for any quantifier-free sentence [math]\displaystyle{ \theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g}) }[/math],

[math]\displaystyle{ w(\theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g})\,|\,\Phi \wedge \Psi) = w(\theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g})). }[/math]

Unary Language Invariance Principle, ULi. As Li, but with the languages restricted to the unary ones.

ULi with Ax. As ULi but with all the probability functions in the family also satisfying Atom Exchangeability.

Relationships between principles

General Case

Sx implies Ex, Px and SN.

PIP + Ex implies Sx.

INV implies PIP and Ex.

Li implies CP and SCP.

Li with Sx implies PI.

Unary case

Ex implies PIR.

Ax is equivalent to PIP.

Ax+Ex implies UPI.

Ax+Ex is equivalent to Sx.

ULi with Ax implies Li with Sx.

Important probability functions

General probability functions

Functions [math]\displaystyle{ V_M }[/math]. For a given structure [math]\displaystyle{ M \in {\cal T} L }[/math] and [math]\displaystyle{ \theta \in SL }[/math],

[math]\displaystyle{ V_M(\theta)= \left\{ \begin{array}{ll} 1& {\rm if}~ M\models \theta,\\ 0&{\rm otherwise}.\end{array} \right. }[/math]

Functions [math]\displaystyle{ \omega^{\Psi} }[/math]. For a given state description [math]\displaystyle{ \Psi(a_1,a_2, \ldots, a_K) }[/math], [math]\displaystyle{ \,\omega^{\Psi} }[/math] is defined via specifying its values for state descriptions as follows: [math]\displaystyle{ \,\omega^{\Psi}(\Theta(a_1,a_2, \ldots, a_n)) }[/math] is the probability that, when [math]\displaystyle{ a_{h_1},a_{h_2}, \ldots, a_{h_n} }[/math] are picked at random from [math]\displaystyle{ \{a_1, \ldots,a_K\} }[/math], with replacement and according to the uniform distribution, [math]\displaystyle{ \Psi(a_1, \ldots, a_K) \models \Theta(a_{h_1}, a_{h_2}, \ldots, a_{h_n}). }[/math]

Functions [math]\displaystyle{ ^\circ \! (\omega^\Psi) }[/math]. As above but employing a non-standard universe (starting with a possibly non-standard state description [math]\displaystyle{ \Psi }[/math]) to obtain the standard [math]\displaystyle{ ^\circ \! (\omega^\Psi) }[/math].

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ ^\circ \! (\omega^{\Psi}) }[/math] are the only probability functions that satisfy Ex and IP.
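
In the unary case the definition of [math]\displaystyle{ \omega^{\Psi} }[/math] can be evaluated directly by enumerating the possible picks. A minimal sketch, with state descriptions encoded as sequences of atom indices (an illustrative encoding; the general polyadic case is analogous but requires a full relational structure):

    from itertools import product

    def omega_psi(theta, psi):
        """omega^Psi(Theta) in the unary case: the proportion of ways of picking
        constants a_{h_1}, ..., a_{h_n} from Psi's constants (uniformly, with
        replacement) so that Psi entails Theta(a_{h_1}, ..., a_{h_n}).
        theta and psi are sequences of atom indices."""
        K, n = len(psi), len(theta)
        hits = sum(all(psi[h[i]] == theta[i] for i in range(n))
                   for h in product(range(K), repeat=n))
        return hits / K ** n

    # Psi gives atom 0 to a_1, a_2 and atom 1 to a_3; the state description giving
    # atom 0 to a_1 and atom 1 to a_2 then receives value (2/3) * (1/3) = 2/9:
    print(omega_psi([0, 1], [0, 0, 1]))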

Functions [math]\displaystyle{ u^{\overline{p}} }[/math]. For a given infinite sequence [math]\displaystyle{ \overline{p} = \langle p_0,p_1,p_2,p_3, \ldots \rangle }[/math] of non-negative real numbers such that

[math]\displaystyle{ p_1 \geq p_2 \geq p_3 \geq \ldots \geq 0\, \, }[/math] and [math]\displaystyle{ ~\sum_{i=0}^\infty p_i = 1 }[/math],

[math]\displaystyle{ u^{\overline{p}} }[/math] is defined via specifying its values for state descriptions as follows:

For a sequence [math]\displaystyle{ \vec{c} = \langle c_1,c_2, \ldots, c_n\rangle }[/math] of natural numbers and a state description [math]\displaystyle{ \Theta(a_{1}, a_{2}, \ldots, a_{n}) }[/math], [math]\displaystyle{ \Theta }[/math] is said to be consistent with [math]\displaystyle{ \vec{c} }[/math] if whenever [math]\displaystyle{ c_s=c_t \neq 0 }[/math] then [math]\displaystyle{ a_{s} \sim_\Theta a_{t} }[/math]. [math]\displaystyle{ C(\vec{c}) }[/math] is the number of state descriptions for [math]\displaystyle{ a_{1}, a_{2}, \ldots, a_{n} }[/math] consistent with [math]\displaystyle{ \vec{c} }[/math]. [math]\displaystyle{ \,u^{\overline{p}}(\Theta) }[/math] is the sum, over those [math]\displaystyle{ \vec{c} }[/math] with which [math]\displaystyle{ \Theta }[/math] is consistent, of

[math]\displaystyle{ C(\vec{c})^{-1} \prod_{s=1}^n p_{c_s}. }[/math]

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ u^{\overline{p}} }[/math] are the only probability functions that satisfy WIP and Li with Sx. (The language invariant family witnessing Li with Sx consists of the functions [math]\displaystyle{ u^{\overline{p}, J} }[/math] with fixed [math]\displaystyle{ \overline{p} }[/math], where [math]\displaystyle{ u^{\overline{p}, J} }[/math] is as [math]\displaystyle{ u^{\overline{p}} }[/math] but defined with language [math]\displaystyle{ J }[/math].)
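
A minimal sketch of [math]\displaystyle{ u^{\overline{p}} }[/math] in the unary case, assuming [math]\displaystyle{ \overline{p} }[/math] has finite support so that the defining sum is finite; state descriptions are again encoded as sequences of atom indices:

    from itertools import product

    def u_p(theta, p, q):
        """u^p(Theta) for a unary state description theta (a tuple of atom indices),
        where p = (p_0, p_1, ..., p_M) is non-negative, sums to 1 and satisfies
        p_1 >= p_2 >= ... ; computed directly from the definition as the sum over
        colourings c consistent with Theta of C(c)^(-1) * prod_s p_{c_s}."""
        n, A = len(theta), 2 ** q
        total = 0.0
        for c in product(range(len(p)), repeat=n):
            # consistency: equal non-zero colours force indistinguishable constants,
            # which in the unary case means the same atom
            if any(c[s] == c[t] != 0 and theta[s] != theta[t]
                   for s in range(n) for t in range(n)):
                continue
            # C(c): each distinct non-zero colour chooses one atom freely, as does
            # each constant coloured 0
            free = len({v for v in c if v}) + c.count(0)
            weight = 1.0
            for v in c:
                weight *= p[v]
            total += weight / A ** free
        return total

    # With q = 1 (atoms R(x) and not-R(x)) and p = (0.5, 0.5), the values over all
    # state descriptions for a_1, a_2 sum to 1, as they must:
    q, p = 1, (0.5, 0.5)
    print(sum(u_p(t, p, q) for t in product(range(2 ** q), repeat=2)))   # 1.0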

Further probability functions (unary PIL)

Functions [math]\displaystyle{ w_{\vec{c}} }[/math]. For a vector [math]\displaystyle{ \vec{c} = \langle c_1,c_2, \ldots, c_{2^q}\rangle }[/math] of non-negative real numbers summing to one, [math]\displaystyle{ w_{\vec{c}} }[/math] is defined via specifying its values for state descriptions as follows:

[math]\displaystyle{ w_{\vec{c}}(\Theta )= \prod_{j=1}^{2^q} c_{j}^{m_j} }[/math]

where [math]\displaystyle{ m_j }[/math] is the number of constants for which [math]\displaystyle{ \Theta }[/math] specifies [math]\displaystyle{ \beta_j }[/math].

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ w_{\vec{c}} }[/math] are the only probability functions that satisfy Ex and IP (they are also expressible as [math]\displaystyle{ ^\circ \! (\omega^{\Psi}) }[/math]).
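
A minimal sketch of [math]\displaystyle{ w_{\vec{c}} }[/math] directly from the product formula, in the same illustrative encoding (state descriptions as sequences of atom indices):

    def w_c(theta, c):
        """w_c(Theta) = prod_j c_j^(m_j): each constant independently realises atom
        beta_j with probability c_j.  theta is a sequence of atom indices and c a
        vector of non-negative reals, indexed by the atoms, summing to one."""
        prob = 1.0
        for atom in theta:
            prob *= c[atom]
        return prob

    # q = 1 (atoms beta_1 = R(x), beta_2 = not-R(x)) and c = (0.7, 0.3):
    print(w_c([0, 0, 1], (0.7, 0.3)))      # 0.7 * 0.7 * 0.3 = 0.147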

Carnap continuum functions [math]\displaystyle{ c_{\lambda}.\, }[/math] For [math]\displaystyle{ \lambda\gt 0 }[/math], the probability function [math]\displaystyle{ c_\lambda }[/math] is uniquely determined by the values

[math]\displaystyle{ c_\lambda(\beta_j(a_{n+1}) \mid \Theta) = \frac{m_j + \lambda2^{-q}}{n + \lambda} }[/math]

where [math]\displaystyle{ \Theta }[/math] is a state description for [math]\displaystyle{ n }[/math] constants not including [math]\displaystyle{ a_{n+1} }[/math] and [math]\displaystyle{ m_j }[/math] is the number of constants for which [math]\displaystyle{ \Theta }[/math] specifies [math]\displaystyle{ \beta_j }[/math].

Furthermore, [math]\displaystyle{ c_\infty }[/math] is the probability function that assigns [math]\displaystyle{ 2^{-nq} }[/math] to every state description for [math]\displaystyle{ n }[/math] constants and [math]\displaystyle{ c_0 }[/math] is the probability function that assigns [math]\displaystyle{ 2^{-q} }[/math] to any state description in which all constants are indistinguishable, [math]\displaystyle{ 0 }[/math] to any other state description.

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ c_\lambda }[/math] are the only probability functions that satisfy Ex and JSP.

[math]\displaystyle{ \bullet }[/math] They also satisfy Li: the functions [math]\displaystyle{ c^{J}_\lambda }[/math] with fixed [math]\displaystyle{ \lambda }[/math], where [math]\displaystyle{ c^{J}_\lambda }[/math] is as [math]\displaystyle{ c_\lambda }[/math] but defined with language [math]\displaystyle{ J }[/math], provide the unary members of the language-invariant family.
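
The values of [math]\displaystyle{ c_\lambda }[/math] on state descriptions follow from the defining conditional probabilities by the chain rule. A minimal sketch in the same illustrative encoding:

    def c_lambda(theta, lam, q):
        """c_lambda(Theta) for a unary state description theta, a sequence of atom
        indices for a_1, ..., a_n, obtained by multiplying the defining conditional
        probabilities (m_j + lambda * 2^(-q)) / (n + lambda) constant by constant."""
        prob = 1.0
        counts = {}                     # atom index -> occurrences among earlier constants
        for n, atom in enumerate(theta):
            m_j = counts.get(atom, 0)
            prob *= (m_j + lam * 2 ** (-q)) / (n + lam)
            counts[atom] = m_j + 1
        return prob

    # q = 1, lambda = 2: the state description R(a_1) & R(a_2) gets (1/2)*(2/3) = 1/3:
    print(c_lambda([0, 0], 2.0, 1))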

Functions [math]\displaystyle{ w^{\delta} }[/math]. For [math]\displaystyle{ -(2^q-1)^{-1} \leq \delta \leq 1 }[/math], [math]\displaystyle{ w^{\delta} }[/math] is the average of the [math]\displaystyle{ 2^q }[/math] functions [math]\displaystyle{ w_{\vec{c}} }[/math] where [math]\displaystyle{ \vec{c} }[/math] has all but one coordinate equal to each other, with the remaining coordinate differing from them by [math]\displaystyle{ \delta }[/math], so

[math]\displaystyle{ w^\delta= 2^{-q} \sum_{i=1}^{2^q} w_{\vec{e_i}} }[/math]

where [math]\displaystyle{ \vec{e_i} = \langle \gamma, \gamma, \ldots, \gamma, \gamma + \delta, \gamma, \ldots, \gamma \rangle ~ }[/math], ([math]\displaystyle{ \gamma+\delta }[/math] in [math]\displaystyle{ i }[/math]th place) and [math]\displaystyle{ \gamma = 2^{-q}(1-\delta) }[/math].

For [math]\displaystyle{ 0\leq \delta \leq 1 }[/math], the [math]\displaystyle{ w^{\delta} }[/math] are equal to [math]\displaystyle{ u^{\bar{p}} }[/math] for

[math]\displaystyle{ \bar{p} = \langle 1-\delta, \delta, 0,0,0,\ldots \rangle }[/math]

and as such they satisfy Li.

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ w^{\delta} }[/math] are the only functions that satisfy GPIR, Ex, Ax and Reg.

[math]\displaystyle{ \bullet }[/math] The [math]\displaystyle{ w^{\delta} }[/math] with [math]\displaystyle{ 0\leq\delta \lt 1 }[/math] are the only functions that satisfy Recovery, Reg and ULi with Ax.
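
A minimal sketch of [math]\displaystyle{ w^{\delta} }[/math] computed as the average of the [math]\displaystyle{ w_{\vec{e_i}} }[/math]; for [math]\displaystyle{ q=1 }[/math] and [math]\displaystyle{ \delta = 1/2 }[/math] it agrees with [math]\displaystyle{ u^{\overline{p}} }[/math] for [math]\displaystyle{ \overline{p} = \langle 1/2, 1/2, 0, \ldots\rangle }[/math], as stated above:

    def w_delta(theta, delta, q):
        """w^delta(Theta): the average of the 2^q functions w_{e_i}, where e_i has
        every coordinate equal to gamma = 2^(-q)(1 - delta) except the i-th, which
        is gamma + delta.  theta is a sequence of atom indices."""
        A = 2 ** q
        gamma = (1 - delta) / A
        total = 0.0
        for i in range(A):
            prob = 1.0
            for atom in theta:
                prob *= gamma + (delta if atom == i else 0.0)
            total += prob
        return total / A

    # q = 1, delta = 0.5: the state description giving the same atom to a_1 and a_2
    # receives 0.3125, matching u^p for p = (0.5, 0.5, 0, ...):
    print(w_delta([0, 0], 0.5, 1))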

Representation theorems

A representation theorem for a class of probability functions provides means of expressing every probability function in the class in terms of generic, relatively simple probability functions from the same class.

Representation Theorem for all probability functions. Every probability function [math]\displaystyle{ w }[/math] for [math]\displaystyle{ L }[/math] can be represented as

[math]\displaystyle{ w= \int_{{\cal T} L} V_M \,d\mu(M) }[/math]

where [math]\displaystyle{ \mu }[/math] is a [math]\displaystyle{ \sigma }[/math]-additive measure on the [math]\displaystyle{ \sigma }[/math]-algebra of subsets of [math]\displaystyle{ {\cal T} L }[/math] generated by the sets

[math]\displaystyle{ \{\, M \in {\cal T} L \mid M \vDash \theta\,\} ~ ~~~ (\theta \in SL). }[/math]

Representation Theorem for Ex (employing non-standard analysis and Loeb Integration Theory[2]). Every probability function [math]\displaystyle{ w }[/math] for [math]\displaystyle{ L }[/math] satisfying Ex can be represented as

[math]\displaystyle{ w = \int_A \,^\circ\!(\omega^{\Psi}) \, d\mu(\Psi) }[/math]

where [math]\displaystyle{ A }[/math] is an internal set of state descriptions for [math]\displaystyle{ a_1, a_2, \ldots, a_\nu }[/math] (with [math]\displaystyle{ \nu }[/math] a fixed infinite natural number) and [math]\displaystyle{ \mu }[/math] is a [math]\displaystyle{ \sigma }[/math]-additive measure on a [math]\displaystyle{ \sigma }[/math]-algebra of subsets of [math]\displaystyle{ A }[/math].

Representation Theorem for Li with Sx. Every probability function [math]\displaystyle{ w }[/math] for [math]\displaystyle{ L }[/math] satisfying Li with Sx can be represented as

[math]\displaystyle{ w = \int_{\mathbb B} \,u^{\overline{p}}\, d\mu(\overline{p}) }[/math]

where [math]\displaystyle{ {\mathbb B} }[/math] is the set of sequences

[math]\displaystyle{ \overline{p} = \langle p_0,p_1,p_2,p_3, \ldots \rangle }[/math]

of non-negative reals summing to [math]\displaystyle{ 1 }[/math] and such that [math]\displaystyle{ p_1 \geq p_2 \geq p_3 \geq \ldots \,\geq 0 \, }[/math] and [math]\displaystyle{ \mu }[/math] is a [math]\displaystyle{ \sigma }[/math]-additive measure on the Borel subsets of [math]\displaystyle{ {\mathbb B} }[/math] in the product topology.

de Finetti's Representation Theorem (unary). In the unary case (where [math]\displaystyle{ L }[/math] is a language containing [math]\displaystyle{ q }[/math] unary predicates), the representation theorem for Ex is equivalent to:

Every probability function [math]\displaystyle{ w }[/math] for [math]\displaystyle{ L }[/math] satisfying Ex can be represented as

[math]\displaystyle{ w= \int_{\mathbb D} w_{\vec{x}}\, d\mu(\vec{x}) }[/math]

where [math]\displaystyle{ {\mathbb D} }[/math] is the set of vectors [math]\displaystyle{ \vec{x} = \langle x_1,x_2, \ldots, x_{2^q}\rangle }[/math] of non-negative real numbers summing to one and [math]\displaystyle{ \mu }[/math] is a [math]\displaystyle{ \sigma }[/math]-additive measure on [math]\displaystyle{ {\mathbb D} }[/math].
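
For instance (using the well-known correspondence between the Carnap continuum and symmetric Beta/Dirichlet mixtures, which is not stated above and is assumed here), for [math]\displaystyle{ q=1 }[/math] the measure [math]\displaystyle{ \mu }[/math] representing [math]\displaystyle{ c_\lambda }[/math] is the Beta[math]\displaystyle{ (\lambda/2, \lambda/2) }[/math] distribution on [math]\displaystyle{ \vec{x} = \langle x, 1-x \rangle }[/math]; the de Finetti integral then has a closed form that can be checked against the chain-rule values:

    from math import gamma

    def B(a, b):
        """Euler Beta function."""
        return gamma(a) * gamma(b) / gamma(a + b)

    def c_lambda(theta, lam, q=1):
        """c_lambda(Theta) by the chain rule; theta is a sequence of atom indices."""
        prob, counts = 1.0, {}
        for n, atom in enumerate(theta):
            prob *= (counts.get(atom, 0) + lam * 2 ** (-q)) / (n + lam)
            counts[atom] = counts.get(atom, 0) + 1
        return prob

    def de_finetti_mixture(m, n, lam):
        """Integral of x^m (1-x)^(n-m) against the Beta(lam/2, lam/2) measure: the
        assumed de Finetti representation of c_lambda for q = 1."""
        return B(m + lam / 2, n - m + lam / 2) / B(lam / 2, lam / 2)

    theta = [0, 0, 1, 0]               # m = 3 occurrences of atom 0 among n = 4 constants
    print(c_lambda(theta, 2.0), de_finetti_mixture(3, 4, 2.0))   # both 0.05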

Notes

  1. Rudolf Carnap (1971). "A Basic System of Inductive Logic", in Studies in Inductive Logic and Probability, Volume 1, pp. 69–70.
  2. Cutland, N. J., "Loeb measure theory", in Developments in Nonstandard Mathematics, eds. N. J. Cutland, F. Oliveira, V. Neves, J. Sousa-Pinto, Pitman Research Notes in Mathematics Series, Vol. 336, Longman Press, 1995, pp. 151–177.
