Regular conditional probability

In probability theory, regular conditional probability is a concept that formalizes the notion of conditioning on the outcome of a random variable. The resulting conditional probability distribution is a parametrized family of probability measures called a Markov kernel.

Definition

Conditional probability distribution

Consider two random variables [math]\displaystyle{ X, Y : \Omega \to \mathbb{R} }[/math]. The conditional probability distribution of Y given X is a two-variable function [math]\displaystyle{ \kappa_{Y\mid X}: \mathbb{R} \times \mathcal{B}(\mathbb{R}) \to [0,1] }[/math].

If the random variable X is discrete, then

[math]\displaystyle{ \kappa_{Y\mid X}(x, A) = P(Y \in A \mid X = x) = \begin{cases} \frac{P(Y \in A, X = x)}{P(X=x)} & \text{ if } P(X = x) \gt 0 \\[3pt] \text{arbitrary value} & \text{ otherwise}. \end{cases} }[/math]
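For example, suppose X is the result of a fair die roll and [math]\displaystyle{ Y = 1 }[/math] if X is even, [math]\displaystyle{ Y = 0 }[/math] otherwise. Then for [math]\displaystyle{ A = \{1\} }[/math],

[math]\displaystyle{ \kappa_{Y\mid X}(x, \{1\}) = \frac{P(Y = 1, X = x)}{P(X = x)} = \begin{cases} 1 & \text{ if } x \in \{2, 4, 6\} \\ 0 & \text{ if } x \in \{1, 3, 5\} \\ \text{arbitrary value} & \text{ otherwise}, \end{cases} }[/math]

since [math]\displaystyle{ P(Y = 1, X = x) }[/math] equals [math]\displaystyle{ P(X = x) = \tfrac{1}{6} }[/math] for even [math]\displaystyle{ x \in \{1, \ldots, 6\} }[/math] and [math]\displaystyle{ 0 }[/math] for odd [math]\displaystyle{ x }[/math], while [math]\displaystyle{ P(X = x) = 0 }[/math] outside [math]\displaystyle{ \{1, \ldots, 6\} }[/math].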

If the random variables X, Y are jointly continuous with joint density [math]\displaystyle{ f_{X,Y}(x,y) }[/math], then

[math]\displaystyle{ \kappa_{Y\mid X}(x, A) = \begin{cases} \frac{\int_A f_{X,Y}(x, y) \, \mathrm{d}y}{\int_\mathbb{R} f_{X,Y}(x, y) \mathrm{d}y} & \text{ if } \int_\mathbb{R} f_{X,Y}(x, y) \, \mathrm{d}y \gt 0 \\[3pt] \text{arbitrary value} & \text{ otherwise}. \end{cases} }[/math]
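For example, suppose the pair (X, Y) is uniformly distributed on the triangle [math]\displaystyle{ \{(x,y) : 0 \lt y \lt x \lt 1\} }[/math], so that [math]\displaystyle{ f_{X,Y}(x,y) = 2 }[/math] on the triangle and 0 elsewhere. For [math]\displaystyle{ 0 \lt x \lt 1 }[/math] the denominator is [math]\displaystyle{ \int_0^x 2 \, \mathrm{d}y = 2x }[/math], so

[math]\displaystyle{ \kappa_{Y\mid X}(x, A) = \frac{\int_{A \cap (0,x)} 2 \, \mathrm{d}y}{2x} = \frac{\lambda\big(A \cap (0,x)\big)}{x}, }[/math]

where [math]\displaystyle{ \lambda }[/math] denotes Lebesgue measure; that is, given [math]\displaystyle{ X = x }[/math], the variable Y is uniformly distributed on [math]\displaystyle{ (0, x) }[/math].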

A more general definition can be given in terms of conditional expectation. Consider a function [math]\displaystyle{ e_{Y \in A} : \mathbb{R} \to [0,1] }[/math] satisfying

[math]\displaystyle{ e_{Y \in A}(X(\omega)) = \operatorname E[1_{Y \in A} \mid X](\omega) }[/math]

for almost all [math]\displaystyle{ \omega }[/math]. Then the conditional probability distribution is given by

[math]\displaystyle{ \kappa_{Y\mid X}(x, A) = e_{Y \in A}(x). }[/math]
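In the die example above, with [math]\displaystyle{ A = \{1\} }[/math], the indicator [math]\displaystyle{ 1_{Y \in A} = 1_{X \in \{2,4,6\}} }[/math] is already a function of X, so [math]\displaystyle{ \operatorname E[1_{Y \in A} \mid X] = 1_{X \in \{2,4,6\}} }[/math] and one may take [math]\displaystyle{ e_{Y \in A}(x) = 1 }[/math] for [math]\displaystyle{ x \in \{2,4,6\} }[/math] and [math]\displaystyle{ 0 }[/math] otherwise, recovering the elementary formula above.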

As with conditional expectation, this can be further generalized to conditioning on a sigma-algebra [math]\displaystyle{ \mathcal{F} }[/math]. In that case the conditional distribution is a function [math]\displaystyle{ \Omega \times \mathcal{B}(\mathbb{R}) \to [0, 1] }[/math]:

[math]\displaystyle{ \kappa_{Y\mid\mathcal{F}}(\omega, A) = \operatorname E[1_{Y \in A} \mid \mathcal{F}](\omega). }[/math]

Regularity

For working with [math]\displaystyle{ \kappa_{Y\mid X} }[/math], it is important that it be regular, that is:

  1. For almost all x, [math]\displaystyle{ A \mapsto \kappa_{Y\mid X}(x, A) }[/math] is a probability measure
  2. For all A, [math]\displaystyle{ x \mapsto \kappa_{Y\mid X}(x, A) }[/math] is a measurable function

In other words, [math]\displaystyle{ \kappa_{Y\mid X} }[/math] is a Markov kernel.
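In the die example above, both conditions can be verified directly once the "arbitrary value" for [math]\displaystyle{ x \notin \{1, \ldots, 6\} }[/math] is fixed, say as the point mass at 0: for every x the map [math]\displaystyle{ A \mapsto \kappa_{Y\mid X}(x, A) }[/math] is then a point mass, hence a probability measure, and for every Borel set A the map [math]\displaystyle{ x \mapsto \kappa_{Y\mid X}(x, A) }[/math] takes only finitely many values on Borel sets, hence is measurable, so [math]\displaystyle{ \kappa_{Y\mid X} }[/math] is indeed a Markov kernel.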

The second condition holds trivially, but the proof of the first is more involved. It can be shown that if Y is a random element [math]\displaystyle{ \Omega \to S }[/math] in a Radon space S, there exists a [math]\displaystyle{ \kappa_{Y\mid X} }[/math] that satisfies the first condition.[1] It is possible to construct more general spaces where a regular conditional probability distribution does not exist.[2]

Relation to conditional expectation

For discrete and continuous random variables, the conditional expectation can be expressed as

[math]\displaystyle{ \begin{aligned} \operatorname E[Y\mid X=x] &= \sum_y y \, P(Y=y\mid X=x) \\ \operatorname E[Y\mid X=x] &= \int y \, f_{Y\mid X}(x, y) \, \mathrm{d}y \end{aligned} }[/math]

where [math]\displaystyle{ f_{Y\mid X}(x, y) }[/math] is the conditional density of Y given X.
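In the triangle example above, [math]\displaystyle{ f_{Y\mid X}(x, y) = 1/x }[/math] for [math]\displaystyle{ 0 \lt y \lt x }[/math], so

[math]\displaystyle{ \operatorname E[Y\mid X=x] = \int_0^x \frac{y}{x} \, \mathrm{d}y = \frac{x}{2}, }[/math]

as expected for a variable that is uniformly distributed on [math]\displaystyle{ (0, x) }[/math].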

This result can be extended to the measure-theoretic conditional expectation using the regular conditional probability distribution:

[math]\displaystyle{ \operatorname E[Y\mid X](\omega) = \int y \, \kappa_{Y\mid\sigma(X)}(\omega, \mathrm{d}y). }[/math]

Formal definition

Let [math]\displaystyle{ (\Omega, \mathcal F, P) }[/math] be a probability space, and let [math]\displaystyle{ T:\Omega\rightarrow E }[/math] be a random variable, defined as a measurable function from [math]\displaystyle{ \Omega }[/math] to its state space [math]\displaystyle{ (E, \mathcal E) }[/math]. One should think of [math]\displaystyle{ T }[/math] as a way to "disintegrate" the sample space [math]\displaystyle{ \Omega }[/math] into the fibers [math]\displaystyle{ \{ T^{-1}(x) \}_{x \in E} }[/math]. The disintegration theorem from measure theory allows us to "disintegrate" the measure [math]\displaystyle{ P }[/math] into a collection of measures, one for each [math]\displaystyle{ x \in E }[/math]. Formally, a regular conditional probability is defined as a function [math]\displaystyle{ \nu:E \times\mathcal F \rightarrow [0,1], }[/math] called a "transition probability", where:

  • For every [math]\displaystyle{ x \in E }[/math], [math]\displaystyle{ \nu(x, \cdot) }[/math] is a probability measure on [math]\displaystyle{ \mathcal F }[/math]. Thus we provide one measure for each [math]\displaystyle{ x \in E }[/math].
  • For all [math]\displaystyle{ A\in\mathcal F }[/math], [math]\displaystyle{ \nu(\cdot, A) }[/math] (a mapping [math]\displaystyle{ E \to [0,1] }[/math]) is [math]\displaystyle{ \mathcal E }[/math]-measurable, and
  • For all [math]\displaystyle{ A\in\mathcal F }[/math] and all [math]\displaystyle{ B\in\mathcal E }[/math][3]
[math]\displaystyle{ P\big(A\cap T^{-1}(B)\big) = \int_B \nu(x,A) \,(P\circ T^{-1})(\mathrm{d}x) }[/math]

where [math]\displaystyle{ P\circ T^{-1} }[/math] is the pushforward measure [math]\displaystyle{ T_*P }[/math], i.e. the distribution of the random element [math]\displaystyle{ T }[/math]; the transition probability is only determined for [math]\displaystyle{ x }[/math] in [math]\displaystyle{ \operatorname{supp}T }[/math], the support of [math]\displaystyle{ T_* P }[/math]. In particular, if we take [math]\displaystyle{ B=E }[/math], then [math]\displaystyle{ A \cap T^{-1}(E) = A }[/math], and so

[math]\displaystyle{ P(A) = \int_E \nu(x,A) \, (P\circ T^{-1})(\mathrm{d}x), }[/math]

where [math]\displaystyle{ \nu(x, A) }[/math] can be written, in more familiar terms, as [math]\displaystyle{ P(A\mid T=x) }[/math].
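A standard example: let [math]\displaystyle{ \Omega = [0,1]^2 }[/math] with [math]\displaystyle{ P }[/math] the uniform (Lebesgue) probability measure, and let [math]\displaystyle{ T(\omega_1, \omega_2) = \omega_1 }[/math] be the projection onto the first coordinate, so that [math]\displaystyle{ E = [0,1] }[/math] and [math]\displaystyle{ P \circ T^{-1} }[/math] is the uniform distribution on [math]\displaystyle{ [0,1] }[/math]. Taking [math]\displaystyle{ \nu(x, A) }[/math] to be the one-dimensional Lebesgue measure of the slice [math]\displaystyle{ A_x = \{\omega_2 : (x, \omega_2) \in A\} }[/math], Fubini's theorem gives, for all [math]\displaystyle{ A \in \mathcal F }[/math] and [math]\displaystyle{ B \in \mathcal E }[/math],

[math]\displaystyle{ P\big(A\cap T^{-1}(B)\big) = \int_B \lambda(A_x) \, \mathrm{d}x = \int_B \nu(x,A) \,(P\circ T^{-1})(\mathrm{d}x), }[/math]

so [math]\displaystyle{ \nu }[/math] is a regular conditional probability; given [math]\displaystyle{ T = x }[/math], the conditional law [math]\displaystyle{ P(\cdot \mid T = x) }[/math] is the uniform distribution on the fiber [math]\displaystyle{ \{x\} \times [0,1] }[/math].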

Alternate definition

Consider a Radon probability space [math]\displaystyle{ \Omega }[/math] (that is, a probability measure defined on a Radon space endowed with its Borel sigma-algebra) and a real-valued random variable T. As discussed above, in this case there exists a regular conditional probability with respect to T. Moreover, we can alternatively define the regular conditional probability for an event A given a particular value t of the random variable T in the following manner:

[math]\displaystyle{ P (A\mid T=t) = \lim_{U\supset \{T= t\}} \frac {P(A\cap U)}{P(U)}, }[/math]

where the limit is taken over the net of open neighborhoods U of the event {T = t} as they become smaller with respect to set inclusion. This limit is defined if and only if the probability space is Radon, and only for t in the support of T. This is the restriction of the transition probability to the support of T. To describe this limiting process rigorously:

For every [math]\displaystyle{ \varepsilon \gt 0, }[/math] there exists an open neighborhood U of the event {T = t}, such that for every open V with [math]\displaystyle{ \{T=t\} \subset V \subset U, }[/math]

[math]\displaystyle{ \left|\frac {P(A\cap V)}{P(V)}-L\right| \lt \varepsilon, }[/math]

where [math]\displaystyle{ L = P (A\mid T=t) }[/math] is the limit.
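As an informal illustration, take again [math]\displaystyle{ \Omega = [0,1]^2 }[/math] with the uniform measure, T the projection onto the first coordinate, and [math]\displaystyle{ A = [0,1] \times [0, \tfrac{1}{2}] }[/math]. Restricting attention to the strip-shaped neighborhoods [math]\displaystyle{ U_\delta = \{\omega : |T(\omega) - t| \lt \delta\} }[/math] of the event {T = t}, one has, for [math]\displaystyle{ 0 \lt t \lt 1 }[/math] and small [math]\displaystyle{ \delta }[/math],

[math]\displaystyle{ \frac{P(A \cap U_\delta)}{P(U_\delta)} = \frac{2\delta \cdot \tfrac{1}{2}}{2\delta} = \frac{1}{2}, }[/math]

consistent with the transition probability [math]\displaystyle{ \nu(t, A) = \tfrac{1}{2} }[/math] computed from the disintegration above.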

References

  1. Klenke, Achim. Probability Theory: A Comprehensive Course (2nd ed.). London: Springer. ISBN 978-1-4471-5361-0.
  2. Faden, A. M. (1985). "The existence of regular conditional probabilities: necessary and sufficient conditions". The Annals of Probability 13 (1): 288–298.
  3. Leão Jr., D. et al. (2004). "Regular conditional probability, disintegration of probability and Radon spaces". Proyecciones 23 (1): 15–29. Universidad Católica del Norte, Antofagasta, Chile.
