Borel–Kolmogorov paradox
In probability theory, the Borel–Kolmogorov paradox (sometimes known as Borel's paradox) is a paradox relating to conditional probability with respect to an event of probability zero (also known as a null set). It is named after Émile Borel and Andrey Kolmogorov.
A great circle puzzle
Suppose that a random variable has a uniform distribution on a unit sphere. What is its conditional distribution on a great circle? Because of the symmetry of the sphere, one might expect that the distribution is uniform and independent of the choice of coordinates. However, two analyses give contradictory results. First, note that choosing a point uniformly on the sphere is equivalent to choosing the longitude [math]\displaystyle{ \lambda }[/math] uniformly from [math]\displaystyle{ [-\pi,\pi] }[/math] and choosing the latitude [math]\displaystyle{ \varphi }[/math] from [math]\displaystyle{ [-\frac{\pi}{2},\frac{\pi}{2}] }[/math] with density [math]\displaystyle{ \frac{1}{2} \cos \varphi }[/math].[1] Then we can look at two different great circles:
- If the coordinates are chosen so that the great circle is an equator (latitude [math]\displaystyle{ \varphi = 0 }[/math]), the conditional density for a longitude [math]\displaystyle{ \lambda }[/math] defined on the interval [math]\displaystyle{ [-\pi,\pi] }[/math] is [math]\displaystyle{ f(\lambda\mid\varphi=0) = \frac{1}{2\pi}. }[/math]
- If the great circle is a line of longitude with [math]\displaystyle{ \lambda = 0 }[/math], the conditional density for [math]\displaystyle{ \varphi }[/math] on the interval [math]\displaystyle{ [-\frac{\pi}{2},\frac{\pi}{2}] }[/math] is [math]\displaystyle{ f(\varphi\mid\lambda=0) = \frac{1}{2} \cos \varphi. }[/math]
One distribution is uniform on the circle, the other is not. Yet both seem to be referring to the same great circle in different coordinate systems.
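The disagreement can be observed numerically. The following sketch (assuming NumPy is available; the sample size and band half-width are illustrative choices) draws points uniformly on the sphere and conditions on a thin band around the equator and on a thin lune around the prime meridian; the two conditional samples have visibly different statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Uniform point on the sphere: longitude uniform, sin(latitude) uniform
# (equivalently, latitude has density (1/2)cos(phi)).
lam = rng.uniform(-np.pi, np.pi, n)
phi = np.arcsin(rng.uniform(-1.0, 1.0, n))

eps = 0.01  # half-width of the conditioning band

# Condition on a thin ring around the equator (|phi| < eps):
# the surviving longitudes are uniform on [-pi, pi].
near_equator = lam[np.abs(phi) < eps]

# Condition on a thin lune around the prime meridian (|lam| < eps):
# the surviving latitudes follow the (1/2)cos(phi) density.
near_meridian = phi[np.abs(lam) < eps]

# Under a uniform law on [-pi/2, pi/2] the mean of |phi| would be
# pi/4 ~ 0.785; under the (1/2)cos(phi) law it is pi/2 - 1 ~ 0.571.
print(np.mean(np.abs(near_meridian)))  # close to 0.571, not 0.785
# Under the uniform law on [-pi, pi] the mean of |lam| is pi/2 ~ 1.571.
print(np.mean(np.abs(near_equator)))   # close to 1.571
```

Because latitude and longitude are independent here, both conditional laws are exact for every band width; the paradox arises only when the two limiting procedures are conflated into a single "distribution on the great circle".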
Many quite futile arguments have raged — between otherwise competent probabilists — over which of these results is 'correct'.— E.T. Jaynes[1]
Explanation and implications
In case (1) above, the conditional probability that the longitude λ lies in a set E given that φ = 0 can be written P(λ ∈ E | φ = 0). Elementary probability theory suggests this can be computed as P(λ ∈ E and φ = 0)/P(φ = 0), but that expression is not well-defined since P(φ = 0) = 0. Measure theory provides a way to define a conditional probability, using the family of events Rab = {φ : a < φ < b} which are horizontal rings consisting of all points with latitude between a and b.
The resolution of the paradox is to notice that in case (2), P(φ ∈ F | λ = 0) is defined using the events Lab = {λ : a < λ < b}, which are lunes (vertical wedges), consisting of all points whose longitude varies between a and b. So although P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) each provide a probability distribution on a great circle, one of them is defined using rings, and the other using lunes. Thus it is not surprising after all that P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) have different distributions.
The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible. For we can obtain a probability distribution for [the latitude] on the meridian circle only if we regard this circle as an element of the decomposition of the entire spherical surface onto meridian circles with the given poles.— Andrey Kolmogorov[2]
… the term 'great circle' is ambiguous until we specify what limiting operation is to produce it. The intuitive symmetry argument presupposes the equatorial limit; yet one eating slices of an orange might presuppose the other.— E.T. Jaynes[1]
Mathematical explication
Measure theoretic perspective
To understand the problem, we need to recognize that the distribution of a continuous random variable is described by a density f only with respect to some reference measure μ. Both are essential for a full description of the probability distribution. Equivalently, we need to fully specify the space on which f is defined.
Let Φ and Λ denote two random variables taking values in Ω1 = [math]\displaystyle{ \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] }[/math] and Ω2 = [−π, π], respectively. An event {Φ = φ, Λ = λ} determines a point on the sphere S(r) with radius r. We define the coordinate transform
- [math]\displaystyle{ \begin{align} x &= r \cos \varphi \cos \lambda \\ y &= r \cos \varphi \sin \lambda \\ z &= r \sin \varphi \end{align} }[/math]
for which we obtain the volume element
- [math]\displaystyle{ \omega_r(\varphi,\lambda) = \left\| {\partial (x,y,z) \over \partial \varphi} \times {\partial (x,y,z) \over \partial \lambda} \right\| = r^2 \cos \varphi \ . }[/math]
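This volume element can be checked numerically (a minimal sketch assuming NumPy; the evaluation point and the finite-difference step are illustrative) by forming the cross product of the two partial-derivative vectors of the embedding and comparing its norm against r² cos φ:

```python
import numpy as np

def embed(phi, lam, r=2.0):
    """Spherical coordinates (phi, lam) -> point on the sphere S(r)."""
    return np.array([r * np.cos(phi) * np.cos(lam),
                     r * np.cos(phi) * np.sin(lam),
                     r * np.sin(phi)])

def volume_element(phi, lam, r=2.0, h=1e-6):
    # Central-difference approximations of the partial derivatives.
    d_phi = (embed(phi + h, lam, r) - embed(phi - h, lam, r)) / (2 * h)
    d_lam = (embed(phi, lam + h, r) - embed(phi, lam - h, r)) / (2 * h)
    # Norm of the cross product = area element of the parametrization.
    return np.linalg.norm(np.cross(d_phi, d_lam))

phi0, lam0, r0 = 0.7, 1.2, 2.0
print(volume_element(phi0, lam0, r0), r0**2 * np.cos(phi0))  # the two agree
```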
Furthermore, if either φ or λ is fixed, we get the volume elements
- [math]\displaystyle{ \begin{align} \omega_r(\lambda) &= \left\| {\partial (x,y,z) \over \partial \varphi } \right\| = r \ , \quad\text{respectively} \\[3pt] \omega_r(\varphi) &= \left\| {\partial (x,y,z) \over \partial \lambda } \right\| = r \cos \varphi\ . \end{align} }[/math]
Let
- [math]\displaystyle{ \mu_{\Phi,\Lambda}(d\varphi, d\lambda) = f_{\Phi,\Lambda}(\varphi,\lambda) \omega_r(\varphi,\lambda) \, d\varphi \, d\lambda }[/math]
denote the joint measure on [math]\displaystyle{ \mathcal{B}(\Omega_1 \times \Omega_2) }[/math], which has a density [math]\displaystyle{ f_{\Phi,\Lambda} }[/math] with respect to [math]\displaystyle{ \omega_r(\varphi,\lambda) \, d\varphi \, d\lambda }[/math] and let
- [math]\displaystyle{ \begin{align} \mu_\Phi(d\varphi) &= \int_{\lambda \in \Omega_2} \mu_{\Phi,\Lambda}(d\varphi, d\lambda)\ ,\\ \mu_\Lambda (d\lambda) &= \int_{\varphi \in \Omega_1} \mu_{\Phi,\Lambda}(d\varphi, d\lambda)\ . \end{align} }[/math]
If we assume that the density [math]\displaystyle{ f_{\Phi,\Lambda} }[/math] is uniform, then
- [math]\displaystyle{ \begin{align} \mu_{\Phi \mid \Lambda}(d\varphi \mid \lambda) &= {\mu_{\Phi,\Lambda}(d\varphi, d\lambda) \over \mu_\Lambda(d\lambda)} = \frac{1}{2r} \omega_r(\varphi) \, d\varphi \ , \quad\text{and} \\[3pt] \mu_{\Lambda \mid \Phi}(d\lambda \mid \varphi) &= {\mu_{\Phi,\Lambda}(d\varphi, d\lambda) \over \mu_\Phi(d\varphi)} = \frac{1}{2r\pi} \omega_r(\lambda) \, d\lambda \ . \end{align} }[/math]
Hence, [math]\displaystyle{ \mu_{\Phi \mid \Lambda} }[/math] has a uniform density with respect to [math]\displaystyle{ \omega_r(\varphi) \, d\varphi }[/math] but not with respect to the Lebesgue measure. On the other hand, [math]\displaystyle{ \mu_{\Lambda \mid \Phi} }[/math] has a uniform density with respect to [math]\displaystyle{ \omega_r(\lambda) \, d\lambda }[/math] and the Lebesgue measure.
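That both conditional measures are indeed probability measures can be verified symbolically (a sketch assuming SymPy is available; the variable names are illustrative):

```python
import sympy as sp

phi, lam, r = sp.symbols('phi lam r', positive=True)

# Conditional densities in phi, resp. lam, from the derivation above:
# (1/2r) * omega_r(phi) and (1/(2 r pi)) * omega_r(lam).
mu_phi_given_lam = sp.Rational(1, 2) / r * (r * sp.cos(phi))
mu_lam_given_phi = 1 / (2 * r * sp.pi) * r

# Both integrate to 1 over their domains; only the first varies with phi,
# i.e. only the first is non-uniform with respect to Lebesgue measure.
print(sp.integrate(mu_phi_given_lam, (phi, -sp.pi/2, sp.pi/2)))  # 1
print(sp.integrate(mu_lam_given_phi, (lam, -sp.pi, sp.pi)))      # 1
```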
Proof of contradiction
Consider a random vector [math]\displaystyle{ (X,Y,Z) }[/math] that is uniformly distributed on the unit sphere [math]\displaystyle{ S^2 }[/math].
We begin by parametrizing the sphere with the usual spherical polar coordinates:
- [math]\displaystyle{ \begin{aligned} x &= \cos(\varphi) \cos (\theta) \\ y &= \cos(\varphi) \sin (\theta) \\ z &= \sin(\varphi) \end{aligned} }[/math]
where [math]\displaystyle{ -\frac{\pi}{2} \le \varphi \le \frac{\pi}{2} }[/math] and [math]\displaystyle{ -\pi \le \theta \le \pi }[/math].
We can define random variables [math]\displaystyle{ \Phi }[/math], [math]\displaystyle{ \Theta }[/math] as the values of [math]\displaystyle{ (X, Y, Z) }[/math] under the inverse of this parametrization, or more formally using the arctan2 function:
- [math]\displaystyle{ \begin{align} \Phi &= \arcsin(Z) \\ \Theta &= \arctan_2\left(\frac{Y}{\sqrt{1 - Z^2}}, \frac{X}{\sqrt{1 - Z^2}}\right) \end{align} }[/math]
Using the formulas for the surface areas of a spherical cap and of a spherical wedge, the surface area of a spherical cap wedge is given by
- [math]\displaystyle{ \operatorname{Area}(\Theta \le \theta, \Phi \le \varphi) = (1 + \sin(\varphi)) (\theta + \pi) }[/math]
Since [math]\displaystyle{ (X,Y,Z) }[/math] is uniformly distributed, the probability is proportional to the surface area, giving the joint cumulative distribution function
- [math]\displaystyle{ F_{\Phi, \Theta}(\varphi, \theta) = P(\Theta \le \theta, \Phi \le \varphi) = \frac{1}{4\pi}(1 + \sin(\varphi)) (\theta + \pi) }[/math]
The joint probability density function is then given by
- [math]\displaystyle{ f_{\Phi, \Theta}(\varphi, \theta) = \frac{\partial^2}{\partial \varphi \partial \theta} F_{\Phi, \Theta}(\varphi, \theta) = \frac{1}{4\pi} \cos(\varphi) }[/math]
Note that [math]\displaystyle{ \Phi }[/math] and [math]\displaystyle{ \Theta }[/math] are independent random variables.
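Both the mixed partial derivative and the independence claim can be verified symbolically, e.g. with SymPy (a sketch; independence is equivalent to the joint CDF factoring into the product of its marginal CDFs):

```python
import sympy as sp

phi, theta = sp.symbols('phi theta')

# Joint CDF from the cap-wedge area, normalized by the total area 4*pi.
F = (1 + sp.sin(phi)) * (theta + sp.pi) / (4 * sp.pi)

# The mixed partial derivative recovers the joint density.
f = sp.diff(F, phi, theta)
print(sp.simplify(f))  # cos(phi)/(4*pi)

# Marginal CDFs, obtained by pushing the other variable to its maximum.
F_phi = F.subs(theta, sp.pi)      # (1 + sin(phi))/2
F_theta = F.subs(phi, sp.pi / 2)  # (theta + pi)/(2*pi)

# F factors exactly, so Phi and Theta are independent.
print(sp.simplify(F - F_phi * F_theta))  # 0
```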
For simplicity, we will not calculate the full conditional distribution on a great circle, only the probability that the random vector lies in the first octant. That is to say, we will attempt to calculate the conditional probability [math]\displaystyle{ \mathbb{P}(A|B) }[/math] with
- [math]\displaystyle{ \begin{aligned} A &= \left\{ 0 \lt \Theta \lt \frac{\pi}{4} \right\} &&= \{ 0 \lt X \lt 1, 0 \lt Y \lt X \}\\ B &= \{ \Phi = 0 \} &&= \{ Z = 0 \} \end{aligned} }[/math]
We attempt to evaluate the conditional probability as a limit of conditioning on the events
- [math]\displaystyle{ B_\varepsilon = \{ | \Phi | \lt \varepsilon \} }[/math]
As [math]\displaystyle{ \Phi }[/math] and [math]\displaystyle{ \Theta }[/math] are independent, so are the events [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B_\varepsilon }[/math], therefore
- [math]\displaystyle{ P(A \mid B) \mathrel{\stackrel{?}{=}} \lim_{\varepsilon \to 0} \frac{P(A \cap B_\varepsilon)}{P(B_\varepsilon)} = \lim_{\varepsilon \to 0} P(A) = P \left(0 \lt \Theta \lt \frac{\pi}{4}\right) = \frac{1}{8}. }[/math]
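This limit can be checked by Monte Carlo simulation (a sketch assuming NumPy; uniform points on the sphere are obtained by normalizing Gaussian vectors, and the sample size and band widths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000_000

# Uniform point on the unit sphere via a normalized Gaussian vector.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x, y, z = v.T

phi = np.arcsin(z)
# arctan2 is scale-invariant in its positive arguments, so dividing
# by sqrt(1 - z^2) as in the text is unnecessary here.
theta = np.arctan2(y, x)

A = (theta > 0) & (theta < np.pi / 4)

# Conditional probability of A given |Phi| < eps, for shrinking eps.
for eps in (0.1, 0.03, 0.01):
    B_eps = np.abs(phi) < eps
    print(eps, A[B_eps].mean())  # stays near 1/8 = 0.125
```

Since A and B_ε are independent, the estimate is close to 1/8 for every ε, not just in the limit.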
Now we repeat the process with a different parametrization of the sphere:
- [math]\displaystyle{ \begin{align} x &= \sin(\varphi) \\ y &= \cos(\varphi) \sin(\theta) \\ z &= -\cos(\varphi) \cos(\theta) \end{align} }[/math]
This is equivalent to the previous parametrization rotated by 90 degrees around the y axis.
Define new random variables
- [math]\displaystyle{ \begin{align} \Phi' &= \arcsin(X) \\ \Theta' &= \arctan_2\left(\frac{Y}{\sqrt{1 - X^2}}, \frac{-Z}{\sqrt{1 - X^2}}\right). \end{align} }[/math]
Rotation is measure-preserving, so the joint density of [math]\displaystyle{ \Phi' }[/math] and [math]\displaystyle{ \Theta' }[/math] is the same as before:
- [math]\displaystyle{ f_{\Phi', \Theta'}(\varphi, \theta) = \frac{1}{4\pi} \cos(\varphi) }[/math].
Expressed in the new coordinates, the events A and B become:
- [math]\displaystyle{ \begin{align} A &= \left\{ 0 \lt \Theta \lt \frac{\pi}{4} \right\} &&= \{ 0 \lt X \lt 1,\ 0 \lt Y \lt X \} &&= \left\{ 0 \lt \Theta' \lt \pi,\ 0 \lt \Phi' \lt \frac{\pi}{2},\ \sin(\Theta') \lt \tan(\Phi') \right\} \\ B &= \{ \Phi = 0 \} &&= \{ Z = 0 \} &&= \left\{ \Theta' = -\frac{\pi}{2} \right\} \cup \left\{ \Theta' = \frac{\pi}{2} \right\}. \end{align} }[/math]
Attempting again to evaluate the conditional probability as a limit of conditioning on the events
- [math]\displaystyle{ B^\prime_\varepsilon = \left\{ \left|\Theta' + \frac{\pi}{2}\right| \lt \varepsilon \right\} \cup \left\{ \left|\Theta'-\frac{\pi}{2}\right| \lt \varepsilon \right\}. }[/math]
Using L'Hôpital's rule and differentiation under the integral sign:
- [math]\displaystyle{ \begin{align} P(A \mid B) &\mathrel{\stackrel{?}{=}} \lim_{\varepsilon \to 0} \frac{P(A \cap B^\prime_\varepsilon )}{P(B^\prime_\varepsilon )}\\ &= \lim_{\varepsilon \to 0} \frac{1}{\frac{4\varepsilon}{2\pi}}P\left( \frac{\pi}{2} - \varepsilon \lt \Theta' \lt \frac{\pi}{2} + \varepsilon,\ 0 \lt \Phi' \lt \frac{\pi}{2},\ \sin(\Theta') \lt \tan(\Phi') \right)\\ &= \frac{\pi}{2} \lim_{\varepsilon \to 0} \frac{\partial}{\partial \varepsilon} \int_{{\pi}/{2}-\varepsilon}^{{\pi}/{2}+\varepsilon} \int_0^{{\pi}/{2}} 1_{\sin(\theta) \lt \tan(\varphi)} f_{\Phi', \Theta'}(\varphi, \theta) \, \mathrm{d}\varphi \, \mathrm{d}\theta \\ &= \pi \int_0^{{\pi}/{2}} 1_{1 \lt \tan(\varphi)} f_{\Phi', \Theta'}\left(\varphi, \frac{\pi}{2}\right) \, \mathrm{d}\varphi \\ &= \pi \int_{\pi/4}^{\pi/2} \frac{1}{4 \pi} \cos(\varphi) \, \mathrm{d}\varphi \\ &= \frac{1}{4} \left( 1 - \frac{1}{\sqrt{2}} \right) \neq \frac{1}{8} \end{align} }[/math]
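The second limit can likewise be estimated by simulation (again a NumPy sketch with illustrative sample sizes; Θ′ = arctan2(Y, −Z) implements the rotated parametrization above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4_000_000

# Uniform point on the unit sphere via a normalized Gaussian vector.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x, y, z = v.T

theta = np.arctan2(y, x)
A = (theta > 0) & (theta < np.pi / 4)

# Rotated longitude: the great circle {Z = 0} is now the union of the
# two meridians Theta' = +pi/2 and Theta' = -pi/2.
theta_p = np.arctan2(y, -z)

target = 0.25 * (1 - 1 / np.sqrt(2))  # ~ 0.0732, the value derived above

# Condition on thin lunes around Theta' = +/- pi/2, for shrinking eps.
for eps in (0.1, 0.03, 0.01):
    B_eps = (np.abs(theta_p - np.pi / 2) < eps) | \
            (np.abs(theta_p + np.pi / 2) < eps)
    print(eps, A[B_eps].mean(), target)  # estimate approaches target, not 1/8
```

The two simulations condition on sequences of events shrinking to the very same null set {Z = 0}, yet yield different limits, matching the analytic computation.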
This shows that conditional probability given an event of probability zero is not well defined by a limiting procedure alone: different sequences of positive-probability events shrinking to the same null set can yield different limits, as explained in Conditional probability.
See also
- Disintegration theorem – Theorem in measure theory
Notes
1. Jaynes 2003, pp. 1514–1517
2. Originally Kolmogorov (1933), translated in Kolmogorov (1956). Sourced from Pollard (2002)
References
- Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 467–470. ISBN 0-521-59271-2.
- Kolmogorov, Andrey (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung (in German). Berlin: Julius Springer.
- Translation: Kolmogorov, Andrey (1956). "Chapter V, §2. Explanation of a Borel Paradox". Foundations of the Theory of Probability (2nd ed.). New York: Chelsea. pp. 50–51. ISBN 0-8284-0023-7. http://www.mathematik.com/Kolmogorov/0029.html. Retrieved 2009-03-12.
- Pollard, David (2002). "Chapter 5. Conditioning, Example 17.". A User's Guide to Measure Theoretic Probability. Cambridge University Press. pp. 122–123. ISBN 0-521-00289-3.
- Mosegaard, Klaus; Tarantola, Albert (2002). "16 Probabilistic approach to inverse problems". International Handbook of Earthquake and Engineering Seismology. International Geophysics. 81. pp. 237–265. doi:10.1016/S0074-6142(02)80219-4. ISBN 9780124406520.
Original source: https://en.wikipedia.org/wiki/Borel–Kolmogorov paradox.