Chain rule (probability)


In probability theory, the chain rule[1] (also called the general product rule[2][3]) describes how to calculate the probability of the intersection of events, which need not be independent, or the joint distribution of random variables, using conditional probabilities. The rule is notably used in the context of discrete stochastic processes and in applications such as the study of Bayesian networks, which describe a probability distribution in terms of conditional probabilities.

Chain rule for events

Two events

For two events [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], the chain rule states that

[math]\displaystyle{ \mathbb P(A \cap B) = \mathbb P(B \mid A) \mathbb P(A) }[/math],

where [math]\displaystyle{ \mathbb P(B \mid A) }[/math] denotes the conditional probability of [math]\displaystyle{ B }[/math] given [math]\displaystyle{ A }[/math].

Example

Urn A contains 1 black ball and 2 white balls, while Urn B contains 1 black ball and 3 white balls. Suppose we pick an urn at random and then select a ball from that urn. Let event [math]\displaystyle{ A }[/math] be choosing the first urn, so that [math]\displaystyle{ \mathbb P(A) = \mathbb P(\overline{A}) = 1/2 }[/math], where [math]\displaystyle{ \overline A }[/math] is the complementary event of [math]\displaystyle{ A }[/math]. Let event [math]\displaystyle{ B }[/math] be drawing a white ball. The probability of drawing a white ball, given that we have chosen the first urn, is [math]\displaystyle{ \mathbb P(B|A) = 2/3. }[/math] The intersection [math]\displaystyle{ A \cap B }[/math] then describes choosing the first urn and drawing a white ball from it. The probability can be calculated by the chain rule as follows:

[math]\displaystyle{ \mathbb P(A \cap B) = \mathbb P(B \mid A) \mathbb P(A) = \frac 23 \cdot \frac 12 = \frac 13. }[/math]
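
The same value can be checked numerically. The following is a minimal Python sketch of the urn experiment, assuming the urn contents above; the variable names are illustrative, not part of the original example.

```python
import random

# Monte Carlo sketch of the urn example: pick one of the two urns uniformly
# at random, then draw one ball from it, and estimate P(A ∩ B), i.e. the
# probability of choosing Urn A and drawing a white ball from it.
URNS = {"A": ["black", "white", "white"],           # Urn A: 1 black, 2 white
        "B": ["black", "white", "white", "white"]}  # Urn B: 1 black, 3 white

trials = 100_000
hits = 0
for _ in range(trials):
    urn = random.choice(["A", "B"])     # P(A) = P(not A) = 1/2
    ball = random.choice(URNS[urn])     # draw uniformly from the chosen urn
    if urn == "A" and ball == "white":  # the event A ∩ B
        hits += 1

print(hits / trials)  # ≈ 1/3
```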

Finitely many events

For events [math]\displaystyle{ A_1,\ldots,A_n }[/math] whose intersection does not have probability zero, the chain rule states

[math]\displaystyle{ \begin{align} \mathbb P\left(A_1 \cap A_2 \cap \ldots \cap A_n\right) &= \mathbb P\left(A_n \mid A_1 \cap \ldots \cap A_{n-1}\right) \mathbb P\left(A_1 \cap \ldots \cap A_{n-1}\right) \\ &= \mathbb P\left(A_n \mid A_1 \cap \ldots \cap A_{n-1}\right) \mathbb P\left(A_{n-1} \mid A_1 \cap \ldots \cap A_{n-2}\right) \mathbb P\left(A_1 \cap \ldots \cap A_{n-2}\right) \\ &= \mathbb P\left(A_n \mid A_1 \cap \ldots \cap A_{n-1}\right) \mathbb P\left(A_{n-1} \mid A_1 \cap \ldots \cap A_{n-2}\right) \cdot \ldots \cdot \mathbb P(A_3 \mid A_1 \cap A_2) \mathbb P(A_2 \mid A_1) \mathbb P(A_1)\\ &= \mathbb P(A_1) \mathbb P(A_2 \mid A_1) \mathbb P(A_3 \mid A_1 \cap A_2) \cdot \ldots \cdot \mathbb P(A_n \mid A_1 \cap \dots \cap A_{n-1})\\ &= \prod_{k=1}^n \mathbb P(A_k \mid A_1 \cap \dots \cap A_{k-1})\\ &= \prod_{k=1}^n \mathbb P\left(A_k \,\Bigg|\, \bigcap_{j=1}^{k-1} A_j\right). \end{align} }[/math]
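
The telescoping identity above can be verified by exact enumeration. The following is a minimal Python sketch, assuming a small uniform sample space and randomly chosen events; all names are illustrative, and the convention [math]\displaystyle{ \mathbb P(A \mid B) = 0 }[/math] for [math]\displaystyle{ \mathbb P(B) = 0 }[/math] from the theorem section below is used so that the check also covers degenerate cases.

```python
import random
from fractions import Fraction

# Verify the general product rule on a small finite sample space with the
# uniform measure, using exact rational arithmetic.
omega = set(range(12))                                             # {0, ..., 11}
events = [set(random.sample(sorted(omega), 8)) for _ in range(3)]  # A_1, A_2, A_3

def prob(event):
    """P under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

def cond_prob(a, b):
    """P(a | b), with the convention P(a | b) = 0 when P(b) = 0."""
    return Fraction(len(a & b), len(b)) if b else Fraction(0)

# Left-hand side: probability of the full intersection A_1 ∩ A_2 ∩ A_3.
lhs = prob(omega.intersection(*events))

# Right-hand side: product of the conditionals P(A_k | A_1 ∩ ... ∩ A_{k-1}).
rhs = Fraction(1)
running = omega                      # the empty intersection is the whole space
for a in events:
    rhs *= cond_prob(a, running)
    running = running & a

assert lhs == rhs
print(lhs, "=", rhs)
```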

Example 1

For [math]\displaystyle{ n=4 }[/math], i.e. four events, the chain rule reads

[math]\displaystyle{ \begin{align} \mathbb P(A_1 \cap A_2 \cap A_3 \cap A_4) &= \mathbb P(A_4 \mid A_3 \cap A_2 \cap A_1)\mathbb P(A_3 \cap A_2 \cap A_1) \\ &= \mathbb P(A_4 \mid A_3 \cap A_2 \cap A_1)\mathbb P(A_3 \mid A_2 \cap A_1)\mathbb P(A_2 \cap A_1) \\ &= \mathbb P(A_4 \mid A_3 \cap A_2 \cap A_1)\mathbb P(A_3 \mid A_2 \cap A_1)\mathbb P(A_2 \mid A_1)\mathbb P(A_1) \end{align} }[/math].

Example 2

We randomly draw 4 cards without replacement from a standard deck of 52 cards. What is the probability that we have picked 4 aces?

First, we set [math]\displaystyle{ A_n := \left\{ \text{draw an ace in the } n^{\text{th}} \text{ try} \right\} }[/math]. Obviously, we get the following probabilities

[math]\displaystyle{ \mathbb P(A_1) = \frac 4{52}, \qquad \mathbb P(A_2 \mid A_1) = \frac 3{51}, \qquad \mathbb P(A_3 \mid A_1 \cap A_2) = \frac 2{50}, \qquad \mathbb P(A_4 \mid A_1 \cap A_2 \cap A_3) = \frac 1{49} }[/math].

Applying the chain rule,

[math]\displaystyle{ \mathbb P(A_1 \cap A_2 \cap A_3 \cap A_4) = \frac 4{52} \cdot \frac 3{51} \cdot \frac 2{50} \cdot \frac 1{49} }[/math].
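
This product can be cross-checked against the direct combinatorial count: exactly one of the [math]\displaystyle{ \binom{52}{4} }[/math] equally likely 4-card hands consists of all four aces. A minimal Python sketch:

```python
from fractions import Fraction
from math import comb

# Chain-rule product for drawing four aces in a row without replacement.
p_chain = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50) * Fraction(1, 49)

# Direct count: one favourable hand out of C(52, 4) equally likely hands.
assert p_chain == Fraction(1, comb(52, 4))
print(p_chain)  # 1/270725
```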

Statement of the theorem and proof

Let [math]\displaystyle{ (\Omega, \mathcal A, \mathbb P) }[/math] be a probability space. Recall that the conditional probability of an event [math]\displaystyle{ A \in \mathcal A }[/math] given an event [math]\displaystyle{ B \in \mathcal A }[/math] is defined as

[math]\displaystyle{ \begin{align} \mathbb P(A \mid B) := \begin{cases} \frac{\mathbb P(A \cap B)}{\mathbb P(B)}, & \mathbb P(B) \gt 0,\\ 0, & \mathbb P(B) = 0. \end{cases} \end{align} }[/math]

Then we have the following theorem.

Chain rule — Let [math]\displaystyle{ (\Omega, \mathcal A, \mathbb P) }[/math] be a probability space. Let [math]\displaystyle{ A_1, \ldots, A_n \in \mathcal A }[/math]. Then

[math]\displaystyle{ \begin{align} \mathbb P\left(A_1 \cap A_2 \cap \ldots \cap A_n\right) &= \mathbb P(A_1) \mathbb P(A_2 \mid A_1) \mathbb P(A_3 \mid A_1 \cap A_2) \cdot \ldots \cdot \mathbb P(A_n \mid A_1 \cap \dots \cap A_{n-1})\\ &= \mathbb P(A_1) \prod_{j=2}^n \mathbb P(A_j \mid A_1 \cap \dots \cap A_{j-1}). \end{align} }[/math]
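
The theorem can be proved by induction on [math]\displaystyle{ n }[/math]. For [math]\displaystyle{ n = 1 }[/math] there is nothing to show. For the induction step, suppose first that [math]\displaystyle{ \mathbb P(A_1 \cap \ldots \cap A_{n-1}) \gt 0 }[/math]. Then the definition of conditional probability gives

[math]\displaystyle{ \mathbb P\left(A_1 \cap \ldots \cap A_n\right) = \mathbb P\left(A_n \mid A_1 \cap \ldots \cap A_{n-1}\right) \mathbb P\left(A_1 \cap \ldots \cap A_{n-1}\right), }[/math]

and applying the induction hypothesis to the last factor yields the claimed product. If instead [math]\displaystyle{ \mathbb P(A_1 \cap \ldots \cap A_{n-1}) = 0 }[/math], both sides vanish: the left-hand side because [math]\displaystyle{ A_1 \cap \ldots \cap A_n \subseteq A_1 \cap \ldots \cap A_{n-1} }[/math], and the right-hand side because, for the smallest [math]\displaystyle{ j }[/math] with [math]\displaystyle{ \mathbb P(A_1 \cap \ldots \cap A_j) = 0 }[/math], the corresponding factor is zero (it equals [math]\displaystyle{ \mathbb P(A_1) }[/math] if [math]\displaystyle{ j = 1 }[/math], and [math]\displaystyle{ \mathbb P(A_j \mid A_1 \cap \ldots \cap A_{j-1}) = 0 }[/math] otherwise, by the convention above).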

Chain rule for discrete random variables

Two random variables

For two discrete random variables [math]\displaystyle{ X,Y }[/math], we use the events [math]\displaystyle{ A := \{X = x\} }[/math] and [math]\displaystyle{ B := \{Y = y\} }[/math] in the definition above, and find the joint distribution as

[math]\displaystyle{ \mathbb P(X = x,Y = y) = \mathbb P(X = x\mid Y = y) \mathbb P(Y = y), }[/math]

or

[math]\displaystyle{ \mathbb P_{(X,Y)}(x,y) = \mathbb P_{X \mid Y}(x\mid y) \mathbb P_Y(y), }[/math]

where [math]\displaystyle{ \mathbb P_X(x) := \mathbb P(X = x) }[/math] is the probability distribution of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ \mathbb P_{X \mid Y}(x\mid y) }[/math] is the conditional probability distribution of [math]\displaystyle{ X }[/math] given [math]\displaystyle{ Y }[/math].
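
A minimal Python sketch of this two-variable factorisation, reusing the urn example from above with [math]\displaystyle{ Y }[/math] as the urn choice and [math]\displaystyle{ X }[/math] as the colour drawn (the dictionary layout and names are illustrative):

```python
from fractions import Fraction

# P(X = x, Y = y) = P(X = x | Y = y) * P(Y = y), with Y the urn
# (0 = Urn A, 1 = Urn B) and X the colour of the ball drawn.
p_Y = {0: Fraction(1, 2), 1: Fraction(1, 2)}   # marginal distribution of Y

p_X_given_Y = {                                # conditional P(X = x | Y = y)
    ("white", 0): Fraction(2, 3), ("black", 0): Fraction(1, 3),
    ("white", 1): Fraction(3, 4), ("black", 1): Fraction(1, 4),
}

# Joint distribution via the chain rule.
p_XY = {(x, y): p * p_Y[y] for (x, y), p in p_X_given_Y.items()}

assert sum(p_XY.values()) == 1                 # a valid joint pmf
print(p_XY[("white", 0)])                      # 1/3, as in the event example
```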

Finitely many random variables

Let [math]\displaystyle{ X_1, \ldots , X_n }[/math] be random variables and [math]\displaystyle{ x_1, \dots, x_n \in \mathbb R }[/math]. By the definition of the conditional probability,

[math]\displaystyle{ \mathbb P\left(X_n=x_n, \ldots , X_1=x_1\right) = \mathbb P\left(X_n=x_n | X_{n-1}=x_{n-1}, \ldots , X_1=x_1\right) \mathbb P\left(X_{n-1}=x_{n-1}, \ldots , X_1=x_1\right) }[/math]

and using the chain rule, where we set [math]\displaystyle{ A_k := \{X_k = x_k\} }[/math], we can find the joint distribution as

[math]\displaystyle{ \begin{align} \mathbb P\left(X_1 = x_1, \ldots, X_n = x_n\right) &= \mathbb P\left(X_n = x_n \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1}\right) \mathbb P\left(X_1 = x_1, \ldots, X_{n-1} = x_{n-1}\right) \\ &= \mathbb P(X_1 = x_1) \mathbb P(X_2 = x_2 \mid X_1 = x_1) \mathbb P(X_3 = x_3 \mid X_1 = x_1, X_2 = x_2) \cdot \ldots \\ &\qquad \cdot \mathbb P(X_n = x_n \mid X_1 = x_1, \dots, X_{n-1} = x_{n-1}). \end{align} }[/math]

Example

For [math]\displaystyle{ n=3 }[/math], i.e. three random variables, the chain rule reads

[math]\displaystyle{ \begin{align} \mathbb P_{(X_1,X_2,X_3)}(x_1,x_2,x_3) &= \mathbb P(X_1=x_1, X_2 = x_2, X_3 = x_3)\\ &= \mathbb P(X_3=x_3 \mid X_2 = x_2, X_1 = x_1) \mathbb P(X_2 = x_2, X_1 = x_1) \\ &= \mathbb P(X_3=x_3 \mid X_2 = x_2, X_1 = x_1) \mathbb P(X_2 = x_2 \mid X_1 = x_1) \mathbb P(X_1 = x_1) \\ &= \mathbb P_{X_3\mid X_2, X_1}(x_3 \mid x_2, x_1) \mathbb P_{X_2\mid X_1}(x_2 \mid x_1) \mathbb P_{X_1}(x_1). \end{align} }[/math]
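
To see the factorisation concretely, the following Python sketch builds a random joint distribution on [math]\displaystyle{ \{0,1\}^3 }[/math] and checks the three-factor identity above entry by entry; all names are illustrative.

```python
import itertools
import random
from fractions import Fraction

# A random joint pmf on {0,1}^3 with strictly positive rational weights,
# so that every conditional probability below is well defined.
support = list(itertools.product([0, 1], repeat=3))
weights = [random.randint(1, 9) for _ in support]
joint = {xs: Fraction(w, sum(weights)) for xs, w in zip(support, weights)}

def marginal(fixed):
    """P(X_{i+1} = v for every (i, v) pair in `fixed`); indices are 0-based."""
    return sum(p for xs, p in joint.items()
               if all(xs[i] == v for i, v in fixed))

for x1, x2, x3 in support:
    p1 = marginal([(0, x1)])                                 # P(X1 = x1)
    p2 = marginal([(0, x1), (1, x2)]) / p1                   # P(X2 = x2 | X1 = x1)
    p3 = joint[(x1, x2, x3)] / marginal([(0, x1), (1, x2)])  # P(X3 = x3 | X1, X2)
    assert joint[(x1, x2, x3)] == p1 * p2 * p3               # chain rule
print("chain rule verified on all", len(support), "points")
```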


References

  1. Schilling, René L. (2021). Measure, Integral, Probability & Processes - Probab(ilistical)ly the Theoretical Minimum. Technische Universität Dresden, Germany. p. 136ff. ISBN 979-8-5991-0488-9. 
  2. Schum, David A. (1994). The Evidential Foundations of Probabilistic Reasoning. Northwestern University Press. p. 49. ISBN 978-0-8101-1821-8. 
  3. Klugh, Henry E. (2013). Statistics: The Essentials for Research (3rd ed.). Psychology Press. p. 149. ISBN 978-1-134-92862-0.