Kolmogorov's two-series theorem

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.

Statement of the theorem

Let [math]\displaystyle{ \left( X_n \right)_{n=1}^{\infty} }[/math] be independent random variables with expected values [math]\displaystyle{ \mathbf{E} \left[ X_n \right] = \mu_n }[/math] and variances [math]\displaystyle{ \mathbf{Var} \left( X_n \right) = \sigma_n^2 }[/math], such that [math]\displaystyle{ \sum_{n=1}^{\infty} \mu_n }[/math] converges in ℝ and [math]\displaystyle{ \sum_{n=1}^{\infty} \sigma_n^2 }[/math] converges in ℝ. Then [math]\displaystyle{ \sum_{n=1}^{\infty} X_n }[/math] converges in ℝ almost surely.
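
As a concrete illustration (an example added for this edition, not part of the original statement), take [math]\displaystyle{ X_n = \varepsilon_n / n }[/math], where the [math]\displaystyle{ \varepsilon_n }[/math] are independent random signs taking the values [math]\displaystyle{ \pm 1 }[/math] with probability 1/2 each. Then [math]\displaystyle{ \mu_n = 0 }[/math] and [math]\displaystyle{ \sum_{n=1}^{\infty} \sigma_n^2 = \sum_{n=1}^{\infty} 1/n^2 \lt \infty }[/math], so the theorem guarantees that [math]\displaystyle{ \sum_{n=1}^{\infty} \varepsilon_n / n }[/math] converges almost surely. The following Python sketch simulates a few sample paths of the partial sums; the settling of each path is consistent with the theorem, although a simulation is of course not a proof.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative simulation (hypothetical example, not from the article):
# X_n = eps_n / n with independent random signs eps_n, so E[X_n] = 0 and
# sum_n Var(X_n) = sum_n 1/n^2 < infinity. Kolmogorov's two-series
# theorem then says S_N = X_1 + ... + X_N converges almost surely.

rng = np.random.default_rng(seed=0)
N = 100_000                                   # terms per sample path
n = np.arange(1, N + 1)

for path in range(5):
    signs = rng.choice([-1.0, 1.0], size=N)   # random signs eps_n
    partial_sums = np.cumsum(signs / n)       # S_1, S_2, ..., S_N
    tail = partial_sums[-N // 10:]            # last 10% of each path
    print(f"path {path}: S_N = {partial_sums[-1]:+.6f}, "
          f"tail fluctuation = {np.ptp(tail):.2e}")
</syntaxhighlight>

Each path settles near a path-dependent limit, and the fluctuation over the tail of the path shrinks as [math]\displaystyle{ N }[/math] grows.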

Proof

Assume without loss of generality that [math]\displaystyle{ \mu_n = 0 }[/math] for all [math]\displaystyle{ n }[/math]; this is permissible because [math]\displaystyle{ \sum_{n=1}^{\infty} \mu_n }[/math] converges by assumption, so [math]\displaystyle{ \sum_{n=1}^{\infty} X_n }[/math] converges if and only if [math]\displaystyle{ \sum_{n=1}^{\infty} \left( X_n - \mu_n \right) }[/math] does. Set [math]\displaystyle{ S_N = \sum_{n=1}^N X_n }[/math]. We will show that [math]\displaystyle{ \limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N = 0 }[/math] with probability 1, which means that the sequence [math]\displaystyle{ \left( S_N \right)_{N=1}^{\infty} }[/math] of partial sums converges almost surely.

For every [math]\displaystyle{ m \in \mathbb{N} }[/math], [math]\displaystyle{ \limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N = \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \leq 2 \max_{k \in \mathbb{N} } \left| \sum_{i=1}^{k} X_{m+i} \right| }[/math]
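
In more detail (an elaboration of this step, with the maximum over [math]\displaystyle{ k \in \mathbb{N} }[/math] understood as a supremum): [math]\displaystyle{ S_m }[/math] is fixed as [math]\displaystyle{ N }[/math] varies, so subtracting it shifts the limit superior and the limit inferior by the same amount and leaves their difference unchanged. Moreover, for every [math]\displaystyle{ N \gt m }[/math], [math]\displaystyle{ \left| S_N - S_m \right| = \left| \sum_{i=1}^{N-m} X_{m+i} \right| \leq \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right|, }[/math] so both [math]\displaystyle{ \limsup_{N \to \infty} \left( S_N - S_m \right) }[/math] and [math]\displaystyle{ \liminf_{N \to \infty} \left( S_N - S_m \right) }[/math] lie in an interval of length at most twice this maximum, which gives the stated bound.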

Thus, for every [math]\displaystyle{ m \in \mathbb{N} }[/math] and [math]\displaystyle{ \epsilon \gt 0 }[/math], [math]\displaystyle{ \begin{align} \mathbb{P} \left( \limsup_{N \to \infty} \left( S_N - S_m \right) - \liminf_{N \to \infty} \left( S_N - S_m \right) \geq \epsilon \right) &\leq \mathbb{P} \left( 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \epsilon \right) \\ &= \mathbb{P} \left( \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \geq \frac{\epsilon}{2} \right) \\ &\leq \limsup_{N \to \infty} 4\epsilon^{-2} \sum_{i=m+1}^{m+N} \sigma_i^2 \\ &= 4\epsilon^{-2} \lim_{N \to \infty} \sum_{i=m+1}^{m+N} \sigma_i^2. \end{align} }[/math]

Here the first inequality follows from the bound established in the previous step, and the second inequality is an application of Kolmogorov's inequality combined with the continuity of probability measures as [math]\displaystyle{ N \to \infty }[/math].
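
For reference (stated here for convenience), the form of Kolmogorov's inequality used is the standard one: if [math]\displaystyle{ Y_1, \ldots, Y_N }[/math] are independent random variables with zero means and finite variances, then for every [math]\displaystyle{ \lambda \gt 0 }[/math], [math]\displaystyle{ \mathbb{P} \left( \max_{1 \leq k \leq N} \left| \sum_{i=1}^{k} Y_i \right| \geq \lambda \right) \leq \frac{1}{\lambda^2} \sum_{i=1}^{N} \mathbf{Var} \left( Y_i \right). }[/math] Applying it with [math]\displaystyle{ Y_i = X_{m+i} }[/math] and [math]\displaystyle{ \lambda = \epsilon / 2 }[/math] yields the factor [math]\displaystyle{ 4 \epsilon^{-2} }[/math] above, and letting [math]\displaystyle{ N \to \infty }[/math] extends the bound to the maximum over all [math]\displaystyle{ k \in \mathbb{N} }[/math].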

By the assumption that [math]\displaystyle{ \sum_{n=1}^{\infty} \sigma_n^2 }[/math] converges, the last term tends to 0 as [math]\displaystyle{ m \to \infty }[/math], for every [math]\displaystyle{ \epsilon \gt 0 }[/math]. Since the left-hand side equals [math]\displaystyle{ \mathbb{P} \left( \limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N \geq \epsilon \right) }[/math] and does not depend on [math]\displaystyle{ m }[/math], this probability is 0 for every [math]\displaystyle{ \epsilon \gt 0 }[/math].
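
To complete the proof (a concluding step spelled out here), take a union over [math]\displaystyle{ \epsilon = 1/k }[/math], [math]\displaystyle{ k \in \mathbb{N} }[/math]: [math]\displaystyle{ \mathbb{P} \left( \limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N \gt 0 \right) \leq \sum_{k=1}^{\infty} \mathbb{P} \left( \limsup_{N \to \infty} S_N - \liminf_{N \to \infty} S_N \geq \frac{1}{k} \right) = 0, }[/math] so [math]\displaystyle{ \limsup_{N \to \infty} S_N = \liminf_{N \to \infty} S_N }[/math] almost surely. The common value is almost surely finite, because [math]\displaystyle{ S_m }[/math] is almost surely finite and [math]\displaystyle{ \left| S_N - S_m \right| }[/math] is bounded uniformly in [math]\displaystyle{ N }[/math] by the maximum above, which is finite almost surely by the same Kolmogorov bound. Hence [math]\displaystyle{ S_N }[/math] converges in [math]\displaystyle{ \mathbb{R} }[/math] almost surely, i.e. [math]\displaystyle{ \sum_{n=1}^{\infty} X_n }[/math] converges almost surely.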
