Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the maximum of the partial sums of a finite collection of independent random variables exceeds some specified value.

Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

[math]\displaystyle{ \Pr \left(\max_{1\leq k\leq n} | S_k |\geq\lambda\right)\leq \frac{1}{\lambda^2} \operatorname{Var} [S_n] \equiv \frac{1}{\lambda^2}\sum_{k=1}^n \operatorname{Var}[X_k]=\frac{1}{\lambda^2}\sum_{k=1}^{n}\text{E}[X_k^2], }[/math]

where Sk = X1 + ... + Xk.

The convenience of this result is that it bounds the worst-case deviation of a random walk over an entire time interval using only the variance of its value at the end of the interval.
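As an illustration (not part of the original statement), the bound can be compared against a Monte Carlo estimate. The following sketch assumes NumPy is available and uses arbitrarily chosen values for n, lam and num_trials; it simulates partial sums of independent standard normal variables and compares the empirical probability of a large maximal deviation with the bound Var[S_n]/λ².

    import numpy as np

    # Illustrative parameters (arbitrary choices, not from the article).
    n = 50            # number of summands X_1, ..., X_n
    lam = 10.0        # threshold lambda
    num_trials = 100_000

    rng = np.random.default_rng(0)

    # X_k i.i.d. standard normal, so E[X_k] = 0 and Var[X_k] = 1.
    X = rng.standard_normal((num_trials, n))
    S = np.cumsum(X, axis=1)                       # partial sums S_1, ..., S_n per trial

    # Empirical probability that the running maximum of |S_k| reaches lambda.
    p_hat = np.mean(np.max(np.abs(S), axis=1) >= lam)

    # Kolmogorov's bound: Var[S_n] / lambda^2 = n / lambda^2 here.
    bound = n / lam**2

    print(p_hat, bound)   # p_hat should not exceed bound (up to sampling error)

With these values the bound equals 0.5, while the simulated probability is typically smaller, illustrating that the inequality need not be tight.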

Proof

The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence [math]\displaystyle{ S_1, S_2, \dots, S_n }[/math] (with [math]\displaystyle{ S_0 = 0 }[/math]) is a martingale. Define [math]\displaystyle{ (Z_i)_{i=0}^n }[/math] as follows. Let [math]\displaystyle{ Z_0 = 0 }[/math], and

[math]\displaystyle{ Z_{i+1} = \left\{ \begin{array}{ll} S_{i+1} & \text{ if } \displaystyle \max_{1 \leq j \leq i} | S_j | \lt \lambda \\ Z_i & \text{ otherwise} \end{array} \right. }[/math]

for [math]\displaystyle{ i = 0, 1, \dots, n-1 }[/math]. Then [math]\displaystyle{ (Z_i)_{i=0}^n }[/math] is the process [math]\displaystyle{ (S_i) }[/math] stopped at the first index at which [math]\displaystyle{ |S_j| \geq \lambda }[/math], and it is therefore also a martingale.
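To make the construction concrete, here is a minimal sketch (illustrative only; it assumes NumPy, and the helper name stopped_path is ours, not the article's) that builds a realized path of Z from a realized path of the partial sums S by freezing it at the first index where |S_j| >= lam.

    import numpy as np

    def stopped_path(S, lam):
        """Freeze the path S at the first index where |S_j| >= lam;
        if the threshold is never reached, the path is returned unchanged."""
        Z = np.array(S, dtype=float)
        hits = np.flatnonzero(np.abs(Z) >= lam)
        if hits.size > 0:
            tau = hits[0]            # first index with |S_tau| >= lam
            Z[tau + 1:] = Z[tau]     # Z stays at S_tau from then on
        return Z

    # Example: the path crosses the threshold lam = 3 at its third step.
    S = [1.0, -2.0, 3.5, 1.0, -4.0]
    print(stopped_path(S, lam=3.0))  # [ 1.  -2.   3.5  3.5  3.5]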

For any martingale [math]\displaystyle{ M_i }[/math] with [math]\displaystyle{ M_0 = 0 }[/math], we have that

[math]\displaystyle{ \begin{align} \sum_{i=1}^n \text{E}[ (M_i - M_{i-1})^2] &= \sum_{i=1}^n \text{E}[ M_i^2 - 2 M_i M_{i-1} + M_{i-1}^2 ] \\ &= \sum_{i=1}^n \text{E}\left[ M_i^2 - 2 (M_{i-1} + M_{i} - M_{i-1}) M_{i-1} + M_{i-1}^2 \right] \\ &= \sum_{i=1}^n \left( \text{E}\left[ M_i^2 - M_{i-1}^2 \right] - 2\,\text{E}\left[ M_{i-1} (M_{i}-M_{i-1})\right] \right)\\ &= \text{E}[M_n^2] - \text{E}[M_0^2] = \text{E}[M_n^2], \end{align} }[/math]

since each cross term vanishes: [math]\displaystyle{ \text{E}\left[ M_{i-1} (M_{i}-M_{i-1})\right] = \text{E}\big[ M_{i-1}\, \text{E}[ M_{i}-M_{i-1} \mid M_1, \dots, M_{i-1}] \big] = 0 }[/math] by the martingale property, and the remaining terms telescope.
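In other words, the increments of a martingale started at 0 are uncorrelated, so their second moments add up to E[M_n²]. A quick numerical illustration (a sketch under the assumption that NumPy is available, using partial sums of standard normal variables as the martingale):

    import numpy as np

    rng = np.random.default_rng(1)
    num_trials, n = 200_000, 20

    # M_i = X_1 + ... + X_i with X_k i.i.d. standard normal is a martingale, M_0 = 0.
    X = rng.standard_normal((num_trials, n))
    M = np.cumsum(X, axis=1)

    # Increments M_i - M_{i-1}, with M_0 = 0 prepended.
    increments = np.diff(np.hstack([np.zeros((num_trials, 1)), M]), axis=1)

    lhs = (increments ** 2).sum(axis=1).mean()   # estimates sum_i E[(M_i - M_{i-1})^2]
    rhs = (M[:, -1] ** 2).mean()                 # estimates E[M_n^2]
    print(lhs, rhs)                              # both close to n = 20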

Applying this identity to the martingales [math]\displaystyle{ (Z_i)_{i=0}^n }[/math] and [math]\displaystyle{ (S_i)_{i=0}^n }[/math], we have

[math]\displaystyle{ \begin{align} \Pr\left( \max_{1 \leq i \leq n} |S_i| \geq \lambda\right) &= \Pr\left(|Z_n| \geq \lambda\right) \\ &\leq \frac{1}{\lambda^2} \text{E}[Z_n^2] =\frac{1}{\lambda^2} \sum_{i=1}^n \text{E}[(Z_i - Z_{i-1})^2] \\ &\leq \frac{1}{\lambda^2} \sum_{i=1}^n \text{E}[(S_i - S_{i-1})^2] =\frac{1}{\lambda^2} \text{E}[S_n^2] = \frac{1}{\lambda^2} \text{Var}[S_n], \end{align} }[/math]

where the equality in the first line holds because [math]\displaystyle{ |Z_n| \geq \lambda }[/math] exactly when some [math]\displaystyle{ |S_i| \geq \lambda }[/math] (the stopped process retains the first value of [math]\displaystyle{ S }[/math] that reaches the threshold), the first inequality follows from Chebyshev's inequality, and the second inequality holds because each increment [math]\displaystyle{ Z_i - Z_{i-1} }[/math] equals either [math]\displaystyle{ S_i - S_{i-1} }[/math] or [math]\displaystyle{ 0 }[/math].
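The chain of (in)equalities can likewise be examined numerically. The following sketch (again an illustration with assumed NumPy and arbitrary parameters) stops each simulated path at the first index where |S_j| >= lam, and checks that Pr(|Z_n| >= lam) coincides with Pr(max |S_i| >= lam) while E[Z_n²] stays below E[S_n²].

    import numpy as np

    rng = np.random.default_rng(2)
    num_trials, n, lam = 100_000, 50, 10.0

    # Partial sums of i.i.d. standard normals, so Var[S_n] = n.
    X = rng.standard_normal((num_trials, n))
    S = np.cumsum(X, axis=1)

    # Stopping index: first j with |S_j| >= lam, or the last index if never reached.
    hit = np.abs(S) >= lam
    stop_idx = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
    Z_n = S[np.arange(num_trials), stop_idx]          # value of the stopped process at time n

    print(np.mean(np.abs(Z_n) >= lam))                # Pr(|Z_n| >= lam)
    print(np.mean(np.max(np.abs(S), axis=1) >= lam))  # Pr(max |S_i| >= lam), the same event
    print((Z_n ** 2).mean(), (S[:, -1] ** 2).mean())  # E[Z_n^2] <= E[S_n^2], about n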


This inequality was generalized by Hájek and Rényi in 1955.

References

  • Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons, Inc. ISBN 0-471-00710-2. (Theorem 22.4)
  • Feller, William (1968). An Introduction to Probability Theory and Its Applications, Vol. 1 (Third ed.). New York: John Wiley & Sons, Inc. pp. xviii+509. ISBN 0-471-25708-7.
  • Kahane, Jean-Pierre (1985). Some Random Series of Functions (Second ed.). Cambridge: Cambridge University Press. pp. 29–30.