Concentration inequality

Short description: Mathematical inequality explaining concentration of random variables

In probability theory, concentration inequalities provide mathematical bounds on the probability of a random variable deviating from some value (typically, its expected value).

The law of large numbers of classical probability theory states that sums of independent random variables, under mild conditions, concentrate around their expectation with a high probability. Such sums are the most basic examples of random variables concentrated around their mean.

Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.[citation needed]

Markov's inequality

Main page: Markov's inequality

Let [math]\displaystyle{ X }[/math] be a random variable that is non-negative (almost surely). Then, for every constant [math]\displaystyle{ a \gt 0 }[/math],

[math]\displaystyle{ \Pr(X \geq a) \leq \frac{\operatorname{E}(X)}{a}. }[/math]

Note the following extension to Markov's inequality: if [math]\displaystyle{ \Phi }[/math] is a strictly increasing and non-negative function, then

[math]\displaystyle{ \Pr(X \geq a) = \Pr(\Phi (X) \geq \Phi (a)) \leq \frac{\operatorname{E}(\Phi(X))}{\Phi (a)}. }[/math]
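
As an illustrative sketch (not part of the original statement), the bound can be compared with an exact tail probability for a distribution in which both sides are available in closed form, here an exponential variable with rate 1:

```python
import math

# Sketch: Markov's inequality for X ~ Exponential(1), which is non-negative
# with E[X] = 1, so Pr(X >= a) = exp(-a) can be compared with E[X]/a.
mean = 1.0
for a in [1.0, 2.0, 5.0, 10.0]:
    exact_tail = math.exp(-a)      # Pr(X >= a) for Exponential(1)
    markov_bound = mean / a        # E[X] / a
    print(f"a={a:5.1f}  Pr(X>=a)={exact_tail:.5f}  Markov bound={markov_bound:.5f}")
```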

Chebyshev's inequality

Main page: Chebyshev's inequality

Chebyshev's inequality requires the following information on a random variable [math]\displaystyle{ X }[/math]:

  • The expected value [math]\displaystyle{ \operatorname{E}[X] }[/math] is finite.
  • The variance [math]\displaystyle{ \operatorname{Var}[X] = \operatorname{E}[(X - \operatorname{E}[X] )^2] }[/math] is finite.

Then, for every constant [math]\displaystyle{ a \gt 0 }[/math],

[math]\displaystyle{ \Pr(|X-\operatorname{E}[X]| \geq a) \leq \frac{\operatorname{Var}[X]}{a^2}, }[/math]

or equivalently,

[math]\displaystyle{ \Pr(|X-\operatorname{E}[X]| \geq a\cdot \operatorname{Std}[X]) \leq \frac{1}{a^2}, }[/math]

where [math]\displaystyle{ \operatorname{Std}[X] }[/math] is the standard deviation of [math]\displaystyle{ X }[/math].

Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable [math]\displaystyle{ |X-\operatorname{E}[X]| }[/math] with [math]\displaystyle{ \Phi(x) = x^2 }[/math].
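
As an illustrative sketch (the standard normal is chosen here only because its tail is available in closed form), Chebyshev's bound [math]\displaystyle{ 1/a^2 }[/math] can be compared with the exact two-sided tail probability:

```python
import math

# Sketch: Chebyshev's inequality Pr(|X - E[X]| >= a * Std[X]) <= 1/a^2,
# compared with the exact tail of a standard normal variable.
def normal_two_sided_tail(a: float) -> float:
    """Pr(|Z| >= a) for a standard normal Z."""
    return math.erfc(a / math.sqrt(2.0))

for a in [1.0, 2.0, 3.0, 4.0]:
    print(f"a={a:.0f}  exact={normal_two_sided_tail(a):.5f}  Chebyshev bound={1.0 / a**2:.5f}")
```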

Vysochanskij–Petunin inequality

Main page: Vysochanskij–Petunin inequality

Let X be a random variable with a unimodal distribution, mean μ and finite, non-zero variance σ². Then, for any [math]\displaystyle{ \lambda \gt \sqrt{\frac{8}{3}} = 1.63299\ldots, }[/math]
[math]\displaystyle{ \text{Pr}(\left|X-\mu\right|\geq \lambda\sigma)\leq\frac{4}{9\lambda^2}. }[/math]

(For a relatively elementary proof, see e.g. [1].)
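
As an illustrative numerical check (the standard normal is used here only as an example of a unimodal distribution), the Vysochanskij–Petunin bound can be compared with the exact tail and with Chebyshev's bound:

```python
import math

# Sketch: Vysochanskij–Petunin bound 4/(9*lambda^2) versus the exact tail of a
# standard normal (unimodal, mu = 0, sigma = 1) and the Chebyshev bound 1/lambda^2.
for lam in [1.7, 2.0, 3.0]:  # lambda must exceed sqrt(8/3) ~ 1.633
    exact = math.erfc(lam / math.sqrt(2.0))
    vp_bound = 4.0 / (9.0 * lam**2)
    chebyshev_bound = 1.0 / lam**2
    print(f"lambda={lam:.1f}  exact={exact:.5f}  VP={vp_bound:.5f}  Chebyshev={chebyshev_bound:.5f}")
```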

One-sided Vysochanskij–Petunin inequality

Main page: Vysochanskij–Petunin inequality

For a unimodal random variable [math]\displaystyle{ X }[/math] and [math]\displaystyle{ r\geq0 }[/math], the one-sided Vysochanskij–Petunin inequality[2] holds as follows:

[math]\displaystyle{ \text{Pr}(X-E[X]\geq r)\leq \begin{cases} \dfrac{4}{9}\dfrac{\operatorname{Var}(X)}{r^2 + \operatorname{Var}(X)} & \text{for }r^2 \geq\dfrac{5}{3} \operatorname{Var}(X),\\[5pt] \dfrac{4}{3}\dfrac{\operatorname{Var}(X)}{r^2 + \operatorname{Var}(X)}-\dfrac{1}{3} & \text{otherwise.} \end{cases} }[/math]
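
A short sketch (with an illustrative variance and deviations chosen here) evaluates the case split in the bound above:

```python
# Sketch: evaluate the one-sided Vysochanskij–Petunin bound for a variable with
# Var(X) = 1; the case split depends on whether r^2 >= (5/3) * Var(X).
var = 1.0
for r in [0.5, 1.0, 2.0, 3.0]:
    if r**2 >= (5.0 / 3.0) * var:
        bound = (4.0 / 9.0) * var / (r**2 + var)
    else:
        bound = (4.0 / 3.0) * var / (r**2 + var) - 1.0 / 3.0
    print(f"r={r:.1f}  bound on Pr(X - E[X] >= r): {bound:.4f}")
```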

Paley–Zygmund inequality

Main page: Paley–Zygmund inequality

In contrast to most commonly used concentration inequalities, the Paley–Zygmund inequality provides a lower bound on the deviation probability.

Cantelli's inequality

Main page: Cantelli's inequality

Gauss's inequality

Main page: Gauss's inequality

Chernoff bounds

Main page: Chernoff bound

The generic Chernoff bound[3]:63–65 requires the moment generating function of [math]\displaystyle{ X }[/math], defined as [math]\displaystyle{ M_X(t):=\operatorname{E}\!\left[e^{tX}\right]. }[/math] It always exists, but may be infinite. From Markov's inequality, for every [math]\displaystyle{ t\gt 0 }[/math]:

[math]\displaystyle{ \Pr(X \geq a) \leq \frac{\operatorname{E}[e^{tX}]}{e^{ta}}, }[/math]

and for every [math]\displaystyle{ t\lt 0 }[/math]:

[math]\displaystyle{ \Pr(X \leq a) \leq \frac{\operatorname{E}[e^{tX}]}{e^{ta}}. }[/math]

There are various Chernoff bounds for different distributions and different values of the parameter [math]\displaystyle{ t }[/math]. See [4]:5–7 for a compilation of more concentration inequalities.
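
As an illustrative sketch (the binomial example and the grid search are choices made here, not taken from the sources), the generic bound can be evaluated numerically by minimizing [math]\displaystyle{ \operatorname{E}[e^{tX}]e^{-ta} }[/math] over [math]\displaystyle{ t \gt 0 }[/math]:

```python
import math

# Sketch: generic Chernoff bound for X = sum of n fair Bernoulli(1/2) variables,
# whose moment generating function is M_X(t) = ((1 + e^t)/2)^n.
# We minimize M_X(t) * exp(-t*a) over a grid of t > 0.
n, a = 100, 70  # bound Pr(X >= 70), where E[X] = 50

def chernoff_bound(t: float) -> float:
    log_mgf = n * math.log((1.0 + math.exp(t)) / 2.0)
    return math.exp(log_mgf - t * a)

best = min(chernoff_bound(k / 1000.0) for k in range(1, 5000))
print(f"Chernoff bound on Pr(X >= {a}): {best:.3e}")
```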

Bounds on sums of independent bounded variables

Main pages: Hoeffding's inequality, Azuma's inequality, McDiarmid's inequality, Bennett's inequality, and Bernstein inequalities (probability theory)

Let [math]\displaystyle{ X_1, X_2,\dots,X_n }[/math] be independent random variables such that, for all i:

[math]\displaystyle{ a_i\leq X_i\leq b_i }[/math] almost surely.
[math]\displaystyle{ c_i := b_i-a_i }[/math]
[math]\displaystyle{ \forall i: c_i \leq C }[/math]

Let [math]\displaystyle{ S_n }[/math] be their sum, [math]\displaystyle{ E_n }[/math] its expected value and [math]\displaystyle{ V_n }[/math] its variance:

[math]\displaystyle{ S_n := \sum_{i=1}^n X_i }[/math]
[math]\displaystyle{ E_n := \operatorname{E}[S_n] = \sum_{i=1}^n \operatorname{E}[X_i] }[/math]
[math]\displaystyle{ V_n := \operatorname{Var}[S_n] = \sum_{i=1}^n \operatorname{Var}[X_i] }[/math]

It is often of interest to bound the difference between the sum and its expected value. Several inequalities can be used; a numerical comparison of some of them appears in the sketch following this list.

1. Hoeffding's inequality says that:

[math]\displaystyle{ \Pr\left[|S_n-E_n|\gt t\right] \le 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2} \right) \le 2 \exp \left(-\frac{2t^2}{n C^2} \right) }[/math]

2. The random variable [math]\displaystyle{ S_n-E_n }[/math] is a special case of a martingale, and [math]\displaystyle{ S_0-E_0=0 }[/math]. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound:

[math]\displaystyle{ \Pr\left[|S_n-E_n|\gt t\right] \lt 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right)\lt 2 \exp \left(-\frac{2t^2}{n C^2} \right) }[/math]

This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales. See Fan et al. (2015).[5] Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4.

3. The sum function, [math]\displaystyle{ S_n=f(X_1,\dots,X_n) }[/math], is a special case of a function of n variables. This function changes in a bounded way: if variable i is changed, the value of f changes by at most [math]\displaystyle{ c_i = b_i-a_i\leq C }[/math]. Hence, McDiarmid's inequality can also be used and it yields a similar bound:

[math]\displaystyle{ \Pr\left[|S_n-E_n|\gt t\right] \lt 2 \exp \left(-\frac{2t^2}{\sum_{i=1}^n c_i^2} \right)\lt 2 \exp \left(-\frac{2t^2}{n C^2} \right) }[/math]

This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way.

4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds C. It says that:

[math]\displaystyle{ \Pr\left[|S_n-E_n| \gt t \right] \leq 2\exp\left[ - \frac{V_n}{C^2} h\left(\frac{C t}{V_n} \right)\right], }[/math] where [math]\displaystyle{ h(u) = (1+u)\log(1+u)-u }[/math]

5. The first of Bernstein's inequalities says that:

[math]\displaystyle{ \Pr\left[|S_n-E_n|\gt t\right] \lt 2 \exp \left(-\frac{t^2/2}{V_n + C\cdot t/3} \right) }[/math]

This is a generalization of Hoeffding's since it can handle random variables with not only almost-sure bound but both almost-sure bound and variance bound.

6. Chernoff bounds have a particularly simple form in the case of a sum of independent variables, since [math]\displaystyle{ \operatorname{E}[e^{t\cdot S_n}] = \prod_{i=1}^n {\operatorname{E}[e^{t\cdot X_i}]} }[/math].

For example,[6] suppose the variables [math]\displaystyle{ X_i }[/math] satisfy [math]\displaystyle{ X_i \geq E(X_i)-a_i-M }[/math], for [math]\displaystyle{ 1 \leq i \leq n }[/math]. Then we have the lower-tail inequality:

[math]\displaystyle{ \Pr[S_n - E_n \lt -\lambda]\leq \exp\left(-\frac{\lambda^2}{2(V_n+\sum_{i=1}^n a_i^2+M\lambda/3)}\right) }[/math]

If [math]\displaystyle{ X_i }[/math] satisfies [math]\displaystyle{ X_i \leq E(X_i)+a_i+M }[/math], we have the upper-tail inequality:

[math]\displaystyle{ \Pr[S_n - E_n \gt \lambda]\leq \exp\left(-\frac{\lambda^2}{2(V_n + \sum_{i=1}^n a_i^2+M\lambda/3)}\right) }[/math]

If the [math]\displaystyle{ X_i }[/math] are i.i.d., [math]\displaystyle{ |X_i| \leq 1 }[/math], and [math]\displaystyle{ \sigma^2 }[/math] is the variance of [math]\displaystyle{ X_i }[/math], a typical version of the Chernoff inequality is:

[math]\displaystyle{ \Pr[|S_n| \geq k\sigma]\leq 2e^{-k^2/4n} \text{ for } 0 \leq k\leq 2\sigma. }[/math]

7. Similar bounds can be found in the article on the Rademacher distribution.
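
As mentioned above, the following sketch (with a bound C, a total variance, and a deviation chosen here purely for illustration) evaluates the Hoeffding, Bennett, and Bernstein bounds side by side; the variance-sensitive bounds are much tighter when the summands' variances are small compared to their range:

```python
import math

# Sketch: compare Hoeffding, Bennett and Bernstein tail bounds for a sum S_n of
# n variables bounded in [0, 1] (so c_i = C = 1) with per-summand variance 0.05.
n, C = 1000, 1.0
V_n = n * 0.05     # total variance of S_n
t = 50.0           # deviation from the expectation E_n

def h(u: float) -> float:
    return (1.0 + u) * math.log(1.0 + u) - u

hoeffding = 2.0 * math.exp(-2.0 * t**2 / (n * C**2))
bennett = 2.0 * math.exp(-(V_n / C**2) * h(C * t / V_n))
bernstein = 2.0 * math.exp(-(t**2 / 2.0) / (V_n + C * t / 3.0))
print(f"Hoeffding: {hoeffding:.3e}  Bennett: {bennett:.3e}  Bernstein: {bernstein:.3e}")
```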

Efron–Stein inequality

The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function of independent random variables.

Suppose that [math]\displaystyle{ X_1 \dots X_n }[/math], [math]\displaystyle{ X_1' \dots X_n' }[/math] are independent with [math]\displaystyle{ X_i' }[/math] and [math]\displaystyle{ X_i }[/math] having the same distribution for all [math]\displaystyle{ i }[/math].

Let [math]\displaystyle{ X = (X_1,\dots , X_n), X^{(i)} = (X_1, \dots , X_{i-1}, X_i',X_{i+1}, \dots , X_n). }[/math] Then

[math]\displaystyle{ \mathrm{Var}(f(X)) \leq \frac{1}{2} \sum_{i=1}^{n} E[(f(X)-f(X^{(i)}))^2]. }[/math]

A proof may be found in, e.g., [7].
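
An illustrative Monte Carlo sketch (the function f and the uniform distribution are chosen here for illustration) estimates both sides of the inequality:

```python
import random

# Sketch: Monte Carlo check of the Efron–Stein inequality for
# f(X_1, ..., X_n) = max(X_1, ..., X_n) with X_i ~ Uniform(0, 1).
random.seed(0)
n, trials = 10, 20000

right_sum, f_values = 0.0, []
for _ in range(trials):
    x = [random.random() for _ in range(n)]
    fx = max(x)
    f_values.append(fx)
    for i in range(n):
        # Resample coordinate i independently to obtain X^(i).
        x_prime = x[:i] + [random.random()] + x[i + 1:]
        right_sum += (fx - max(x_prime)) ** 2

mean_f = sum(f_values) / trials
var_f = sum((v - mean_f) ** 2 for v in f_values) / trials
print(f"Var(f(X)) ~ {var_f:.5f}   Efron-Stein bound ~ {0.5 * right_sum / trials:.5f}")
```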

Bretagnolle–Huber–Carol inequality

The Bretagnolle–Huber–Carol inequality bounds the difference between a vector of multinomially distributed random variables and a vector of expected values.[8][9] A simple proof appears in the appendix of [10].

If a random vector [math]\displaystyle{ (Z_1, Z_2, Z_3, \ldots , Z_n) }[/math] is multinomially distributed with parameters [math]\displaystyle{ (p_1, p_2, \ldots , p_n) }[/math] and satisfies [math]\displaystyle{ Z_1 + Z_2 + \dots + Z_n = M, }[/math] then

[math]\displaystyle{ \Pr\left( \sum_{i=1}^n |Z_i -M p_i| \geq 2M \varepsilon \right) \leq 2^n e^{-2M\varepsilon^2}. }[/math]

This inequality is used to bound the total variation distance.
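
Since [math]\displaystyle{ \sum_{i=1}^n |Z_i/M - p_i| }[/math] is twice the total variation distance between the empirical frequencies and [math]\displaystyle{ (p_1,\ldots,p_n) }[/math], the bound can be read as a statement about that distance. The following sketch (with a number of categories, sample size, and tolerance chosen here for illustration) evaluates it:

```python
import math

# Sketch: Bretagnolle–Huber–Carol bound
#   Pr( sum_i |Z_i - M*p_i| >= 2*M*eps ) <= 2^n * exp(-2*M*eps^2),
# read as a bound on Pr(total variation distance >= eps).
n, M, eps = 10, 5000, 0.05
bound = 2**n * math.exp(-2.0 * M * eps**2)
print(f"Pr(TV distance >= {eps}) <= {bound:.3e}")
```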

Mason and van Zwet inequality

The Mason and van Zwet inequality[11] for multinomial random vectors concerns a slight modification of the classical chi-square statistic.

Let the random vector [math]\displaystyle{ (N_1, \ldots ,N_k) }[/math] be multinomially distributed with parameters [math]\displaystyle{ n }[/math] and [math]\displaystyle{ (p_1,\ldots , p_k) }[/math] such that [math]\displaystyle{ p_i \gt 0 }[/math] for [math]\displaystyle{ i \lt k. }[/math] Then for every [math]\displaystyle{ C \gt 0 }[/math] and [math]\displaystyle{ \delta \gt 0 }[/math] there exist constants [math]\displaystyle{ a, b, c \gt 0, }[/math] such that for all [math]\displaystyle{ n \geq 1 }[/math] and [math]\displaystyle{ \lambda ,p_1, \ldots , p_{k-1} }[/math] satisfying [math]\displaystyle{ \lambda \gt Cn \min \{p_i | 1 \leq i \leq k-1 \} }[/math] and [math]\displaystyle{ \sum_{i=1}^{k-1} p_i \leq 1 -\delta, }[/math] we have

[math]\displaystyle{ \Pr\left( \sum_{i=1}^{k-1} \frac{(N_i-np_i)^2}{np_i}\gt \lambda \right) \leq a e^{bk-c\lambda}. }[/math]

Dvoretzky–Kiefer–Wolfowitz inequality

Main page: Dvoretzky–Kiefer–Wolfowitz inequality

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the true and the empirical cumulative distribution functions.

Given a natural number [math]\displaystyle{ n }[/math], let [math]\displaystyle{ X_1, X_2,\dots,X_n }[/math] be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let [math]\displaystyle{ F_n }[/math] denote the associated empirical distribution function defined by

[math]\displaystyle{ F_n(x) = \frac1n \sum_{i=1}^n \mathbf{1}_{\{X_i\leq x\}},\qquad x\in\mathbb{R}. }[/math]

So [math]\displaystyle{ F(x) }[/math] is the probability that a single random variable [math]\displaystyle{ X }[/math] is at most [math]\displaystyle{ x }[/math], and [math]\displaystyle{ F_n(x) }[/math] is the fraction of the random variables that are at most [math]\displaystyle{ x }[/math].

Then

[math]\displaystyle{ \Pr\left(\sup_{x\in\mathbb R} \bigl(F_n(x) - F(x)\bigr) \gt \varepsilon \right) \le e^{-2n\varepsilon^2} \text{ for every } \varepsilon \geq \sqrt{\tfrac 1 {2n} \ln2}. }[/math]
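
An illustrative sketch (the uniform sample and the confidence level are chosen here): inverting the one-sided bound gives a distribution-free band of half-width [math]\displaystyle{ \varepsilon = \sqrt{\ln(1/\alpha)/(2n)} }[/math] that holds with probability at least [math]\displaystyle{ 1-\alpha }[/math]:

```python
import math
import random

# Sketch: empirical CDF of n Uniform(0, 1) samples and the DKW band half-width
# eps solving exp(-2*n*eps^2) = alpha, i.e. eps = sqrt(ln(1/alpha) / (2n)).
random.seed(0)
n, alpha = 500, 0.05
samples = sorted(random.random() for _ in range(n))

def empirical_cdf(x: float) -> float:
    """Fraction of the samples that are at most x."""
    return sum(1 for s in samples if s <= x) / n

eps = math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
x = 0.3
print(f"F_n({x}) = {empirical_cdf(x):.3f}, F({x}) = {x:.3f}, band half-width = {eps:.4f}")
```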

Anti-concentration inequalities

Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate around a quantity, for example an upper bound on the probability that it takes any particular value.

For example, Rao and Yehudayoff[12] show that there exists some [math]\displaystyle{ C \gt 0 }[/math] such that, for most directions of the hypercube [math]\displaystyle{ x \in \{\pm 1\}^n }[/math], the following is true:[clarification needed]

[math]\displaystyle{ \Pr\left(\langle x, Y\rangle = k\right) \le \frac{C}{\sqrt{n}}, }[/math]

where [math]\displaystyle{ Y }[/math] is drawn uniformly from a subset [math]\displaystyle{ B \subseteq \{\pm 1\}^n }[/math] of large enough size.
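
As an illustrative special case (taking [math]\displaystyle{ B }[/math] to be the whole hypercube and [math]\displaystyle{ x }[/math] the all-ones direction, which is not the general statement of [12]), [math]\displaystyle{ \langle x, Y\rangle }[/math] becomes a sum of independent Rademacher variables, and its largest point probability already decays like [math]\displaystyle{ 1/\sqrt{n} }[/math]:

```python
import math

# Sketch: with B = {-1, +1}^n and x = (1, ..., 1), <x, Y> is a sum of n
# independent Rademacher variables; the largest point probability is the
# central binomial coefficient over 2^n, roughly sqrt(2 / (pi * n)).
for n in [100, 400, 1600]:
    max_point_prob = math.comb(n, n // 2) / 2**n
    print(f"n={n:5d}  max_k Pr(<x,Y> = k) = {max_point_prob:.5f}"
          f"  sqrt(2/(pi*n)) = {math.sqrt(2.0 / (math.pi * n)):.5f}")
```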

Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem[13]) and graph theory.[14]

An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities.[15]

References

  1. Pukelsheim, F., 1994. The Three Sigma Rule. The American Statistician, 48(2), pp. 88–91
  2. Mercadier, Mathieu; Strobel, Frank (2021-11-16). "A one-sided Vysochanskii-Petunin inequality with financial applications" (in en). European Journal of Operational Research 295 (1): 374–377. doi:10.1016/j.ejor.2021.02.041. ISSN 0377-2217. https://www.sciencedirect.com/science/article/pii/S0377221721001545. 
  3. Mitzenmacher, Michael; Upfal, Eli (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press. ISBN 0-521-83540-2. https://books.google.com/books?id=0bAYl6d7hvkC. 
  4. Slagle, N.P. (2012). "One Hundred Statistics and Probability Inequalities". 
  5. Fan, X.; Grama, I.; Liu, Q. (2015). "Exponential inequalities for martingales with applications". Electronic Journal of Probability (Electron. J. Probab. 20) 20: 1–22. doi:10.1214/EJP.v20-3496. http://projecteuclid.org/euclid.ejp/1465067107. 
  6. Chung, Fan; Lu, Linyuan (2010). "Old and new concentration inequalities". Complex Graphs and Networks. American Mathematical Society. http://www.math.ucsd.edu/~fan/complex/ch2.pdf. Retrieved August 14, 2018. 
  7. Boucheron, Stéphane; Lugosi, Gábor; Bousquet, Olivier (2004). "Concentration inequalities". Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, February 2–14, 2003, Tübingen, Germany, August 4–16, 2003, Revised Lectures (Springer): 208–240.
  8. Bretagnolle, Jean; Huber, Catherine (1978). Lois empiriques et distance de Prokhorov. Lecture Notes in Mathematics. 649. pp. 332–341. doi:10.1007/BFb0064609. ISBN 978-3-540-08761-8. http://www.numdam.org/item/SPS_1978__12__332_0/. 
  9. van der Vaart, A.W.; Wellner, J.A. (1996). Weak convergence and empirical processes: With applications to statistics. Springer Science & Business Media. 
  10. Yuto Ushioda; Masato Tanaka; Tomomi Matsui (2022). "Monte Carlo Methods for the Shapley–Shubik Power Index". Games 13 (3): 44. doi:10.3390/g13030044. 
  11. Mason, David M.; Willem R. Van Zwet (1987). "A Refinement of the KMT Inequality for the Uniform Empirical Process". The Annals of Probability 15 (3): 871–884. doi:10.1214/aop/1176992070. 
  12. Rao, Anup; Yehudayoff, Amir (2018). "Anti-concentration in most directions". Electronic Colloquium on Computational Complexity. https://eccc.weizmann.ac.il/report/2018/194/. 
  13. Sherstov, Alexander A. (2012). "The Communication Complexity of Gap Hamming Distance". Theory of Computing. https://theoryofcomputing.org/articles/v008a008/. 
  14. Matthew Kwan; Benny Sudakov; Tuan Tran (2018). "Anticoncentration for subgraph statistics". Journal of the London Mathematical Society 99 (3): 757–777. doi:10.1112/jlms.12192. Bibcode: 2018arXiv180705202K.
  15. Veraar, Mark (2009). "On Khintchine inequalities with a weight". arXiv:0909.2586v1 [math.PR].

External links