McDiarmid's inequality

In probability theory and theoretical computer science, McDiarmid's inequality is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot change the value of the function by too much.

Statement

A function [math]\displaystyle{ f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R} }[/math] satisfies the bounded differences property if substituting the value of the [math]\displaystyle{ i }[/math]th coordinate [math]\displaystyle{ x_i }[/math] changes the value of [math]\displaystyle{ f }[/math] by at most [math]\displaystyle{ c_i }[/math]. More formally, if there are constants [math]\displaystyle{ c_1, c_2, \dots, c_n }[/math] such that for all [math]\displaystyle{ i\in[n] }[/math], and all [math]\displaystyle{ x_1\in \mathcal{X}_1,\,x_2\in \mathcal{X}_2,\, \ldots,\, x_n \in \mathcal{X}_n }[/math],

[math]\displaystyle{ \sup_{x_i' \in \mathcal{X}_i} \left|f(x_1, \dots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) - f(x_1, \dots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n)\right| \leq c_i. }[/math]

McDiarmid's Inequality[1] — Let [math]\displaystyle{ f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R} }[/math] satisfy the bounded differences property with bounds [math]\displaystyle{ c_1, c_2, \dots, c_n }[/math].

Consider independent random variables [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] where [math]\displaystyle{ X_i \in \mathcal{X}_i }[/math] for all [math]\displaystyle{ i }[/math]. Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}\left(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \geq \varepsilon\right) \leq \exp \left(-\frac{2 \varepsilon^2}{\sum_{i=1}^{n} c_i^2} \right), }[/math]
[math]\displaystyle{ \text{P}(f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)] \leq -\varepsilon) \leq \exp \left(-\frac{2 \varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right), }[/math]

and as an immediate consequence,

[math]\displaystyle{ \text{P}(|f(X_1, X_2, \ldots, X_n) - \mathbb{E}[f(X_1, X_2, \ldots, X_n)]| \geq \varepsilon) \leq 2 \exp \left(-\frac{2 \varepsilon^2}{\sum_{i=1}^{n} c_i^2}\right). }[/math]
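
As a concrete instance, take [math]\displaystyle{ f }[/math] to be the sample mean of [math]\displaystyle{ n }[/math] arguments with values in [math]\displaystyle{ [0,1] }[/math]; changing one argument moves the mean by at most [math]\displaystyle{ 1/n }[/math], so [math]\displaystyle{ c_i = 1/n }[/math] and McDiarmid's inequality reduces to Hoeffding's inequality. The following sketch (hypothetical Python, not part of the original article) evaluates the two-sided bound for uniform variables and compares it with a Monte Carlo estimate of the actual deviation probability.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                    # number of independent arguments
eps = 0.1                  # deviation threshold
c = np.full(n, 1.0 / n)    # bounded differences of the sample mean on [0,1]^n

# Two-sided McDiarmid bound: 2 * exp(-2 * eps^2 / sum_i c_i^2)
mcdiarmid_bound = 2 * np.exp(-2 * eps**2 / np.sum(c**2))

# Monte Carlo estimate of P(|mean(X) - E[mean(X)]| >= eps) for X_i ~ Uniform[0, 1]
trials = 100_000
samples = rng.uniform(0.0, 1.0, size=(trials, n))
empirical = np.mean(np.abs(samples.mean(axis=1) - 0.5) >= eps)

print(f"McDiarmid bound:     {mcdiarmid_bound:.4f}")   # ~0.27
print(f"Empirical frequency: {empirical:.4f}")         # much smaller in practice
```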

Extensions

Unbalanced distributions

A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.

McDiarmid's Inequality (unbalanced)[2][3] — Let [math]\displaystyle{ f: \mathcal{X}^n \rightarrow \mathbb{R} }[/math] satisfy the bounded differences property with bounds [math]\displaystyle{ c_1, c_2, \dots, c_n }[/math].

Consider independent random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n \in \mathcal{X} }[/math] drawn from a distribution where there is a particular value [math]\displaystyle{ \chi_0 \in \mathcal{X} }[/math] which occurs with probability [math]\displaystyle{ 1-p }[/math]. Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}(|f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)]| \geq \varepsilon) \leq 2 \exp \left(\frac{-\varepsilon^2}{2p(2-p)\sum_{i=1}^{n} c_i^2 + \frac{2}{3}\varepsilon\max_i c_i}\right). }[/math]

This may be used to characterize, for example, the value of a function on graphs when evaluated on sparse random graphs and hypergraphs, since in a sparse random graph, it is much more likely for any particular edge to be missing than to be present.
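
To illustrate the gain when [math]\displaystyle{ p }[/math] is small, the hypothetical Python sketch below evaluates both the basic two-sided bound and the unbalanced bound for a function with unit bounded differences; the parameter values are arbitrary.

```python
import numpy as np

def mcdiarmid_standard(eps, c):
    """Two-sided bound from the basic statement."""
    return 2 * np.exp(-2 * eps**2 / np.sum(c**2))

def mcdiarmid_unbalanced(eps, c, p):
    """Two-sided unbalanced bound: each argument differs from chi_0 with probability p."""
    denom = 2 * p * (2 - p) * np.sum(c**2) + (2.0 / 3.0) * eps * np.max(c)
    return 2 * np.exp(-eps**2 / denom)

n, eps, p = 1000, 20.0, 0.01
c = np.ones(n)                           # unit bounded differences
print(mcdiarmid_standard(eps, c))        # ~0.9, nearly vacuous
print(mcdiarmid_unbalanced(eps, c, p))   # orders of magnitude smaller
```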

Differences bounded with high probability

McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.

McDiarmid's Inequality (Differences bounded with high probability)[4] — Let [math]\displaystyle{ f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R} }[/math] be a function and [math]\displaystyle{ \mathcal{Y} \subseteq \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n }[/math] be a subset of its domain and let [math]\displaystyle{ c_1, c_2, \dots, c_n \ge 0 }[/math] be constants such that for all pairs [math]\displaystyle{ (x_1,\ldots,x_n)\in \mathcal{Y} }[/math] and [math]\displaystyle{ (x'_1,\ldots,x'_n)\in \mathcal{Y} }[/math],

[math]\displaystyle{ \left|f(x_1, \ldots, x_n) - f(x'_1, \ldots, x'_n)\right| \leq \sum_{i: x_i \ne x'_i} c_i. }[/math]

Consider independent random variables [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] where [math]\displaystyle{ X_i \in \mathcal{X}_i }[/math] for all [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ p = 1 - \mathrm{P}((X_1, \ldots, X_n) \in \mathcal{Y}) }[/math] and let [math]\displaystyle{ m=\mathbb{E}[f(X_1, \ldots, X_n) \mid (X_1, \ldots, X_n) \in \mathcal{Y}] }[/math]. Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}\left(f(X_1, \ldots, X_n) - m \geq \varepsilon\right) \leq p + \exp \left(-\frac{2 \max\left(0,\varepsilon-p\sum_{i=1}^nc_i\right)^2}{\sum_{i=1}^{n} c_i^2} \right), }[/math]

and as an immediate consequence,

[math]\displaystyle{ \text{P}(|f(X_1, \ldots, X_n) - m| \geq \varepsilon) \leq 2p+2\exp \left(-\frac{2 \max\left(0,\varepsilon-p\sum_{i=1}^nc_i\right)^2}{\sum_{i=1}^{n} c_i^2} \right). }[/math]
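
This bound is straightforward to evaluate once [math]\displaystyle{ p }[/math], [math]\displaystyle{ \varepsilon }[/math], and the constants [math]\displaystyle{ c_i }[/math] are known, as in the hypothetical Python sketch below.

```python
import numpy as np

def mcdiarmid_high_prob_differences(eps, c, p):
    """Two-sided bound when the bounded-differences condition holds only on a
    subset Y of the domain, with p = P((X_1, ..., X_n) not in Y)."""
    slack = max(0.0, eps - p * np.sum(c))
    return 2 * p + 2 * np.exp(-2 * slack**2 / np.sum(c**2))

c = np.ones(500)
print(mcdiarmid_high_prob_differences(eps=40.0, c=c, p=1e-4))   # ~0.004
```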

There exist stronger refinements to this analysis in some distribution-dependent scenarios,[5] such as those that arise in learning theory.

Sub-Gaussian and sub-exponential norms

Let the [math]\displaystyle{ k }[/math]th centered conditional version of a function [math]\displaystyle{ f }[/math] be

[math]\displaystyle{ f_k(X)(x) := f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X'_k}f(x_1, \ldots, x_{k-1}, X'_k, x_{k+1}, \ldots, x_n), }[/math]

so that [math]\displaystyle{ f_k(X) }[/math] is a random variable (through [math]\displaystyle{ X_k }[/math]) for each fixed choice of the remaining coordinates [math]\displaystyle{ x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n }[/math].

McDiarmid's Inequality (Sub-Gaussian norm)[6][7] — Let [math]\displaystyle{ f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R} }[/math] be a function. Consider independent random variables [math]\displaystyle{ X = (X_1, X_2, \dots, X_n) }[/math] where [math]\displaystyle{ X_i \in \mathcal{X}_i }[/math] for all [math]\displaystyle{ i }[/math].

Let [math]\displaystyle{ f_k(X) }[/math] refer to the [math]\displaystyle{ k }[/math]th centered conditional version of [math]\displaystyle{ f }[/math]. Let [math]\displaystyle{ \|\cdot\|_{\psi_2} }[/math] denote the sub-Gaussian norm of a random variable.

Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}\left(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon\right) \leq \exp \left(\frac{-\varepsilon^2}{32e\left\|\sum_{k\in [n]}\|f_k(X)\|_{\psi_2}^2\right\|_{\infty}} \right). }[/math]

McDiarmid's Inequality (Sub-exponential norm)[7] — Let [math]\displaystyle{ f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R} }[/math] be a function. Consider independent random variables [math]\displaystyle{ X = (X_1, X_2, \dots, X_n) }[/math] where [math]\displaystyle{ X_i \in \mathcal{X}_i }[/math] for all [math]\displaystyle{ i }[/math].

Let [math]\displaystyle{ f_k(X) }[/math] refer to the [math]\displaystyle{ k }[/math]th centered conditional version of [math]\displaystyle{ f }[/math]. Let [math]\displaystyle{ \|\cdot\|_{\psi_1} }[/math] denote the sub-exponential norm of a random variable.

Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}\left(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon\right) \leq \exp \left(\frac{-\varepsilon^2}{4e^2\left\|\sum_{k\in [n]}\|f_k(X)\|_{\psi_1}^2\right\|_{\infty} + 2\varepsilon e\max_{k \in [n]}\left\|\|f_k(X)\|_{\psi_1}\right\|_{\infty}} \right). }[/math]

Bennett and Bernstein forms

Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let

[math]\displaystyle{ \begin{align} B &:= \max_{k \in [n]} \sup_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_{n}} \left|f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) - \mathbb{E}_{X_k}f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n)\right|, \\ V_k &:= \sup_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_{n}} \mathbb{E}_{X_k} \left(f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) - \mathbb{E}_{X_k}f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n)\right)^2, \\ \tilde \sigma^2 &:= \sum_{k=1}^n V_k. \end{align} }[/math]

McDiarmid's Inequality (Bennett form)[3] — Let [math]\displaystyle{ f: \mathcal{X}^n \rightarrow \mathbb{R} }[/math] satisfy the bounded differences property with bounds [math]\displaystyle{ c_1, c_2, \dots, c_n }[/math]. Consider independent random variables [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] where [math]\displaystyle{ X_i \in \mathcal{X} }[/math] for all [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ B }[/math] and [math]\displaystyle{ \tilde\sigma^2 }[/math] be defined as at the beginning of this section.

Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon) \leq \exp \left(-\frac{\varepsilon}{2B}\log\left(1+\frac{B\varepsilon}{\tilde\sigma^2}\right)\right). }[/math]

McDiarmid's Inequality (Bernstein form)[3] — Let [math]\displaystyle{ f: \mathcal{X}^n \rightarrow \mathbb{R} }[/math] satisfy the bounded differences property with bounds [math]\displaystyle{ c_1, c_2, \dots, c_n }[/math]. Consider independent random variables [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] where [math]\displaystyle{ X_i \in \mathcal{X} }[/math] for all [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ B }[/math] and [math]\displaystyle{ \tilde\sigma^2 }[/math] be defined as at the beginning of this section.

Then, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math],

[math]\displaystyle{ \text{P}(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \geq \varepsilon) \leq \exp \left(-\frac{\varepsilon^2}{2\left(\tilde\sigma^2 + \frac{B\varepsilon}{3}\right)}\right). }[/math]
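
As a sketch of when the variance-sensitive forms help, the hypothetical Python example below takes [math]\displaystyle{ f }[/math] to be the sum of [math]\displaystyle{ n }[/math] independent Bernoulli([math]\displaystyle{ p }[/math]) arguments, for which [math]\displaystyle{ c_i = 1 }[/math], [math]\displaystyle{ V_k = p(1-p) }[/math], and [math]\displaystyle{ B \le \max(p, 1-p) }[/math], and compares the basic, Bennett-form, and Bernstein-form bounds.

```python
import numpy as np

# f(x) = x_1 + ... + x_n with x_i in {0, 1} and X_i ~ Bernoulli(p):
# bounded differences c_i = 1, per-coordinate variance V_k = p * (1 - p),
# and |X_k - E[X_k]| <= B = max(p, 1 - p).
n, p, eps = 1000, 0.01, 30.0
c = np.ones(n)
sigma2 = n * p * (1 - p)        # tilde sigma^2
B = max(p, 1 - p)

basic     = np.exp(-2 * eps**2 / np.sum(c**2))
bennett   = np.exp(-(eps / (2 * B)) * np.log1p(B * eps / sigma2))
bernstein = np.exp(-eps**2 / (2 * (sigma2 + B * eps / 3)))

print(f"basic:     {basic:.3e}")       # ~1.7e-01
print(f"bennett:   {bennett:.3e}")     # far smaller in this sparse example
print(f"bernstein: {bernstein:.3e}")
```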

Proof

The following proof of McDiarmid's inequality[1] constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality). An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.[3]

For better readability, we will introduce a notational shorthand: [math]\displaystyle{ z_{i \rightharpoondown j} }[/math] will denote [math]\displaystyle{ z_i, \dots, z_j }[/math] for any [math]\displaystyle{ z \in \mathcal{X}^n }[/math] and integers [math]\displaystyle{ 1 \le i \le j \le n }[/math], so that, for example,

[math]\displaystyle{ f(X_{1 \rightharpoondown (i-1)}, y, x_{(i+1) \rightharpoondown n}) := f(X_1, \ldots, X_{i-1}, y, x_{i+1}, \ldots, x_n). }[/math]

Pick any [math]\displaystyle{ x_1', x_2', \ldots, x_n' }[/math]. Then, for any [math]\displaystyle{ x_1, x_2, \ldots, x_n }[/math], by the triangle inequality,

[math]\displaystyle{ \begin{align} &|f(x_{1 \rightharpoondown n}) - f(x'_{1 \rightharpoondown n})| \\[6pt] \leq {} & |f(x_{1 \rightharpoondown\, n}) - f(x'_{1 \rightharpoondown (n-1)}, x_n)| + c_n\\ \leq {} & |f(x_{1 \rightharpoondown n}) - f(x'_{1 \rightharpoondown (n-2)}, x_{(n-1) \rightharpoondown n})| + c_{n-1} + c_n\\ \leq {} & \ldots \\ \leq {} & \sum_{i=1}^n c_i , \end{align} }[/math]

and thus [math]\displaystyle{ f }[/math] is bounded.

Since [math]\displaystyle{ f }[/math] is bounded, define the Doob martingale [math]\displaystyle{ \{Z_i\} }[/math] (each [math]\displaystyle{ Z_i }[/math] being a random variable depending on the random values of [math]\displaystyle{ X_1, \ldots, X_i }[/math]) as

[math]\displaystyle{ Z_i:=\mathbb{E}[f(X_{1 \rightharpoondown n}) \mid X_{1 \rightharpoondown i} ] }[/math]

for all [math]\displaystyle{ i\geq 1 }[/math] and [math]\displaystyle{ Z_0: = \mathbb{E}[f(X_{1 \rightharpoondown n})] }[/math], so that [math]\displaystyle{ Z_n = f(X_{1 \rightharpoondown n}) }[/math].
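
To make the construction concrete, the following hypothetical Python sketch (not part of the original proof) builds this Doob martingale for the simple case [math]\displaystyle{ f(x) = x_1 + \cdots + x_n }[/math] with fair coin flips, where [math]\displaystyle{ Z_i = X_1 + \cdots + X_i + (n-i)/2 }[/math]; it checks that [math]\displaystyle{ Z_n = f(X_{1 \rightharpoondown n}) }[/math] and that each increment [math]\displaystyle{ Z_i - Z_{i-1} }[/math] stays within [math]\displaystyle{ c_i = 1 }[/math].

```python
import numpy as np

rng = np.random.default_rng(1)

# Doob martingale for f(x) = x_1 + ... + x_n with X_i ~ Bernoulli(1/2):
# Z_i = E[f(X) | X_1, ..., X_i] = X_1 + ... + X_i + (n - i) / 2.
n = 10
x = rng.integers(0, 2, size=n)

Z = [n / 2]                                # Z_0 = E[f(X)]
for i in range(1, n + 1):
    Z.append(x[:i].sum() + (n - i) / 2)

increments = np.diff(Z)                    # Z_i - Z_{i-1} = X_i - 1/2
print(Z[-1] == x.sum())                    # Z_n = f(X)
print(np.all(np.abs(increments) <= 0.5))   # each increment stays within c_i = 1
```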

Now define the random variables for each [math]\displaystyle{ i }[/math]

[math]\displaystyle{ \begin{align} U_i &:= \sup_{x \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, x, X_{(i+1) \rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}, X_i = x] - \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, X_{i\rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}], \\ L_i &:= \inf_{x \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, x, X_{(i+1) \rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}, X_i = x] - \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, X_{i\rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}]. \\ \end{align} }[/math]

Since [math]\displaystyle{ X_i, \ldots, X_n }[/math] are independent of each other, conditioning on [math]\displaystyle{ X_i = x }[/math] does not affect the distributions of the other variables, so these are equal to the expressions

[math]\displaystyle{ \begin{align} U_i &= \sup_{x \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, x, X_{(i+1) \rightharpoondown n}) - f(X_{1 \rightharpoondown (i-1)}, X_{i\rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}], \\ L_i &= \inf_{x \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, x, X_{(i+1) \rightharpoondown n}) - f(X_{1 \rightharpoondown (i-1)}, X_{i\rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}]. \\ \end{align} }[/math]

Note that [math]\displaystyle{ L_i \leq Z_i - Z_{i-1} \leq U_i }[/math]. In addition,

[math]\displaystyle{ \begin{align} U_i - L_i &= \sup_{u\in \mathcal{X}_i, \ell \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, u, X_{(i+1) \rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}] -\mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, \ell, X_{(i+1) \rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}] \\[6pt] &=\sup_{u\in \mathcal{X}_i, \ell \in \mathcal{X}_i} \mathbb{E}[f(X_{1 \rightharpoondown (i-1)}, u, X_{(i+1) \rightharpoondown n}) - f(X_{1 \rightharpoondown (i-1)}, \ell, X_{(i+1) \rightharpoondown n}) \mid X_{1 \rightharpoondown (i-1)}] \\ &\leq \sup_{u\in \mathcal{X}_i, \ell \in \mathcal{X}_i} \mathbb{E}[c_i \mid X_{1 \rightharpoondown (i-1)}] \\[6pt] &\leq c_i. \end{align} }[/math]

Then, applying the general form of Azuma's inequality to [math]\displaystyle{ \left\{Z_i\right\} }[/math], we have

[math]\displaystyle{ \text{P}(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n) ] \geq \varepsilon ) = \operatorname{P}(Z_n - Z_0 \geq \varepsilon) \leq \exp \left(-\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2}\right). }[/math]

The one-sided bound in the other direction is obtained by applying Azuma's inequality to [math]\displaystyle{ \left\{-Z_i\right\} }[/math] and the two-sided bound follows from a union bound. [math]\displaystyle{ \square }[/math]

References

  1. Doob, J. L. (1940). "Regularity properties of certain families of chance variables". Transactions of the American Mathematical Society 47 (3): 455–486. doi:10.2307/1989964.
  2. Chou, Chi-Ning; Love, Peter J.; Sandhu, Juspreet Singh; Shi, Jonathan (2022). "Limitations of Local Quantum Algorithms on Random Max-k-XOR and Beyond". 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022) 229: 41:13. doi:10.4230/LIPIcs.ICALP.2022.41.
  3. Ying, Yiming (2004). "McDiarmid's inequalities of Bernstein and Bennett forms". http://www0.cs.ucl.ac.uk/staff/Y.Ying/McDiarmid.pdf.
  4. Combes, Richard (2015). "An extension of McDiarmid's inequality". arXiv:1511.05240 [cs.LG].
  5. Wu, Xinxing; Zhang, Junping (April 2018). "Distribution-dependent concentration inequalities for tighter generalization bounds". Science China Information Sciences 61 (4): 048105:1–048105:3. doi:10.1007/s11432-017-9225-2.
  6. Kontorovich, Aryeh (2014). "Concentration in unbounded metric spaces and algorithmic stability". Proceedings of the 31st International Conference on Machine Learning 32 (2): 28–36. https://proceedings.mlr.press/v32/kontorovicha14.html.
  7. Maurer, Andreas; Pontil, Massimiliano (2021). "Concentration inequalities under sub-Gaussian and sub-exponential conditions". Advances in Neural Information Processing Systems 34: 7588–7597. https://proceedings.neurips.cc/paper/2021/file/3e33b970f21d2fc65096871ea0d2c6e4-Paper.pdf.