Pinsker's inequality

In information theory, Pinsker's inequality, named after its inventor Mark Semenovich Pinsker, is an inequality that bounds the total variation distance (or statistical distance) in terms of the Kullback–Leibler divergence. The inequality is tight up to constant factors.[1]

Formal statement

Pinsker's inequality states that, if [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are two probability distributions on a measurable space [math]\displaystyle{ (X, \Sigma) }[/math], then

[math]\displaystyle{ \delta(P,Q) \le \sqrt{\frac{1}{2} D_{\mathrm{KL}}(P\parallel Q)}, }[/math]

where

[math]\displaystyle{ \delta(P,Q)=\sup \bigl\{ |P(A) - Q(A)| \mid A \in \Sigma \text{ is a measurable event} \bigr\} }[/math]

is the total variation distance (or statistical distance) between [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] and

[math]\displaystyle{ D_{\mathrm{KL}}(P\parallel Q) = \operatorname{E}_P \left( \log \frac{\mathrm{d} P}{\mathrm{d} Q} \right) = \int_X \left( \log \frac{\mathrm{d} P}{\mathrm{d} Q} \right) \, \mathrm{d} P }[/math]

is the Kullback–Leibler divergence in nats. When the sample space [math]\displaystyle{ X }[/math] is a finite set, the Kullback–Leibler divergence is given by

[math]\displaystyle{ D_{\mathrm{KL}}(P\parallel Q) = \sum_{i \in X} \left( \log \frac{P(i)}{Q(i)}\right) P(i)\! }[/math]

Note that in terms of the total variation norm [math]\displaystyle{ \| P - Q \| }[/math] of the signed measure [math]\displaystyle{ P - Q }[/math], which equals [math]\displaystyle{ 2\,\delta(P,Q) }[/math], Pinsker's inequality differs from the one given above by a factor of two:

[math]\displaystyle{ \| P - Q \| \le \sqrt{2 D_{\mathrm{KL}}(P\parallel Q)}. }[/math]

A proof of Pinsker's inequality uses the partition inequality for f-divergences.
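As a concrete illustration (added here, not part of the original statement), the following minimal Python sketch checks the inequality for two Bernoulli distributions; the helper names kl_nats and tv are ad hoc and only the standard library is used.

    from math import log, sqrt

    def kl_nats(p, q):
        # D_KL(Bernoulli(p) || Bernoulli(q)) in nats
        return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

    def tv(p, q):
        # total variation distance between Bernoulli(p) and Bernoulli(q);
        # the supremum over events is attained at A = {1}
        return abs(p - q)

    p, q = 0.9, 0.5
    lhs = tv(p, q)                        # delta(P, Q) = 0.4
    rhs = sqrt(0.5 * kl_nats(p, q))       # about 0.429
    print(lhs, "<=", rhs)                 # Pinsker's inequality holds
    # the total-variation-norm form carries a factor of two on each side:
    print(2 * lhs, "<=", sqrt(2 * kl_nats(p, q)))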

Alternative version

Note that the form of Pinsker's inequality depends on the base of the logarithm used in the definition of the KL-divergence. Here [math]\displaystyle{ D_{KL} }[/math] is defined using [math]\displaystyle{ \ln }[/math] (logarithm in base [math]\displaystyle{ e }[/math]), whereas [math]\displaystyle{ D }[/math] is typically defined with [math]\displaystyle{ \log_2 }[/math] (logarithm in base 2). Then,

[math]\displaystyle{ D(P\parallel Q) =\frac{D_{KL}(P\parallel Q)}{\ln 2}. }[/math]

With this convention, some of the literature states Pinsker's inequality in an alternative form that relates the information divergence to the variation distance:

[math]\displaystyle{ D(P\parallel Q) = \frac{D_{KL}(P\parallel Q)}{\ln 2} \ge \frac{1}{2 \ln 2} V^2(p, q), }[/math]

i.e.,

[math]\displaystyle{ \sqrt{\frac{D_{KL}(P\parallel Q)}{2} } \ge \frac{V(p, q)}{2}, }[/math]

in which

[math]\displaystyle{ V(p, q) = \sum_{x \in \mathcal{X}} |p(x) - q(x) | }[/math]

is the (non-normalized) variation distance between two probability density functions [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] on the same alphabet [math]\displaystyle{ \mathcal{X} }[/math].[2]

This form of Pinsker's inequality shows that "convergence in divergence" is a stronger notion than "convergence in variation distance".
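As a quick numerical sanity check of this base-2 form (an illustration added here, not taken from the cited source), the following Python sketch evaluates both sides on a three-letter alphabet:

    from math import log, log2

    p = [0.5, 0.3, 0.2]
    q = [0.4, 0.4, 0.2]

    D_nats = sum(pi * log(pi / qi) for pi, qi in zip(p, q))    # D_KL in nats
    D_bits = sum(pi * log2(pi / qi) for pi, qi in zip(p, q))   # D in bits
    V = sum(abs(pi - qi) for pi, qi in zip(p, q))              # non-normalized variation distance

    assert abs(D_bits - D_nats / log(2)) < 1e-12               # change-of-base identity
    print(D_bits, ">=", V**2 / (2 * log(2)))                   # about 0.036 >= 0.029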

A simple proof, due to John Pollard, proceeds by letting [math]\displaystyle{ r(x)=P(x)/Q(x)-1 \ge -1 }[/math]:

[math]\displaystyle{ \begin{align} D_{KL}(P \parallel Q) &= E_Q[(1+r(x))\log(1+r(x))-r(x)] \\&\ge \frac{1}{2}E_Q\left[\frac{r(x)^2}{1+r(x)/3}\right] \\&\ge \frac{1}{2}\frac{E_Q[|r(x)|]^2}{E_Q[1+r(x)/3]} &\text{(from Titu's lemma)} \\&= \frac{1}{2}E_Q[|r(x)|]^2 &\text{(As } E_Q[1+r(x)/3]=1 \text{ )} \\&= \frac{1}{2}V(p, q)^2. \end{align} }[/math]

Here Titu's lemma is also known as Sedrakyan's inequality.
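The intermediate quantities in this chain can be checked numerically; the sketch below (an added illustration with ad hoc variable names) evaluates each of them for a small example on a three-point space:

    from math import log

    P = [0.5, 0.3, 0.2]
    Q = [0.25, 0.25, 0.5]
    r = [pi / qi - 1 for pi, qi in zip(P, Q)]                   # r(x) = P(x)/Q(x) - 1 >= -1

    D_KL  = sum(qi * ((1 + ri) * log(1 + ri) - ri) for qi, ri in zip(Q, r))
    step1 = 0.5 * sum(qi * ri**2 / (1 + ri / 3) for qi, ri in zip(Q, r))
    step2 = 0.5 * sum(qi * abs(ri) for qi, ri in zip(Q, r))**2  # (1/2) E_Q[|r|]^2 = (1/2) V(p, q)^2

    print(D_KL, ">=", step1, ">=", step2)                       # about 0.218 >= 0.211 >= 0.180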

Note that the lower bound from Pinsker's inequality is vacuous for any distributions where [math]\displaystyle{ D_{\mathrm{KL}}(P\parallel Q)\gt 2 }[/math], since the total variation distance is at most [math]\displaystyle{ 1 }[/math]. For such distributions, an alternative bound can be used, due to Bretagnolle and Huber[3] (see, also, Tsybakov[4]):

[math]\displaystyle{ \delta(P,Q) \le \sqrt{1-e^{ -D_{\mathrm{KL}}(P\parallel Q) }}. }[/math]
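For example, the short Python sketch below (an added illustration with an ad hoc helper) compares the two bounds for a pair of Bernoulli distributions whose divergence exceeds 2 nats:

    from math import exp, log, sqrt

    def kl_bernoulli(p, q):
        # D_KL(Bernoulli(p) || Bernoulli(q)) in nats
        return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

    p, q = 0.5, 0.001
    D = kl_bernoulli(p, q)                            # about 2.76 nats
    print("Pinsker:            ", sqrt(D / 2))        # about 1.18, vacuous since TV <= 1
    print("Bretagnolle-Huber:  ", sqrt(1 - exp(-D)))  # about 0.97, still informative
    print("actual TV distance: ", abs(p - q))         # 0.499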

History

Pinsker first proved the inequality with a worse (greater) constant. The inequality in the above form was proved independently by Kullback, Csiszár, and Kemperman.[5]

Inverse problem

A precise inverse of the inequality cannot hold: for every [math]\displaystyle{ \varepsilon \gt 0 }[/math], there are distributions [math]\displaystyle{ P_\varepsilon, Q }[/math] with [math]\displaystyle{ \delta(P_\varepsilon,Q)\le\varepsilon }[/math] but [math]\displaystyle{ D_{\mathrm{KL}}(P_\varepsilon\parallel Q) = \infty }[/math]. An easy example is given by the two-point space [math]\displaystyle{ \{0,1\} }[/math] with [math]\displaystyle{ Q(0) = 0, Q(1) = 1 }[/math] and [math]\displaystyle{ P_\varepsilon(0) = \varepsilon, P_\varepsilon(1) = 1-\varepsilon }[/math].[6]

However, an inverse inequality holds on finite spaces [math]\displaystyle{ X }[/math] with a constant depending on [math]\displaystyle{ Q }[/math].[7] More specifically, setting [math]\displaystyle{ \alpha_Q := \min_{x \in X: Q(x) \gt 0} Q(x) }[/math], one has, for any measure [math]\displaystyle{ P }[/math] that is absolutely continuous with respect to [math]\displaystyle{ Q }[/math],

[math]\displaystyle{ \frac{1}{2} D_{\mathrm{KL}}(P\parallel Q) \le \frac{1}{\alpha_Q} \delta(P,Q)^2. }[/math]

As a consequence, if [math]\displaystyle{ Q }[/math] has full support (i.e. [math]\displaystyle{ Q(x) \gt 0 }[/math] for all [math]\displaystyle{ x \in X }[/math]), then

[math]\displaystyle{ \delta(P,Q)^2 \le \frac{1}{2} D_{\mathrm{KL}}(P\parallel Q) \le \frac{1}{\alpha_Q} \delta(P,Q)^2. }[/math]
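As a concrete check (an illustration added here), the following Python sketch evaluates [math]\displaystyle{ \alpha_Q }[/math] and both sides of the two-sided bound on a three-point space:

    from math import log

    P = [0.6, 0.3, 0.1]
    Q = [0.2, 0.5, 0.3]                                         # full support, alpha_Q = 0.2

    alpha_Q = min(Q)
    D_KL    = sum(pi * log(pi / qi) for pi, qi in zip(P, Q))    # in nats
    delta   = 0.5 * sum(abs(pi - qi) for pi, qi in zip(P, Q))   # total variation distance

    print(delta**2, "<=", 0.5 * D_KL, "<=", delta**2 / alpha_Q) # 0.16 <= 0.198 <= 0.8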

References

  1. Csiszár, Imre; Körner, János (2011). Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press. p. 44. ISBN 9781139499989. https://books.google.com/books?id=2gsLkQlb8JAC&pg=PA44. 
  2. Yeung, Raymond W. (2008). Information Theory and Network Coding. Hong Kong: Springer. p. 26. ISBN 978-0-387-79233-0. 
  3. Bretagnolle, J.; Huber, C, Estimation des densités: risque minimax, Séminaire de Probabilités, XII (Univ. Strasbourg, Strasbourg, 1976/1977), pp. 342–363, Lecture Notes in Math., 649, Springer, Berlin, 1978, Lemma 2.1 (French).
  4. Tsybakov, Alexandre B., Introduction to nonparametric estimation, Revised and extended from the 2004 French original. Translated by Vladimir Zaiats. Springer Series in Statistics. Springer, New York, 2009. xii+214 pp. ISBN:978-0-387-79051-0, Equation 2.25.
  5. Tsybakov, Alexandre (2009). Introduction to Nonparametric Estimation. Springer. p. 132. ISBN 9780387790527. https://archive.org/details/introductiontono00tsyb. 
  6. The divergence becomes infinite whenever one of the two distributions assigns probability zero to an event while the other assigns it a nonzero probability (no matter how small); see e.g. Basu, Mitra; Ho, Tin Kam (2006). Data Complexity in Pattern Recognition. Springer. p. 161. ISBN 9781846281723. https://books.google.com/books?id=GflBKbzym9oC&pg=PA161. 
  7. see Lemma 4.1 in Götze, Friedrich; Sambale, Holger; Sinulis, Arthur (2019). "Higher order concentration for functions of weakly dependent random variables". Electronic Journal of Probability 24. doi:10.1214/19-EJP338. 

Further reading

  • Thomas M. Cover and Joy A. Thomas: Elements of Information Theory, 2nd edition, Wiley-Interscience, 2006
  • Nicolò Cesa-Bianchi and Gábor Lugosi: Prediction, Learning, and Games, Cambridge University Press, 2006