Prékopa–Leindler inequality


In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.[1][2]

Statement of the inequality

Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy

[math]\displaystyle{ h \left( (1-\lambda)x + \lambda y \right) \geq f(x)^{1 - \lambda} g(y)^\lambda }[/math]     (1)

for all x and y in Rn. Then

[math]\displaystyle{ \| h\|_{1} := \int_{\mathbb{R}^n} h(x) \, \mathrm{d} x \geq \left( \int_{\mathbb{R}^n} f(x) \, \mathrm{d} x \right)^{1 -\lambda} \left( \int_{\mathbb{R}^n} g(x) \, \mathrm{d} x \right)^\lambda =: \| f\|_1^{1 -\lambda} \| g\|_1^\lambda. }[/math]
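
In simple one-dimensional cases the inequality can be checked numerically. The following sketch (assuming NumPy; the particular functions, the value of λ and the grid are illustrative choices, not part of the statement) builds the smallest admissible h by brute force and compares the two sides of the inequality.

```python
import numpy as np

# Illustrative check of the Prékopa–Leindler inequality on a 1-D grid.
# The functions f, g and the value of lam are arbitrary choices; h is (an
# approximation of) the smallest function satisfying
# h((1-lam)*x + lam*y) >= f(x)**(1-lam) * g(y)**lam for all x, y.
lam = 0.3
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

f = np.exp(-(x - 1.0) ** 2)               # a Gaussian bump centred at 1
g = 2.0 * np.exp(-0.5 * (x + 2.0) ** 2)   # a wider, taller bump centred at -2

def h_of(z):
    # h(z) = max over x' of f(x')**(1-lam) * g((z - (1-lam)*x')/lam)**lam,
    # approximated by scanning x' over the grid and interpolating g.
    y = (z - (1.0 - lam) * x) / lam
    g_at_y = np.interp(y, x, g, left=0.0, right=0.0)
    return np.max(f ** (1.0 - lam) * g_at_y ** lam)

h = np.array([h_of(z) for z in x])

lhs = h.sum() * dx
rhs = (f.sum() * dx) ** (1.0 - lam) * (g.sum() * dx) ** lam
print(f"||h||_1 = {lhs:.4f}  >=  ||f||_1^(1-lam) * ||g||_1^lam = {rhs:.4f}")
assert lhs >= rhs
```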

Essential form of the inequality

Recall that the essential supremum of a measurable function f : Rn → R is defined by

[math]\displaystyle{ \mathop{\mathrm{ess\,sup}}_{x \in \mathbb{R}^{n}} f(x) = \inf \left\{ t \in [- \infty, + \infty] \mid f(x) \leq t \text{ for almost all } x \in \mathbb{R}^{n} \right\}. }[/math]

This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let

[math]\displaystyle{ s(x) = \mathop{\mathrm{ess\,sup}}_{y \in \mathbb{R}^n} f \left( \frac{x - y}{1 - \lambda} \right)^{1 - \lambda} g \left( \frac{y}{\lambda} \right)^\lambda. }[/math]

Then s is measurable and

[math]\displaystyle{ \| s \|_1 \geq \| f \|_1^{1 - \lambda} \| g \|_1^\lambda. }[/math]
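
As a simple worked example (an illustration, not taken from the references): take [math]\displaystyle{ \lambda = \tfrac{1}{2} }[/math] and [math]\displaystyle{ f = g = 1_{[0,1]} }[/math]. Then [math]\displaystyle{ s(x) = \mathop{\mathrm{ess\,sup}}_{y \in \mathbb{R}} 1_{[0,1]}(2(x-y))^{1/2} \, 1_{[0,1]}(2y)^{1/2} }[/math], and the integrand equals 1 precisely when [math]\displaystyle{ x - \tfrac{1}{2} \leq y \leq x }[/math] and [math]\displaystyle{ 0 \leq y \leq \tfrac{1}{2} }[/math]. This set of y has positive measure exactly when 0 < x < 1, so [math]\displaystyle{ s = 1_{(0,1)} }[/math] almost everywhere and [math]\displaystyle{ \| s \|_1 = 1 = \| f \|_1^{1/2} \| g \|_1^{1/2} }[/math]; the inequality holds with equality.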

The essential supremum form was given by Herm Brascamp and Elliott Lieb.[3] Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.

Relationship to the Brunn–Minkowski inequality

It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then

[math]\displaystyle{ \mu \left( (1 - \lambda) A + \lambda B \right) \geq \mu (A)^{1 - \lambda} \mu (B)^{\lambda}, }[/math]

where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used[4] to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then

[math]\displaystyle{ \mu \left( (1 - \lambda) A + \lambda B \right)^{1 / n} \geq (1 - \lambda) \mu (A)^{1 / n} + \lambda \mu (B)^{1 / n}. }[/math]
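
The first of these forms follows from the Prékopa–Leindler inequality by a standard argument, sketched here: apply the inequality to the indicator functions [math]\displaystyle{ f = 1_A }[/math], [math]\displaystyle{ g = 1_B }[/math] and [math]\displaystyle{ h = 1_{(1 - \lambda) A + \lambda B} }[/math]. Condition (1) holds because [math]\displaystyle{ f(x)^{1-\lambda} g(y)^{\lambda} }[/math] is non-zero only when x lies in A and y lies in B, in which case (1 − λ)x + λy lies in (1 − λ)A + λB and hence h((1 − λ)x + λy) = 1. Since the integral of an indicator function is the measure of the corresponding set, the conclusion of the Prékopa–Leindler inequality is exactly [math]\displaystyle{ \mu \left( (1 - \lambda) A + \lambda B \right) \geq \mu (A)^{1 - \lambda} \mu (B)^{\lambda} }[/math].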

Applications in probability and statistics

Log-concave distributions

The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and by the summation of independent log-concave random variables. Since the probability density function of the sum of independent random variables [math]\displaystyle{ X, Y }[/math] with densities [math]\displaystyle{ f, g }[/math] is the convolution [math]\displaystyle{ f\star g }[/math], this also shows that the convolution of two log-concave functions is log-concave.

Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have

[math]\displaystyle{ H \left( (1 - \lambda)(x_1,y_1) + \lambda (x_2,y_2) \right) \geq H(x_1,y_1)^{1 - \lambda} H(x_2,y_2)^{\lambda}, }[/math]     (2)

and let M(y) denote the marginal distribution obtained by integrating over x:

[math]\displaystyle{ M(y) = \int_{\mathbb{R}^m} H(x,y) \, dx. }[/math]

Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then equation (2) satisfies condition (1) with h(x) = H(x,(1 − λ)y1 + λy2), f(x) = H(x,y1) and g(x) = H(x,y2), so the Prékopa–Leindler inequality applies. Its conclusion can be written in terms of M as

[math]\displaystyle{ M((1-\lambda) y_1 + \lambda y_2) \geq M(y_1)^{1-\lambda} M(y_2)^\lambda, }[/math]

which is the definition of log-concavity for M.
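
The marginalization step can be illustrated numerically. The sketch below (assuming NumPy; the density H, the grid and the discrete test are illustrative choices) integrates a log-concave H(x, y) over x and checks that the sampled marginal satisfies M(y_k)^2 ≥ M(y_{k−1}) M(y_{k+1}), i.e. log-concavity along an equally spaced grid.

```python
import numpy as np

# Illustrative check: the marginal of a log-concave density is log-concave.
# H(x, y) = exp(-(x^2 + 1.5*y^2 + x*y)) is log-concave, since the quadratic
# form has the positive definite matrix [[1, 0.5], [0.5, 1.5]].
x = np.linspace(-6.0, 6.0, 601)
y = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")
H = np.exp(-(X**2 + 1.5 * Y**2 + X * Y))

M = H.sum(axis=0) * dx                     # marginal over x, one value per y

# Discrete log-concavity of M on the equally spaced y-grid:
# M(y_k)^2 >= M(y_{k-1}) * M(y_{k+1}).
ok = M[1:-1] ** 2 >= M[:-2] * M[2:] * (1 - 1e-9)
print("discrete log-concavity of the marginal:", bool(ok.all()))
```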

To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X,Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal of the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
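
A numerical counterpart of the convolution statement (again an illustrative sketch assuming NumPy, with arbitrarily chosen densities): discretize two log-concave densities, convolve them to approximate the density of the independent sum, and check discrete log-concavity of the result.

```python
import numpy as np

# Illustrative check: the convolution of two log-concave densities is again
# log-concave.  Gaussian and Laplace densities are discretized on a grid and
# convolved to approximate the density of the independent sum.
t = np.linspace(-15.0, 15.0, 3001)
dt = t[1] - t[0]

gauss = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf, log-concave
laplace = 0.5 * np.exp(-np.abs(t))               # standard Laplace pdf, log-concave

p = np.convolve(gauss, laplace) * dt             # approximate pdf of the sum

# Discrete log-concavity: p_k^2 >= p_{k-1} * p_{k+1}.
ok = p[1:-1] ** 2 >= p[:-2] * p[2:] * (1 - 1e-9)
print("discrete log-concavity of the convolution:", bool(ok.all()))
```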

Applications to concentration of measure

The Prékopa–Leindler inequality can be used to prove results about concentration of measure.

Theorem Let [math]\displaystyle{ A \subseteq \mathbb{R}^n }[/math], and set [math]\displaystyle{ A_{\epsilon} = \{ x : d(x,A) \lt \epsilon \} }[/math]. Let [math]\displaystyle{ \gamma(x) }[/math] denote the standard Gaussian pdf, and [math]\displaystyle{ \mu }[/math] its associated measure. Then [math]\displaystyle{ \mu(A_{\epsilon}) \geq 1 - \frac{ e^{ - \epsilon^2/4}}{\mu(A)} }[/math].
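
As an illustration (an assumed example, not part of the theorem), take A to be the half-space [math]\displaystyle{ \{ x : x_1 \leq 0 \} }[/math]; then [math]\displaystyle{ \mu(A) = 1/2 }[/math], [math]\displaystyle{ A_\epsilon = \{ x : x_1 \lt \epsilon \} }[/math] and [math]\displaystyle{ \mu(A_\epsilon) = \Phi(\epsilon) }[/math], the standard normal cumulative distribution function, so the theorem asserts [math]\displaystyle{ \Phi(\epsilon) \geq 1 - 2 e^{-\epsilon^2/4} }[/math]. The following sketch (plain Python) checks this for a few values of ε.

```python
import math

# Compare the bound with the exact value for the half-space A = {x : x_1 <= 0}:
# mu(A) = 1/2 and mu(A_eps) = Phi(eps), the standard normal cdf.
def Phi(u: float) -> float:
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

for eps in (0.5, 1.0, 2.0, 3.0, 4.0):
    exact = Phi(eps)
    bound = 1.0 - math.exp(-eps ** 2 / 4.0) / 0.5
    print(f"eps = {eps:3.1f}:  mu(A_eps) = {exact:.6f}  >=  bound = {bound:+.6f}")
    assert exact >= bound
```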

Proof of concentration of measure

The proof of this theorem goes by way of the following lemma:

Lemma In the notation of the theorem, [math]\displaystyle{ \int_{\mathbb{R}^n} \exp ( d(x,A)^2/4) d\mu \leq 1/\mu(A) }[/math].

This lemma can be proven from Prékopa–Leindler by taking [math]\displaystyle{ h(x) = \gamma(x), f(x) = e^{ \frac{ d(x,A)^2}{4}} \gamma(x), g(x) = 1_A(x) \gamma(x) }[/math] and [math]\displaystyle{ \lambda = 1/2 }[/math]. To verify the hypothesis of the inequality, [math]\displaystyle{ h( \frac{ x + y}{2} ) \geq \sqrt{ f(x) g(y)} }[/math], note that we only need to consider [math]\displaystyle{ y \in A }[/math] (otherwise [math]\displaystyle{ g(y) = 0 }[/math]), in which case [math]\displaystyle{ d(x,A) \leq ||x - y|| }[/math]. This allows us to calculate:

[math]\displaystyle{ (2 \pi)^n f(x) g(y) = \exp( \frac{ d(x,A)^2 }{4} - ||x||^2/2 - ||y||^2/2 ) \leq \exp( \frac{ ||x - y||^2 }{4} - ||x||^2/2 - ||y||^2/2 ) = \exp ( - ||\frac{x + y}{2}||^2 ) = (2 \pi)^n h( \frac{ x + y}{2})^2. }[/math]

Since [math]\displaystyle{ \| f \|_1 = \int_{\mathbb{R}^n} \exp ( d(x,A)^2/4) \, d\mu }[/math], [math]\displaystyle{ \| g \|_1 = \mu(A) }[/math] and [math]\displaystyle{ \| h \|_1 = \int h(x) \, dx = 1 }[/math], the Prékopa–Leindler inequality gives [math]\displaystyle{ \left( \int_{\mathbb{R}^n} \exp ( d(x,A)^2/4) \, d\mu \right)^{1/2} \mu(A)^{1/2} \leq 1 }[/math], and the lemma follows by squaring and rearranging.
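
The lemma can also be checked directly for the half-space [math]\displaystyle{ A = \{ x : x_1 \leq 0 \} }[/math] (an assumed example): there [math]\displaystyle{ d(x,A) = \max(x_1, 0) }[/math], [math]\displaystyle{ \mu(A) = 1/2 }[/math], and the integral evaluates to [math]\displaystyle{ 1/2 + 1/\sqrt{2} \approx 1.207 \leq 2 = 1/\mu(A) }[/math]. A short Monte Carlo sketch (assuming NumPy) of the same quantity:

```python
import numpy as np

# Monte Carlo sketch of the lemma for the half-space A = {x : x_1 <= 0}:
# d(x, A) = max(x_1, 0) depends only on the first coordinate, mu(A) = 1/2,
# and the exact value of the integral is 1/2 + 1/sqrt(2) ~= 1.2071 <= 2.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(2_000_000)
estimate = np.mean(np.exp(np.maximum(x1, 0.0) ** 2 / 4.0))
print(f"estimate = {estimate:.4f}, exact = {0.5 + 2 ** -0.5:.4f}, bound 1/mu(A) = 2")
```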

To conclude the concentration inequality from the lemma, note that on [math]\displaystyle{ \mathbb{R}^n \setminus A_{\epsilon} }[/math] we have [math]\displaystyle{ d(x,A) \geq \epsilon }[/math], so that [math]\displaystyle{ \int_{\mathbb{R}^n} \exp ( d(x,A)^2/4) d\mu \geq ( 1 - \mu(A_{\epsilon})) \exp ( \epsilon^2/4) }[/math]. Applying the lemma and rearranging proves the result.

References

  1. Prékopa, András (1971). "Logarithmic concave measures with application to stochastic programming". Acta Sci. Math. 32: 301–316. http://rutcor.rutgers.edu/~prekopa/SCIENT1.pdf. 
  2. Prékopa, András (1973). "On logarithmic concave measures and functions". Acta Sci. Math. 34: 335–343. http://rutcor.rutgers.edu/~prekopa/SCIENT2.pdf. 
  3. Herm Jan Brascamp; Elliott H. Lieb (1976). "On extensions of the Brunn–Minkowski and Prekopa–Leindler theorems, including inequalities for log concave functions and with an application to the diffusion equation". Journal of Functional Analysis 22 (4): 366–389. doi:10.1016/0022-1236(76)90004-5. 
  4. Gardner, Richard J. (2002). "The Brunn–Minkowski inequality". Bull. Amer. Math. Soc. (N.S.) 39 (3): 355–405 (electronic). doi:10.1090/S0273-0979-02-00941-2. ISSN 0273-0979. https://www.ams.org/bull/2002-39-03/S0273-0979-02-00941-2/S0273-0979-02-00941-2.pdf. 

Further reading

  • Eaton, Morris L. (1987). "Log concavity and related topics". Lectures on Topics in Probability Inequalities. Amsterdam. pp. 77–109. ISBN 90-6196-316-8. 
  • Wainwright, Martin J. (2019). "Concentration of Measure". High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press. pp. 72–76. ISBN 978-1-108-49802-9.