Grönwall's inequality

From HandWiki

In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants.

Grönwall's inequality is an important tool to obtain various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard–Lindelöf theorem.

It is named for Thomas Hakon Grönwall (1877–1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States.

The inequality was first proven by Grönwall in 1919 (the integral form below with α and β being constants).[1] Richard Bellman proved a slightly more general integral form in 1943.[2]

A nonlinear generalization of the Grönwall–Bellman inequality is known as the Bihari–LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).[3]

Differential form

Let [math]\displaystyle{ I }[/math] denote an interval of the real line of the form [math]\displaystyle{ [a, \infty) }[/math] or [math]\displaystyle{ [a, b] }[/math] or [math]\displaystyle{ [a, b) }[/math] with [math]\displaystyle{ a \lt b }[/math]. Let [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ u }[/math] be real-valued continuous functions defined on [math]\displaystyle{ I }[/math]. If [math]\displaystyle{ u }[/math] is differentiable in the interior [math]\displaystyle{ I^\circ }[/math] of [math]\displaystyle{ I }[/math] (the interval [math]\displaystyle{ I }[/math] without the end points [math]\displaystyle{ a }[/math] and possibly [math]\displaystyle{ b }[/math]) and satisfies the differential inequality

[math]\displaystyle{ u'(t) \le \beta(t)\,u(t),\qquad t\in I^\circ, }[/math]

then [math]\displaystyle{ u }[/math] is bounded by the solution of the corresponding differential equation [math]\displaystyle{ v'(t) = \beta(t) \, v(t) }[/math]:

[math]\displaystyle{ u(t) \le u(a) \exp\biggl(\int_a^t \beta(s)\, \mathrm{d} s\biggr) }[/math]

for all [math]\displaystyle{ t \in I }[/math].

Remark: There are no assumptions on the signs of the functions [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ u }[/math].
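As a numerical sanity check of the differential form (an illustration added here, not part of the statement; the choices β(t) = cos t, the perturbation −0.1, and the step size are arbitrary assumptions), one can integrate a function satisfying the differential inequality with the forward Euler method and watch it stay below the exponential bound:

```python
import math

def gronwall_differential_check(a=0.0, b=5.0, h=1e-4, u0=2.0):
    """Forward-Euler check: u' = cos(t)*u - 0.1 satisfies u' <= cos(t)*u,
    so u(t) should stay below u(a) * exp(integral of cos from a to t)."""
    beta = math.cos
    t, u, int_beta = a, u0, 0.0
    ok = True
    while t < b:
        u += h * (beta(t) * u - 0.1)   # one Euler step of u' = beta*u - 0.1
        int_beta += h * beta(t)        # left Riemann sum of the integral of beta
        t += h
        ok = ok and u <= u0 * math.exp(int_beta) + 1e-8
    return ok

print(gronwall_differential_check())
```

Any non-positive perturbation of u′ = βu behaves the same way; the tolerance 1e-8 only absorbs floating-point rounding.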

Proof

Define the function

[math]\displaystyle{ v(t) = \exp\biggl(\int_a^t \beta(s)\, \mathrm{d} s\biggr),\qquad t\in I. }[/math]

Note that [math]\displaystyle{ v }[/math] satisfies

[math]\displaystyle{ v'(t) = \beta(t)\,v(t),\qquad t\in I^\circ, }[/math]

with [math]\displaystyle{ v(a) = 1 }[/math] and [math]\displaystyle{ v(t) \gt 0 }[/math] for all [math]\displaystyle{ t \in I }[/math]. By the quotient rule

[math]\displaystyle{ \frac{d}{dt}\frac{u(t)}{v(t)} = \frac{u'(t)\,v(t)-v'(t)\,u(t)}{v^2(t)} = \frac{u'(t)\,v(t) - \beta(t)\,v(t)\,u(t)}{v^2(t)} \le 0,\qquad t\in I^\circ. }[/math]

Thus the derivative of the function [math]\displaystyle{ u(t)/v(t) }[/math] is non-positive and the function is bounded above by its value at the initial point [math]\displaystyle{ a }[/math] of the interval [math]\displaystyle{ I }[/math]:

[math]\displaystyle{ \frac{u(t)}{v(t)}\le \frac{u(a)}{v(a)}=u(a),\qquad t\in I, }[/math]

which is Grönwall's inequality.

Integral form for continuous functions

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α, β and u be real-valued functions defined on I. Assume that β and u are continuous and that the negative part of α is integrable on every closed and bounded subinterval of I.

  • (a) If β is non-negative and if u satisfies the integral inequality
[math]\displaystyle{ u(t) \le \alpha(t) + \int_a^t \beta(s) u(s)\,\mathrm{d}s,\qquad \forall t\in I, }[/math]
then
[math]\displaystyle{ u(t) \le \alpha(t) + \int_a^t\alpha(s)\beta(s)\exp\biggl(\int_s^t\beta(r)\,\mathrm{d}r\biggr)\mathrm{d}s,\qquad t\in I. }[/math]
  • (b) If, in addition, the function α is non-decreasing, then
[math]\displaystyle{ u(t) \le \alpha(t)\exp\biggl(\int_a^t\beta(s)\,\mathrm{d}s\biggr),\qquad t\in I. }[/math]

Remarks:

  • There are no assumptions on the signs of the functions α and u.
  • Compared to the differential form, differentiability of u is not needed for the integral form.
  • For a version of Grönwall's inequality that does not require continuity of β and u, see the version in the next section.
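As an illustration of parts (a) and (b) (added here; the constants α ≡ 2, β ≡ 0.5 and the sample point t = 3 are arbitrary choices), with constant α and β the function u(t) = α exp(βt) turns the integral inequality into an equality and attains the bound of part (b) exactly:

```python
import math

ALPHA, C = 2.0, 0.5  # constant alpha and beta; interval starts at a = 0

def u(t):
    return ALPHA * math.exp(C * t)

def trapezoid(f, a, b, n=10000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def check(t=3.0):
    lhs = u(t)
    rhs = ALPHA + trapezoid(lambda s: C * u(s), 0.0, t)  # alpha + integral of beta*u
    bound = ALPHA * math.exp(C * t)                      # part (b) bound
    return abs(lhs - rhs) < 1e-5 and abs(lhs - bound) < 1e-12

print(check())
```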

Proof

(a) Define

[math]\displaystyle{ v(s) = \exp\biggl({-}\int_a^s\beta(r)\,\mathrm{d}r\biggr)\int_a^s\beta(r)u(r)\,\mathrm{d}r,\qquad s\in I. }[/math]

Using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain for the derivative

[math]\displaystyle{ v'(s) = \biggl(\underbrace{u(s)-\int_a^s\beta(r)u(r)\,\mathrm{d}r}_{\le\,\alpha(s)}\biggr)\beta(s)\exp\biggl({-}\int_a^s\beta(r)\mathrm{d}r\biggr), \qquad s\in I, }[/math]

where we used the assumed integral inequality for the upper estimate. Since β and the exponential are non-negative, this gives an upper estimate for the derivative of [math]\displaystyle{ v(s) }[/math]. Since [math]\displaystyle{ v(a)=0 }[/math], integration of this inequality from a to t gives

[math]\displaystyle{ v(t) \le\int_a^t\alpha(s)\beta(s)\exp\biggl({-}\int_a^s\beta(r)\,\mathrm{d}r\biggr)\mathrm{d}s. }[/math]

Using the definition of [math]\displaystyle{ v(t) }[/math] from the first step, and then this inequality and the functional equation of the exponential function, we obtain

[math]\displaystyle{ \begin{align}\int_a^t\beta(s)u(s)\,\mathrm{d}s &=\exp\biggl(\int_a^t\beta(r)\,\mathrm{d}r\biggr)v(t)\\ &\le\int_a^t\alpha(s)\beta(s)\exp\biggl(\underbrace{\int_a^t\beta(r)\,\mathrm{d}r-\int_a^s\beta(r)\,\mathrm{d}r}_{=\,\int_s^t\beta(r)\,\mathrm{d}r}\biggr)\mathrm{d}s. \end{align} }[/math]

Substituting this result into the assumed integral inequality gives Grönwall's inequality.

(b) If the function α is non-decreasing, then part (a), the fact that α(s) ≤ α(t) for s ≤ t, and the fundamental theorem of calculus imply that

[math]\displaystyle{ \begin{align}u(t)&\le\alpha(t)+\biggl({-}\alpha(t)\exp\biggl(\int_s^t\beta(r)\,\mathrm{d}r\biggr)\biggr)\biggr|^{s=t}_{s=a}\\ &=\alpha(t)\exp\biggl(\int_a^t\beta(r)\,\mathrm{d}r\biggr),\qquad t\in I.\end{align} }[/math]

Integral form with locally finite measures

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α and u be measurable functions defined on I and let μ be a non-negative measure on the Borel σ-algebra of I satisfying μ([a, t]) < ∞ for all t ∈ I (this is certainly satisfied when μ is a locally finite measure). Assume that u is integrable with respect to μ in the sense that

[math]\displaystyle{ \int_{[a,t)}|u(s)|\,\mu(\mathrm{d}s)\lt \infty,\qquad t\in I, }[/math]

and that u satisfies the integral inequality

[math]\displaystyle{ u(t) \le \alpha(t) + \int_{[a,t)} u(s)\,\mu(\mathrm{d}s),\qquad t\in I. }[/math]

If, in addition,

  • the function α is non-negative or
  • the function t ↦ μ([a, t]) is continuous for t ∈ I and the function α is integrable with respect to μ in the sense that
[math]\displaystyle{ \int_{[a,t)}|\alpha(s)|\,\mu(\mathrm{d}s)\lt \infty,\qquad t\in I, }[/math]

then u satisfies Grönwall's inequality

[math]\displaystyle{ u(t) \le \alpha(t) + \int_{[a,t)}\alpha(s)\exp\bigl(\mu(I_{s,t})\bigr)\,\mu(\mathrm{d}s) }[/math]

for all t ∈ I, where I_{s,t} denotes the open interval (s, t).

Remarks

  • There are no continuity assumptions on the functions α and u.
  • The integral on the right-hand side of Grönwall's inequality is allowed to take the value infinity, in which case the estimate holds trivially.
  • If α is the zero function and u is non-negative, then Grönwall's inequality implies that u is the zero function.
  • The integrability of u with respect to μ is essential for the result. For a counterexample, let μ denote Lebesgue measure on the unit interval [0, 1], define u(0) = 0 and u(t) = 1/t for t ∈ (0, 1], and let α be the zero function; then u satisfies the integral inequality (the right-hand side is infinite for every t > 0), but the conclusion u ≤ 0 fails.
  • The version given in the textbook by S. Ethier and T. Kurtz[4] makes the stronger assumptions that α is a non-negative constant and u is bounded on bounded intervals, but does not assume that the measure μ is locally finite. Compared to the proof given below, their proof does not discuss the behaviour of the remainder R_n(t).
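The counterexample in the remarks can be seen numerically (an added sketch; the cutoffs 10⁻² to 10⁻⁶ are arbitrary): Riemann sums of the integral of 1/s over (ε, 1] grow like log(1/ε), so the right-hand side of the integral inequality is infinite for every t > 0 and the inequality holds trivially, although u is not the zero function.

```python
# u(t) = 1/t on (0, 1] is not integrable near 0: the Riemann sums of the
# integral of ds/s over (eps, 1] grow like log(1/eps), so the integral of
# u over [0, t) is infinite for t > 0 and the assumed integral inequality
# holds trivially, while u is clearly not the zero function.
def integral_of_inverse(eps, n=200000):
    h = (1.0 - eps) / n
    return sum(h / (eps + (i + 0.5) * h) for i in range(n))  # midpoint rule

vals = [integral_of_inverse(10.0 ** -k) for k in (2, 4, 6)]
print(vals[0] < vals[1] < vals[2])  # grows without bound as eps -> 0
```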

Special cases

  • If the measure μ has a density β with respect to Lebesgue measure, then Grönwall's inequality can be rewritten as
[math]\displaystyle{ u(t) \le \alpha(t) + \int_a^t \alpha(s)\beta(s)\exp\biggl(\int_s^t\beta(r)\,\mathrm{d}r\biggr)\,\mathrm{d}s,\qquad t\in I. }[/math]
  • If the function α is non-negative and the density β of μ is bounded by a constant c, then
[math]\displaystyle{ u(t) \le \alpha(t) + c\int_a^t \alpha(s)\exp\bigl(c(t-s)\bigr)\,\mathrm{d}s,\qquad t\in I. }[/math]
  • If, in addition, the non-negative function α is non-decreasing, then
[math]\displaystyle{ u(t) \le \alpha(t) + c\alpha(t)\int_a^t \exp\bigl(c(t-s)\bigr)\,\mathrm{d}s =\alpha(t)\exp(c(t-a)),\qquad t\in I. }[/math]
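The closed form in the last special case rests on the elementary identity c ∫_a^t exp(c(t − s)) ds = exp(c(t − a)) − 1, which can be confirmed numerically (an added illustration; the sample values a = 0, t = 2, c = 0.7 are arbitrary):

```python
import math

def trapezoid(f, a, b, n=10000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def identity_check(a=0.0, t=2.0, c=0.7):
    # c * integral of exp(c*(t - s)) over [a, t] should equal exp(c*(t - a)) - 1
    lhs = c * trapezoid(lambda s: math.exp(c * (t - s)), a, t)
    rhs = math.exp(c * (t - a)) - 1.0
    return abs(lhs - rhs) < 1e-6

print(identity_check())
```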

Outline of proof

The proof is divided into three steps. The idea is to substitute the assumed integral inequality into itself n times. This is done in Claim 1 using mathematical induction. In Claim 2 we rewrite the measure of a simplex in a convenient form, using the permutation invariance of product measures. In the third step we let n tend to infinity to derive the desired variant of Grönwall's inequality.

Detailed proof

Claim 1: Iterating the inequality

For every natural number n including zero,

[math]\displaystyle{ u(t) \le \alpha(t) + \int_{[a,t)} \alpha(s) \sum_{k=0}^{n-1} \mu^{\otimes k}(A_k(s,t))\,\mu(\mathrm{d}s) + R_n(t) }[/math]

with remainder

[math]\displaystyle{ R_n(t) :=\int_{[a,t)}u(s)\mu^{\otimes n}(A_n(s,t))\,\mu(\mathrm{d}s),\qquad t\in I, }[/math]

where

[math]\displaystyle{ A_n(s,t)=\{(s_1,\ldots,s_n)\in I_{s,t}^n\mid s_1\lt s_2\lt \cdots\lt s_n\},\qquad n\ge1, }[/math]

is an n-dimensional simplex and

[math]\displaystyle{ \mu^{\otimes 0}(A_0(s,t)):=1. }[/math]

Proof of Claim 1

We use mathematical induction. For n = 0 this is just the assumed integral inequality, because the empty sum is defined as zero.

Induction step from n to n + 1: Inserting the assumed integral inequality for the function u into the remainder gives

[math]\displaystyle{ R_n(t)\le\int_{[a,t)} \alpha(s) \mu^{\otimes n}(A_n(s,t))\,\mu(\mathrm{d}s) +\tilde R_n(t) }[/math]

with

[math]\displaystyle{ \tilde R_n(t):=\int_{[a,t)} \biggl(\int_{[a,q)} u(s)\,\mu(\mathrm{d}s)\biggr)\mu^{\otimes n}(A_n(q,t))\,\mu(\mathrm{d}q),\qquad t\in I. }[/math]

Using the Fubini–Tonelli theorem to interchange the two integrals, we obtain

[math]\displaystyle{ \tilde R_n(t) =\int_{[a,t)} u(s)\underbrace{\int_{(s,t)} \mu^{\otimes n}(A_n(q,t))\,\mu(\mathrm{d}q)}_{=\,\mu^{\otimes n+1}(A_{n+1}(s,t))}\,\mu(\mathrm{d}s) =R_{n+1}(t),\qquad t\in I. }[/math]

Hence Claim 1 is proved for n + 1.

Claim 2: Measure of the simplex

For every natural number n including zero and all s < t in I

[math]\displaystyle{ \mu^{\otimes n}(A_n(s,t))\le\frac{\bigl(\mu(I_{s,t})\bigr)^n}{n!} }[/math]

with equality in case t ↦ μ([a, t]) is continuous for t ∈ I.

Proof of Claim 2

For n = 0, the claim is true by our definitions. Therefore, consider n ≥ 1 in the following.

Let S_n denote the set of all permutations of the indices in {1, 2, …, n}. For every permutation σ ∈ S_n define

[math]\displaystyle{ A_{n,\sigma}(s,t)=\{(s_1,\ldots,s_n)\in I_{s,t}^n\mid s_{\sigma(1)}\lt s_{\sigma(2)}\lt \cdots\lt s_{\sigma(n)}\}. }[/math]

These sets are disjoint for different permutations and

[math]\displaystyle{ \bigcup_{\sigma\in S_n}A_{n,\sigma}(s,t)\subset I_{s,t}^n. }[/math]

Therefore,

[math]\displaystyle{ \sum_{\sigma\in S_n} \mu^{\otimes n}(A_{n,\sigma}(s,t)) \le\mu^{\otimes n}\bigl(I_{s,t}^n\bigr)=\bigl(\mu(I_{s,t})\bigr)^n. }[/math]

Since the sets A_{n,σ}(s,t) all have the same measure with respect to the n-fold product of μ (by the permutation invariance of product measures), and since there are n! permutations in S_n, the claimed inequality follows.

Assume now that t ↦ μ([a, t]) is continuous for t ∈ I. Then, for different indices i, j ∈ {1, 2, …, n}, the set

[math]\displaystyle{ \{(s_1,\ldots,s_n)\in I_{s,t}^n\mid s_i=s_j\} }[/math]

is contained in a hyperplane, hence by an application of Fubini's theorem its measure with respect to the n-fold product of μ is zero. Since

[math]\displaystyle{ I_{s,t}^n\subset\bigcup_{\sigma\in S_n}A_{n,\sigma}(s,t) \cup \bigcup_{1\le i\lt j\le n}\{(s_1,\ldots,s_n)\in I_{s,t}^n\mid s_i=s_j\}, }[/math]

the claimed equality follows.
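For Lebesgue measure on (0, 1), Claim 2 (with equality) can be checked numerically (an added illustration): the volume of the simplex A_n(0, 1) equals the iterated integral F_n(1), where F_0 ≡ 1 and F_k(t) = ∫_0^t F_{k−1}(s) ds, and this should come out to 1/n!:

```python
from math import factorial

def simplex_volume(n, m=10000):
    """Volume of {0 < s_1 < ... < s_n < 1} via iterated trapezoidal integration:
    F_0 = 1 and F_k(t) = integral of F_{k-1} over [0, t]; F_n(1) should be 1/n!."""
    h = 1.0 / m
    F = [1.0] * (m + 1)                  # F_0 sampled on the grid
    for _ in range(n):
        G = [0.0] * (m + 1)
        for i in range(m):               # cumulative trapezoidal integral
            G[i + 1] = G[i] + h * (F[i] + F[i + 1]) / 2
        F = G
    return F[m]

print(all(abs(simplex_volume(n) - 1 / factorial(n)) < 1e-6 for n in (1, 2, 3)))
```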

Proof of Grönwall's inequality

For every natural number n, Claim 2 implies for the remainder of Claim 1 that

[math]\displaystyle{ |R_n(t)| \le \frac{\bigl(\mu(I_{a,t})\bigr)^n}{n!} \int_{[a,t)} |u(s)|\,\mu(\mathrm{d}s),\qquad t\in I. }[/math]

By assumption, μ(I_{a,t}) ≤ μ([a, t]) < ∞. Hence the integrability assumption on u implies that

[math]\displaystyle{ \lim_{n\to\infty}R_n(t)=0,\qquad t\in I. }[/math]

Claim 2 and the series representation of the exponential function imply the estimate

[math]\displaystyle{ \sum_{k=0}^{n-1} \mu^{\otimes k}(A_k(s,t)) \le\sum_{k=0}^{n-1} \frac{\bigl(\mu(I_{s,t})\bigr)^k}{k!} \le\exp\bigl(\mu(I_{s,t})\bigr) }[/math]

for all s < t in I. If the function α is non-negative, then it suffices to insert these results into Claim 1 to derive the above variant of Grönwall's inequality for the function u.

In case t ↦ μ([a, t]) is continuous for t ∈ I, Claim 2 gives

[math]\displaystyle{ \sum_{k=0}^{n-1} \mu^{\otimes k}(A_k(s,t)) =\sum_{k=0}^{n-1} \frac{\bigl(\mu(I_{s,t})\bigr)^k}{k!} \to\exp\bigl(\mu(I_{s,t})\bigr)\qquad\text{as }n\to\infty }[/math]

and the integrability of the function α permits the use of the dominated convergence theorem to derive Grönwall's inequality.
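The two elementary facts used in this step can be confirmed numerically (an added sketch; x = 7.3 stands in for μ(I_{s,t}) and is an arbitrary sample value): the partial sums of the exponential series are dominated by exp(x), and xⁿ/n! tends to zero.

```python
import math

x = 7.3  # plays the role of mu(I_{s,t}); arbitrary sample value

# Partial sums of the exponential series are bounded by exp(x) (all terms >= 0),
# which is the estimate applied to the sum over k in Claim 1.
partial_sums = [sum(x**k / math.factorial(k) for k in range(n)) for n in range(1, 40)]
bounded = all(s <= math.exp(x) for s in partial_sums)

# The factor x**n / n! tends to zero (the factorial beats the power),
# which is what makes the remainder R_n(t) vanish.
vanishes = x**60 / math.factorial(60) < 1e-20

print(bounded and vanishes)
```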

See also

  • Stochastic Gronwall inequality
  • Logarithmic norm, for a version of Gronwall's lemma that gives upper and lower bounds on the norm of the state transition matrix.
  • Halanay inequality, an inequality similar to Gronwall's lemma that is used for differential equations with delay.

References

  1. Gronwall, Thomas H. (1919), "Note on the derivatives with respect to a parameter of the solutions of a system of differential equations", Ann. of Math. 20 (2): 292–296, doi:10.2307/1967124 
  2. Bellman, Richard (1943), "The stability of solutions of linear differential equations", Duke Math. J. 10 (4): 643–647, doi:10.1215/s0012-7094-43-01059-2, http://projecteuclid.org/euclid.dmj/1077472225 
  3. Pachpatte, B.G. (1998). Inequalities for differential and integral equations. San Diego: Academic Press. ISBN 9780080534640. 
  4. Ethier, Stewart N.; Kurtz, Thomas G. (1986), Markov Processes: Characterization and Convergence, New York: John Wiley & Sons, p. 498, ISBN 0-471-08186-8