Convergence of measures

In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence of measures, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits: for any error tolerance ε > 0 we require that there be an N sufficiently large such that, for every n ≥ N, the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.

Three of the most common notions of convergence are described below.

Informal descriptions

This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if [math]\displaystyle{ \mu_n }[/math] is a sequence of probability measures on a Polish space.

The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function should converge: [math]\displaystyle{ \int f\, d\mu_n \to \int f\, d\mu }[/math]

To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be.

The notion of weak convergence requires this convergence to take place for every continuous bounded function [math]\displaystyle{ f }[/math]. This notion treats convergence for different functions f independently of one another, i.e., different functions f may require different values of N for the integrals to be approximated equally well (thus, convergence is non-uniform in [math]\displaystyle{ f }[/math]).

The notion of setwise convergence formalizes the assertion that the measure of each measurable set should converge: [math]\displaystyle{ \mu_n(A) \to \mu(A) }[/math]

Again, no uniformity over the set [math]\displaystyle{ A }[/math] is required. Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. In fact, when considering sequences of measures with uniformly bounded variation on a Polish space, setwise convergence implies the convergence [math]\displaystyle{ \int f\, d\mu_n \to \int f\, d\mu }[/math] for any bounded measurable function [math]\displaystyle{ f }[/math]. As before, this convergence is non-uniform in [math]\displaystyle{ f }[/math].

The notion of total variation convergence formalizes the assertion that the measure of all measurable sets should converge uniformly, i.e. for every [math]\displaystyle{ \varepsilon \gt 0 }[/math] there exists N such that [math]\displaystyle{ |\mu_n(A) - \mu(A)| \lt \varepsilon }[/math] for every n > N and for every measurable set [math]\displaystyle{ A }[/math]. As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant.

Total variation convergence of measures

This is the strongest notion of convergence shown on this page and is defined as follows. Let [math]\displaystyle{ (X, \mathcal{F}) }[/math] be a measurable space. The total variation distance between two (positive) measures μ and ν is then given by

[math]\displaystyle{ \left \|\mu- \nu \right \|_\text{TV} = \sup_f \left \{ \int_X f \, d\mu - \int_X f \, d\nu \right \}. }[/math]

Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1]. This is in contrast, for example, to the Wasserstein metric, where the definition is of the same form, but the supremum is taken over f ranging over the set of measurable functions from X to [−1, 1] which have Lipschitz constant at most 1; and also in contrast to the Radon metric, where the supremum is taken over f ranging over the set of continuous functions from X to [−1, 1]. In the case where X is a Polish space, the total variation metric coincides with the Radon metric.

If μ and ν are both probability measures, then the total variation distance is also given by

[math]\displaystyle{ \left \|\mu- \nu \right \|_{\text{TV}} = 2\cdot\sup_{A\in \mathcal{F}} | \mu (A) - \nu (A) |. }[/math]

The equivalence between these two definitions can be seen as a particular case of the Monge–Kantorovich duality. From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2.
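
To make the equivalence concrete, here is a minimal numerical sketch (not drawn from the cited references): for probability measures on a finite set, the supremum in the first definition is attained at f = sign(μ − ν), and the supremum over measurable sets in the second definition can be found by enumerating all subsets. The probability vectors p and q below are arbitrary illustrative choices.

```python
import numpy as np
from itertools import chain, combinations

# Two probability measures on a 4-point space (arbitrary illustrative values).
p = np.array([0.1, 0.4, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.25, 0.25])

# First definition: sup over measurable f: X -> [-1, 1] of
# (integral of f dmu - integral of f dnu); on a finite space the
# supremum is attained at f = sign(p - q), giving sum_i |p_i - q_i|.
tv_functional = float(np.sum(np.sign(p - q) * (p - q)))

# Second definition: 2 * sup over measurable sets A of |mu(A) - nu(A)|,
# computed by brute force over all 2^4 subsets of the 4-point space.
points = range(len(p))
subsets = chain.from_iterable(combinations(points, r) for r in range(len(p) + 1))
tv_setwise = 2 * max(abs(p[list(A)].sum() - q[list(A)].sum()) for A in subsets)

print(tv_functional, tv_setwise)  # both equal 0.4
```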

To illustrate the meaning of the total variation distance, consider the following thought experiment. Assume that we are given two probability measures μ and ν, as well as a random variable X. We know that X has law either μ or ν but we do not know which one of the two. Assume that these two measures have prior probabilities 0.5 each of being the true law of X. Assume now that we are given one single sample distributed according to the law of X and that we are then asked to guess which one of the two distributions describes that law. The quantity

[math]\displaystyle{ {2+\|\mu-\nu\|_\text{TV} \over 4} }[/math]

then provides a sharp upper bound on the probability that our guess will be correct.
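
A Monte Carlo sketch of this thought experiment (again using arbitrary illustrative discrete distributions, not taken from the references): draw the true law with probability 1/2 each, draw one sample, and guess by maximum likelihood, which attains the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.4, 0.3, 0.2])       # candidate law mu
q = np.array([0.25, 0.25, 0.25, 0.25])   # candidate law nu
tv = np.sum(np.abs(p - q))               # total variation distance, here 0.4

n_trials = 200_000
truth = rng.integers(0, 2, n_trials)     # 0 -> mu, 1 -> nu, prior 1/2 each
samples = np.where(truth == 0,
                   rng.choice(4, n_trials, p=p),
                   rng.choice(4, n_trials, p=q))
# Maximum-likelihood guess: pick whichever law puts more mass on the
# observed point (this rule attains the sharp bound).
guess = (q[samples] > p[samples]).astype(int)

print(np.mean(guess == truth))           # empirically ~ 0.6
print((2 + tv) / 4)                      # the sharp bound, exactly 0.6
```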

Given the above definition of total variation distance, a sequence μn of measures defined on the same measurable space is said to converge to a measure μ in total variation distance if for every ε > 0, there exists an N such that for all n > N, one has that[1]

[math]\displaystyle{ \|\mu_n - \mu\|_\text{TV} \lt \varepsilon. }[/math]

Setwise convergence of measures

For [math]\displaystyle{ (X, \mathcal{F}) }[/math] a measurable space, a sequence μn is said to converge setwise to a limit μ if

[math]\displaystyle{ \lim_{n \to \infty} \mu_n(A) = \mu(A) }[/math]

for every set [math]\displaystyle{ A\in\mathcal{F} }[/math].

Typical arrow notations are [math]\displaystyle{ \mu_n \xrightarrow{sw} \mu }[/math] and [math]\displaystyle{ \mu_n \xrightarrow{s} \mu }[/math].

For example, as a consequence of the Riemann–Lebesgue lemma, the sequence μn of measures on the interval [−1, 1] given by μn(dx) = (1 + sin(nx)) dx converges setwise to Lebesgue measure, but it does not converge in total variation.
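
A short numerical sketch of this example (a direct Riemann-sum check, not taken from the references): the mass μn assigns to a fixed interval approaches its Lebesgue measure, while the total variation distance stabilizes near 4/π instead of vanishing.

```python
import numpy as np

x = np.linspace(-1, 1, 400_001)
dx = x[1] - x[0]
mask = (x >= 0) & (x <= 0.5)  # the fixed test set A = [0, 1/2]

for n in (1, 10, 100, 1000):
    density = 1 + np.sin(n * x)
    mu_n_A = density[mask].sum() * dx       # mu_n(A) -> lambda(A) = 1/2
    # TV distance to Lebesgue measure: sup over |f| <= 1 of the
    # integral of f sin(nx) dx, attained at f = sign(sin(nx)).
    tv = np.abs(np.sin(n * x)).sum() * dx   # -> 4/pi ~ 1.273, not 0
    print(n, round(mu_n_A, 4), round(tv, 4))
```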

In a measure-theoretic or probabilistic context, setwise convergence is often referred to as strong convergence (as opposed to weak convergence). This can lead to some ambiguity because in functional analysis, strong convergence usually refers to convergence with respect to a norm.

Weak convergence of measures

In mathematics and statistics, weak convergence is one of many types of convergence relating to the convergence of measures. It depends on a topology on the underlying space and thus is not a purely measure-theoretic notion.

There are several equivalent definitions of weak convergence of a sequence of measures, some of which are (apparently) more general than others. The equivalence of these conditions is sometimes known as the Portmanteau theorem.[2]

Definition. Let [math]\displaystyle{ S }[/math] be a metric space with its Borel [math]\displaystyle{ \sigma }[/math]-algebra [math]\displaystyle{ \Sigma }[/math]. A sequence of probability measures [math]\displaystyle{ P_n\, (n = 1, 2, \dots) }[/math] on [math]\displaystyle{ (S, \Sigma) }[/math] is said to converge weakly to a probability measure [math]\displaystyle{ P }[/math] (denoted [math]\displaystyle{ P_n\Rightarrow P }[/math]) if any of the following equivalent conditions is true (here [math]\displaystyle{ \operatorname{E}_n }[/math] denotes expectation or the [math]\displaystyle{ L^1 }[/math] norm with respect to [math]\displaystyle{ P_n }[/math], while [math]\displaystyle{ \operatorname{E} }[/math] denotes expectation or the [math]\displaystyle{ L^1 }[/math] norm with respect to [math]\displaystyle{ P }[/math]):

  • [math]\displaystyle{ \operatorname{E}_n[f] \to \operatorname{E}[f] }[/math] for all bounded, continuous functions [math]\displaystyle{ f }[/math];
  • [math]\displaystyle{ \operatorname{E}_n[f] \to \operatorname{E}[f] }[/math] for all bounded and Lipschitz functions [math]\displaystyle{ f }[/math];
  • [math]\displaystyle{ \limsup \operatorname{E}_n[f] \le \operatorname{E}[f] }[/math] for every upper semi-continuous function [math]\displaystyle{ f }[/math] bounded from above;
  • [math]\displaystyle{ \liminf \operatorname{E}_n[f] \ge \operatorname{E}[f] }[/math] for every lower semi-continuous function [math]\displaystyle{ f }[/math] bounded from below;
  • [math]\displaystyle{ \limsup P_n(C) \le P(C) }[/math] for all closed sets [math]\displaystyle{ C }[/math] of space [math]\displaystyle{ S }[/math];
  • [math]\displaystyle{ \liminf P_n(U) \ge P(U) }[/math] for all open sets [math]\displaystyle{ U }[/math] of space [math]\displaystyle{ S }[/math];
  • [math]\displaystyle{ \lim P_n(A) = P(A) }[/math] for all continuity sets [math]\displaystyle{ A }[/math] of measure [math]\displaystyle{ P }[/math].

In the case [math]\displaystyle{ S \equiv \mathbf{R} }[/math] with its usual topology, if [math]\displaystyle{ F_n }[/math] and [math]\displaystyle{ F }[/math] denote the cumulative distribution functions of the measures [math]\displaystyle{ P_n }[/math] and [math]\displaystyle{ P }[/math], respectively, then [math]\displaystyle{ P_n }[/math] converges weakly to [math]\displaystyle{ P }[/math] if and only if [math]\displaystyle{ \lim_{n \to \infty} F_n(x) = F(x) }[/math] for all points [math]\displaystyle{ x \in \mathbf{R} }[/math] at which [math]\displaystyle{ F }[/math] is continuous.

For example, the sequence where [math]\displaystyle{ P_n }[/math] is the Dirac measure located at [math]\displaystyle{ 1/n }[/math] converges weakly to the Dirac measure located at 0 (if we view these as measures on [math]\displaystyle{ \mathbf{R} }[/math] with the usual topology), but it does not converge setwise. This is intuitively clear: we only know that [math]\displaystyle{ 1/n }[/math] is "close" to [math]\displaystyle{ 0 }[/math] because of the topology of [math]\displaystyle{ \mathbf{R} }[/math].
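
The following sketch (illustrative code, not from the references) checks this example through the cumulative distribution function criterion above: Fn(x) = 1{x ≥ 1/n} converges to F(x) = 1{x ≥ 0} at every x except x = 0, which is exactly the discontinuity point of F, while setwise convergence fails on the set {0}.

```python
# CDF of the Dirac measure at c: F(x) = 1 if x >= c, else 0.
def dirac_cdf(c, x):
    return 1.0 if x >= c else 0.0

for x in (-0.5, 0.0, 0.3):
    F_n = [dirac_cdf(1 / n, x) for n in (1, 10, 100, 1000)]
    F = dirac_cdf(0.0, x)
    print(x, F_n, F)
# F_n(x) -> F(x) at every x != 0; at the discontinuity point x = 0,
# F_n(0) = 0 for all n while F(0) = 1, which the criterion permits.
# Setwise convergence fails: mu_n({0}) = 0 for every n, yet mu({0}) = 1.
```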

This definition of weak convergence can be extended for [math]\displaystyle{ S }[/math] any metrizable topological space. It also defines a weak topology on [math]\displaystyle{ \mathcal{P}(S) }[/math], the set of all probability measures defined on [math]\displaystyle{ (S,\Sigma) }[/math]. The weak topology is generated by the following basis of open sets:

[math]\displaystyle{ \left\{ \ U_{\phi, x, \delta} \ \left| \quad \phi \colon S \to \mathbf{R} \text{ is bounded and continuous, } x \in \mathbf{R} \text{ and } \delta \gt 0 \ \right. \right\}, }[/math]

where

[math]\displaystyle{ U_{\phi, x, \delta} := \left\{ \ \mu \in \mathcal{P}(S) \ \left| \quad \left| \int_{S} \phi \, \mathrm{d} \mu - x \right| \lt \delta \ \right. \right\}. }[/math]

If [math]\displaystyle{ S }[/math] is also separable, then [math]\displaystyle{ \mathcal{P}(S) }[/math] is metrizable and separable, for example by the Lévy–Prokhorov metric. If [math]\displaystyle{ S }[/math] is also compact or Polish, so is [math]\displaystyle{ \mathcal{P}(S) }[/math].

If [math]\displaystyle{ S }[/math] is separable, it naturally embeds into [math]\displaystyle{ \mathcal{P}(S) }[/math] as the (closed) set of Dirac measures, and its convex hull is dense.

There are many "arrow notations" for this kind of convergence: the most frequently used are [math]\displaystyle{ P_{n} \Rightarrow P }[/math], [math]\displaystyle{ P_{n} \rightharpoonup P }[/math], [math]\displaystyle{ P_{n} \xrightarrow{w} P }[/math] and [math]\displaystyle{ P_{n} \xrightarrow{\mathcal{D}} P }[/math].

Weak convergence of random variables

Let [math]\displaystyle{ (\Omega, \mathcal{F}, \mathbb{P}) }[/math] be a probability space and X be a metric space. If Xn: Ω → X is a sequence of random variables, then Xn is said to converge weakly (or in distribution or in law) to the random variable X: Ω → X as n → ∞ if the sequence of pushforward measures [math]\displaystyle{ (X_n)_*(\mathbb{P}) }[/math] converges weakly to [math]\displaystyle{ X_*(\mathbb{P}) }[/math] in the sense of weak convergence of measures on X, as defined above.
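
As a minimal sketch of this definition (illustrative only, with an assumed example sequence Xn = U + 1/n for U uniform on [0, 1]): Xn converges pointwise to U, so the pushforward laws converge weakly to the law of U, which can be seen empirically from the estimated CDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(200_000)  # samples of U ~ Uniform(0, 1)

def cdf_estimate(samples, x):
    """Empirical CDF of the pushforward law, evaluated at x."""
    return np.mean(samples <= x)

# X_n = U + 1/n converges pointwise to U, hence (X_n)_*(P) => U_*(P).
for n in (1, 10, 100, 1000):
    x_n = u + 1 / n
    print(n, round(cdf_estimate(x_n, 0.5), 4))  # -> P(U <= 0.5) = 0.5
```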


Comparison with vague convergence

Let [math]\displaystyle{ X }[/math] be a metric space (for example [math]\displaystyle{ \mathbb{R} }[/math] or [math]\displaystyle{ [0,1] }[/math]). The following spaces of test functions are commonly used in the convergence of probability measures.[3]

  • [math]\displaystyle{ C_c(X) }[/math]: the class of continuous functions [math]\displaystyle{ f }[/math] that vanish outside a compact set.
  • [math]\displaystyle{ C_0(X) }[/math]: the class of continuous functions [math]\displaystyle{ f }[/math] such that [math]\displaystyle{ \lim_{|x| \rightarrow \infty} f(x)=0 }[/math].
  • [math]\displaystyle{ C_B(X) }[/math]: the class of continuous bounded functions.

We have [math]\displaystyle{ C_c \subset C_0 \subset C_B \subset C }[/math]. It is well known that [math]\displaystyle{ C_0 }[/math] is the closure of [math]\displaystyle{ C_c }[/math] with respect to uniform convergence.[3]

Vague convergence

A sequence of measures [math]\displaystyle{ \left(\mu_n\right)_{n \in \mathbb{N}} }[/math] converges vaguely to a measure [math]\displaystyle{ \mu }[/math] if for all [math]\displaystyle{ f \in C_0(X) }[/math], [math]\displaystyle{ \int_X f d \mu_n \rightarrow \int_X f d \mu }[/math].

Weak convergence

A sequence of measures [math]\displaystyle{ \left(\mu_n\right)_{n \in \mathbb{N}} }[/math] converges weakly to a measure [math]\displaystyle{ \mu }[/math] if for all [math]\displaystyle{ f \in C_B(X) }[/math], [math]\displaystyle{ \int_X f d \mu_n \rightarrow \int_X f d \mu }[/math].

In general, these two convergence notions are not equivalent.

In a probability setting, vague convergence and weak convergence of probability measures are equivalent assuming tightness. That is, a tight sequence of probability measures [math]\displaystyle{ (\mu_n)_{n\in \mathbb{N}} }[/math] converges vaguely to a probability measure [math]\displaystyle{ \mu }[/math] if and only if [math]\displaystyle{ (\mu_n)_{n \in \mathbb{N}} }[/math] converges weakly to [math]\displaystyle{ \mu }[/math].

The weak limit of a sequence of probability measures, provided it exists, is a probability measure. In general, if tightness is not assumed, a sequence of probability (or sub-probability) measures need not converge vaguely to a true probability measure, but rather to a sub-probability measure (a measure such that [math]\displaystyle{ \mu(X)\leq 1 }[/math]).[3] Thus, vague convergence [math]\displaystyle{ \mu_n \overset{v}{\to} \mu }[/math] of a sequence of probability measures, where [math]\displaystyle{ \mu }[/math] is not required to be a probability measure, does not in general imply weak convergence.
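
A standard illustration of this phenomenon (a sketch built on the well-known example μn = δn on ℝ, which is not taken from the cited text): integrals against C0 test functions vanish, so the vague limit is the zero sub-probability measure, while integrating the bounded constant function 1 shows that weak convergence fails.

```python
import numpy as np

# Integration against the Dirac measure delta_n is evaluation at n.
def integrate_against_dirac_at(n, f):
    return f(n)

f0 = lambda x: np.exp(-x * x)  # in C_0(R): vanishes at infinity
fb = lambda x: 1.0             # in C_B(R): bounded, but not in C_0

for n in (1, 5, 25):
    print(n, integrate_against_dirac_at(n, f0), integrate_against_dirac_at(n, fb))
# The C_0 integrals tend to 0 (vague limit: the zero measure), while the
# C_B integrals stay at 1, so mu_n = delta_n has no weak limit: the mass
# escapes to infinity and the sequence is not tight.
```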

Weak convergence of measures as an example of weak-* convergence

Despite having the same name as weak convergence in the context of functional analysis, weak convergence of measures is actually an example of weak-* convergence. The definitions of weak and weak-* convergences used in functional analysis are as follows:

Let [math]\displaystyle{ V }[/math] be a topological vector space or Banach space.

  1. A sequence [math]\displaystyle{ x_n }[/math] in [math]\displaystyle{ V }[/math] converges weakly to [math]\displaystyle{ x }[/math] if [math]\displaystyle{ \varphi\left(x_n\right) \rightarrow \varphi(x) }[/math] as [math]\displaystyle{ n \to \infty }[/math] for all [math]\displaystyle{ \varphi \in V^* }[/math]. One writes [math]\displaystyle{ x_n \stackrel{w}{\rightarrow} x }[/math] as [math]\displaystyle{ n \to \infty }[/math].
  2. A sequence of [math]\displaystyle{ \phi_n \in V^* }[/math] converges in the weak-* topology to [math]\displaystyle{ \phi }[/math] provided that [math]\displaystyle{ \phi_n(x) \rightarrow \phi(x) }[/math] for all [math]\displaystyle{ x \in V }[/math]. That is, convergence occurs in the pointwise sense. In this case, one writes [math]\displaystyle{ \phi_n \stackrel{w^*}{\rightarrow} \phi }[/math] as [math]\displaystyle{ n \to \infty }[/math].

To illustrate how weak convergence of measures is an example of weak-* convergence, we give an example in terms of vague convergence (see above). Let [math]\displaystyle{ X }[/math] be a locally compact Hausdorff space. By the Riesz representation theorem, the space [math]\displaystyle{ M(X) }[/math] of Radon measures is isomorphic to a subspace of the space of continuous linear functionals on [math]\displaystyle{ C_0(X) }[/math]. Therefore, for each Radon measure [math]\displaystyle{ \mu_n \in M(X) }[/math], there is a linear functional [math]\displaystyle{ \varphi_n \in C_0(X)^* }[/math] such that [math]\displaystyle{ \varphi_n(f)=\int_X f \, d\mu_n }[/math] for all [math]\displaystyle{ f \in C_0(X) }[/math]. Applying the definition of weak-* convergence in terms of linear functionals, the characterization of vague convergence of measures is obtained. For compact [math]\displaystyle{ X }[/math], [math]\displaystyle{ C_0(X)=C_B(X) }[/math], so in this case weak convergence of measures is a special case of weak-* convergence.

References

  1. Madras, Neil; Sezer, Deniz (25 Feb 2011). "Quantitative bounds for Markov chain convergence: Wasserstein and total variation distances". Bernoulli 16 (3): 882–908. doi:10.3150/09-BEJ238. 
  2. Klenke, Achim (2006). Probability Theory. Springer-Verlag. ISBN 978-1-84800-047-6. 
  3. Chung, Kai Lai (1974). A Course in Probability Theory. New York: Academic Press. pp. 84–99. ISBN 978-0-12-174151-8. http://archive.org/details/courseinprobabil0000chun.