Tightness of measures

From HandWiki

In mathematics, tightness is a concept in measure theory. The intuitive idea is that a given collection of measures does not "escape to infinity".

Definitions

Let [math]\displaystyle{ (X, T) }[/math] be a Hausdorff space, and let [math]\displaystyle{ \Sigma }[/math] be a σ-algebra on [math]\displaystyle{ X }[/math] that contains the topology [math]\displaystyle{ T }[/math]. (Thus, every open subset of [math]\displaystyle{ X }[/math] is a measurable set and [math]\displaystyle{ \Sigma }[/math] is at least as fine as the Borel σ-algebra on [math]\displaystyle{ X }[/math].) Let [math]\displaystyle{ M }[/math] be a collection of (possibly signed or complex) measures defined on [math]\displaystyle{ \Sigma }[/math]. The collection [math]\displaystyle{ M }[/math] is called tight (or sometimes uniformly tight) if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there is a compact subset [math]\displaystyle{ K_{\varepsilon} }[/math] of [math]\displaystyle{ X }[/math] such that, for all measures [math]\displaystyle{ \mu \in M }[/math],

[math]\displaystyle{ |\mu| (X \setminus K_{\varepsilon}) \lt \varepsilon, }[/math]

where [math]\displaystyle{ |\mu| }[/math] is the total variation measure of [math]\displaystyle{ \mu }[/math]. Very often, the measures in question are probability measures, so the condition can equivalently be written as

[math]\displaystyle{ \mu (K_{\varepsilon}) \gt 1 - \varepsilon. \, }[/math]
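For probability measures on the real line, this condition can be checked numerically once the measures are given by their distribution functions. A minimal sketch, assuming a hypothetical family of unit-variance normal measures and compact sets of the form [math]\displaystyle{ [-M, M] }[/math]:

```python
import math

# Hypothetical family: normal measures N(m, 1) with means m in a bounded set.
MEANS = [-1.0, 0.0, 1.0]

def normal_cdf(x, mean, sd=1.0):
    """Distribution function of N(mean, sd**2), via the complementary error function."""
    return 0.5 * math.erfc((mean - x) / (sd * math.sqrt(2.0)))

def mass_outside(M, mean):
    """mu(R \\ [-M, M]) for mu = N(mean, 1): the sum of the two tail probabilities."""
    return normal_cdf(-M, mean) + (1.0 - normal_cdf(M, mean))

def find_K(eps):
    """Smallest integer radius M with mu([-M, M]) > 1 - eps for *every* member
    of the family, i.e. one compact set K_eps = [-M, M] witnessing tightness."""
    M = 1.0
    while max(mass_outside(M, m) for m in MEANS) >= eps:
        M += 1.0
    return M
```

Because the means are bounded and the variances fixed, the loop terminates for every [math]\displaystyle{ \varepsilon \gt 0 }[/math]; a family with unbounded means (such as [math]\displaystyle{ N(n, 1) }[/math] for [math]\displaystyle{ n \in \mathbb{N} }[/math]) would make it run forever, mirroring the failure of tightness.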

If a tight collection [math]\displaystyle{ M }[/math] consists of a single measure [math]\displaystyle{ \mu }[/math], then (depending upon the author) [math]\displaystyle{ \mu }[/math] may either be said to be a tight measure or to be an inner regular measure.

If [math]\displaystyle{ Y }[/math] is an [math]\displaystyle{ X }[/math]-valued random variable whose probability distribution on [math]\displaystyle{ X }[/math] is a tight measure then [math]\displaystyle{ Y }[/math] is said to be a separable random variable or a Radon random variable.

Tightness of a collection [math]\displaystyle{ M }[/math] of probability measures is closely related to sequential weak compactness. The family [math]\displaystyle{ M }[/math] is called sequentially weakly compact if every sequence [math]\displaystyle{ \left\{\mu_n\right\} }[/math] from the family has a subsequence that converges weakly to some probability measure [math]\displaystyle{ \mu }[/math]. By Prokhorov's theorem, a family of probability measures on a Polish space is tight if and only if it is sequentially weakly compact.
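As a minimal illustration of weak convergence, integrating a function against a Dirac mass [math]\displaystyle{ \delta_x }[/math] simply evaluates it at [math]\displaystyle{ x }[/math], so [math]\displaystyle{ \textstyle\int f \, d\delta_{1/n} = f(1/n) \to f(0) = \int f \, d\delta_0 }[/math] for every bounded continuous [math]\displaystyle{ f }[/math]; hence [math]\displaystyle{ \delta_{1/n} }[/math] converges weakly to [math]\displaystyle{ \delta_0 }[/math]. A sketch with one arbitrarily chosen test function:

```python
import math

def integrate_against_delta(f, x):
    """The integral of f with respect to the Dirac measure delta_x is just f(x)."""
    return f(x)

# An arbitrary bounded continuous test function.
f = lambda t: math.cos(t) / (1.0 + t * t)

# int f d(delta_{1/n}) for n = 1, ..., 1000, and the weak limit int f d(delta_0).
values = [integrate_against_delta(f, 1.0 / n) for n in range(1, 1001)]
limit = integrate_against_delta(f, 0.0)
```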

Examples

Compact spaces

If [math]\displaystyle{ X }[/math] is a metrisable compact space, then every collection of (possibly complex) measures on [math]\displaystyle{ X }[/math] is tight. This is not necessarily so for non-metrisable compact spaces. If we take the ordinal space [math]\displaystyle{ [0,\omega_1] }[/math] with its order topology, where [math]\displaystyle{ \omega_1 }[/math] is the first uncountable ordinal, then there exists a measure [math]\displaystyle{ \mu }[/math] on it (the Dieudonné measure) that is not inner regular. Therefore, the singleton [math]\displaystyle{ \{\mu\} }[/math] is not tight.

Polish spaces

If [math]\displaystyle{ X }[/math] is a Polish space, then every probability measure on [math]\displaystyle{ X }[/math] is tight. Furthermore, by Prokhorov's theorem, a collection of probability measures on [math]\displaystyle{ X }[/math] is tight if and only if it is precompact in the topology of weak convergence.

A collection of point masses

Consider the real line [math]\displaystyle{ \mathbb{R} }[/math] with its usual Borel topology. Let [math]\displaystyle{ \delta_{x} }[/math] denote the Dirac measure, a unit mass at the point [math]\displaystyle{ x }[/math] in [math]\displaystyle{ \mathbb{R} }[/math]. The collection

[math]\displaystyle{ M_{1} := \{ \delta_{n} | n \in \mathbb{N} \} }[/math]

is not tight, since the compact subsets of [math]\displaystyle{ \mathbb{R} }[/math] are precisely the closed and bounded subsets, and any such set, since it is bounded, has [math]\displaystyle{ \delta_{n} }[/math]-measure zero for large enough [math]\displaystyle{ n }[/math]. On the other hand, the collection

[math]\displaystyle{ M_{2} := \{ \delta_{1 / n} | n \in \mathbb{N} \} }[/math]

is tight: the compact interval [math]\displaystyle{ [0, 1] }[/math] will work as [math]\displaystyle{ K_{\varepsilon} }[/math] for any [math]\displaystyle{ \varepsilon \gt 0 }[/math]. In general, a collection of Dirac delta measures on [math]\displaystyle{ \mathbb{R}^{n} }[/math] is tight if, and only if, the collection of their supports is bounded.
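The two families can be compared directly: a Dirac mass [math]\displaystyle{ \delta_x }[/math] assigns [math]\displaystyle{ [-M, M] }[/math] either full or zero measure according to whether [math]\displaystyle{ |x| \leq M }[/math], so the supremum of [math]\displaystyle{ \delta_x(\mathbb{R} \setminus [-M, M]) }[/math] over a family is 0 exactly when all supports lie in [math]\displaystyle{ [-M, M] }[/math]. A sketch using finite truncations of the two index sets (for the full family [math]\displaystyle{ M_1 }[/math] the supremum is 1 for every [math]\displaystyle{ M }[/math]; the truncation reflects this for all [math]\displaystyle{ M \lt N }[/math]):

```python
# Supremum of delta_x(R \ [-M, M]) over a family of Dirac measures:
# each term is 1 if |x| > M and 0 otherwise.
def sup_tail_mass(supports, M):
    return max((1.0 if abs(x) > M else 0.0) for x in supports)

N = 10_000  # truncation of the index set N, for illustration only
M1_supports = [float(n) for n in range(1, N + 1)]    # supports of delta_n
M2_supports = [1.0 / n for n in range(1, N + 1)]     # supports of delta_{1/n}
```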

A collection of Gaussian measures

Consider [math]\displaystyle{ n }[/math]-dimensional Euclidean space [math]\displaystyle{ \mathbb{R}^{n} }[/math] with its usual Borel topology and σ-algebra. Consider a collection of Gaussian measures

[math]\displaystyle{ \Gamma = \{ \gamma_{i} | i \in I \}, }[/math]

where the measure [math]\displaystyle{ \gamma_{i} }[/math] has expected value (mean) [math]\displaystyle{ m_{i} \in \mathbb{R}^{n} }[/math] and covariance matrix [math]\displaystyle{ C_{i} \in \mathbb{R}^{n \times n} }[/math]. Then the collection [math]\displaystyle{ \Gamma }[/math] is tight if, and only if, the collections [math]\displaystyle{ \{ m_{i} | i \in I \} \subseteq \mathbb{R}^{n} }[/math] and [math]\displaystyle{ \{ C_{i} | i \in I \} \subseteq \mathbb{R}^{n \times n} }[/math] are both bounded.
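The "if" direction can be made quantitative in one dimension with Chebyshev's inequality: if [math]\displaystyle{ |m_i| \leq m_{\max} }[/math] and the variances satisfy [math]\displaystyle{ \sigma_i^2 \leq s_{\max} }[/math], then [math]\displaystyle{ \mathbb{P}(|X_i| \gt m_{\max} + r) \leq \mathbb{P}(|X_i - m_i| \gt r) \leq s_{\max}/r^2 }[/math], so one interval serves as [math]\displaystyle{ K_{\varepsilon} }[/math] for the whole family. A sketch (the Chebyshev bound is crude, since Gaussian tails decay far faster, but it suffices for tightness):

```python
import math

def uniform_K(eps, m_max, s_max):
    """Radius M such that every N(m, s) with |m| <= m_max and s <= s_max puts
    mass > 1 - eps on [-M, M], via Chebyshev's inequality."""
    return m_max + math.sqrt(s_max / eps)

def normal_mass_outside(M, mean, var):
    """Exact mu(R \\ [-M, M]) for mu = N(mean, var)."""
    sd = math.sqrt(var)
    below = 0.5 * math.erfc((M + mean) / (sd * math.sqrt(2.0)))   # P(X < -M)
    above = 0.5 * math.erfc((M - mean) / (sd * math.sqrt(2.0)))   # P(X > M)
    return below + above
```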

Tightness and convergence

Tightness is often a necessary step in proving the weak convergence of a sequence of probability measures, especially when the underlying space is infinite-dimensional.

Exponential tightness

A strengthening of tightness is the concept of exponential tightness, which has applications in large deviations theory. A family of probability measures [math]\displaystyle{ (\mu_{\delta})_{\delta \gt 0} }[/math] on a Hausdorff topological space [math]\displaystyle{ X }[/math] is said to be exponentially tight if, for any [math]\displaystyle{ \varepsilon \gt 0 }[/math], there is a compact subset [math]\displaystyle{ K_{\varepsilon} }[/math] of [math]\displaystyle{ X }[/math] such that

[math]\displaystyle{ \limsup_{\delta \downarrow 0} \delta \log \mu_{\delta} (X \setminus K_{\varepsilon}) \lt - \varepsilon. }[/math]
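For a concrete (hypothetical) family, take [math]\displaystyle{ \mu_{\delta} = N(0, \delta) }[/math]: then [math]\displaystyle{ \mu_{\delta}(\mathbb{R} \setminus [-a, a]) = \operatorname{erfc}(a/\sqrt{2\delta}) }[/math], and a standard Laplace-type computation gives [math]\displaystyle{ \delta \log \mu_{\delta}(\mathbb{R} \setminus [-a, a]) \to -a^2/2 }[/math] as [math]\displaystyle{ \delta \downarrow 0 }[/math], so for any [math]\displaystyle{ \varepsilon \gt 0 }[/math] the interval [math]\displaystyle{ [-a, a] }[/math] with [math]\displaystyle{ a^2 \gt 2\varepsilon }[/math] witnesses exponential tightness. A numerical sketch ([math]\displaystyle{ \delta }[/math] cannot be taken too small, or the tail underflows in double precision):

```python
import math

def rate(delta, a):
    """delta * log mu_delta(R \\ [-a, a]) for mu_delta = N(0, delta)."""
    tail = math.erfc(a / math.sqrt(2.0 * delta))  # exact two-sided tail mass
    return delta * math.log(tail)

# As delta decreases, the rate approaches -a**2/2 = -2.0 from below for a = 2.
rates = [rate(d, 2.0) for d in (0.05, 0.02, 0.01, 0.005)]
```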
