Random measure


In probability theory, a random measure is a measure-valued random element.[1][2] Random measures are used, for example, in the theory of random processes, where they underlie many important point processes such as Poisson point processes and Cox processes.

Definition

Random measures can be defined as transition kernels or as random elements; both definitions are equivalent. For the definitions, let [math]\displaystyle{ E }[/math] be a separable complete metric space and let [math]\displaystyle{ \mathcal E }[/math] be its Borel [math]\displaystyle{ \sigma }[/math]-algebra. (The most common example of a separable complete metric space is [math]\displaystyle{ \R^n }[/math].)

As a transition kernel

A random measure [math]\displaystyle{ \zeta }[/math] is an (almost surely) locally finite transition kernel from an (abstract) probability space [math]\displaystyle{ (\Omega, \mathcal A, P) }[/math] to [math]\displaystyle{ (E, \mathcal E) }[/math].[3]

Being a transition kernel means that

  • For any fixed [math]\displaystyle{ B \in \mathcal E }[/math], the mapping
[math]\displaystyle{ \omega \mapsto \zeta(\omega,B) }[/math]
is measurable from [math]\displaystyle{ (\Omega, \mathcal A) }[/math] to [math]\displaystyle{ ([0,+\infty], \mathcal B([0,+\infty])) }[/math]
  • For every fixed [math]\displaystyle{ \omega \in \Omega }[/math], the mapping
[math]\displaystyle{ B \mapsto \zeta(\omega, B) \quad (B \in \mathcal E) }[/math]
is a measure on [math]\displaystyle{ (E, \mathcal E) }[/math]

Being locally finite means that the measures

[math]\displaystyle{ B \mapsto \zeta(\omega, B) }[/math]

satisfy [math]\displaystyle{ \zeta(\omega,\tilde B) \lt \infty }[/math] for all bounded measurable sets [math]\displaystyle{ \tilde B \in \mathcal E }[/math] and for all [math]\displaystyle{ \omega \in \Omega }[/math] except some [math]\displaystyle{ P }[/math]-null set.
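As an illustrative sketch (not from the source), a homogeneous Poisson process on [math]\displaystyle{ [0,1] }[/math] realizes this two-sided view: for a fixed set [math]\displaystyle{ B }[/math] the count of points is a random variable in [math]\displaystyle{ \omega }[/math], and for a fixed realization [math]\displaystyle{ \omega }[/math] the counts form a measure in [math]\displaystyle{ B }[/math]. The function names below (`sample_poisson_measure`, `zeta`) are our own, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_measure(rate=5.0, rng=rng):
    """Draw one realization omega: the points of a homogeneous
    Poisson process on [0, 1] with intensity `rate`."""
    n = rng.poisson(rate)                     # number of points in [0, 1]
    return np.sort(rng.uniform(0.0, 1.0, size=n))

def zeta(points, a, b):
    """The kernel zeta(omega, B) for B = [a, b): count the points in B.
    For fixed omega this is a measure in B; for fixed B it is a
    random variable in omega."""
    return int(np.sum((points >= a) & (points < b)))

pts = sample_poisson_measure()
# additivity over the disjoint sets [0, 0.5) and [0.5, 1)
assert zeta(pts, 0.0, 1.0) == zeta(pts, 0.0, 0.5) + zeta(pts, 0.5, 1.0)
```

The additivity check above is the measure property in [math]\displaystyle{ B }[/math]; repeating `sample_poisson_measure` varies [math]\displaystyle{ \omega }[/math] and makes each count a random variable.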

In the context of stochastic processes there is the related concept of a stochastic kernel, also called a probability kernel or Markov kernel.

As a random element

Define

[math]\displaystyle{ \tilde{\mathcal M} := \{ \mu \mid \mu \text{ is a measure on } (E, \mathcal E) \} }[/math]

and the subset of locally finite measures by

[math]\displaystyle{ \mathcal M := \{ \mu \in \tilde{\mathcal M} \mid \mu(\tilde B) \lt \infty \text{ for all bounded measurable } \tilde B \in \mathcal E \} }[/math]

For all bounded measurable [math]\displaystyle{ \tilde B }[/math], define the mappings

[math]\displaystyle{ I_{\tilde B } \colon \mu \mapsto \mu(\tilde B) }[/math]

from [math]\displaystyle{ \tilde{\mathcal M} }[/math] to [math]\displaystyle{ \R }[/math]. Let [math]\displaystyle{ \tilde{\mathbb M} }[/math] be the [math]\displaystyle{ \sigma }[/math]-algebra induced by the mappings [math]\displaystyle{ I_{\tilde B } }[/math] on [math]\displaystyle{ \tilde{\mathcal M} }[/math] and [math]\displaystyle{ \mathbb M }[/math] the [math]\displaystyle{ \sigma }[/math]-algebra induced by the mappings [math]\displaystyle{ I_{\tilde B } }[/math] on [math]\displaystyle{ \mathcal M }[/math]. Note that [math]\displaystyle{ \tilde{\mathbb M}|_{\mathcal M}= \mathbb M }[/math].

A random measure is a random element from [math]\displaystyle{ (\Omega, \mathcal A, P) }[/math] to [math]\displaystyle{ (\tilde{\mathcal M}, \tilde{\mathbb M}) }[/math] that almost surely takes values in [math]\displaystyle{ (\mathcal M, \mathbb M) }[/math].[3][4][5]

Basic related concepts

Intensity measure

For a random measure [math]\displaystyle{ \zeta }[/math], the measure [math]\displaystyle{ \operatorname E \zeta }[/math] satisfying

[math]\displaystyle{ \operatorname E \left[ \int f(x) \; \zeta (\mathrm dx )\right] = \int f(x) \; \operatorname E \zeta (\mathrm dx) }[/math]

for every positive measurable function [math]\displaystyle{ f }[/math] is called the intensity measure of [math]\displaystyle{ \zeta }[/math]. The intensity measure exists for every random measure and is an s-finite measure.
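The defining identity can be checked numerically. The sketch below (our own illustration, assuming NumPy) uses a homogeneous Poisson random measure on [math]\displaystyle{ [0,1] }[/math] with rate [math]\displaystyle{ \lambda }[/math], whose intensity measure is [math]\displaystyle{ \lambda }[/math] times Lebesgue measure; for [math]\displaystyle{ f(x)=x^2 }[/math] the right-hand side is [math]\displaystyle{ \lambda/3 }[/math].

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 10.0
f = lambda x: x ** 2

# Monte Carlo estimate of E[ integral f d(zeta) ] over many realizations
trials = 20000
acc = 0.0
for _ in range(trials):
    n = rng.poisson(lam)                     # number of points this realization
    x = rng.uniform(0.0, 1.0, size=n)        # point locations
    acc += f(x).sum()                        # integral of f against the counting measure
lhs = acc / trials

# integral f d(E zeta) = lam * integral_0^1 x^2 dx = lam / 3
rhs = lam / 3.0
assert abs(lhs - rhs) < 0.1
```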

Supporting measure

For a random measure [math]\displaystyle{ \zeta }[/math], the measure [math]\displaystyle{ \nu }[/math] satisfying

[math]\displaystyle{ \int f(x) \; \zeta(\mathrm dx )=0 \text{ a.s.} \quad \text{if and only if} \quad \int f(x) \; \nu (\mathrm dx)=0 }[/math]

for all positive measurable functions [math]\displaystyle{ f }[/math] is called the supporting measure of [math]\displaystyle{ \zeta }[/math]. The supporting measure exists for all random measures and can be chosen to be finite.

Laplace transform

For a random measure [math]\displaystyle{ \zeta }[/math], the Laplace transform is defined as

[math]\displaystyle{ \mathcal L_\zeta(f)= \operatorname E \left[ \exp \left( -\int f(x) \; \zeta (\mathrm dx ) \right) \right] }[/math]

for every positive measurable function [math]\displaystyle{ f }[/math].
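For a Poisson random measure with intensity measure [math]\displaystyle{ \nu }[/math], the Laplace transform has the closed form [math]\displaystyle{ \mathcal L_\zeta(f)=\exp\left(-\int (1-e^{-f(x)})\,\nu(\mathrm dx)\right) }[/math], which makes it easy to test a Monte Carlo estimate. The sketch below (our own illustration, assuming NumPy) takes [math]\displaystyle{ f(x)=x }[/math] on [math]\displaystyle{ [0,1] }[/math] with rate [math]\displaystyle{ \lambda=5 }[/math], where [math]\displaystyle{ \int_0^1 (1-e^{-x})\,\mathrm dx = e^{-1} }[/math].

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 5.0

# Monte Carlo estimate of L_zeta(f) for f(x) = x under a homogeneous
# Poisson random measure on [0, 1] with rate lam
trials = 50000
vals = np.empty(trials)
for i in range(trials):
    n = rng.poisson(lam)
    x = rng.uniform(0.0, 1.0, size=n)
    vals[i] = np.exp(-x.sum())               # exp(- integral f d(zeta))
estimate = vals.mean()

# closed form: exp(- lam * integral_0^1 (1 - e^{-x}) dx) = exp(- lam * e^{-1})
exact = np.exp(-lam * np.exp(-1.0))
assert abs(estimate - exact) < 0.02
```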

Basic properties

Measurability of integrals

For a random measure [math]\displaystyle{ \zeta }[/math], the integrals

[math]\displaystyle{ \int f(x) \zeta(\mathrm dx) }[/math]

and [math]\displaystyle{ \zeta(A) := \int \mathbf 1_A(x) \zeta(\mathrm dx) }[/math]

for positive [math]\displaystyle{ \mathcal E }[/math]-measurable [math]\displaystyle{ f }[/math] are measurable, so they are random variables.

Uniqueness

The distribution of a random measure is uniquely determined by the distributions of

[math]\displaystyle{ \int f(x) \zeta(\mathrm dx) }[/math]

for all continuous functions with compact support [math]\displaystyle{ f }[/math] on [math]\displaystyle{ E }[/math]. For a fixed semiring [math]\displaystyle{ \mathcal I \subset \mathcal E }[/math] that generates [math]\displaystyle{ \mathcal E }[/math] in the sense that [math]\displaystyle{ \sigma(\mathcal I)=\mathcal E }[/math], the distribution of a random measure is also uniquely determined by the integral over all positive simple [math]\displaystyle{ \mathcal I }[/math]-measurable functions [math]\displaystyle{ f }[/math].[6]

Decomposition

In general, a measure [math]\displaystyle{ \mu }[/math] can be decomposed as:

[math]\displaystyle{ \mu=\mu_d + \mu_a = \mu_d + \sum_{n=1}^N \kappa_n \delta_{X_n}, }[/math]

Here [math]\displaystyle{ \mu_d }[/math] is a diffuse measure without atoms, while [math]\displaystyle{ \mu_a }[/math] is a purely atomic measure.
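As a concrete illustration (our own, with arbitrarily chosen atoms), take the diffuse part to be Lebesgue measure on [math]\displaystyle{ [0,1] }[/math] and the atomic part to be two point masses [math]\displaystyle{ \kappa_n \delta_{X_n} }[/math]:

```python
# A mixed measure on [0, 1]: diffuse part mu_d = Lebesgue measure,
# atomic part mu_a = atoms of weight kappa_n at locations x_n
# (the locations and weights here are illustrative values).
atoms = [(0.25, 2.0), (0.75, 0.5)]           # (location X_n, weight kappa_n)

def mu(a, b):
    """mu([a, b)) = mu_d([a, b)) + mu_a([a, b))."""
    diffuse = max(0.0, min(b, 1.0) - max(a, 0.0))     # Lebesgue part
    atomic = sum(k for x, k in atoms if a <= x < b)   # point masses in [a, b)
    return diffuse + atomic

assert mu(0.0, 1.0) == 1.0 + 2.0 + 0.5       # total mass: diffuse + both atoms
assert abs(mu(0.2, 0.3) - 2.1) < 1e-9        # small interval catching the atom at 0.25
```

The atomic part dominates on small intervals containing an atom, while the diffuse part vanishes as the interval shrinks.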

Random counting measure

A random measure of the form:

[math]\displaystyle{ \mu=\sum_{n=1}^N \delta_{X_n}, }[/math]

where [math]\displaystyle{ \delta }[/math] is the Dirac measure and the [math]\displaystyle{ X_n }[/math] are random variables, is called a point process[1][2] or random counting measure. This random measure describes a set of [math]\displaystyle{ N }[/math] particles whose locations are given by the (generally vector-valued) random variables [math]\displaystyle{ X_n }[/math]. The diffuse component [math]\displaystyle{ \mu_d }[/math] is null for a counting measure.

In the formal notation above, a random counting measure is a map from a probability space to the measurable space ([math]\displaystyle{ N_X }[/math], [math]\displaystyle{ \mathfrak{B}(N_X) }[/math]). Here [math]\displaystyle{ N_X }[/math] is the space of all boundedly finite integer-valued measures [math]\displaystyle{ N \in M_X }[/math] (called counting measures).

The definitions of expectation measure, Laplace functional, moment measures and stationarity for random measures follow those of point processes. Random measures are useful in the description and analysis of Monte Carlo methods, such as Monte Carlo numerical quadrature and particle filters.[7]
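The Monte Carlo connection can be sketched with self-normalized importance sampling, where a target distribution is represented by the weighted random measure [math]\displaystyle{ \sum_n w_n \delta_{X_n} }[/math]. The example below (our own illustration, assuming NumPy; the target [math]\displaystyle{ N(1,1) }[/math] and proposal [math]\displaystyle{ N(0,4) }[/math] are arbitrary choices) estimates the target mean by integrating [math]\displaystyle{ f(x)=x }[/math] against that measure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Represent the target N(1, 1) by the random measure sum_n w_n delta_{X_n},
# with samples X_n drawn from the proposal N(0, 4) (std dev 2).
n = 200000
x = rng.normal(0.0, 2.0, size=n)                           # proposal samples
# log target density - log proposal density, up to additive constants
log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * (x / 2.0) ** 2
w = np.exp(log_w - log_w.max())                            # stabilize before exponentiating
w /= w.sum()                                               # self-normalized weights

mean_est = np.sum(w * x)     # integral of f(x) = x against the weighted measure
assert abs(mean_est - 1.0) < 0.05
```

Normalizing constants of the two densities cancel in the self-normalization step, which is why only the log-density kernels are needed.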

References

  1. Kallenberg, O. (1986). Random Measures, 4th edition. Academic Press, New York, London; Akademie-Verlag, Berlin. ISBN 0-12-394960-2. MR854102. An authoritative but rather difficult reference.
  2. Grandell, Jan (1977). "Point processes and random measures". Advances in Applied Probability 9: 502–526. MR0478331. A nice and clear introduction.
  3. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Probability Theory and Stochastic Modelling. 77. Switzerland: Springer. p. 1. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3.
  4. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 526. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6. 
  5. Daley, D. J.; Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes. Probability and its Applications. doi:10.1007/b97277. ISBN 0-387-95541-0. 
  6. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Probability Theory and Stochastic Modelling. 77. Switzerland: Springer. p. 52. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3. 
  7. Crisan, D. (2001). "Particle Filters: A Theoretical Perspective". In Doucet, A.; de Freitas, N.; Gordon, N. (Eds), Sequential Monte Carlo Methods in Practice. Springer. ISBN 0-387-95146-6.