Compound Poisson process

A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate [math]\displaystyle{ \lambda > 0 }[/math] and jump size distribution G, is a process [math]\displaystyle{ \{\,Y(t) : t \geq 0 \,\} }[/math] given by

[math]\displaystyle{ Y(t) = \sum_{i=1}^{N(t)} D_i }[/math]

where [math]\displaystyle{ \{\,N(t) : t \geq 0\,\} }[/math] is the counting process of a Poisson process with rate [math]\displaystyle{ \lambda }[/math], and [math]\displaystyle{ \{\,D_i : i \geq 1\,\} }[/math] are independent and identically distributed random variables with distribution function G, which are also independent of [math]\displaystyle{ \{\,N(t) : t \geq 0\,\}. }[/math]

When the [math]\displaystyle{ D_i }[/math] are non-negative integer-valued random variables, the compound Poisson process is known as a stuttering Poisson process.
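
A sample of [math]\displaystyle{ Y(t) }[/math] can be simulated directly from the definition: draw the Poisson number of jumps up to time t, then add up that many independent draws from G. The Python sketch below does this; the rate lam, the horizon t and the exponential jump distribution with mean jump_mean are illustrative choices, not part of the definition.

import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_sample(t, lam, jump_sampler):
    # One draw of Y(t) = D_1 + ... + D_{N(t)}
    n = rng.poisson(lam * t)          # N(t) ~ Poisson(lambda * t)
    return jump_sampler(n).sum()      # empty sum gives 0 when N(t) = 0

# Illustrative parameters: rate 3, horizon 5, exponential jump sizes with mean 2
t, lam, jump_mean = 5.0, 3.0, 2.0
samples = np.array([
    compound_poisson_sample(t, lam, lambda n: rng.exponential(jump_mean, n))
    for _ in range(100_000)
])
print(samples.mean())                 # close to lambda * t * E(D) = 30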

Properties of the compound Poisson process

The expected value of a compound Poisson process can be calculated using a result known as Wald's equation:

[math]\displaystyle{ \operatorname E(Y(t)) = \operatorname E(D_1 + \cdots + D_{N(t)}) = \operatorname E(N(t))\operatorname E(D_1) = \operatorname E(N(t)) \operatorname E(D) = \lambda t \operatorname E(D). }[/math]
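
For example, with an arrival rate of [math]\displaystyle{ \lambda = 3 }[/math] jumps per unit time, mean jump size [math]\displaystyle{ \operatorname E(D) = 2 }[/math] and horizon [math]\displaystyle{ t = 5 }[/math] (the illustrative values used in the simulation sketch above),

[math]\displaystyle{ \operatorname E(Y(5)) = \lambda t \operatorname E(D) = 3 \cdot 5 \cdot 2 = 30. }[/math]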

Making similar use of the law of total variance, the variance can be calculated as:

[math]\displaystyle{ \begin{align} \operatorname{var}(Y(t)) &= \operatorname E(\operatorname{var}(Y(t)\mid N(t))) + \operatorname{var}(\operatorname E(Y(t)\mid N(t))) \\[5pt] &= \operatorname E(N(t)\operatorname{var}(D)) + \operatorname{var}(N(t) \operatorname E(D)) \\[5pt] &= \operatorname{var}(D) \operatorname E(N(t)) + \operatorname E(D)^2 \operatorname{var}(N(t)) \\[5pt] &= \operatorname{var}(D)\lambda t + \operatorname E(D)^2\lambda t \\[5pt] &= \lambda t(\operatorname{var}(D) + \operatorname E(D)^2) \\[5pt] &= \lambda t \operatorname E(D^2). \end{align} }[/math]
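
These two formulas can be checked informally against the Monte Carlo samples from the sketch above. For exponential jumps with mean 2 one has [math]\displaystyle{ \operatorname E(D^2) = 2 \cdot 2^2 = 8 }[/math], so with [math]\displaystyle{ \lambda = 3 }[/math] and [math]\displaystyle{ t = 5 }[/math] the theoretical values are 30 and 120; the variable names below are the ones introduced in that sketch.

print(lam * t * jump_mean)             # E(Y(t))   = lam * t * E(D)   = 30
print(lam * t * 2 * jump_mean ** 2)    # var(Y(t)) = lam * t * E(D^2) = 120
print(samples.mean(), samples.var())   # Monte Carlo estimates of both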

Lastly, using the law of total probability to condition on [math]\displaystyle{ N(t) }[/math], the moment generating function can be obtained as follows (the sums below are written for integer-valued jump sizes, but the final formula holds for a general jump distribution):

[math]\displaystyle{ \Pr(Y(t)=i) = \sum_n \Pr(Y(t)=i\mid N(t)=n)\Pr(N(t)=n) }[/math]
[math]\displaystyle{ \begin{align} \operatorname E(e^{sY(t)}) & = \sum_i e^{si} \Pr(Y(t)=i) \\[5pt] & = \sum_i e^{si} \sum_{n} \Pr(Y(t)=i\mid N(t)=n)\Pr(N(t)=n) \\[5pt] & = \sum_n \Pr(N(t)=n) \sum_i e^{si} \Pr(Y(t)=i\mid N(t)=n) \\[5pt] & = \sum_n \Pr(N(t)=n) \sum_i e^{si}\Pr(D_1 + D_2 + \cdots + D_n=i) \\[5pt] & = \sum_n \Pr(N(t)=n) M_D(s)^n \\[5pt] & = \sum_n \Pr(N(t)=n) e^{n\ln(M_D(s))} \\[5pt] & = M_{N(t)}(\ln(M_D(s))) \\[5pt] & = e^{\lambda t \left( M_D(s) - 1 \right) }. \end{align} }[/math]
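
As a sanity check, if every jump has size one, i.e. [math]\displaystyle{ D_i \equiv 1 }[/math], then [math]\displaystyle{ Y(t) = N(t) }[/math] and [math]\displaystyle{ M_D(s) = e^s }[/math], so the formula reduces to

[math]\displaystyle{ \operatorname E(e^{sY(t)}) = e^{\lambda t (e^s - 1)}, }[/math]

the moment generating function of a Poisson random variable with mean [math]\displaystyle{ \lambda t }[/math], as expected.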

Exponentiation of measures

Let N, Y, and D be as above. Let μ be the probability measure according to which D is distributed, i.e.

[math]\displaystyle{ \mu(A) = \Pr(D \in A).\, }[/math]

Let [math]\displaystyle{ \delta_0 }[/math] be the degenerate probability distribution putting all of its mass at zero. Then the probability distribution of Y(t) is the measure

[math]\displaystyle{ \exp(\lambda t(\mu - \delta_0))\, }[/math]

where the exponential exp(ν) of a finite signed measure ν on Borel subsets of the real line is defined by

[math]\displaystyle{ \exp(\nu) = \sum_{n=0}^\infty {\nu^{*n} \over n!} }[/math]

and

[math]\displaystyle{ \nu^{*n} = \underbrace{\nu * \cdots *\nu}_{n \text{ factors}} }[/math]

is the n-fold convolution of the measure ν with itself (with [math]\displaystyle{ \nu^{*0} = \delta_0 }[/math] by convention), and the series converges weakly.
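
For integer-valued jump sizes this measure exponential can be evaluated numerically by truncating the series: the n-th term is the n-fold convolution of μ weighted by the Poisson probability of n jumps, which is exactly the conditioning on N(t) used above. The Python sketch below does this for a hypothetical jump distribution with [math]\displaystyle{ \Pr(D=1)=0.4 }[/math] and [math]\displaystyle{ \Pr(D=2)=0.6 }[/math]; the truncation point max_n is an arbitrary choice.

import math
import numpy as np

lam, t = 3.0, 5.0
mu = np.array([0.0, 0.4, 0.6])    # hypothetical jump law on {1, 2}

# exp(lam*t*(mu - delta_0)) = e^{-lam*t} * sum_n (lam*t)^n / n! * mu^{*n}
max_n = 60
dist = np.zeros(2 * max_n + 1)    # truncated support of Y(t): {0, 1, ..., 2*max_n}
conv = np.array([1.0])            # mu^{*0} = delta_0
for n in range(max_n + 1):
    weight = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    dist[:len(conv)] += weight * conv
    conv = np.convolve(conv, mu)  # mu^{*(n+1)}

print(dist[:5])                   # P(Y(t) = 0), ..., P(Y(t) = 4)
print(dist.sum())                 # close to 1 when max_n is large enough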
