Indecomposable distribution

In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable: Z = X + Y. If, further, it can be expressed as the distribution of the sum of two independent identically distributed random variables, then it is divisible: Z = X1 + X2.

Examples

Indecomposable

  • The simplest examples are Bernoulli distributions: if
[math]\displaystyle{ X = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1-p, \end{cases} }[/math]
then the probability distribution of X is indecomposable.
Proof: Given non-constant random variables U and V, so that U assumes at least two values a, b and V assumes at least two values c, d, with a < b and c < d, the sum U + V assumes at least three distinct values: a + c, a + d, b + d, which are distinct since a + c < a + d < b + d (b + c may be equal to a + d, for example if one uses 0, 1 and 0, 1). Thus the sum of two non-constant independent random variables assumes at least three values, so the Bernoulli distribution is not the distribution of such a sum.
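This counting argument is easy to check by brute force. The sketch below (the helper name support_of_sum is mine, for illustration) enumerates the support of U + V for two-point variables and always finds at least three values, so U + V can never be Bernoulli-distributed:
```python
from itertools import product

def support_of_sum(u_vals, v_vals):
    """Support of U + V for independent U, V with the given supports."""
    return sorted({u + v for u, v in product(u_vals, v_vals)})

# If U takes values a < b and V takes values c < d, then
# a + c < a + d < b + d are three distinct values of U + V.
print(support_of_sum([0, 1], [0, 1]))   # [0, 1, 2] -- exactly three values
print(support_of_sum([0, 1], [0, 2]))   # [0, 1, 2, 3] -- four values
```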
  • Suppose a + b + c = 1, a, b, c ≥ 0, and
[math]\displaystyle{ X = \begin{cases} 2 & \text{with probability } a, \\ 1 & \text{with probability } b, \\ 0 & \text{with probability } c. \end{cases} }[/math]
This probability distribution is decomposable (as the distribution of the sum of two independent Bernoulli-distributed random variables) if
[math]\displaystyle{ \sqrt{a} + \sqrt{c} \le 1 \ }[/math]
and otherwise indecomposable. To see this, suppose U and V are independent random variables and U + V has this probability distribution. Then we must have
[math]\displaystyle{ \begin{matrix} U = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1 - p, \end{cases} & \mbox{and} & V = \begin{cases} 1 & \text{with probability } q, \\ 0 & \text{with probability } 1 - q, \end{cases} \end{matrix} }[/math]
for some p, q ∈ [0, 1], by reasoning similar to the Bernoulli case (otherwise the sum U + V would assume more than three values). It follows that
[math]\displaystyle{ a = pq, \, }[/math]
[math]\displaystyle{ c = (1-p)(1-q), \, }[/math]
[math]\displaystyle{ b = 1 - a - c. \, }[/math]
This system of two quadratic equations in two variables p and q has a solution (p, q) ∈ [0, 1]² if and only if
[math]\displaystyle{ \sqrt{a} + \sqrt{c} \le 1. \ }[/math]
Thus, for example, the discrete uniform distribution on the set {0, 1, 2} (a = b = c = 1/3, so that √a + √c ≈ 1.155 > 1) is indecomposable, but the binomial distribution with two trials each having success probability 1/2 (giving a, b, c respectively as 1/4, 1/2, 1/4, so that √a + √c = 1) is decomposable.
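A minimal sketch of this criterion (the function name decompose_three_point is hypothetical): from a = pq and c = (1 − p)(1 − q) one gets p + q = 1 + a − c, so p and q are the roots of t² − (1 + a − c)t + a = 0, and for a valid probability triple these roots are real and lie in [0, 1] precisely when √a + √c ≤ 1.
```python
import math

def decompose_three_point(a, b, c, tol=1e-12):
    """Try to write the distribution with probabilities (c, b, a) at
    (0, 1, 2) as the law of Bernoulli(p) + Bernoulli(q).
    Returns (p, q), or None if the distribution is indecomposable.
    b = 1 - a - c is implied and is not needed below."""
    s = 1 + a - c                 # p + q
    disc = s * s - 4 * a          # nonnegative iff sqrt(a) + sqrt(c) <= 1
    if disc < -tol:
        return None
    root = math.sqrt(max(disc, 0.0))
    return ((s + root) / 2, (s - root) / 2)

print(decompose_three_point(1/4, 1/2, 1/4))   # (0.5, 0.5): binomial(2, 1/2)
print(decompose_three_point(1/3, 1/3, 1/3))   # None: uniform on {0, 1, 2}
```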
  • An absolutely continuous indecomposable distribution: it can be shown that the distribution whose density function is
[math]\displaystyle{ f(x) = {1 \over \sqrt{2\pi\,}} x^2 e^{-x^2/2} }[/math]
is indecomposable.
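As a quick sanity check that f is a genuine probability density (this does not, of course, bear on indecomposability): its integral over the real line equals E[X²] = 1 for a standard normal X, which a short numerical sketch confirms.
```python
import numpy as np
from scipy.integrate import quad

# Density of the absolutely continuous example above.
f = lambda x: x**2 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# The integral equals E[X^2] = 1 for X ~ N(0, 1), so f integrates to 1.
total, _ = quad(f, -np.inf, np.inf)
print(total)  # ~1.0
```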

Decomposable

  • The uniform distribution on the interval [0, 1] is decomposable, since it is the distribution of the sum
[math]\displaystyle{ \sum_{n=1}^\infty {X_n \over 2^n }, }[/math]
where the independent random variables Xn each equal 0 or 1 with probability 1/2 – Xn is the nth digit of the binary expansion.
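This construction is easy to check by simulation. The sketch below truncates the binary expansion at 53 bits (the precision of a double; the truncation depth and sample size are arbitrary choices of mine) and compares the result with the uniform distribution via a Kolmogorov–Smirnov test:
```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)

# U = sum_{n=1}^{N} X_n / 2^n with X_n iid Bernoulli(1/2).
N, samples = 53, 100_000
bits = rng.integers(0, 2, size=(samples, N))
u = bits @ (0.5 ** np.arange(1, N + 1))

print(kstest(u, "uniform"))  # large p-value: consistent with Uniform[0, 1]
```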
  • A sum of indecomposable random variables is, of course, decomposable into the original summands, but it may even turn out to be infinitely divisible. Suppose a random variable Y has a geometric distribution
[math]\displaystyle{ \Pr(Y = n) = (1-p)^n p\, }[/math]
on {0, 1, 2, ...}.
For any positive integer k, there is a sequence of negative-binomially distributed random variables Yj, j = 1, ..., k, such that Y1 + ... + Yk has this geometric distribution: the geometric distribution is the negative binomial distribution with parameters 1 and p, so one may take each Yj to be negative-binomially distributed with parameters 1/k and p. Therefore, this distribution is infinitely divisible.
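A simulation sketch of this divisibility, under my own choice of parameters: the negative binomial NB(r, p) can be sampled via its gamma-Poisson mixture representation, and the sum of k independent NB(1/k, p) draws should again be geometric(p).
```python
import numpy as np

rng = np.random.default_rng(1)
p, k, samples = 0.3, 5, 200_000

# NB(r, p) via its gamma-Poisson mixture:
# lambda ~ Gamma(shape=r, scale=(1 - p) / p), then count ~ Poisson(lambda).
def neg_binomial(r, p, size):
    return rng.poisson(rng.gamma(r, (1 - p) / p, size))

# Sum of k iid NB(1/k, p) variables should be NB(1, p) = geometric(p).
y = sum(neg_binomial(1 / k, p, samples) for _ in range(k))

# Compare empirical Pr(Y = n) with (1 - p)^n * p for small n.
for n in range(5):
    print(n, np.mean(y == n), (1 - p) ** n * p)
```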
On the other hand, let Dn be the nth binary digit of Y, for n ≥ 0. Then the Dn's are independent, because Pr(Y = m) is proportional to (1 − p)^m, which factorizes into a product of contributions from the individual binary digits of m, and
[math]\displaystyle{ Y = \sum_{n=0}^\infty 2^n D_n, }[/math]
and each term in this sum is indecomposable, being a constant multiple of a Bernoulli-distributed random variable.
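The factorization gives Pr(Dn = 1) = q^(2^n) / (1 + q^(2^n)) with q = 1 − p, which the following sketch (parameter values are mine) checks empirically, along with the independence of the first two digits:
```python
import numpy as np

rng = np.random.default_rng(2)
p, q, samples = 0.3, 0.7, 200_000

# numpy's geometric is supported on {1, 2, ...}; shift to {0, 1, 2, ...}.
y = rng.geometric(p, samples) - 1

# Each binary digit D_n is Bernoulli with Pr(D_n = 1) = q^(2^n) / (1 + q^(2^n)).
for n in range(4):
    d_n = (y >> n) & 1
    print(n, d_n.mean(), q ** 2**n / (1 + q ** 2**n))

# Independence check: Pr(D_0 = 1, D_1 = 1) should equal the product.
d0, d1 = y & 1, (y >> 1) & 1
print(np.mean(d0 & d1), d0.mean() * d1.mean())
```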

Related concepts

At the other extreme from indecomposability is infinite divisibility: a random variable Z is infinitely divisible if, for every natural number n, it can be written as the sum of n independent identically distributed random variables.

  • Cramér's theorem shows that, while the normal distribution is infinitely divisible, it can be decomposed only into normally distributed summands: if X + Y is normal with X and Y independent, then X and Y are each normal.
  • Cochran's theorem shows that the terms in a decomposition of a sum of squares of normal random variables into sums of squares of linear combinations of these variables always have independent chi-squared distributions.
