Characterization of probability distributions


In mathematics, a characterization theorem states that a particular object – a function, a space, etc. – is the only one that possesses the properties specified in the theorem. A characterization of a probability distribution accordingly states that it is the only probability distribution that satisfies specified conditions. More precisely, the model of characterization of probability distributions was described by V. M. Zolotarev[1] as follows. On a probability space we define the space [math]\displaystyle{ \mathcal{X}=\{ X \} }[/math] of random variables with values in a measurable metric space [math]\displaystyle{ (U,d_{u}) }[/math] and the space [math]\displaystyle{ \mathcal{Y}=\{ Y \} }[/math] of random variables with values in a measurable metric space [math]\displaystyle{ (V,d_{v}) }[/math]. By characterizations of probability distributions we understand the general problem of describing some set [math]\displaystyle{ \mathcal{C} }[/math] in the space [math]\displaystyle{ \mathcal{X} }[/math] by extracting the sets [math]\displaystyle{ \mathcal{A} \subseteq \mathcal{X} }[/math] and [math]\displaystyle{ \mathcal{B} \subseteq \mathcal{Y} }[/math] which describe the properties of the random variables [math]\displaystyle{ X \in\mathcal{A} }[/math] and of their images [math]\displaystyle{ Y=\mathbf{F}X \in \mathcal{B} }[/math], obtained by means of a specially chosen mapping [math]\displaystyle{ \mathbf{F}:\mathcal{X} \to \mathcal{Y} }[/math].
Describing the properties of the random variables [math]\displaystyle{ X }[/math] and of their images [math]\displaystyle{ Y=\mathbf{F}X }[/math] amounts to indicating the set [math]\displaystyle{ \mathcal{A} \subseteq \mathcal{X} }[/math] from which [math]\displaystyle{ X }[/math] must be taken and the set [math]\displaystyle{ \mathcal{B} \subseteq \mathcal{Y} }[/math] into which its image must fall. The set of interest therefore takes the following form:

[math]\displaystyle{ X\in\mathcal{A},\ \mathbf{F} X \in \mathcal{B} \Leftrightarrow X \in \mathcal{C}, }[/math] i.e. [math]\displaystyle{ \mathcal{C} = \mathbf{F}^{-1} \mathcal{B}, }[/math]

where [math]\displaystyle{ \mathbf{F}^{-1} \mathcal{B} }[/math] denotes the complete inverse image of [math]\displaystyle{ \mathcal{B} }[/math] in [math]\displaystyle{ \mathcal{A} }[/math]. This is the general model of the characterization of probability distributions. Some examples of characterization theorems:

  • The assumption that two linear (or non-linear) statistics are identically distributed (or independent, or have constant regression, and so on) can be used to characterize various populations.[2] For example, according to George Pólya's characterization theorem,[3] if [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] are independent identically distributed random variables with finite variance, then the statistics [math]\displaystyle{ S_1 = X_1 }[/math] and [math]\displaystyle{ S_2 = \cfrac{X_1 + X_2}{\sqrt{2}} }[/math] are identically distributed if and only if [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ X_2 }[/math] have a normal distribution with zero mean (a simulation sketch appears after this list). In this case
[math]\displaystyle{ \mathbf{F} = \begin{bmatrix} 1 & 0 \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix} }[/math],
[math]\displaystyle{ \mathcal{A} }[/math] is a set of random two-dimensional column-vectors with independent identically distributed components, [math]\displaystyle{ \mathcal{B} }[/math] is a set of random two-dimensional column-vectors with identically distributed components, and [math]\displaystyle{ \mathcal{C} }[/math] is a set of random two-dimensional column-vectors with independent identically distributed normal components.
  • According to the generalized Pólya characterization theorem (which drops the condition of finite variance[2]), if [math]\displaystyle{ X_1 , X_2 , \dots, X_n }[/math] are non-degenerate independent identically distributed random variables, the statistics [math]\displaystyle{ X_1 }[/math] and [math]\displaystyle{ a_1X_1 + a_2X_2 + \dots + a_nX_n }[/math] are identically distributed, and [math]\displaystyle{ \left | a_j \right | \lt 1,\ a_1^2 + a_2^2 + \dots + a_n^2 = 1 }[/math], then [math]\displaystyle{ X_j }[/math] is a normal random variable for every [math]\displaystyle{ j=1,2, \dots, n }[/math]. In this case
[math]\displaystyle{ \mathbf{F} = \begin{bmatrix} 1 & 0 & \dots & 0\\ a_1 & a_2 & \dots & a_n \end{bmatrix} }[/math],
[math]\displaystyle{ \mathcal{A} }[/math] is a set of random n-dimensional column-vectors with independent identically distributed components, [math]\displaystyle{ \mathcal{B} }[/math] is a set of random two-dimensional column-vectors with identically distributed components, and [math]\displaystyle{ \mathcal{C} }[/math] is a set of random n-dimensional column-vectors with independent identically distributed normal components.[4]
  • All probability distributions on the half-line [math]\displaystyle{ \left [ 0, \infty \right ) }[/math] that are memoryless are exponential distributions (an empirical check appears after this list). "Memoryless" means that if [math]\displaystyle{ X }[/math] is a random variable with such a distribution, then for any numbers [math]\displaystyle{ 0 \lt y \lt x }[/math],
[math]\displaystyle{ \Pr(X \gt x\mid X\gt y) = \Pr(X\gt x-y) }[/math].
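
Pólya's characterization can be probed numerically. The following is a minimal Monte Carlo sketch (an illustration added here, not part of the cited results; NumPy/SciPy, the sample size, and the uniform alternative are arbitrary choices). It measures the two-sample Kolmogorov–Smirnov distance between the empirical distributions of [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math], which should be only Monte Carlo noise for zero-mean normal data and clearly positive otherwise.

<syntaxhighlight lang="python">
# Monte Carlo check of Pólya's characterization: S1 = X1 and S2 = (X1 + X2)/sqrt(2)
# are identically distributed iff the underlying law is normal with zero mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000  # sample size (arbitrary choice)

def polya_gap(sampler):
    """Two-sample KS distance between samples of S1 = X1 and S2 = (X1 + X2)/sqrt(2)."""
    x1, x2 = sampler(n), sampler(n)   # two independent i.i.d. samples
    s2 = (x1 + x2) / np.sqrt(2)
    return stats.ks_2samp(x1, s2).statistic

# Zero-mean normal data: the gap is only Monte Carlo noise, of order 1/sqrt(n).
print("normal :", polya_gap(lambda m: rng.standard_normal(m)))
# Centered uniform data: S2 is triangular rather than uniform, so the gap stays large.
print("uniform:", polya_gap(lambda m: rng.uniform(-1.0, 1.0, m)))
</syntaxhighlight>

Replacing the two-term statistic with an n-term linear statistic [math]\displaystyle{ a_1X_1 + \dots + a_nX_n }[/math], [math]\displaystyle{ a_1^2 + \dots + a_n^2 = 1 }[/math], gives the analogous check for the generalized theorem of the second example.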

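In the same spirit, here is a small empirical check of the memoryless property (again an added sketch, not from the cited sources; the cutoffs [math]\displaystyle{ x, y }[/math] and the half-normal alternative are arbitrary choices):

<syntaxhighlight lang="python">
# Empirical check of Pr(X > x | X > y) = Pr(X > x - y) for 0 < y < x.
import numpy as np

rng = np.random.default_rng(1)
n, x, y = 1_000_000, 2.0, 0.5  # sample size and cutoffs (arbitrary choices)

def memoryless_check(sample):
    lhs = np.mean(sample[sample > y] > x)  # estimate of Pr(X > x | X > y)
    rhs = np.mean(sample > x - y)          # estimate of Pr(X > x - y)
    return lhs, rhs

# Exponential: both estimates approach exp(-1.5) ~ 0.223, as memorylessness predicts.
print("exponential:", memoryless_check(rng.exponential(1.0, n)))
# Half-normal: the two probabilities differ, so the property fails.
print("half-normal:", memoryless_check(np.abs(rng.standard_normal(n))))
</syntaxhighlight>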

In practice, the conditions of characterization theorems can be verified only with some error [math]\displaystyle{ \epsilon }[/math], i.e., only to a certain degree of accuracy.[5] Such a situation is observed, for instance, when a sample of finite size is considered. That is why the following natural question arises. Suppose that the conditions of a characterization theorem are fulfilled not exactly but only approximately. May we assert that the conclusion of the theorem is also fulfilled approximately? Theorems in which problems of this kind are considered are called stability characterizations of probability distributions (the sketch below probes this question empirically).
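
The following sketch (an assumed empirical illustration, not one of the stability theorems themselves) probes the question with Student t data: as the number of degrees of freedom grows, the Pólya condition from the first example is violated by an ever smaller [math]\displaystyle{ \epsilon }[/math], and the sample is correspondingly close to a normal law.

<syntaxhighlight lang="python">
# Empirical look at stability: for t-distributed data the Pólya condition holds
# only approximately, and the law is approximately normal; both errors shrink together.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000  # sample size (arbitrary choice)

for df in (3, 10, 100):
    x1, x2 = rng.standard_t(df, n), rng.standard_t(df, n)
    s2 = (x1 + x2) / np.sqrt(2)
    eps = stats.ks_2samp(x1, s2).statistic   # error in the hypothesis of the theorem
    sigma = np.sqrt(df / (df - 2))           # standard deviation of t with df > 2
    dist = stats.kstest(x1, "norm", args=(0.0, sigma)).statistic  # error in the conclusion
    print(f"df={df:3d}  condition error={eps:.4f}  distance to normal={dist:.4f}")
</syntaxhighlight>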


References