Maximum entropy probability distribution
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
Definition of entropy and differential entropy
If [math]\displaystyle{ X }[/math] is a continuous random variable with probability density [math]\displaystyle{ p(x) }[/math], then the differential entropy of [math]\displaystyle{ \ X\ }[/math] is defined as[1][2][3]
- [math]\displaystyle{ H(X) ~=~ - \int_{-\infty}^\infty p(x)\ \log p(x)\ \mathrm{d}\,\! x ~. }[/math]
If [math]\displaystyle{ \ X\ }[/math] is a discrete random variable with distribution given by
- [math]\displaystyle{ \operatorname{Pr}\{\ X=x_k \} = p_k \qquad ~ \mbox{ for } ~ \quad k = 1,\ 2,\ \ldots ~ }[/math]
then the entropy of [math]\displaystyle{ \ X\ }[/math] is defined as
- [math]\displaystyle{ H(X) = - \sum_{k\ge 1}\ p_k\ \log p_k ~. }[/math]
The indeterminate term [math]\displaystyle{ \ p(x)\ \log p(x)\ }[/math] is taken to be zero whenever [math]\displaystyle{ \ p(x) = 0\ , }[/math] consistent with the limit [math]\displaystyle{ \ \lim_{p \to 0^{+}} p \log p = 0 ~. }[/math]
This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing [math]\displaystyle{ \ H(X)\ }[/math] will also maximize the more general forms.
The base of the logarithm is not important, as long as the same one is used consistently: change of base merely rescales the entropy. Information theorists may prefer base 2 in order to express the entropy in bits; mathematicians and physicists often prefer the natural logarithm, giving the entropy in units of nats.
However, the chosen measure [math]\displaystyle{ \ \mathrm{d}\,\! x\ }[/math] is crucial, even though the typical use of the Lebesgue measure is often defended as a "natural" choice: Which measure is chosen determines the entropy and the consequent maximum entropy distribution.
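As a concrete numerical illustration of the two definitions above, the following minimal sketch (assuming Python with NumPy; the biased coin and the standard normal density are arbitrary example choices) computes a discrete entropy in nats and bits and approximates a differential entropy by integration on a grid.

```python
import numpy as np

# Discrete entropy of an assumed biased coin, p = (0.9, 0.1):
# the choice of logarithm base only rescales the result.
p = np.array([0.9, 0.1])
H_nats = -np.sum(p * np.log(p))    # entropy in nats
H_bits = -np.sum(p * np.log2(p))   # entropy in bits
print(H_nats, H_bits, H_bits * np.log(2))   # first and last agree

# Differential entropy of an assumed density (standard normal),
# approximating -∫ p(x) log p(x) dx on a grid.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
q = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(-np.sum(q * np.log(q)) * dx)          # ≈ 0.5*log(2*pi*e) ≈ 1.4189
```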
Distributions with measured constants
Many statistical distributions of applicable interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints.
Continuous case
Suppose [math]\displaystyle{ \ S\ }[/math] is a continuous, closed subset of the real numbers [math]\displaystyle{ \ \mathbb{R}\ }[/math] and we choose to specify [math]\displaystyle{ \ n\ }[/math] measurable functions [math]\displaystyle{ \ f_1,\ \cdots,\ f_n\ }[/math] and [math]\displaystyle{ \ n\ }[/math] numbers [math]\displaystyle{ \ a_1,\ \ldots,\ a_n ~. }[/math] We consider the class [math]\displaystyle{ \ C\ }[/math] of all real-valued random variables which are supported on [math]\displaystyle{ \ S\ }[/math] (i.e. whose density function is zero outside of [math]\displaystyle{ \ S\ }[/math]) and which satisfy the [math]\displaystyle{ \ n\ }[/math] moment conditions:
- [math]\displaystyle{ \mathbb{E} \{\ f_j(X)\ \}\ \geq a_j \qquad ~\mbox{ for }~ \quad j=1,\ \ldots,\ n }[/math]
If there is a member in [math]\displaystyle{ \ C\ }[/math] whose density function is positive everywhere in [math]\displaystyle{ \ S\ , }[/math] and if there exists a maximal entropy distribution for [math]\displaystyle{ \ C\ , }[/math] then its probability density [math]\displaystyle{ \ p(x)\ }[/math] has the following form:
- [math]\displaystyle{ p(x) = \exp \left(\ \sum_{j=0}^n\ \lambda_j f_j(x)\ \right) \qquad ~\mbox{ for all }~ \quad x \in S }[/math]
where we assume that [math]\displaystyle{ \ f_0(x) = 1 ~. }[/math] The constant [math]\displaystyle{ \ \lambda_0\ }[/math] and the [math]\displaystyle{ \ n\ }[/math] Lagrange multipliers [math]\displaystyle{ \ \boldsymbol\lambda = (\lambda_1,\ \ldots,\ \lambda_n)\ }[/math] solve the constrained optimization problem with [math]\displaystyle{ \ a_0 = 1\ }[/math] (which ensures that [math]\displaystyle{ \ p\ }[/math] integrates to unity):[4]
- [math]\displaystyle{ \ \max_{\ \lambda_0;\ \boldsymbol\lambda\ }\ \left\{\ \sum_{j=0}^n \lambda_ja_j - \int\ \exp\left( \sum_{j=0}^n \lambda_jf_j(x) \right)\ \mathrm{d}\,\! x\ \right\} \qquad ~\mathrm{ subject\ to }~ \quad \boldsymbol\lambda \geq \mathbf{0}\ }[/math]
Using the Karush–Kuhn–Tucker conditions, it can be shown that the optimization problem has a unique solution because the objective function in the optimization is concave in [math]\displaystyle{ \ \boldsymbol\lambda ~. }[/math]
Note that when the moment constraints are equalities (instead of inequalities), that is,
- [math]\displaystyle{ \ \mathbb{E} \{\ f_j(X)\ \} ~=~ a_j \qquad ~\mbox{ for }~ \quad j=1,\ \ldots,\ n\ , }[/math]
then the constraint condition [math]\displaystyle{ \ \boldsymbol\lambda \geq \mathbf{0}\ }[/math] can be dropped, which makes optimization over the Lagrange multipliers unconstrained.
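For equality constraints, this unconstrained dual can be maximised numerically. The sketch below is a minimal illustration (assuming Python with NumPy and SciPy, a grid approximation of an assumed support S = [0, 10], and a single assumed moment constraint E[X] = 2); it recovers a density of the stated form exp(λ₀ + λ₁x), here a truncated-exponential shape.

```python
import numpy as np
from scipy.optimize import minimize

# Grid approximation of the assumed support S = [0, 10].
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]
f = [np.ones_like(x), x]           # f_0(x) = 1, f_1(x) = x
a = np.array([1.0, 2.0])           # a_0 = 1 (normalisation), a_1 = E[X] = 2 (assumed)

def negative_dual(lam):
    # Negative of  sum_j lam_j a_j  -  ∫ exp( sum_j lam_j f_j(x) ) dx,
    # with the integral replaced by a Riemann sum on the grid.
    expo = np.exp(sum(l * fj for l, fj in zip(lam, f)))
    return -(lam @ a - np.sum(expo) * dx)

lam = minimize(negative_dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(sum(l * fj for l, fj in zip(lam, f)))   # maximum entropy density on the grid

print("multipliers:", lam)
print("normalisation:", np.sum(p) * dx)   # ≈ 1
print("mean:", np.sum(x * p) * dx)        # ≈ 2
```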
Discrete case
Suppose [math]\displaystyle{ \ S = \{\ x_1,\ x_2,\ \ldots\ \}\ }[/math] is a (finite or infinite) discrete subset of the reals, and that we choose to specify [math]\displaystyle{ \ n\ }[/math] functions [math]\displaystyle{ \ f_1 ,\ \ldots\ , f_n\ }[/math] and [math]\displaystyle{ \ n\ }[/math] numbers [math]\displaystyle{ \ a_1 ,\ \ldots\ , a_n ~. }[/math] We consider the class [math]\displaystyle{ \ C\ }[/math] of all discrete random variables [math]\displaystyle{ \ X\ }[/math] which are supported on [math]\displaystyle{ \ S\ }[/math] and which satisfy the [math]\displaystyle{ \ n\ }[/math] moment conditions
- [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ f_j(X)\ \} \geq a_j \qquad ~\mbox{ for }~ \quad j=1,\ \ldots\ , n\ }[/math]
If there exists a member of class [math]\displaystyle{ \ C\ }[/math] which assigns positive probability to all members of [math]\displaystyle{ \ S\ }[/math] and if there exists a maximum entropy distribution for [math]\displaystyle{ \ C\ , }[/math] then this distribution has the following shape:
- [math]\displaystyle{ \ \operatorname{\mathbb P}\{\ X = x_k\ \} = \exp\left(\ \sum_{j=0}^n \lambda_j\ f_j(x_k)\ \right) \qquad ~\mbox{ for }~ \quad k = 1,\ 2,\ \ldots\ }[/math]
where we assume that [math]\displaystyle{ f_0=1 }[/math] and the constants [math]\displaystyle{ \ \lambda_0, \; \boldsymbol\lambda \equiv ( \lambda_1 ,\ \ldots\ , \lambda_n )\ }[/math] solve the constrained optimization problem with [math]\displaystyle{ \ a_0 = 1 ~: }[/math][5]
- [math]\displaystyle{ \ \max_{ \lambda_0;\ \boldsymbol\lambda } \left\{\ \sum_{j=0}^n\ \lambda_j\ a_j - \sum_{k \geq 1}\ \exp\left(\ \sum_{j=0}^n\ \lambda_j\ f_j(x_k)\ \right)\ \right\} \qquad ~\mathrm{ subject\ to }~ \quad \boldsymbol \lambda \geq \mathbf{0}\ }[/math]
Again as above, if the moment conditions are equalities (instead of inequalities), then the constraint condition [math]\displaystyle{ \ \boldsymbol \lambda \geq \mathbf{0}\ }[/math] is not present in the optimization.
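For equality constraints the discrete dual can likewise be maximised directly. The sketch below is a minimal illustration (assuming Python with NumPy and SciPy, an assumed support S = {0, 1, ..., 9}, and a single assumed constraint E[X] = 2); the solution has the stated form exp(λ₀ + λ₁x_k), i.e. a truncated geometric-type distribution.

```python
import numpy as np
from scipy.optimize import minimize

xk = np.arange(10)                                     # assumed support {0,...,9}
f = np.vstack([np.ones_like(xk), xk]).astype(float)    # rows: f_0 = 1, f_1 = x
a = np.array([1.0, 2.0])                               # a_0 = 1, a_1 = E[X] = 2 (assumed)

def negative_dual(lam):
    # Negative of  sum_j lam_j a_j  -  sum_k exp( sum_j lam_j f_j(x_k) )
    return -(lam @ a - np.exp(lam @ f).sum())

lam = minimize(negative_dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(lam @ f)                 # maximum entropy probabilities p_k

print(p.round(4))
print(p.sum(), (xk * p).sum())      # ≈ 1 and ≈ 2
```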
Proof in the case of equality constraints
In the case of equality constraints, this theorem is proved with the calculus of variations and Lagrange multipliers. The constraints can be written as
- [math]\displaystyle{ \int_{-\infty}^{\infty}f_j(x)p(x)dx=a_j \qquad \mbox{ for } j=1,\ \ldots,\ n }[/math]
We consider the functional
- [math]\displaystyle{ J(p)=\int_{-\infty}^{\infty} p(x)\ln{p(x)}dx-\eta_0\left(\int_{-\infty}^{\infty} p(x)dx-1\right)-\sum_{j=1}^{n}\lambda_j\left(\int_{-\infty}^{\infty} f_j(x)p(x)dx-a_j\right) }[/math]
where [math]\displaystyle{ \eta_0 }[/math] and [math]\displaystyle{ \lambda_j, j\geq 1 }[/math] are the Lagrange multipliers. The zeroth constraint ensures the second axiom of probability, i.e. that the density integrates to one. The remaining constraints fix the expectations of the functions [math]\displaystyle{ f_j }[/math] at the given constants [math]\displaystyle{ a_j }[/math] for [math]\displaystyle{ j = 1, \ldots, n }[/math]. The entropy attains an extremum when the functional derivative is equal to zero:
- [math]\displaystyle{ \frac{\delta J}{\delta p}\left(p\right)=\ln{p(x)}+1-\eta_0-\sum_{j=1}^{n}\lambda_j f_j(x)=0 }[/math]
Therefore, the extremal entropy probability distribution in this case must be of the form ([math]\displaystyle{ \lambda_0:=\eta_0-1 }[/math]),
- [math]\displaystyle{ p(x)=e^{-1+\eta_0}\cdot e^{\sum_{j=1}^{n}\lambda_j f_j(x)} = \exp\left(\sum_{j=0}^{n}\lambda_j f_j(x)\right) \;, }[/math]
remembering that [math]\displaystyle{ f_0(x) = 1 }[/math]. It can be verified that this is the maximal solution by checking that the variation around this solution is always negative.
Uniqueness of the maximum
Suppose [math]\displaystyle{ \ p,\ p'\ }[/math] are distributions satisfying the expectation-constraints. Letting [math]\displaystyle{ \ \alpha\in \bigl( 0, 1 \bigr)\ }[/math] and considering the distribution [math]\displaystyle{ \ q = \alpha \cdot p + (1 - \alpha) \cdot p'\ , }[/math] it is clear that this distribution satisfies the expectation-constraints and furthermore has support [math]\displaystyle{ \ \mathrm{supp}(q) = \mathrm{supp}(p) \cup \mathrm{supp}(p') ~. }[/math] From basic facts about entropy, it holds that [math]\displaystyle{ \ \mathcal{H}(q) \geq \alpha\ \mathcal{H}(p) + (1 - \alpha)\ \mathcal{H}(p') ~. }[/math] Taking limits [math]\displaystyle{ \ \alpha\longrightarrow 1\ }[/math] and [math]\displaystyle{ \ \alpha\longrightarrow 0\ , }[/math] respectively, yields [math]\displaystyle{ \ \mathcal{H}(q) \geq \mathcal{H}(p), \mathcal{H}(p') ~. }[/math]
It follows that a distribution satisfying the expectation-constraints and maximising entropy must necessarily have full support; i.e., the distribution is almost everywhere strictly positive. It follows that the maximising distribution must be an interior point in the space of distributions satisfying the expectation-constraints, that is, it must be a local extremum. Thus it suffices to show that the local extremum is unique, in order to show both that the entropy-maximising distribution is unique and that the local extremum is the global maximum.
Suppose [math]\displaystyle{ \ p,\ p'\ }[/math] are local extrema. Reformulating the above computations, these are characterised by parameters [math]\displaystyle{ \ \vec\lambda,\ \vec\lambda' \in \mathbb{R}^n\ }[/math] via [math]\displaystyle{ \ p(x) = \frac{\ \exp{ \Bigl\langle\vec\lambda, \vec{f}(x) \Bigr\rangle }\ }{\ C( \vec\lambda )\ }\ }[/math] and similarly for [math]\displaystyle{ \ p'\ , }[/math] where [math]\displaystyle{ \ C(\vec{\lambda}) = \int_{x \in \mathbb{R} } \exp{ \Bigl\langle \vec\lambda , \vec{f}(x) \Bigr\rangle }\ {\mathrm d}\,\! x ~. }[/math] We now note a series of identities: Via the satisfaction of the expectation-constraints and utilising gradients / directional derivatives, one has
[math]\displaystyle{ \ { D\ \log C(\cdot) }\ {\bigg|}_{ \vec\lambda } = \tfrac{ D\ C(\cdot) }{ C(\cdot) }\ {\Bigg|}_{\vec\lambda } = \mathbb{E}_{p} \left\{\ \vec{f}(X)\ \right\} = \vec{a}\ }[/math] and similarly for [math]\displaystyle{ \ \vec\lambda' ~.\ }[/math] Letting [math]\displaystyle{ \ u = \vec\lambda'-\vec\lambda \in \mathbb{R}^n\ }[/math] one obtains:
- [math]\displaystyle{ 0 = \Bigl\langle u, \vec{a} - \vec{a} \Bigr\rangle = D_u \log C(\cdot)\ {\Big|}_{ \vec\lambda' } - D_u \log C(\cdot)\ {\Big|}_{ \vec\lambda } = D_u^2 \log C(\cdot)\ {\Big|}_{\vec\gamma } }[/math]
where [math]\displaystyle{ \ \vec\gamma = \theta\ \vec\lambda + (1 - \theta)\vec\lambda'\ }[/math] for some [math]\displaystyle{ \ \theta \in \bigl( 0, 1 \bigr) ~. }[/math] Computing further one has
- [math]\displaystyle{ \begin{array}{rcl} 0 & = & D_u^2\ \log C(\cdot)\ {\bigg|}_{ \vec\gamma } \\ & = & D_u\ \left(\frac{\ \ D_u\ C(\cdot)\ }{ C(\cdot) } \right)\ {\Bigg|}_{ \vec\gamma } \\ & = & \frac{\ D_u^2 C(\cdot)\ }{ C(\cdot) }\ {\Bigg|}_{ \vec\gamma } - \frac{ (\ D_u C(\cdot) )^2\ }{ C(\cdot)^2 }\ {\Bigg|}_{ \vec\gamma } \\ & = & \mathbb{E}_q \left\{\ { \Bigl\langle u, \vec{f}(X) \Bigr\rangle }^2\ \right\} - \left( \mathbb{E}_q \left\{\ \Bigl\langle u, \vec{f}(X) \Bigr\rangle\ \right\} \right)^2 \\ {} \\ & = & \operatorname{Var}_{q}\{\ \Bigl\langle u,\vec{f}(X) \Bigr\rangle\ \} \\ {} \end{array} }[/math]
where [math]\displaystyle{ \ q\ }[/math] is similar to the distribution above, only parameterised by [math]\displaystyle{ \ \vec\gamma ~. }[/math] Assuming that no non-trivial linear combination of the observables is almost everywhere (a.e.) constant (which e.g. holds if the observables are independent and not a.e. constant), it holds that [math]\displaystyle{ \ \langle u,\vec{f}(X)\rangle\ }[/math] has non-zero variance, unless [math]\displaystyle{ \ u = 0 ~. }[/math] By the above equation it is thus clear that the latter must be the case. Hence [math]\displaystyle{ \ \vec\lambda' - \vec\lambda = u = 0\ , }[/math] so the parameters characterising the local extrema [math]\displaystyle{ \ p,\ p'\ }[/math] are identical, which means that the distributions themselves are identical. Thus, the local extremum is unique and, by the above discussion, the maximum is unique – provided a local extremum actually exists.
Caveats
Note that not all classes of distributions contain a maximum entropy distribution. It is possible that a class contains distributions of arbitrarily large entropy (e.g. the class of all continuous distributions on R with mean 0 but arbitrary standard deviation), or that the entropies are bounded above but there is no distribution which attains the maximal entropy.[lower-alpha 1] It is also possible that the expected value restrictions for the class C force the probability distribution to be zero in certain subsets of S. In that case our theorem does not apply, but one can work around this by shrinking the set S.
Examples
Every probability distribution is trivially a maximum entropy probability distribution under the constraint that the distribution has its own entropy. To see this, rewrite the density as [math]\displaystyle{ p(x)=\exp{(\ln{p(x)})} }[/math] and compare to the expression of the theorem above. By choosing [math]\displaystyle{ \ln{p(x)} \rightarrow f(x) }[/math] to be the measurable function and
- [math]\displaystyle{ \int \exp{(f(x))} f(x) dx=-H }[/math]
to be the constant, [math]\displaystyle{ p(x) }[/math] is the maximum entropy probability distribution under the constraint
- [math]\displaystyle{ \int p(x)f(x)dx=-H }[/math].
Nontrivial examples are distributions that are subject to multiple constraints that are different from the assignment of the entropy. These are often found by starting with the same procedure [math]\displaystyle{ \ln{p(x)} \rightarrow f(x) }[/math] and finding that [math]\displaystyle{ f(x) }[/math] can be separated into parts.
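A quick numerical check of the trivial construction, assuming Python with NumPy/SciPy and taking the standard normal as the arbitrary example density: with f(x) = ln p(x), the constraint value ∫ p(x) f(x) dx equals minus the differential entropy.

```python
import numpy as np
from scipy import stats

dist = stats.norm()                 # arbitrary example density p(x)
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
p = dist.pdf(x)
f = np.log(p)                       # choose f(x) = ln p(x)
constraint_value = np.sum(p * f) * dx             # ∫ p(x) f(x) dx
print(constraint_value, -float(dist.entropy()))   # both ≈ -1.4189
```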
A table of examples of maximum entropy distributions is given in Lisman (1972)[6] and Park & Bera (2009).[7]
Uniform and piecewise uniform distributions
The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval. This uniform density can be related to Laplace's principle of indifference, sometimes called the principle of insufficient reason. More generally, if we are given a subdivision a=a0 < a1 < ... < ak = b of the interval [a,b] and probabilities p1,...,pk that add up to one, then we can consider the class of all continuous distributions such that
- [math]\displaystyle{ \operatorname{Pr}(a_{j-1}\le X \lt a_j) = p_j \quad \mbox{ for } j=1,\ldots,k }[/math]
The density of the maximum entropy distribution for this class is constant on each of the intervals [aj-1,aj). The uniform distribution on the finite set {x1,...,xn} (which assigns a probability of 1/n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set.
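The piecewise-uniform claim can be checked numerically for a small assumed example (Python with NumPy; interval [0, 2] split at 1 with assumed interval masses 0.3 and 0.7): a density with the same interval masses but non-constant within an interval has lower differential entropy than the piecewise-constant one.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 400_001)
dx = x[1] - x[0]

# Piecewise-constant density with assumed masses 0.3 on [0,1) and 0.7 on [1,2].
piecewise = np.where(x < 1.0, 0.3, 0.7)

# Alternative density with the same interval masses but linear inside
# each interval (one of many possible competitors).
alt = np.where(x < 1.0, 0.6 * x, 1.4 * (x - 1.0))

def diff_entropy(p):
    q = np.where(p > 0, p, 1.0)       # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(q)) * dx

print(diff_entropy(piecewise), diff_entropy(alt))   # the first is larger
```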
Positive and specified mean: the exponential distribution
The exponential distribution, for which the density function is
- [math]\displaystyle{ p(x|\lambda) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0, \\ 0 & x \lt 0, \end{cases} }[/math]
is the maximum entropy distribution among all continuous distributions supported in [0,∞) that have a specified mean of 1/λ.
In the case of distributions supported on [0,∞), the maximum entropy distribution depends on relationships between the first and second moments. In specific cases, it may be the exponential distribution, or may be another distribution, or may be undefinable.[8]
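A small numerical comparison of the exponential-distribution claim above, assuming Python with SciPy and an assumed common mean of 2: among a few distributions on [0, ∞) with that mean, the exponential has the largest differential entropy, 1 + ln(mean).

```python
import numpy as np
from scipy import stats

m = 2.0   # assumed common mean
candidates = {
    "exponential": stats.expon(scale=m),
    "gamma(k=2)":  stats.gamma(a=2.0, scale=m / 2.0),
    "half-normal": stats.halfnorm(scale=m * np.sqrt(np.pi / 2.0)),
}
for name, dist in candidates.items():
    print(f"{name:12s} mean={float(dist.mean()):.3f} "
          f"entropy={float(dist.entropy()):.4f} nats")
print("upper bound 1 + ln(m) =", 1.0 + np.log(m))
```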
Specified mean and variance: the normal distribution
The normal distribution N(μ,σ2), for which the density function is
- [math]\displaystyle{ p(x| \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} }, }[/math]
has maximum entropy among all real-valued distributions supported on (−∞,∞) with a specified variance σ2 (a particular moment). The same is true when the mean μ and the variance σ2 are specified (the first two moments), since entropy is translation invariant on (−∞,∞). Therefore, the assumption of normality imposes the minimal prior structural constraint beyond these moments. (See the differential entropy article for a derivation.)
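The same kind of numerical spot-check (assuming Python with SciPy and an assumed common standard deviation of 1.5) shows the normal beating a few other real-valued distributions with the same variance; the maximum value is ½ ln(2πeσ²).

```python
import numpy as np
from scipy import stats

sigma = 1.5   # assumed common standard deviation
candidates = {
    "normal":   stats.norm(scale=sigma),
    "laplace":  stats.laplace(scale=sigma / np.sqrt(2.0)),           # variance 2b^2 = sigma^2
    "logistic": stats.logistic(scale=sigma * np.sqrt(3.0) / np.pi),  # variance s^2 pi^2 / 3 = sigma^2
}
for name, dist in candidates.items():
    print(f"{name:9s} var={float(dist.var()):.3f} "
          f"entropy={float(dist.entropy()):.4f} nats")
print("upper bound 0.5*ln(2*pi*e*sigma^2) =",
      0.5 * np.log(2 * np.pi * np.e * sigma**2))
```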
Discrete distributions with specified mean
Among all the discrete distributions supported on the set {x1,...,xn} with a specified mean μ, the maximum entropy distribution has the following shape:
- [math]\displaystyle{ \operatorname{Pr}(X=x_k) = Cr^{x_k} \quad\mbox{ for } k=1,\ldots, n }[/math]
where the positive constants C and r can be determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ.
For example, suppose a large number N of dice are thrown, and you are told that the sum of all the shown numbers is S. Based on this information alone, what would be a reasonable assumption for the number of dice showing 1, 2, ..., 6? This is an instance of the situation considered above, with {x1,...,x6} = {1,...,6} and μ = S/N.
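A minimal numerical solution of the dice question (assuming Python with NumPy/SciPy and an assumed observed average of μ = S/N = 4.5): the mean of C·r^k on {1,...,6} is monotone increasing in r, so a bracketed root search determines r, after which C follows from normalisation.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)
mu = 4.5                          # assumed observed average S/N

def mean_given_r(r):
    w = r ** faces
    return (faces * w).sum() / w.sum()

# mean_given_r increases from 1 (r -> 0) to 6 (r -> infinity),
# so the root of mean_given_r(r) - mu is bracketed.
r = brentq(lambda r: mean_given_r(r) - mu, 1e-9, 1e9)
C = 1.0 / (r ** faces).sum()
p = C * r ** faces

print("r =", r, " C =", C)
print("probabilities:", p.round(4), " mean:", (faces * p).sum())
```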
Finally, among all the discrete distributions supported on the infinite set [math]\displaystyle{ \{x_1, x_2,...\} }[/math] with mean μ, the maximum entropy distribution has the shape:
- [math]\displaystyle{ \operatorname{Pr}(X=x_k) = Cr^{x_k} \quad\mbox{ for } k=1,2,\ldots , }[/math]
where again the constants C and r are determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, in the case that xk = k, this gives
- [math]\displaystyle{ C = \frac{1}{\mu -1} , \quad\quad r = \frac{\mu - 1}{\mu} , }[/math]
so that the respective maximum entropy distribution is the geometric distribution.
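A quick check of these closed-form constants, assuming Python with NumPy/SciPy and an assumed mean μ = 4: the probabilities C·r^k coincide with the geometric distribution with success probability 1/μ.

```python
import numpy as np
from scipy import stats

mu = 4.0                                   # assumed mean
C, r = 1.0 / (mu - 1.0), (mu - 1.0) / mu   # constants from the formula above
k = np.arange(1, 9)
maxent = C * r ** k
geometric = stats.geom(p=1.0 / mu).pmf(k)  # geometric with mean mu on {1,2,...}
print(np.allclose(maxent, geometric))      # True
```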
Circular random variables
For a continuous random variable [math]\displaystyle{ \theta_i }[/math] distributed about the unit circle, the Von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified[9] or, equivalently, the circular mean and circular variance are specified.
When the mean and variance of the angles [math]\displaystyle{ \theta_i }[/math] modulo [math]\displaystyle{ 2\pi }[/math] are specified, the wrapped normal distribution maximizes the entropy.[9]
Maximizer for specified mean, variance and skew
There exists an upper bound on the entropy of continuous random variables on [math]\displaystyle{ \mathbb R }[/math] with a specified mean, variance, and skew. However, there is no distribution which achieves this upper bound, because [math]\displaystyle{ p(x) = c\exp{(\lambda_1x+\lambda_2x^2+\lambda_3x^3)} }[/math] is unbounded when [math]\displaystyle{ \lambda_3 \neq 0 }[/math] (see Cover & Thomas (2006: chapter 12)).
However, the maximum entropy is ε-achievable: a distribution's entropy can be arbitrarily close to the upper bound. Start with a normal distribution of the specified mean and variance. To introduce a positive skew, perturb the normal distribution upward by a small amount at a value many σ larger than the mean. The skewness, being proportional to the third moment, will be affected more than the lower order moments.
This is a special case of the general case in which the exponential of any odd-order polynomial in x will be unbounded on [math]\displaystyle{ \mathbb R }[/math]. For example, [math]\displaystyle{ c e^{\lambda x} }[/math] will likewise be unbounded on [math]\displaystyle{ \mathbb R }[/math], but when the support is limited to a bounded or semi-bounded interval the upper entropy bound may be achieved (e.g. if x lies in the interval [0,∞) and λ < 0, the exponential distribution will result).
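A minimal numerical illustration of the perturbation argument above (plain Python/NumPy arithmetic; the perturbation sizes and locations are assumed for illustration): mixing a standard normal with a point mass of weight ε = 1/x₀³ at x₀ keeps the third-moment contribution of order one, while the induced changes in mean and variance vanish as x₀ grows.

```python
import numpy as np

# Mixture: (1 - eps) * N(0, 1)  +  eps * (point mass at x0), with eps = 1/x0**3
# so that the third raw moment of the perturbation stays of order one.
for x0 in [10.0, 30.0, 100.0]:
    eps = 1.0 / x0**3
    m1 = eps * x0                          # first raw moment
    m2 = (1.0 - eps) * 1.0 + eps * x0**2   # second raw moment (normal part has variance 1)
    m3 = eps * x0**3                       # third raw moment (normal part contributes 0)
    var = m2 - m1**2
    skew = (m3 - 3.0 * m1 * m2 + 2.0 * m1**3) / var**1.5
    print(f"x0={x0:6.1f}  mean={m1:.5f}  var={var:.5f}  skew={skew:.4f}")
# The mean and variance converge back to 0 and 1 while the skewness does not vanish.
```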
Maximizer for specified mean and deviation risk measure
Every distribution with log-concave density is a maximal entropy distribution with specified mean μ and deviation risk measure D.[10]
In particular, the maximal entropy distribution with specified mean [math]\displaystyle{ \ E(X) \equiv \mu\ }[/math] and deviation [math]\displaystyle{ \ D(X) \equiv d\ }[/math] is:
- The normal distribution [math]\displaystyle{ \ \mathcal{N} \left( \mu, d^2 \right)\ , }[/math] if [math]\displaystyle{ \ D(X) = \sqrt{\ \operatorname{\mathbb E} \left\{\ \left( X - \mu \right)^2\ \right\}\ }\ }[/math] is the standard deviation;
- The Laplace distribution, if [math]\displaystyle{ \; D(X) = \operatorname{\mathbb E} \left\{\ \left|\ X - \mu \right|\ \right\}\ }[/math] is the average absolute deviation;[6]
- The distribution with density of the form [math]\displaystyle{ ~ f(x) = c\ \exp \left(\ a\ x + b\ \left[ x - \mu \right]_{-}^2\ \right) ~ }[/math] if [math]\displaystyle{ ~ D(X) = \sqrt{\ \operatorname{\mathbb E} \left\{\ \left[ X - \mu \right]_{-}^2\ \right\} ~} ~ }[/math] is the standard lower semi-deviation, where [math]\displaystyle{ \; a, b, c \; }[/math] are constants and the function [math]\displaystyle{ \ \left[ y \right]_{-} \equiv \min \left\{\ 0,\ y\ \right\} ~\ \mbox{ for any } ~\ y \in \mathbb{R}\ }[/math] returns only the negative values of its argument, otherwise zero.[10]
Other examples
In the table below, each listed distribution maximizes the entropy for a particular set of functional constraints listed in the third column, and the constraint that [math]\displaystyle{ \ x\ }[/math] be included in the support of the probability density, which is listed in the fourth column.[6][7]
Several listed examples (Bernoulli, geometric, exponential, Laplace, Pareto) are trivially true, because their associated constraints are equivalent to the assignment of their entropy. They are included anyway because their constraint is related to a common or easily measured quantity.
For reference, [math]\displaystyle{ \ \Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1}\ {\mathrm d}t\ }[/math] is the gamma function, [math]\displaystyle{ \psi(x) = \frac{\mathrm d}{\ {\mathrm d} x\ } \ln\Gamma(x) = \frac{\Gamma'(x)}{\ \Gamma(x)\ }\ }[/math] is the digamma function, [math]\displaystyle{ \ B(p,q) = \frac{\ \Gamma(p)\ \Gamma(q)\ }{\Gamma(p + q)}\ }[/math] is the beta function, and [math]\displaystyle{ \ \gamma_{\mathsf E}\ }[/math] is the Euler-Mascheroni constant.
Distribution name | Probability density / mass function | Maximum Entropy constraint | Support |
---|---|---|---|
Uniform (discrete) | [math]\displaystyle{ \ f(k) = \frac{1}{b-a+1}\ }[/math] | None | [math]\displaystyle{ \ \{a,a+1,...,b-1,b\}\ }[/math] |
Uniform (continuous) | [math]\displaystyle{ \ f(x) = \frac{1}{b-a}\ }[/math] | None | [math]\displaystyle{ \ [a,b]\ }[/math] |
Bernoulli | [math]\displaystyle{ \ f(k) = p^k(1-p)^{1-k}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ K\ \} = p\ }[/math] | [math]\displaystyle{ \ \{0,1\}\ }[/math] |
Geometric | [math]\displaystyle{ \ f(k) = (1-p)^{k-1}\ p\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ K\ \} = \frac{1}{p}\ }[/math] | [math]\displaystyle{ \ \mathbb{N} \smallsetminus \left\{0\right\} = \{1,2,3,...\}\ }[/math] |
Exponential | [math]\displaystyle{ \ f(x) = \lambda \exp\left(-\lambda x\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = \frac{1}{\lambda}\ }[/math] | [math]\displaystyle{ \ [0,\infty)\ }[/math] |
Laplace | [math]\displaystyle{ \ f(x) = \frac{1}{2b} \exp\left(-\frac{|x - \mu|}{b}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ |X - \mu|\ \} = b\ }[/math] | [math]\displaystyle{ \ (-\infty,\infty)\ }[/math] |
Asymmetric Laplace | [math]\displaystyle{ \ f(x)=\frac{\ \lambda\ \exp\bigl( -(x-m)\ \lambda\ s\ \kappa^s\bigr)\ }{ \bigl( \kappa + \frac{1}{\kappa} \bigr) }\ }[/math] where [math]\displaystyle{ ~ s \equiv \sgn(x - m) \ }[/math] |
[math]\displaystyle{ \ \operatorname{\mathbb E}\{\ (X - m)\ s\ \kappa^s\ \} = \frac{1}{\lambda}\ }[/math] | [math]\displaystyle{ \ (-\infty,\infty)\ }[/math] |
Pareto | [math]\displaystyle{ \ f(x) = \frac{\ \alpha\ x_m^\alpha\ }{\ x^{\alpha + 1}\ }\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \frac{1}{\alpha} + \ln(x_m)\ }[/math] | [math]\displaystyle{ \ [x_m,\infty)\ }[/math] |
Normal | [math]\displaystyle{ \ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \}\ = \mu\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ ( X - \mu )^2\ \} = \sigma^2\ }[/math] |
[math]\displaystyle{ \ (-\infty,\infty)\ }[/math] |
Truncated normal | (see article) | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \}\ = \mu_{\mathsf T}\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ (X-\mu_{\mathsf T})^2\ \} = \sigma_{\mathsf T}^2\ }[/math] |
[math]\displaystyle{ \ [a,b]\ }[/math] |
von Mises | [math]\displaystyle{ \ f(\theta) = \frac{1}{2\pi I_0(\kappa)} \exp{(\kappa \cos{(\theta-\mu)})}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \cos\Theta\ \} = \frac{I_1(\kappa)}{I_0(\kappa)}\cos\mu\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \sin\Theta\ \} = \frac{I_1(\kappa)}{I_0(\kappa)}\sin\mu\ }[/math] |
[math]\displaystyle{ \ [0,2\pi)\ }[/math] |
Rayleigh | [math]\displaystyle{ \ f(x) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X^2\ \}\ = 2\sigma^2\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \frac{\ln(2\sigma^2)-\gamma_\mathrm{E}}{2}\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Beta | [math]\displaystyle{ \ f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)} }[/math] for [math]\displaystyle{ 0 \leq x \leq 1\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \psi(\alpha)-\psi(\alpha+\beta)\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln( 1 - X )\ \} = \psi(\beta )-\psi(\alpha+\beta)\ }[/math] |
[math]\displaystyle{ \ [0,1]\ }[/math] |
Cauchy | [math]\displaystyle{ \ f(x) = \frac{1}{\pi(1+x^2)}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln( 1 + X^2 )\ \} = 2\ln 2\ }[/math] | [math]\displaystyle{ \ (-\infty,\infty)\ }[/math] |
Chi | [math]\displaystyle{ \ f(x) = \frac{2}{2^{k/2} \Gamma(k/2)} x^{k-1} \exp\left(-\frac{x^2}{2}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X^2\ \} = k\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \frac{1}{2}\left[\psi\left(\frac{k}{2}\right)\!+\!\ln(2)\right]\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Chi-squared | [math]\displaystyle{ \ f(x) = \frac{1}{2^{k/2} \Gamma(k/2)} x^{\frac{k}{2}\!-\!1} \exp\left(-\frac{x}{2}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = k\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \psi\left(\frac{k}{2}\right)+\ln(2)\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Erlang | [math]\displaystyle{ \ f(x) = \frac{\lambda^k}{(k-1)!} x^{k-1} \exp(-\lambda x)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = k/\lambda\ , ~ }[/math] [math]\displaystyle{ \ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \psi(k)-\ln(\lambda)\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Gamma | [math]\displaystyle{ \ f(x) = \frac{\ x^{k - 1} \exp\left( -\frac{x}{\ \theta}\ \right)\ }{ \theta^k\ \Gamma(k)}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = k\ \theta\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = \psi(k) + \ln \theta \ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Lognormal | [math]\displaystyle{ \ f(x) = \frac{ 1 }{\ \sigma\ x \sqrt{2\pi\ }\ } \exp\left(-\frac{\; (\ln x - \mu)^2\ }{ 2\sigma^2 }\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X \ \} = \mu\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ (\ln(X) - \mu)^2\ \} = \sigma^2\ }[/math] |
[math]\displaystyle{ \ (0,\infty)\ }[/math] |
Maxwell–Boltzmann | [math]\displaystyle{ \ f(x) = \frac{ 1 }{\; a^3\ }\sqrt{ \frac{\ 2\ }{\pi}\ }\ x^{2}\exp\left(-\frac{\ x^2}{\ 2a^2}\right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X^2\ \} = 3a^2\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln X\ \} = 1 + \ln\left(\frac{a}{\sqrt{2}}\right) - \frac{\gamma_\mathrm{E}}{2}\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Weibull | [math]\displaystyle{ \ f(x) = \frac{ k }{\; \lambda^k\ }\ x^{k-1}\ \exp\left(-\frac{\ x^k }{\ \lambda^k } \right)\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X^k\ \} = \lambda^k\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{E}\{\ \ln X\ \} = \ln(\lambda)-\frac{\gamma_\mathrm{E}}{k}\ }[/math] |
[math]\displaystyle{ \ [0,\infty)\ }[/math] |
Multivariate normal | [math]\displaystyle{ \ f_X(\vec{x}) =\ }[/math] [math]\displaystyle{ \ \frac{\ \exp \left( -\frac{\ 1\ }{ 2 }\ \left[ \vec{x} - \vec{\mu} \right]^\top \Sigma^{-1} \left[ \vec{x} - \vec{\mu} \right] \right)\ }{\ \sqrt{ (2\pi)^N \bigl| \Sigma \bigr| \;}\ }\ }[/math] |
[math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \vec{X}\ \} = \vec{\mu}\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ (\vec{X} - \vec{\mu})(\vec{X} - \vec{\mu})^\top\ \} = \Sigma\ }[/math] |
[math]\displaystyle{ \ \mathbb{R}^n\ }[/math] |
Binomial | [math]\displaystyle{ \ f(k) = {n \choose k} p^k (1-p)^{n-k}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = \mu\ , ~ }[/math] [math]\displaystyle{ \ f \in\ }[/math] n-generalized binomial distribution[11] |
[math]\displaystyle{ \left\{0, {\ldots}, n\right\} }[/math] |
Poisson | [math]\displaystyle{ \ f(k) = \frac{\lambda^k \exp(-\lambda)}{k!}\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = \lambda\ , ~ }[/math] [math]\displaystyle{ \ f \in \infty\ }[/math]-generalized binomial distribution[11] |
[math]\displaystyle{ \ \mathbb{N}=\left\{0,1,{\ldots}\right\}\ }[/math] |
Logistic | [math]\displaystyle{ \ f(x) = \frac{ e^{-x} }{\; \left( 1 + e^{-x} \right)^2\ }\ = \frac{e^{+x} }{\; \left( e^{+x} + 1 \right)^2\ }\ }[/math] | [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ X\ \} = 0\ , ~ }[/math] [math]\displaystyle{ \ \operatorname{\mathbb E}\{\ \ln \left( 1 + e^{-X} \right)\ \} = 1\ }[/math] |
[math]\displaystyle{ \ (-\infty,\infty)\ }[/math] |
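The constraint column can be spot-checked numerically. The sketch below (assuming Python with SciPy and arbitrarily chosen parameter values) verifies the E[ln X] entries of the gamma and Rayleigh rows by numerical integration.

```python
import numpy as np
from scipy import stats
from scipy.special import psi   # digamma function

k, theta = 3.0, 2.0             # assumed gamma parameters
lhs = stats.gamma.expect(np.log, args=(k,), scale=theta)   # E[ln X] by quadrature
rhs = psi(k) + np.log(theta)                               # table entry
print(lhs, rhs)                                            # ≈ equal

sigma = 1.3                     # assumed Rayleigh scale
lhs = stats.rayleigh.expect(np.log, scale=sigma)
rhs = (np.log(2.0 * sigma**2) - np.euler_gamma) / 2.0
print(lhs, rhs)                                            # ≈ equal
```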
The maximum entropy principle can be used to upper bound the entropy of statistical mixtures.[12]
See also
- Exponential family
- Gibbs measure
- Partition function (mathematics)
- Maximal entropy random walk - maximizing entropy rate for a graph
Notes
- ↑ For example, the class of all continuous distributions X on R with E(X) = 0 and E(X2) = E(X3) = 1 (see Cover, Ch 12).
Citations
- ↑ Williams, D. (2001). Weighing the Odds. Cambridge University Press. pp. 197-199. ISBN 0-521-00618-X.
- ↑ Bernardo, J.M.; Smith, A.F.M. (2000). Bayesian Theory. Wiley. pp. 209, 366. ISBN 0-471-49464-X.
- ↑ O'Hagan, A. (1994), Bayesian Inference. Kendall's Advanced Theory of Statistics. 2B. Edward Arnold. section 5.40. ISBN 0-340-52922-9.
- ↑ Botev, Z.I.; Kroese, D.P. (2011). "The generalized cross entropy method, with applications to probability density estimation". Methodology and Computing in Applied Probability 13 (1): 1–27. doi:10.1007/s11009-009-9133-7. http://espace.library.uq.edu.au/view/UQ:200564/UQ200564_preprint.pdf.
- ↑ Botev, Z.I.; Kroese, D.P. (2008). "Non-asymptotic bandwidth selection for density estimation of discrete data". Methodology and Computing in Applied Probability 10 (3): 435. doi:10.1007/s11009-007-9057-zv.
- ↑ 6.0 6.1 6.2 Lisman, J. H. C.; van Zuylen, M. C. A. (1972). "Note on the generation of most probable frequency distributions". Statistica Neerlandica 26 (1): 19–23. doi:10.1111/j.1467-9574.1972.tb00152.x.
- ↑ 7.0 7.1 Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics 150 (2): 219–230. doi:10.1016/j.jeconom.2008.12.014. http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf. Retrieved 2011-06-02.
- ↑ Dowson, D.; Wragg, A. (September 1973). "Maximum-entropy distributions having prescribed first and second moments". IEEE Transactions on Information Theory 19 (5): 689–693. doi:10.1109/tit.1973.1055060. ISSN 0018-9448.
- ↑ 9.0 9.1 Jammalamadaka, S. Rao; SenGupta, A. (2001). Topics in circular statistics. New Jersey: World Scientific. ISBN 978-981-02-3778-3. https://books.google.com/books?id=sKqWMGqQXQkC&q=Jammalamadaka+Topics+in+circular. Retrieved 2011-05-15.
- ↑ 10.0 10.1 Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael (2009). "Maximum entropy principle with general deviation measures". Mathematics of Operations Research 34 (2): 445–467. doi:10.1287/moor.1090.0377. https://www.researchgate.net/publication/220442393_Maximum_Entropy_Principle_with_General_Deviation_Measures.
- ↑ 11.0 11.1 Harremös, Peter (2001). "Binomial and Poisson distributions as maximum entropy distributions". IEEE Transactions on Information Theory 47 (5): 2039–2041. doi:10.1109/18.930936.
- ↑ Nielsen, Frank; Nock, Richard (2017). "MaxEnt upper bounds for the differential entropy of univariate continuous distributions". IEEE Signal Processing Letters (IEEE) 24 (4): 402–406. doi:10.1109/LSP.2017.2666792. Bibcode: 2017ISPL...24..402N.
References
- Cover, T. M.; Thomas, J. A. (2006). "Chapter 12, Maximum Entropy". Elements of Information Theory (2 ed.). Wiley. ISBN 978-0471241959. https://archive.org/download/ElementsOfInformationTheory2ndEd/Wiley_-_2006_-_Elements_of_Information_Theory_2nd_Ed.pdf.
- F. Nielsen, R. Nock (2017), MaxEnt upper bounds for the differential entropy of univariate continuous distributions, IEEE Signal Processing Letters, 24(4), 402-406
- I. J. Taneja (2001), Generalized Information Measures and Their Applications. Chapter 1
- Nader Ebrahimi, Ehsan S. Soofi, Refik Soyer (2008), "Multivariate maximum entropy identification, transformation, and dependence", Journal of Multivariate Analysis 99: 1217–1231, doi:10.1016/j.jmva.2007.08.004
Original source: https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution