Truncated normal distribution

Probability density function
[Figure: TnormPDF.png] Probability density function of the truncated normal distribution for several parameter sets. In all cases, a = −10 and b = 10. Black: μ = −8, σ = 2; blue: μ = 0, σ = 2; red: μ = 9, σ = 10; orange: μ = 0, σ = 10.
Cumulative distribution function
[Figure: TnormCDF.svg] Cumulative distribution function of the truncated normal distribution for the same parameter sets. In all cases, a = −10 and b = 10. Black: μ = −8, σ = 2; blue: μ = 0, σ = 2; red: μ = 9, σ = 10; orange: μ = 0, σ = 10.
Notation [math]\displaystyle{ \xi=\frac{x-\mu}{\sigma},\ \alpha=\frac{a-\mu}{\sigma},\ \beta=\frac{b-\mu}{\sigma} }[/math]
[math]\displaystyle{ Z = \Phi(\beta)-\Phi(\alpha) }[/math]
Parameters [math]\displaystyle{ \mu \in \mathbb{R} }[/math]
[math]\displaystyle{ \sigma^2 \geq 0 }[/math] (but see definition)
[math]\displaystyle{ a \in \mathbb{R} }[/math] — minimum value of [math]\displaystyle{ x }[/math]
[math]\displaystyle{ b \in \mathbb{R} }[/math] — maximum value of [math]\displaystyle{ x }[/math] ([math]\displaystyle{ b \gt a }[/math])
Support [math]\displaystyle{ x \in [a, b] }[/math]
PDF [math]\displaystyle{ f(x;\mu,\sigma, a,b) = \frac{\varphi(\xi)}{\sigma Z}\, }[/math][1]
CDF [math]\displaystyle{ F(x;\mu,\sigma, a,b) = \frac{\Phi(\xi) - \Phi(\alpha)}{Z} }[/math]
Mean [math]\displaystyle{ \mu + \frac{\varphi(\alpha)-\varphi(\beta)}{Z}\sigma }[/math]
Median [math]\displaystyle{ \mu + \Phi^{-1}\left(\frac{\Phi(\alpha)+\Phi(\beta)}{2}\right) \sigma }[/math]
Mode [math]\displaystyle{ \left\{\begin{array}{ll}a, & \mathrm{if}\ \mu\lt a \\ \mu, & \mathrm{if}\ a\le\mu\le b\\ b, & \mathrm{if}\ \mu\gt b\end{array}\right. }[/math]
Variance [math]\displaystyle{ \sigma^2\left[1-\frac{\beta\varphi(\beta)-\alpha\varphi(\alpha)}{Z} -\left(\frac{\varphi(\alpha)-\varphi(\beta)}{Z}\right)^2\right] }[/math]
Entropy [math]\displaystyle{ \ln(\sqrt{2 \pi e} \sigma Z) + \frac{\alpha\varphi(\alpha)-\beta\varphi(\beta)}{2Z} }[/math]
MGF [math]\displaystyle{ e^{\mu t + \sigma^2 t^2 / 2} \left[ \frac{ \Phi(\beta- \sigma t) - \Phi(\alpha - \sigma t) }{\Phi(\beta) - \Phi(\alpha) } \right] }[/math]

In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics.

Definitions

Suppose [math]\displaystyle{ X }[/math] has a normal distribution with mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math] and lies within the interval [math]\displaystyle{ (a,b), \text{with} \; -\infty \leq a \lt b \leq \infty }[/math]. Then [math]\displaystyle{ X }[/math] conditional on [math]\displaystyle{ a \lt X \lt b }[/math] has a truncated normal distribution.

Its probability density function, [math]\displaystyle{ f }[/math], for [math]\displaystyle{ a \leq x \leq b }[/math], is given by

[math]\displaystyle{ f(x;\mu,\sigma,a,b) = \frac{1}{\sigma}\,\frac{\varphi(\frac{x - \mu}{\sigma})}{\Phi(\frac{b - \mu}{\sigma}) - \Phi(\frac{a - \mu}{\sigma}) } }[/math]

and by [math]\displaystyle{ f=0 }[/math] otherwise.

Here, [math]\displaystyle{ \varphi(\xi)=\frac{1}{\sqrt{2 \pi}}\exp\left(-\frac{1}{2}\xi^2\right) }[/math] is the probability density function of the standard normal distribution and [math]\displaystyle{ \Phi(\cdot) }[/math] is its cumulative distribution function [math]\displaystyle{ \Phi(x) = \frac{1}{2} \left( 1+\operatorname{erf}(x/\sqrt{2}) \right). }[/math] By definition, if [math]\displaystyle{ b=\infty }[/math], then [math]\displaystyle{ \Phi\left(\tfrac{b - \mu}{\sigma}\right) =1 }[/math], and similarly, if [math]\displaystyle{ a = -\infty }[/math], then [math]\displaystyle{ \Phi\left(\tfrac{a - \mu}{\sigma}\right) = 0 }[/math].
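
For a concrete check of this definition, the density can be evaluated directly from the formula and compared with a library implementation. The following is a minimal sketch assuming Python with NumPy and SciPy (the parameter values are arbitrary illustrations, not from the source); note that scipy.stats.truncnorm parameterizes the bounds in standardized form, i.e. by α and β rather than by a and b.

    import numpy as np
    from scipy.stats import norm, truncnorm

    mu, sigma, a, b = 0.5, 2.0, -1.0, 3.0             # illustrative parameters
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = norm.cdf(beta) - norm.cdf(alpha)              # normalizing constant Phi(beta) - Phi(alpha)

    def truncated_normal_pdf(x):
        """f(x; mu, sigma, a, b) from the formula above; zero outside [a, b]."""
        x = np.asarray(x, dtype=float)
        inside = (x >= a) & (x <= b)
        return np.where(inside, norm.pdf((x - mu) / sigma) / (sigma * Z), 0.0)

    x = np.linspace(a, b, 5)
    print(truncated_normal_pdf(x))
    print(truncnorm.pdf(x, alpha, beta, loc=mu, scale=sigma))   # should match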

The above formulae show that when [math]\displaystyle{ -\infty\lt a\lt b\lt +\infty }[/math], the scale parameter [math]\displaystyle{ \sigma^2 }[/math] of the truncated normal distribution is allowed to assume negative values. The parameter [math]\displaystyle{ \sigma }[/math] is in this case imaginary, but the function [math]\displaystyle{ f }[/math] is nevertheless real, positive, and normalizable. The scale parameter [math]\displaystyle{ \sigma^2 }[/math] of the untruncated normal distribution must be positive, because the distribution would otherwise not be normalizable. The doubly truncated normal distribution, on the other hand, can in principle have a negative scale parameter (which is distinct from the variance; see the summary formulae), because no such integrability problems arise on a bounded domain. In this case the distribution cannot be interpreted as an untruncated normal conditional on [math]\displaystyle{ a \lt X \lt b }[/math], but it can still be interpreted as a maximum-entropy distribution with the first and second moments as constraints, and it has an additional peculiar feature: it presents two local maxima instead of one, located at [math]\displaystyle{ x=a }[/math] and [math]\displaystyle{ x=b }[/math].

Properties

The truncated normal is one of two possible maximum entropy probability distributions for a fixed mean and variance constrained to the interval [a,b], the other being the truncated U.[2] Truncated normals with fixed support form an exponential family. Nielsen[3] reported closed-form formulae for computing the Kullback–Leibler divergence and the Bhattacharyya distance between two truncated normal distributions when the support of the first distribution is nested within the support of the second.

Moments

If the random variable has been truncated only from below, some probability mass has been shifted to higher values, giving a first-order stochastically dominating distribution and hence increasing the mean to a value higher than the mean [math]\displaystyle{ \mu }[/math] of the original normal distribution. Likewise, if the random variable has been truncated only from above, the truncated distribution has a mean less than [math]\displaystyle{ \mu. }[/math]

Regardless of whether the random variable is bounded above, below, or both, the truncation is a mean-preserving contraction combined with a mean-changing rigid shift, and hence the variance of the truncated distribution is less than the variance [math]\displaystyle{ \sigma^2 }[/math] of the original normal distribution.

Two sided truncation[4]

Let [math]\displaystyle{ \alpha = (a-\mu)/\sigma }[/math] and [math]\displaystyle{ \beta = (b-\mu)/\sigma }[/math]. Then:

[math]\displaystyle{ \operatorname{E}(X \mid a\lt X\lt b) = \mu - \sigma\frac{\varphi(\beta) - \varphi(\alpha)}{\Phi(\beta)-\Phi(\alpha)} }[/math]

and

[math]\displaystyle{ \operatorname{Var}(X \mid a\lt X\lt b) = \sigma^2\left[ 1 - \frac{\beta\varphi(\beta) - \alpha\varphi(\alpha)}{\Phi(\beta)-\Phi(\alpha)} -\left(\frac{\varphi(\beta) - \varphi(\alpha)}{\Phi(\beta)-\Phi(\alpha)}\right)^2\right] }[/math]

Care must be taken in the numerical evaluation of these formulas, which can result in catastrophic cancellation when the interval [math]\displaystyle{ [a,b] }[/math] does not include [math]\displaystyle{ \mu }[/math]. There are better ways to rewrite them that avoid this issue.[5]
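
As an illustration (a non-authoritative sketch assuming Python with NumPy and SciPy, with arbitrary example values), the two formulas can be evaluated naively and compared with scipy.stats.truncnorm. The second call, for an interval far from μ, exhibits the cancellation problem: Φ(β) − Φ(α) becomes a difference of two numbers that both round to 1 in double precision.

    import numpy as np
    from scipy.stats import norm, truncnorm

    def truncated_mean_var(mu, sigma, a, b):
        """Naive evaluation of the two-sided truncation formulas above."""
        alpha, beta = (a - mu) / sigma, (b - mu) / sigma
        Z = norm.cdf(beta) - norm.cdf(alpha)
        d = (norm.pdf(beta) - norm.pdf(alpha)) / Z
        var = sigma**2 * (1 - (beta * norm.pdf(beta) - alpha * norm.pdf(alpha)) / Z - d**2)
        return mu - sigma * d, var

    # Interval containing mu: the naive formulas and scipy agree.
    print(truncated_mean_var(0.0, 1.0, -1.0, 2.0))
    print(truncnorm.stats(-1.0, 2.0, loc=0.0, scale=1.0, moments='mv'))

    # Interval deep in the tail: Z rounds to zero in double precision, so the naive
    # evaluation produces warnings and nonsensical output; see [5] for stabler rewrites.
    print(truncated_mean_var(0.0, 1.0, 10.0, 11.0))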

One sided truncation (of lower tail)[6]

In this case [math]\displaystyle{ \; b=\infty, \; \varphi(\beta)=0, \; \Phi(\beta)=1, }[/math] then

[math]\displaystyle{ \operatorname{E}(X \mid X\gt a) = \mu +\sigma \varphi(\alpha)/Z ,\! }[/math]

and

[math]\displaystyle{ \operatorname{Var}(X \mid X\gt a) = \sigma^2[1+ \alpha \varphi(\alpha)/Z- (\varphi(\alpha)/Z)^2 ], }[/math]

where [math]\displaystyle{ Z=1-\Phi(\alpha). }[/math]
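
The quantity [math]\displaystyle{ \varphi(\alpha)/Z }[/math] appearing here is the inverse Mills ratio evaluated at [math]\displaystyle{ \alpha }[/math]. A quick numerical cross-check (a sketch assuming Python with SciPy; the parameter values are illustrative, not from the source):

    import numpy as np
    from scipy.stats import norm, truncnorm

    mu, sigma, a = 1.0, 2.0, 0.0
    alpha = (a - mu) / sigma
    Z = 1.0 - norm.cdf(alpha)                                   # Z = 1 - Phi(alpha) for lower truncation
    print(mu + sigma * norm.pdf(alpha) / Z)                     # E(X | X > a) from the formula
    print(truncnorm.mean(alpha, np.inf, loc=mu, scale=sigma))   # library cross-check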

One sided truncation (of upper tail)

In this case [math]\displaystyle{ \; a=\alpha=-\infty, \; \varphi(\alpha)=0, \; \Phi(\alpha) = 0, }[/math] then

[math]\displaystyle{ \operatorname{E}(X \mid X\lt b) = \mu -\sigma\frac{\varphi(\beta)}{\Phi(\beta)} , }[/math] [math]\displaystyle{ \operatorname{Var}(X \mid X\lt b) = \sigma^2\left[1-\beta \frac{\varphi(\beta)}{\Phi(\beta)}- \left(\frac{\varphi(\beta)}{\Phi(\beta)} \right)^2\right]. }[/math]

Barr and Sherrill give a simpler expression for the variance of one-sided truncations. Their formula is in terms of the chi-square CDF, which is implemented in standard software libraries. Bebu and Mathew provide formulas for (generalized) confidence intervals around the truncated moments.

A recursive formula

As for the non-truncated case, there is a recursive formula for the truncated moments.[7]
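
One form of such a recursion, consistent with the mean and variance given above (and with the note cited in [7]), is

[math]\displaystyle{ m_k = (k-1)\sigma^2 m_{k-2} + \mu m_{k-1} - \sigma\frac{b^{k-1}\varphi(\beta) - a^{k-1}\varphi(\alpha)}{Z}, \qquad m_0 = 1, }[/math]

where [math]\displaystyle{ m_k = \operatorname{E}(X^k \mid a \lt X \lt b) }[/math]. The following is an illustrative Python sketch of this recursion for finite a and b, not the reference's own code:

    from scipy.stats import norm, truncnorm

    def truncated_moments(mu, sigma, a, b, kmax):
        """Raw moments E[X^k], k = 0..kmax, of N(mu, sigma^2) truncated to a finite [a, b],
        computed with the recursion stated above (sketch; assumes a and b are finite)."""
        alpha, beta = (a - mu) / sigma, (b - mu) / sigma
        Z = norm.cdf(beta) - norm.cdf(alpha)
        pa, pb = norm.pdf(alpha), norm.pdf(beta)
        m = [1.0]                                    # m[0] = E[X^0] = 1
        for k in range(1, kmax + 1):
            prev2 = m[k - 2] if k >= 2 else 0.0      # the (k - 1) factor removes this term at k = 1
            m.append((k - 1) * sigma**2 * prev2 + mu * m[k - 1]
                     - sigma * (b**(k - 1) * pb - a**(k - 1) * pa) / Z)
        return m

    m = truncated_moments(mu=1.0, sigma=2.0, a=-1.0, b=4.0, kmax=2)
    print(m[1], m[2] - m[1]**2)                              # mean and variance from the recursion
    print(truncnorm.stats(-1.0, 1.5, loc=1.0, scale=2.0))    # cross-check: alpha = -1, beta = 1.5

For k = 1 the recursion reproduces the mean formula above, and m_2 − m_1² reproduces the variance.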

Multivariate

Computing the moments of a multivariate truncated normal is harder.

Generating values from the truncated normal distribution

A random variate [math]\displaystyle{ x }[/math] defined as [math]\displaystyle{ x = \Phi^{-1}( \Phi(\alpha) + U\cdot(\Phi(\beta)-\Phi(\alpha)))\sigma + \mu }[/math] with [math]\displaystyle{ \Phi }[/math] the cumulative distribution function and [math]\displaystyle{ \Phi^{-1} }[/math] its inverse, [math]\displaystyle{ U }[/math] a uniform random number on [math]\displaystyle{ (0, 1) }[/math], follows the distribution truncated to the range [math]\displaystyle{ (a, b) }[/math]. This is simply the inverse transform method for simulating random variables. Although one of the simplest, this method can either fail when sampling in the tail of the normal distribution,[8] or be much too slow.[9] Thus, in practice, one has to find alternative methods of simulation.
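
A direct transcription of this inverse-CDF construction (a minimal sketch assuming Python with NumPy and SciPy, with illustrative parameter values) is shown below; consistent with the caveat above, it loses precision once the interval lies far in a tail, where Φ(α) and Φ(β) round to the same floating-point value.

    import numpy as np
    from scipy.stats import norm

    def rtruncnorm_inverse_cdf(n, mu, sigma, a, b, seed=None):
        """Inverse-transform sampler for N(mu, sigma^2) truncated to (a, b).
        Accurate when (a, b) is not too far out in the tails."""
        rng = np.random.default_rng(seed)
        alpha, beta = (a - mu) / sigma, (b - mu) / sigma
        u = rng.uniform(size=n)                                      # U ~ Uniform(0, 1)
        p = norm.cdf(alpha) + u * (norm.cdf(beta) - norm.cdf(alpha))
        return mu + sigma * norm.ppf(p)                              # x = Phi^{-1}(p) * sigma + mu

    x = rtruncnorm_inverse_cdf(100_000, mu=0.0, sigma=1.0, a=-1.0, b=2.0, seed=0)
    print(x.min(), x.max(), x.mean())                                # all draws lie within (-1, 2)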

One such truncated normal generator (implemented in Matlab and in R as trandn.R) is based on an acceptance-rejection idea due to Marsaglia.[10] Despite the slightly suboptimal acceptance rate of (Marsaglia 1964) in comparison with (Robert 1995), Marsaglia's method is typically faster,[9] because it does not require the costly numerical evaluation of the exponential function.
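
For the tail case, the core of Marsaglia's acceptance-rejection idea can be sketched in a few lines of Python (an illustrative transcription, not the trandn code itself): to sample a standard normal conditioned on X ≥ α with α > 0, propose x = sqrt(α² − 2 ln U₁) and accept it with probability α/x.

    import numpy as np

    def marsaglia_tail(alpha, n, seed=None):
        """Standard normal truncated to [alpha, inf), alpha > 0, via Marsaglia's
        acceptance-rejection idea: propose x = sqrt(alpha^2 - 2 ln U1), accept if U2 * x <= alpha."""
        rng = np.random.default_rng(seed)
        out = np.empty(n)
        filled = 0
        while filled < n:
            u1 = rng.uniform(size=n - filled)
            u2 = rng.uniform(size=n - filled)
            x = np.sqrt(alpha**2 - 2.0 * np.log(u1))   # proposal with density proportional to x exp(-x^2/2)
            acc = x[u2 * x <= alpha]                   # accept with probability alpha / x
            out[filled:filled + acc.size] = acc
            filled += acc.size
        return out

    draws = marsaglia_tail(4.0, 10_000, seed=1)
    print(draws.min())                                 # every draw is at least 4

The accepted proposals have density proportional to exp(−x²/2) on [α, ∞), and the acceptance probability approaches 1 as α grows, which makes the method attractive precisely where inverse-CDF sampling struggles.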

For more on simulating a draw from the truncated normal distribution, see (Robert 1995), (Lynch 2007), (Devroye 1986). The MSM package in R has a function, rtnorm, that calculates draws from a truncated normal. The truncnorm package in R also has functions to draw from a truncated normal.

(Chopin 2011) proposed an algorithm inspired by the Ziggurat algorithm of Marsaglia and Tsang (1984, 2000), which is usually considered the fastest Gaussian sampler, and is also very close to Ahrens's algorithm (1995). Implementations can be found in C, C++, Matlab and Python.

Sampling from the multivariate truncated normal distribution is considerably more difficult.[11] Exact or perfect simulation is only feasible in the case of truncation of the normal distribution to a polytope region.[11][12] In more general cases, Damien and Walker introduce a general methodology for sampling truncated densities within a Gibbs sampling framework. Their algorithm introduces one latent variable and is more computationally efficient than the algorithm of (Robert 1995).
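
As a rough illustration of the Gibbs approach in the box-constrained case (a generic coordinate-wise sketch assuming Python with NumPy and SciPy, not the latent-variable algorithm of Damien and Walker), each coordinate can be resampled in turn from its univariate normal full conditional, truncated to that coordinate's interval:

    import numpy as np
    from scipy.stats import truncnorm

    def gibbs_truncated_mvn(mu, Sigma, lower, upper, n_iter, x0=None, seed=None):
        """Coordinate-wise Gibbs sampler for N(mu, Sigma) truncated to the box
        [lower, upper] (componentwise). Each full conditional is a univariate
        normal truncated to [lower[i], upper[i]]."""
        rng = np.random.default_rng(seed)
        d = len(mu)
        x = np.array(x0 if x0 is not None else np.clip(mu, lower, upper), dtype=float)
        P = np.linalg.inv(Sigma)                   # precision matrix
        chain = np.empty((n_iter, d))
        for t in range(n_iter):
            for i in range(d):
                others = [j for j in range(d) if j != i]
                # Conditional N(cond_mu, cond_var) of x_i given the other coordinates
                cond_var = 1.0 / P[i, i]
                cond_mu = mu[i] - cond_var * P[i, others] @ (x[others] - mu[others])
                a = (lower[i] - cond_mu) / np.sqrt(cond_var)
                b = (upper[i] - cond_mu) / np.sqrt(cond_var)
                x[i] = truncnorm.rvs(a, b, loc=cond_mu, scale=np.sqrt(cond_var),
                                     random_state=rng)
            chain[t] = x
        return chain

    mu = np.array([0.0, 0.0])
    Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
    chain = gibbs_truncated_mvn(mu, Sigma, lower=[0.0, 0.0], upper=[2.0, 2.0],
                                n_iter=2000, seed=2)
    print(chain[500:].mean(axis=0))                # empirical mean after a burn-in period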

See also

Notes

  1. "Lecture 4: Selection". Instituto Superior Técnico. November 11, 2002. p. 1. http://web.ist.utl.pt/~ist11038/compute/qc/,truncG/lecture4k.pdf. 
  2. Dowson, D.; Wragg, A. (September 1973). "Maximum-entropy distributions having prescribed first and second moments (Corresp.)". IEEE Transactions on Information Theory 19 (5): 689–693. doi:10.1109/TIT.1973.1055060. ISSN 1557-9654. https://ieeexplore.ieee.org/document/1055060. 
  3. Frank Nielsen (2022). "Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences". Entropy (MDPI) 24 (3): 421. doi:10.3390/e24030421. PMID 35327931. Bibcode2022Entrp..24..421N. 
  4. Johnson, Norman Lloyd; Kotz, Samuel; Balakrishnan, N. (1994). Continuous Univariate Distributions. 1 (2nd ed.). New York: Wiley. Section 10.1. ISBN 0-471-58495-9. OCLC 29428092. https://www.worldcat.org/oclc/29428092. 
  5. Fernandez-de-Cossio-Diaz, Jorge (2017-12-06), TruncatedNormal.jl: Compute mean and variance of the univariate truncated normal distribution (works far from the peak), https://github.com/cossio/TruncatedNormal.jl, retrieved 2017-12-06 
  6. Greene, William H. (2003). Econometric Analysis (5th ed.). Prentice Hall. ISBN 978-0-13-066189-0. 
  7. Document by Eric Orjebin: https://people.smp.uq.edu.au/YoniNazarathy/teaching_projects/studentWork/EricOrjebin_TruncatedNormalMoments.pdf
  8. Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo methods. John Wiley & Sons. 
  9. 9.0 9.1 Botev, Z. I.; L'Ecuyer, P. (2017). "Simulation from the Normal Distribution Truncated to an Interval in the Tail". 25th–28th Oct 2016 Taormina, Italy: ACM. pp. 23–29. doi:10.4108/eai.25-10-2016.2266879. ISBN 978-1-63190-141-6. 
  10. Marsaglia, George (1964). "Generating a variable from the tail of the normal distribution". Technometrics 6 (1): 101–102. doi:10.2307/1266749. 
  11. 11.0 11.1 Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society, Series B 79: 125–148. doi:10.1111/rssb.12162. 
  12. Botev, Zdravko; L'Ecuyer, Pierre (2018). "Chapter 8: Simulation from the Tail of the Univariate and Multivariate Normal Distribution". in Puliafito, Antonio. Systems Modeling: Methodologies and Tools. EAI/Springer Innovations in Communication and Computing.. Springer, Cham. pp. 115–132. doi:10.1007/978-3-319-92378-9_8. ISBN 978-3-319-92377-2. 
  13. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics - Theory and Methods 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. https://www.tandfonline.com/doi/abs/10.1080/03610926.2021.1934700?journalCode=lsta20. 
