Wishart distribution
Notation | X ~ Wp(V, n)
---|---
Parameters | n > p − 1 degrees of freedom (real); V > 0 scale matrix (p × p positive definite)
Support | X (p × p) positive-definite matrix
PDF | [math]\displaystyle{ f_{\mathbf X}(\mathbf X) = \frac{|\mathbf{X}|^{(n-p-1)/2} e^{-\operatorname{tr}(\mathbf{V}^{-1}\mathbf{X})/2}}{2^\frac{np}{2}|{\mathbf V}|^{n/2}\Gamma_p(\frac n 2)} }[/math]
Mean | [math]\displaystyle{ \operatorname{E}[X]=n{\mathbf V} }[/math]
Mode | (n − p − 1)V for n ≥ p + 1
Variance | [math]\displaystyle{ \operatorname{Var}(\mathbf{X}_{ij}) = n \left (v_{ij}^2+v_{ii}v_{jj} \right ) }[/math]
Entropy | see below
CF | [math]\displaystyle{ \Theta \mapsto \left|{\mathbf I} - 2i\,{\mathbf\Theta}{\mathbf V}\right|^{-\frac{n}{2}} }[/math]
In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928.[1] Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), and LOE, LUE, LSE (in analogy with GOE, GUE, GSE).[2]
It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.[3]
Definition
Suppose G is a p × n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
- [math]\displaystyle{ G = (g_1,\dots,g_n), \qquad g_i \sim \mathcal{N}_p(0,V) \text{ independently}. }[/math]
Then the Wishart distribution is the probability distribution of the p × p random matrix [4]
- [math]\displaystyle{ S= G G^T = \sum_{i=1}^n g_{i}g_{i}^T }[/math]
known as the scatter matrix. One indicates that S has that probability distribution by writing
- [math]\displaystyle{ S\sim W_p(V,n). }[/math]
The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.
If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
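The defining construction translates directly into code. The following is a minimal sketch in Python, assuming NumPy is available; the scale matrix and sample sizes are illustrative choices, and the final loop checks the mean identity E[S] = nV.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 10
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])  # p x p positive-definite scale matrix

# Draw n columns g_i ~ N_p(0, V) and form the scatter matrix S = G G^T.
G = rng.multivariate_normal(np.zeros(p), V, size=n).T  # shape (p, n)
S = G @ G.T                                            # one draw, S ~ W_p(V, n)

# Sanity check: E[S] = n V, so the sample mean over many draws approaches n V.
mean_S = np.zeros((p, p))
for _ in range(5000):
    G = rng.multivariate_normal(np.zeros(p), V, size=n).T
    mean_S += G @ G.T
print(mean_S / (5000 * n))  # should be close to V
```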
Occurrence
The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices[citation needed] and in multidimensional Bayesian analysis.[5] It is also encountered in wireless communications, in analyzing the performance of Rayleigh fading MIMO wireless channels.[6]
Probability density function
The Wishart distribution can be characterized by its probability density function as follows:
Let X be a p × p symmetric matrix of random variables that is positive semi-definite. Let V be a (fixed) symmetric positive definite matrix of size p × p.
Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has the probability density function
- [math]\displaystyle{ f_{\mathbf X} (\mathbf X) = \frac{1}{2^{np/2} \left|{\mathbf V}\right|^{n/2} \Gamma_p\left(\frac {n}{2}\right ) }{\left|\mathbf{X}\right|}^{(n-p-1)/2} e^{-\frac{1}{2}\operatorname{tr}({\mathbf V}^{-1}\mathbf{X})} }[/math]
where [math]\displaystyle{ \left|{\mathbf X}\right| }[/math] is the determinant of [math]\displaystyle{ \mathbf X }[/math] and Γp is the multivariate gamma function defined as
- [math]\displaystyle{ \Gamma_p \left (\frac n 2 \right )= \pi^{p(p-1)/4}\prod_{j=1}^p \Gamma\left( \frac{n}{2} - \frac{j-1}{2} \right ). }[/math]
The density above is not the joint density of all the [math]\displaystyle{ p^2 }[/math] elements of the random matrix X (such a [math]\displaystyle{ p^2 }[/math]-dimensional density does not exist because of the symmetry constraints [math]\displaystyle{ X_{ij}=X_{ji} }[/math]); rather, it is the joint density of the [math]\displaystyle{ p(p+1)/2 }[/math] elements [math]\displaystyle{ X_{ij} }[/math] for [math]\displaystyle{ i\le j }[/math] ([1], page 38). Also, the density formula above applies only to positive definite matrices [math]\displaystyle{ \mathbf X; }[/math] for other matrices the density is equal to zero.
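As a check on the formula, the log-density can be evaluated both directly and with SciPy's built-in Wishart distribution. This is a hedged sketch with illustrative parameter values; scipy.special.multigammaln is the logarithm of the multivariate gamma function Γp defined below.

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln  # log of the multivariate gamma function

p, n = 3, 7
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
X = wishart.rvs(df=n, scale=V, random_state=1)  # a positive-definite test point

logdet_X = np.linalg.slogdet(X)[1]
logdet_V = np.linalg.slogdet(V)[1]
log_f = ((n - p - 1) / 2 * logdet_X
         - np.trace(np.linalg.solve(V, X)) / 2   # -tr(V^{-1} X)/2
         - n * p / 2 * np.log(2)
         - n / 2 * logdet_V
         - multigammaln(n / 2, p))

print(log_f, wishart.logpdf(X, df=n, scale=V))  # the two values should match
```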
Spectral density
The joint-eigenvalue density for the eigenvalues [math]\displaystyle{ \lambda_1,\dots , \lambda_p\ge 0 }[/math] of a random matrix [math]\displaystyle{ \mathbf{X}\sim W_p(\mathbf{I},n) }[/math] is[8][9]
- [math]\displaystyle{ c_{n,p}e^{-\frac{1}{2}\sum_i\lambda_i}\prod \lambda_i^{(n-p-1)/2}\prod_{i\lt j}|\lambda_i-\lambda_j| }[/math]
where [math]\displaystyle{ c_{n,p} }[/math] is a constant.
In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart distribution no longer has a density; instead it is a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices.[10]
Use in Bayesian statistics
In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ−1, where Σ is the covariance matrix.[11]:135[12]
Choice of parameters
The least informative, proper Wishart prior is obtained by setting n = p.[citation needed]
The prior mean of Wp(V, n) is nV, suggesting that a reasonable choice for V would be n−1Σ0−1, where Σ0 is some prior guess for the covariance matrix.
Properties
Log-expectation
The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution. From equation (2.63),[13]
- [math]\displaystyle{ \operatorname{E}[\, \ln\left|\mathbf{X}\right|\, ] = \psi_p\left(\frac n 2\right) + p \, \ln(2) + \ln|\mathbf{V}| }[/math]
where [math]\displaystyle{ \psi_p }[/math] is the multivariate digamma function (the derivative of the log of the multivariate gamma function).
Log-variance
The following variance computation could be of help in Bayesian statistics:
- [math]\displaystyle{ \operatorname{Var}\left[\, \ln\left|\mathbf{X}\right| \,\right]=\sum_{i=1}^p \psi_1\left(\frac{n+1-i} 2\right) }[/math]
where [math]\displaystyle{ \psi_1 }[/math] is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.
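Both identities are straightforward to verify numerically. Below is a sketch under illustrative parameter choices, using the expansion ψp(n/2) = Σj ψ((n − j + 1)/2) of the multivariate digamma function.

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import psi, polygamma

p, n = 3, 8
V = np.diag([2.0, 1.0, 0.5])
j = np.arange(1, p + 1)

# E[ln|X|] = psi_p(n/2) + p ln 2 + ln|V|, with psi_p the multivariate digamma.
mean_logdet = psi((n - j + 1) / 2).sum() + p * np.log(2) + np.linalg.slogdet(V)[1]
# Var[ln|X|] = sum_i psi_1((n + 1 - i)/2), with psi_1 the trigamma function.
var_logdet = polygamma(1, (n + 1 - j) / 2).sum()

samples = wishart.rvs(df=n, scale=V, size=20000, random_state=2)
logdets = np.linalg.slogdet(samples)[1]
print(mean_logdet, logdets.mean())  # should be close
print(var_logdet, logdets.var())    # should be close
```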
Entropy
The information entropy of the distribution has the following formula:[11]:693
- [math]\displaystyle{ \operatorname{H}\left[\, \mathbf{X} \,\right] = -\ln \left( B(\mathbf{V},n) \right) -\frac{n-p-1}{2} \operatorname{E}\left[\, \ln\left|\mathbf{X}\right|\,\right] + \frac{np}{2} }[/math]
where B(V, n) is the normalizing constant of the distribution:
- [math]\displaystyle{ B(\mathbf{V},n) = \frac{1}{\left|\mathbf{V}\right|^{n/2} 2^{np/2}\Gamma_p\left(\frac n 2 \right)}. }[/math]
This can be expanded as follows:
- [math]\displaystyle{ \begin{align} \operatorname{H}\left[\, \mathbf{X}\, \right] & = \frac{n}{2} \ln \left|\mathbf{V}\right| +\frac{n p}{2} \ln 2 + \ln \Gamma_p \left(\frac{n}{2} \right) - \frac{n-p-1}{2} \operatorname{E}\left[\, \ln\left|\mathbf{X}\right|\, \right] + \frac{n p}{2} \\[8pt] &= \frac{n}{2} \ln\left|\mathbf{V}\right| + \frac{n p}{2} \ln 2 + \ln\Gamma_p\left(\frac{n}{2} \right) - \frac{n-p-1} 2 \left( \psi_p \left(\frac{n}{2}\right) + p\ln 2 + \ln\left|\mathbf{V}\right|\right) + \frac{n p}{2} \\[8pt] &= \frac{n}{2} \ln\left|\mathbf{V}\right| + \frac{n p}{2} \ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2} \psi_p\left(\frac n 2 \right) - \frac{n-p-1} 2 \left(p\ln 2 +\ln\left|\mathbf{V}\right| \right) + \frac{n p}{2} \\[8pt] &= \frac{p+1}{2} \ln\left|\mathbf{V}\right| + \frac1 2 p(p+1) \ln 2 + \ln\Gamma_p\left(\frac n 2\right) - \frac{n-p-1}{2} \psi_p\left(\frac n 2 \right) + \frac{n p}{2} \end{align} }[/math]
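A sketch of the expanded entropy formula, with illustrative parameters, compared against the entropy method that recent SciPy versions provide for scipy.stats.wishart:

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln, psi

p, n = 3, 8
V = np.diag([2.0, 1.0, 0.5])
j = np.arange(1, p + 1)

psi_p = psi((n - j + 1) / 2).sum()     # multivariate digamma psi_p(n/2)
logdet_V = np.linalg.slogdet(V)[1]

# Last line of the expansion above.
H = ((p + 1) / 2 * logdet_V
     + p * (p + 1) / 2 * np.log(2)
     + multigammaln(n / 2, p)
     - (n - p - 1) / 2 * psi_p
     + n * p / 2)
print(H, wishart.entropy(df=n, scale=V))  # the two values should agree
```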
Cross-entropy
The cross-entropy of two Wishart distributions [math]\displaystyle{ p_0 }[/math] with parameters [math]\displaystyle{ n_0, V_0 }[/math] and [math]\displaystyle{ p_1 }[/math] with parameters [math]\displaystyle{ n_1, V_1 }[/math] is
- [math]\displaystyle{ \begin{align} H(p_0, p_1) &= \operatorname{E}_{p_0}[\, -\log p_1\, ]\\[8pt] &= \operatorname{E}_{p_0} \left[\, -\log \frac{\left|\mathbf{X}\right|^{(n_1 - p_1 - 1)/2} e^{-\operatorname{tr}(\mathbf{V}_1^{-1} \mathbf{X})/2}}{2^{n_1 p_1/2} \left|\mathbf{V}_1\right|^{n_1/2} \Gamma_{p_1}\left(\tfrac{n_1}{2}\right)} \right]\\[8pt] &= \tfrac{n_1 p_1} 2 \log 2 + \tfrac{n_1} 2 \log \left|\mathbf{V}_1\right| + \log \Gamma_{p_1}(\tfrac{n_1} 2) - \tfrac{n_1 - p_1 - 1} 2 \operatorname{E}_{p_0}\left[\, \log\left|\mathbf{X}\right|\, \right] + \tfrac{1}{2}\operatorname{E}_{p_0}\left[\, \operatorname{tr}\left(\,\mathbf{V}_1^{-1}\mathbf{X}\,\right) \, \right] \\[8pt] &= \tfrac{n_1 p_1}{2} \log 2 + \tfrac{n_1} 2 \log \left|\mathbf{V}_1\right| + \log \Gamma_{p_1}(\tfrac{n_1}{2}) - \tfrac{n_1 - p_1 - 1}{2} \left( \psi_{p_0}(\tfrac{n_0} 2) + p_0 \log 2 + \log \left|\mathbf{V}_0\right|\right)+ \tfrac{1}{2} \operatorname{tr}\left(\, \mathbf{V}_1^{-1} n_0 \mathbf{V}_0\, \right) \\[8pt] &=-\tfrac{n_1}{2} \log \left|\, \mathbf{V}_1^{-1} \mathbf{V}_0\, \right| + \tfrac{p_1+1} 2 \log \left|\mathbf{V}_0\right| + \tfrac{n_0} 2 \operatorname{tr}\left(\, \mathbf{V}_1^{-1} \mathbf{V}_0\right)+ \log \Gamma_{p_1}\left(\tfrac{n_1}{2}\right) - \tfrac{n_1 - p_1 - 1}{2} \psi_{p_0}(\tfrac{n_0}{2}) + \tfrac{n_1(p_1 - p_0)+p_0(p_1+1)}{2} \log 2 \end{align} }[/math]
Note that when [math]\displaystyle{ p_0=p_1 }[/math] and [math]\displaystyle{ n_0=n_1 }[/math] we recover the entropy.
KL-divergence
The Kullback–Leibler divergence of [math]\displaystyle{ p_1 }[/math] from [math]\displaystyle{ p_0 }[/math] is
- [math]\displaystyle{ \begin{align} D_{KL}(p_0 \| p_1) & = H(p_0, p_1) - H(p_0) \\[6pt] & =-\frac{n_1} 2 \log |\mathbf{V}_1^{-1} \mathbf{V}_0| + \frac{n_0}{2}(\operatorname{tr}(\mathbf{V}_1^{-1} \mathbf{V}_0) - p)+ \log \frac{\Gamma_p\left(\frac{n_1} 2 \right)}{\Gamma_p\left(\frac{n_0} 2 \right)} + \tfrac{n_0 - n_1 } 2 \psi_p\left(\frac{n_0} 2\right) \end{align} }[/math]
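A hedged sketch of this closed form for two Wisharts of the same dimension, with example parameters, cross-checked against the Monte Carlo estimate E_{p0}[log p0(X) − log p1(X)]:

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln, psi

p = 3
n0, n1 = 8, 6
V0 = np.diag([2.0, 1.0, 0.5])
V1 = np.eye(p)
j = np.arange(1, p + 1)

M = np.linalg.solve(V1, V0)  # V1^{-1} V0
kl = (-n1 / 2 * np.linalg.slogdet(M)[1]
      + n0 / 2 * (np.trace(M) - p)
      + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
      + (n0 - n1) / 2 * psi((n0 - j + 1) / 2).sum())  # psi_p(n0/2) term

# Monte Carlo cross-check of the divergence.
X = wishart.rvs(df=n0, scale=V0, size=2000, random_state=3)
mc = np.mean([wishart.logpdf(Xi, df=n0, scale=V0)
              - wishart.logpdf(Xi, df=n1, scale=V1) for Xi in X])
print(kl, mc)  # should be close
```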
Characteristic function
The characteristic function of the Wishart distribution is
- [math]\displaystyle{ \Theta \mapsto \operatorname{E}\left[ \, \exp\left( \,i \operatorname{tr}\left(\,\mathbf{X}{\mathbf\Theta}\,\right)\,\right)\, \right] = \left|\, 1 - 2i\, {\mathbf\Theta}\,{\mathbf V}\, \right|^{-n/2} }[/math]
where E[⋅] denotes expectation. (Here Θ is any matrix with the same dimensions as V, 1 indicates the identity matrix, and i is a square root of −1).[9] Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation.[14]
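The formula can be checked by simulation. In the sketch below (illustrative parameters, assuming NumPy), Θ is kept small so that the determinant stays near 1 and the principal branch of the complex power is the correct one, sidestepping the branch caveat above.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 2, 5
V = np.array([[1.0, 0.3], [0.3, 0.5]])
Theta = np.array([[0.05, 0.02], [0.02, 0.08]])  # small symmetric test matrix

L = np.linalg.cholesky(V)
def draw():
    G = L @ rng.standard_normal((p, n))  # columns ~ N_p(0, V)
    return G @ G.T

# Left side: Monte Carlo estimate of E[exp(i tr(X Theta))].
lhs = np.mean([np.exp(1j * np.trace(draw() @ Theta)) for _ in range(50000)])
# Right side: |I - 2i Theta V|^{-n/2}, principal branch.
rhs = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)
print(lhs, rhs)  # should agree up to Monte Carlo error
```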
Theorem
If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V, written [math]\displaystyle{ \mathbf{X}\sim\mathcal{W}_p({\mathbf V},m) }[/math], and C is a q × p matrix of rank q, then[15]
- [math]\displaystyle{ \mathbf{C}\mathbf{X}{\mathbf C}^T \sim \mathcal{W}_q\left({\mathbf C}{\mathbf V}{\mathbf C}^T,m\right). }[/math]
Corollary 1
If z is a nonzero p × 1 constant vector, then:[15]
- [math]\displaystyle{ \sigma_z^{-2} \, {\mathbf z}^T\mathbf{X}{\mathbf z} \sim \chi_m^2. }[/math]
In this case, [math]\displaystyle{ \chi_m^2 }[/math] is the chi-squared distribution and [math]\displaystyle{ \sigma_z^2={\mathbf z}^T{\mathbf V}{\mathbf z} }[/math] (note that [math]\displaystyle{ \sigma_z^2 }[/math] is a constant; it is positive because V is positive definite).
Corollary 2
Consider the case where zT = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that
- [math]\displaystyle{ \sigma_{jj}^{-1} \, w_{jj}\sim \chi^2_m }[/math]
gives the marginal distribution of each of the elements on the matrix's diagonal.
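Corollary 1 is easy to see in simulation. A minimal sketch with illustrative z, V, and m, comparing empirical moments of zᵀXz/σz² with those of χ²m:

```python
import numpy as np
from scipy.stats import wishart

p, m = 3, 6
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
z = np.array([1.0, -2.0, 0.5])
sigma_z2 = z @ V @ z  # z^T V z

X = wishart.rvs(df=m, scale=V, size=20000, random_state=5)
q = np.einsum('i,kij,j->k', z, X, z) / sigma_z2  # z^T X z / sigma_z^2 per draw

# chi2(m) has mean m and variance 2m; compare with the empirical moments.
print(q.mean(), q.var())  # approximately m = 6 and 2m = 12
```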
George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.[16]
Estimator of the multivariate normal distribution
The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.[17] A derivation of the MLE uses the spectral theorem.
Bartlett decomposition
The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization:
- [math]\displaystyle{ \mathbf{X} = {\textbf L}{\textbf A}{\textbf A}^T{\textbf L}^T, }[/math]
where L is the Cholesky factor of V, and:
- [math]\displaystyle{ \mathbf A = \begin{pmatrix} c_1 & 0 & 0 & \cdots & 0\\ n_{21} & c_2 &0 & \cdots& 0 \\ n_{31} & n_{32} & c_3 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots \\ n_{p1} & n_{p2} & n_{p3} &\cdots & c_p \end{pmatrix} }[/math]
where [math]\displaystyle{ c_i^2 \sim \chi^2_{n-i+1} }[/math] and nij ~ N(0, 1) independently.[18] This provides a useful method for obtaining random samples from a Wishart distribution.[19]
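A minimal sketch of Bartlett sampling in Python, assuming NumPy; the scale matrix and degrees of freedom are illustrative, and the final average checks E[X] = nV.

```python
import numpy as np

def wishart_bartlett(V, n, rng):
    """Draw one X ~ W_p(V, n) via the Bartlett decomposition X = L A A^T L^T."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)        # Cholesky factor of V
    A = np.zeros((p, p))
    for i in range(p):               # i is zero-based, so c_i^2 ~ chi2_{n-i}
        A[i, i] = np.sqrt(rng.chisquare(n - i))
        A[i, :i] = rng.standard_normal(i)   # subdiagonal n_ij ~ N(0, 1)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(6)
V = np.array([[2.0, 0.5], [0.5, 1.0]])
n = 7
samples = np.array([wishart_bartlett(V, n, rng) for _ in range(20000)])
print(samples.mean(axis=0) / n)  # should be close to V, since E[X] = n V
```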
Marginal distribution of matrix elements
Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:
- [math]\displaystyle{ \mathbf{V} = \begin{pmatrix} \sigma_1^2 & \rho \sigma_1 \sigma_2 \\ \rho \sigma_1 \sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad \mathbf{L} = \begin{pmatrix} \sigma_1 & 0 \\ \rho \sigma_2 & \sqrt{1-\rho^2} \sigma_2 \end{pmatrix} }[/math]
Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is
- [math]\displaystyle{ \mathbf{X} = \begin{pmatrix} \sigma_1^2 c_1^2 & \sigma_1 \sigma_2 \left (\rho c_1^2 + \sqrt{1-\rho^2} c_1 n_{21} \right ) \\ \sigma_1 \sigma_2 \left (\rho c_1^2 + \sqrt{1-\rho^2} c_1 n_{21} \right ) & \sigma_2^2 \left(\left (1-\rho^2 \right ) c_2^2 + \left (\sqrt{1-\rho^2} n_{21} + \rho c_1 \right )^2 \right) \end{pmatrix} }[/math]
The diagonal elements, most evidently the first, follow the χ2 distribution with n degrees of freedom scaled by the corresponding σ2, as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution
- [math]\displaystyle{ f(x_{12}) = \frac{\left | x_{12} \right |^{\frac{n-1}{2}}}{\Gamma\left(\frac{n}{2}\right) \sqrt{2^{n-1} \pi \left (1-\rho^2 \right ) \left (\sigma_1 \sigma_2 \right )^{n+1}}} \cdot K_{\frac{n-1}{2}} \left(\frac{\left |x_{12} \right |}{\sigma_1 \sigma_2 \left (1-\rho^2 \right )}\right) \exp{\left(\frac{\rho x_{12}}{\sigma_1 \sigma_2 (1-\rho^2)}\right)} }[/math]
where Kν(z) is the modified Bessel function of the second kind.[20] Similar results may be found for higher dimensions, but the interdependence of the off-diagonal correlations becomes increasingly complicated. It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of Craig (1936)[21] equation 10) although the probability density becomes an infinite sum of Bessel functions.
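The marginal density above can be evaluated with scipy.special.kv and compared against a histogram of simulated off-diagonal elements. A hedged sketch with example parameters:

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.stats import wishart

n, rho, s1, s2 = 5, 0.4, 1.0, 1.5
V = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])

def f12(x):
    """Variance-gamma marginal density of the off-diagonal element x12."""
    a = s1 * s2 * (1 - rho**2)
    return (np.abs(x)**((n - 1) / 2)
            / (gamma(n / 2) * np.sqrt(2**(n - 1) * np.pi * (1 - rho**2)
                                      * (s1 * s2)**(n + 1)))
            * kv((n - 1) / 2, np.abs(x) / a)   # modified Bessel K
            * np.exp(rho * x / a))

X = wishart.rvs(df=n, scale=V, size=200000, random_state=7)
hist, edges = np.histogram(X[:, 0, 1], bins=60, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(hist - f12(mid))))  # small, up to Monte Carlo/binning error
```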
The range of the shape parameter
It can be shown[22] that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set
- [math]\displaystyle{ \Lambda_p:=\{0,\ldots,p-1\}\cup \left(p-1,\infty\right). }[/math]
This set is named after Gindikin, who introduced it[23] in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,
- [math]\displaystyle{ \Lambda_p^*:=\{0, \ldots, p-1\}, }[/math]
the corresponding Wishart distribution has no Lebesgue density.
Relationships to other distributions
- The Wishart distribution is related to the inverse-Wishart distribution, denoted by [math]\displaystyle{ W_p^{-1} }[/math], as follows: if X ~ Wp(V, n) and we make the change of variables C = X−1, then [math]\displaystyle{ \mathbf{C}\sim W_p^{-1}(\mathbf{V}^{-1},n) }[/math]. This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is |C|p+1; see, for example, equation (15.15) in [24]. A simulation sketch of this relationship appears after this list.
- In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution, when the mean parameter is known.[11]
- A generalization is the multivariate gamma distribution.
- A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.
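The following is the simulation sketch referenced in the first bullet, with illustrative parameters; it compares the sample mean of X−1 with the analytic inverse-Wishart mean V−1/(n − p − 1), which holds for n > p + 1.

```python
import numpy as np
from scipy.stats import wishart

p, n = 2, 8
V = np.array([[2.0, 0.5], [0.5, 1.0]])

X = wishart.rvs(df=n, scale=V, size=50000, random_state=8)
C = np.linalg.inv(X)  # change of variables C = X^{-1}, applied per draw

# W_p^{-1}(V^{-1}, n) has mean V^{-1}/(n - p - 1) when n > p + 1.
print(C.mean(axis=0))
print(np.linalg.inv(V) / (n - p - 1))  # should be close
```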
See also
- Chi-squared distribution
- Complex Wishart distribution
- F-distribution
- Gamma distribution
- Hotelling's T-squared distribution
- Inverse-Wishart distribution
- Multivariate gamma distribution
- Student's t-distribution
- Wilks' lambda distribution
References
- ↑ 1.0 1.1 Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika 20A (1–2): 32–52. doi:10.1093/biomet/20A.1-2.32.
- ↑ Livan, Giacomo; Novaes, Marcel; Vivo, Pierpaolo (2018), Livan, Giacomo; Novaes, Marcel; Vivo, Pierpaolo, eds., "Classical Ensembles: Wishart-Laguerre" (in en), Introduction to Random Matrices: Theory and Practice, SpringerBriefs in Mathematical Physics (Cham: Springer International Publishing): pp. 89–95, doi:10.1007/978-3-319-70885-0_13, ISBN 978-3-319-70885-0, https://doi.org/10.1007/978-3-319-70885-0_13, retrieved 2023-05-17
- ↑ Koop, Gary; Korobilis, Dimitris (2010). "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics". Foundations and Trends in Econometrics 3 (4): 267–358. doi:10.1561/0800000013.
- ↑ Gupta, A. K.; Nagar, D. K. (2000). Matrix Variate Distributions. Chapman & Hall /CRC. ISBN 1584880465.
- ↑ Gelman, Andrew (2003). Bayesian Data Analysis (2nd ed.). Boca Raton, Fla.: Chapman & Hall. p. 582. ISBN 158488388X. http://www.stat.columbia.edu/~gelman/book/. Retrieved 3 June 2015.
- ↑ Zanella, A.; Chiani, M.; Win, M.Z. (April 2009). "On the marginal distribution of the eigenvalues of wishart matrices". IEEE Transactions on Communications 57 (4): 1050–1060. doi:10.1109/TCOMM.2009.04.070143. https://dspace.mit.edu/bitstream/1721.1/66900/1/Zanella-2009-On%20the%20Marginal%20Distribution%20of%20the%20Eigenvalues%20of%20Wishart%20Matrices.pdf.
- ↑ Livan, Giacomo; Vivo, Pierpaolo (2011). "Moments of Wishart-Laguerre and Jacobi ensembles of random matrices: application to the quantum transport problem in chaotic cavities". Acta Physica Polonica B 42 (5): 1081. doi:10.5506/APhysPolB.42.1081. ISSN 0587-4254.
- ↑ Muirhead, Robb J. (2005). Aspects of Multivariate Statistical Theory (2nd ed.). Wiley Interscience. ISBN 0471769851.
- ↑ 9.0 9.1 Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 259. ISBN 0-471-36091-0.
- ↑ Uhlig, H. (1994). "On Singular Wishart and Singular Multivariate Beta Distributions". The Annals of Statistics 22: 395–405. doi:10.1214/aos/1176325375.
- ↑ 11.0 11.1 11.2 Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
- ↑ Hoff, Peter D. (2009). A First Course in Bayesian Statistical Methods. New York: Springer. pp. 109–111. ISBN 978-0-387-92299-7.
- ↑ Nguyen, Duy. "AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE". https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4541076.
- ↑ Mayerhofer, Eberhard (2019-01-27). "Reforming the Wishart characteristic function". arXiv:1901.09347 [math.PR].
- ↑ 15.0 15.1 Rao, C. R. (1965). Linear Statistical Inference and its Applications. Wiley. p. 535.
- ↑ Seber, George A. F. (2004). Multivariate Observations. John Wiley & Sons. ISBN 978-0471691211.
- ↑ Chatfield, C.; Collins, A. J. (1980). Introduction to Multivariate Analysis. London: Chapman and Hall. pp. 103–108. ISBN 0-412-16030-7. https://archive.org/details/introductiontomu0000chat/page/103.
- ↑ Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 257. ISBN 0-471-36091-0.
- ↑ Smith, W. B.; Hocking, R. R. (1972). "Algorithm AS 53: Wishart Variate Generator". Journal of the Royal Statistical Society, Series C 21 (3): 341–345.
- ↑ Pearson, Karl; Jeffery, G. B.; Elderton, Ethel M. (December 1929). "On the Distribution of the First Product Moment-Coefficient, in Samples Drawn from an Indefinitely Large Normal Population". Biometrika (Biometrika Trust) 21 (1/4): 164–201. doi:10.2307/2332556.
- ↑ Craig, Cecil C. (1936). "On the Frequency Function of xy". Ann. Math. Statist. 7: 1–15. doi:10.1214/aoms/1177732541. http://projecteuclid.org/euclid.aoms/1177732541.
- ↑ Peddada and Richards, Shyamal Das; Richards, Donald St. P. (1991). "Proof of a Conjecture of M. L. Eaton on the Characteristic Function of the Wishart Distribution". Annals of Probability 19 (2): 868–874. doi:10.1214/aop/1176990455.
- ↑ Gindikin, S.G. (1975). "Invariant generalized functions in homogeneous domains". Funct. Anal. Appl. 9 (1): 50–52. doi:10.1007/BF01078179.
- ↑ Dwyer, Paul S. (1967). "Some Applications of Matrix Derivatives in Multivariate Analysis". J. Amer. Statist. Assoc. 62 (318): 607–625. doi:10.1080/01621459.1967.10482934.
External links
Original source: https://en.wikipedia.org/wiki/Wishart_distribution.