Matrix variate beta distribution

From HandWiki
Revision as of 06:44, 27 June 2023 by MainAI5

In statistics, the matrix variate beta distribution is a generalization of the beta distribution. If [math]\displaystyle{ U }[/math] is a [math]\displaystyle{ p\times p }[/math] positive definite matrix with a matrix variate beta distribution, and [math]\displaystyle{ a,b\gt (p-1)/2 }[/math] are real parameters, we write [math]\displaystyle{ U\sim B_p\left(a,b\right) }[/math] (sometimes [math]\displaystyle{ B_p^I\left(a,b\right) }[/math]). The probability density function for [math]\displaystyle{ U }[/math] is:

[math]\displaystyle{ \left\{\beta_p\left(a,b\right)\right\}^{-1} \det\left(U\right)^{a-(p+1)/2}\det\left(I_p-U\right)^{b-(p+1)/2}. }[/math]


Matrix variate beta distribution
Notation [math]\displaystyle{ {\rm B}_{p}(a,b) }[/math]
Parameters [math]\displaystyle{ a,b\gt (p-1)/2 }[/math]
Support [math]\displaystyle{ p\times p }[/math] matrices with both [math]\displaystyle{ U }[/math] and [math]\displaystyle{ I_p-U }[/math] positive definite
PDF [math]\displaystyle{ \left\{\beta_p\left(a,b\right)\right\}^{-1} \det\left(U\right)^{a-(p+1)/2}\det\left(I_p-U\right)^{b-(p+1)/2}. }[/math]
CF [math]\displaystyle{ {}_1F_1\left(a;a+b;iZ\right) }[/math]

Here [math]\displaystyle{ \beta_p\left(a,b\right) }[/math] is the multivariate beta function:

[math]\displaystyle{ \beta_p\left(a,b\right)=\frac{\Gamma_p\left(a\right)\Gamma_p\left(b\right)}{\Gamma_p\left(a+b\right)} }[/math]

where [math]\displaystyle{ \Gamma_p\left(a\right) }[/math] is the multivariate gamma function given by

[math]\displaystyle{ \Gamma_p\left(a\right)= \pi^{p(p-1)/4}\prod_{i=1}^p\Gamma\left(a-(i-1)/2\right). }[/math]
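Since the normalizing constant involves only multivariate gamma functions, the log-density can be evaluated directly with SciPy's `multigammaln`. A minimal sketch in Python (the function name `matrix_beta_logpdf` is illustrative, not from any library):

```python
import numpy as np
from scipy.special import multigammaln

def matrix_beta_logpdf(U, a, b):
    """Log-density of B_p(a, b) at a symmetric p x p matrix U.

    Requires a, b > (p - 1)/2 and both U and I_p - U positive definite.
    """
    p = U.shape[0]
    # log beta_p(a, b) = log Gamma_p(a) + log Gamma_p(b) - log Gamma_p(a + b)
    log_beta_p = multigammaln(a, p) + multigammaln(b, p) - multigammaln(a + b, p)
    sign_u, logdet_u = np.linalg.slogdet(U)
    sign_c, logdet_c = np.linalg.slogdet(np.eye(p) - U)
    if sign_u <= 0 or sign_c <= 0:
        return -np.inf  # outside the support
    return (a - (p + 1) / 2) * logdet_u + (b - (p + 1) / 2) * logdet_c - log_beta_p
```

For [math]\displaystyle{ p=1 }[/math] this reduces to the ordinary beta log-density, which gives a quick sanity check.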

Theorems

Distribution of matrix inverse

If [math]\displaystyle{ U\sim B_p(a,b) }[/math] then the density of [math]\displaystyle{ X=U^{-1} }[/math] is given by

[math]\displaystyle{ \frac{1}{\beta_p\left(a,b\right)}\det(X)^{-(a+b)}\det\left(X-I_p\right)^{b-(p+1)/2} }[/math]

provided that [math]\displaystyle{ X\gt I_p }[/math] and [math]\displaystyle{ a,b\gt (p-1)/2 }[/math].
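For [math]\displaystyle{ p=1 }[/math] this is the familiar scalar change of variables: if [math]\displaystyle{ U\sim \mathrm{Beta}(a,b) }[/math], then [math]\displaystyle{ X=1/U }[/math] has density [math]\displaystyle{ f_U(1/x)/x^2 }[/math], which agrees with the formula above. A quick numerical check (the values of [math]\displaystyle{ a,b,x }[/math] are arbitrary):

```python
import numpy as np
from scipy.stats import beta
from scipy.special import betaln

a, b, x = 2.5, 3.0, 1.7  # arbitrary test point with x > 1
# change-of-variables density of X = 1/U with U ~ Beta(a, b)
lhs = beta.pdf(1 / x, a, b) / x ** 2
# the stated inverse-density formula at p = 1
rhs = np.exp(-betaln(a, b)) * x ** -(a + b) * (x - 1) ** (b - 1)
print(lhs, rhs)  # the two agree
```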

Orthogonal transform

If [math]\displaystyle{ U\sim B_p(a,b) }[/math] and [math]\displaystyle{ H }[/math] is a constant [math]\displaystyle{ p\times p }[/math] orthogonal matrix, then [math]\displaystyle{ HUH^T\sim B_p(a,b). }[/math]

Also, if [math]\displaystyle{ H }[/math] is a random orthogonal [math]\displaystyle{ p\times p }[/math] matrix which is independent of [math]\displaystyle{ U }[/math], then [math]\displaystyle{ HUH^T\sim B_p(a,b) }[/math], distributed independently of [math]\displaystyle{ H }[/math].

If [math]\displaystyle{ A }[/math] is a constant [math]\displaystyle{ q\times p }[/math] matrix ([math]\displaystyle{ q\leq p }[/math]) of full rank [math]\displaystyle{ q }[/math], then [math]\displaystyle{ AUA^T }[/math] has a generalized matrix variate beta distribution, specifically [math]\displaystyle{ AUA^T\sim GB_q\left(a,b;AA^T,0\right) }[/math].

Partitioned matrix results

If [math]\displaystyle{ U\sim B_p\left(a,b\right) }[/math] and we partition [math]\displaystyle{ U }[/math] as

[math]\displaystyle{ U=\begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix} }[/math]

where [math]\displaystyle{ U_{11} }[/math] is [math]\displaystyle{ p_1\times p_1 }[/math] and [math]\displaystyle{ U_{22} }[/math] is [math]\displaystyle{ p_2\times p_2 }[/math], then defining the Schur complement [math]\displaystyle{ U_{22\cdot 1} }[/math] as [math]\displaystyle{ U_{22}-U_{21}{U_{11}}^{-1}U_{12} }[/math] gives the following results:

  • [math]\displaystyle{ U_{11} }[/math] is independent of [math]\displaystyle{ U_{22\cdot 1} }[/math]
  • [math]\displaystyle{ U_{11}\sim B_{p_1}\left(a,b\right) }[/math]
  • [math]\displaystyle{ U_{22\cdot 1}\sim B_{p_2}\left(a-p_1/2,b\right) }[/math]
  • [math]\displaystyle{ U_{21}\mid U_{11},U_{22\cdot 1} }[/math] has an inverted matrix variate t distribution, specifically [math]\displaystyle{ U_{21}\mid U_{11},U_{22\cdot 1}\sim IT_{p_2,p_1} \left(2b-p+1,0,I_{p_2}-U_{22\cdot 1},U_{11}(I_{p_1}-U_{11})\right). }[/math]
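The marginal result for [math]\displaystyle{ U_{11} }[/math] can be checked by simulation, using the Wishart construction from the next subsection with [math]\displaystyle{ \Sigma=I }[/math]. Integrating the density shows [math]\displaystyle{ E\left[\det U\right]=\beta_p(a+1,b)/\beta_p(a,b) }[/math]; for [math]\displaystyle{ U_{11}\sim B_2(a,b) }[/math] this equals [math]\displaystyle{ a(a-1/2)/\{(a+b)(a+b-1/2)\} }[/math]. A hedged sketch (all parameter values below are arbitrary choices):

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, p1, n1, n2, n_samples = 3, 2, 6, 8, 20000  # arbitrary choices
a, b = n1 / 2, n2 / 2                         # so U ~ B_3(3, 4)

# Wishart construction (Sigma = I): U = S^{-1/2} S1 S^{-1/2}
S1 = wishart.rvs(df=n1, scale=np.eye(p), size=n_samples, random_state=rng)
S2 = wishart.rvs(df=n2, scale=np.eye(p), size=n_samples, random_state=rng)
w, V = np.linalg.eigh(S1 + S2)                     # batched eigendecomposition of S
S_inv_half = (V * w[:, None, :] ** -0.5) @ np.swapaxes(V, 1, 2)
U = S_inv_half @ S1 @ S_inv_half                   # symmetric inverse root used twice

# Marginal check: U_11 should be B_2(a, b), so
# E[det U_11] = a(a - 1/2) / ((a + b)(a + b - 1/2))
mc_mean = np.linalg.det(U[:, :p1, :p1]).mean()
theory = a * (a - 0.5) / ((a + b) * (a + b - 0.5))
print(mc_mean, theory)  # agree to within Monte Carlo error
```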

Wishart results

Mitra proves the following theorem which illustrates a useful property of the matrix variate beta distribution. Suppose [math]\displaystyle{ S_1,S_2 }[/math] are independent Wishart [math]\displaystyle{ p\times p }[/math] matrices [math]\displaystyle{ S_1\sim W_p(n_1,\Sigma), S_2\sim W_p(n_2,\Sigma) }[/math]. Assume that [math]\displaystyle{ \Sigma }[/math] is positive definite and that [math]\displaystyle{ n_1+n_2\geq p }[/math]. If

[math]\displaystyle{ U = S^{-1/2}S_1\left(S^{-1/2}\right)^T, }[/math]

where [math]\displaystyle{ S=S_1+S_2 }[/math], then [math]\displaystyle{ U }[/math] has a matrix variate beta distribution [math]\displaystyle{ B_p(n_1/2,n_2/2) }[/math]. In particular, the distribution of [math]\displaystyle{ U }[/math] does not depend on [math]\displaystyle{ \Sigma }[/math].
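This construction also gives a simple way to sample from [math]\displaystyle{ B_p(n_1/2,n_2/2) }[/math]. A minimal sketch (the function name `sample_matrix_beta` is illustrative; [math]\displaystyle{ \Sigma=I }[/math] is used, which by the theorem loses no generality):

```python
import numpy as np
from scipy.stats import wishart

def sample_matrix_beta(p, n1, n2, rng):
    """Draw U ~ B_p(n1/2, n2/2) via the Wishart construction above.

    Requires n1, n2 > p - 1 so that both Wishart draws are full rank.
    """
    S1 = wishart.rvs(df=n1, scale=np.eye(p), random_state=rng)
    S2 = wishart.rvs(df=n2, scale=np.eye(p), random_state=rng)
    w, V = np.linalg.eigh(S1 + S2)      # S = S1 + S2, eigendecomposition
    S_inv_half = (V * w ** -0.5) @ V.T  # symmetric inverse square root of S
    return S_inv_half @ S1 @ S_inv_half
```

Every draw lies in the support: it is symmetric with all eigenvalues strictly between 0 and 1, so both [math]\displaystyle{ U }[/math] and [math]\displaystyle{ I_p-U }[/math] are positive definite.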


References

  • Gupta, A. K.; Nagar, D. K. (1999). Matrix Variate Distributions. Chapman and Hall. ISBN 1-58488-046-5. 
  • Mitra, S. K. (1970). "A density-free approach to matrix variate beta distribution". Sankhyā: The Indian Journal of Statistics, Series A 32 (1): 81–88.