
In linear algebra and statistics, the pseudo-determinant[1] is the product of all non-zero eigenvalues of a square matrix. It coincides with the regular determinant when the matrix is non-singular.

Definition

The pseudo-determinant of a square n-by-n matrix A may be defined as:

[math]\displaystyle{ |\mathbf{A}|_+ = \lim_{\alpha\to 0} \frac{|\mathbf{A} + \alpha \mathbf{I}|}{\alpha^{n-\operatorname{rank}(\mathbf{A})}} }[/math]

where |A| denotes the usual determinant, I denotes the identity matrix and rank(A) denotes the rank of A.[2]
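The limit definition can be checked numerically. The following NumPy sketch (illustrative, not from the source) evaluates the ratio for a singular rank-1 matrix whose only non-zero eigenvalue is 5, so the ratio should approach 5 as α shrinks:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1; eigenvalues are 0 and 5
n = A.shape[0]
r = np.linalg.matrix_rank(A)

for alpha in (1e-2, 1e-4, 1e-6):
    # |A + alpha*I| / alpha^(n - rank(A)); here det(A + alpha*I) = 5*alpha + alpha^2
    approx = np.linalg.det(A + alpha * np.eye(n)) / alpha ** (n - r)
    print(alpha, approx)          # converges to 5 as alpha -> 0
```

Since det(A + αI) = 5α + α² for this matrix, the ratio equals 5 + α, which makes the convergence explicit.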

Definition of pseudo-determinant using Vahlen matrix

The Vahlen matrix of a conformal transformation, i.e. the Möbius transformation [math]\displaystyle{ x \mapsto (ax + b)(cx + d)^{-1} }[/math] with [math]\displaystyle{ a, b, c, d \in \mathcal{G}(p, q) }[/math], is defined as [math]\displaystyle{ [f] = \begin{bmatrix}a & b \\c & d \end{bmatrix} }[/math]. The pseudo-determinant of the Vahlen matrix of the conformal transformation is defined as

[math]\displaystyle{ \operatorname{pdet} \begin{bmatrix}a & b\\ c& d\end{bmatrix} = ad^\dagger - bc^\dagger. }[/math]

If [math]\displaystyle{ \operatorname{pdet}[f] \gt 0 }[/math], the transformation is sense-preserving (a rotation), whereas if [math]\displaystyle{ \operatorname{pdet}[f] \lt 0 }[/math], the transformation is sense-reversing (a reflection).

Computation for positive semi-definite case

If [math]\displaystyle{ A }[/math] is positive semi-definite, then the singular values and eigenvalues of [math]\displaystyle{ A }[/math] coincide. In this case, if the singular value decomposition (SVD) is available, then [math]\displaystyle{ |\mathbf{A}|_+ }[/math] may be computed as the product of the non-zero singular values. If all singular values are zero, then the pseudo-determinant is 1.
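The SVD-based computation for the positive semi-definite case can be sketched in NumPy as follows (`pdet_psd` is an illustrative name, not a library routine; the tolerance for "non-zero" is an assumption):

```python
import numpy as np

def pdet_psd(A, tol=1e-12):
    """Pseudo-determinant of a positive semi-definite matrix: the product
    of its non-zero singular values (which equal its eigenvalues here)."""
    s = np.linalg.svd(A, compute_uv=False)
    nonzero = s[s > tol]
    # If every singular value is zero, the empty product gives 1.
    return float(np.prod(nonzero)) if nonzero.size else 1.0

print(pdet_psd(np.diag([3.0, 2.0, 0.0])))   # 6.0: product of 3 and 2
print(pdet_psd(np.zeros((3, 3))))           # 1.0: empty-product convention
```

In floating point, "zero" singular values are only approximately zero, so in practice the cutoff is usually scaled relative to the largest singular value.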

Supposing [math]\displaystyle{ \operatorname{rank}(A) = k }[/math], so that k is the number of non-zero singular values, we may write [math]\displaystyle{ A = PP^\dagger }[/math] where [math]\displaystyle{ P }[/math] is some n-by-k matrix and the dagger is the conjugate transpose. The singular values of [math]\displaystyle{ A }[/math] are the squares of the singular values of [math]\displaystyle{ P }[/math] and thus we have [math]\displaystyle{ |A|_+ = \left|P^\dagger P\right| }[/math], where [math]\displaystyle{ \left|P^\dagger P\right| }[/math] is the usual determinant in k dimensions. Further, if [math]\displaystyle{ P }[/math] is written as the block column [math]\displaystyle{ P = \left(\begin{smallmatrix} C \\ D \end{smallmatrix}\right) }[/math], then it holds, for any heights of the blocks [math]\displaystyle{ C }[/math] and [math]\displaystyle{ D }[/math], that [math]\displaystyle{ |A|_+ = \left|C^\dagger C + D^\dagger D\right| }[/math].
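The factorization identities above can be verified numerically; this sketch (illustrative, assuming NumPy, with real matrices so the dagger is a plain transpose) builds a rank-2 PSD matrix from a random 5-by-2 factor and checks that all three expressions agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
P = rng.standard_normal((n, k))
A = P @ P.T                              # positive semi-definite, rank k

# Pseudo-determinant: product of the non-zero eigenvalues of A
eig = np.linalg.eigvalsh(A)
pdet_A = np.prod(eig[eig > 1e-10])

# Equals the ordinary k-by-k determinant of P† P
det_PtP = np.linalg.det(P.T @ P)

# Block form P = [C; D]  =>  |A|_+ = |C†C + D†D| for any block heights
C, D = P[:3], P[3:]
det_block = np.linalg.det(C.T @ C + D.T @ D)

print(pdet_A, det_PtP, det_block)        # all three numerically equal
```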

Application in statistics

If a statistical procedure ordinarily compares distributions in terms of the determinants of their variance-covariance matrices then, in the case of singular matrices, the comparison can instead use a combination of the ranks of the matrices and their pseudo-determinants: the matrix of higher rank counts as "largest", and the pseudo-determinants are compared only when the ranks are equal.[3] Pseudo-determinants are therefore sometimes presented in the outputs of statistical programs in cases where covariance matrices are singular.[4]
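A minimal sketch of this ordering rule, assuming NumPy; `covariance_key` and `_pdet` are hypothetical helper names, not part of any statistical package:

```python
import numpy as np

def _pdet(S, tol=1e-12):
    """Pseudo-determinant: product of the non-zero eigenvalues of S
    (hypothetical helper)."""
    eig = np.linalg.eigvalsh(S)
    nz = eig[np.abs(eig) > tol]
    return float(np.prod(nz)) if nz.size else 1.0

def covariance_key(S):
    """Sort key for the rule above: compare ranks first, and fall back
    to pseudo-determinants only when the ranks tie (hypothetical helper)."""
    return (np.linalg.matrix_rank(S), _pdet(S))

# A full-rank matrix outranks a singular one even with a larger pseudo-det:
print(covariance_key(np.eye(2)) > covariance_key(np.diag([100.0, 0.0])))
```

Returning a tuple lets Python's lexicographic tuple comparison implement "rank first, pseudo-determinant on ties" directly.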

See also

  • Matrix determinant
  • Moore–Penrose pseudoinverse, which can also be obtained in terms of the non-zero singular values.


  1. Minka, T. P. (2001). "Inferring a Gaussian Distribution".
  2. Florescu, Ionut (2014). Probability and Stochastic Processes. Wiley. p. 529.
  3. SAS documentation on "Robust Distance".
  4. Bohling, Geoffrey C. (1997). "GSLIB-style programs for discriminant analysis and regionalized classification". Computers & Geosciences, 23 (7), 739–761. doi:10.1016/S0098-3004(97)00050-2.