Generalized inverse

In mathematics, and in particular algebra, a generalized inverse (or g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix [math]\displaystyle{ A }[/math].

A matrix [math]\displaystyle{ A^\mathrm{g} \in \mathbb{R}^{n \times m} }[/math] is a generalized inverse of a matrix [math]\displaystyle{ A \in \mathbb{R}^{m \times n} }[/math] if [math]\displaystyle{ AA^\mathrm{g}A = A. }[/math][1][2][3] A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.[1]
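
For concreteness, the defining relation can be checked numerically. The following is a minimal sketch assuming NumPy, using the Moore–Penrose inverse returned by np.linalg.pinv as one particular generalized inverse:

```python
# Minimal sketch (assuming NumPy): np.linalg.pinv returns the Moore-Penrose
# inverse, which is one particular generalized inverse, so it satisfies the
# defining relation A @ G @ A == A even when A itself is singular.
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])        # singular: second row is twice the first
G = np.linalg.pinv(A)

print(np.allclose(A @ G @ A, A))   # True
```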

Motivation

Consider the linear system

[math]\displaystyle{ Ax = y }[/math]

where [math]\displaystyle{ A }[/math] is an [math]\displaystyle{ n \times m }[/math] matrix and [math]\displaystyle{ y \in \mathcal R(A), }[/math] the column space of [math]\displaystyle{ A }[/math]. If [math]\displaystyle{ A }[/math] is nonsingular (which implies [math]\displaystyle{ n = m }[/math]) then [math]\displaystyle{ x = A^{-1}y }[/math] will be the solution of the system. Note that, if [math]\displaystyle{ A }[/math] is nonsingular, then

[math]\displaystyle{ AA^{-1}A = A. }[/math]

Now suppose [math]\displaystyle{ A }[/math] is rectangular ([math]\displaystyle{ n \neq m }[/math]), or square and singular. Then we seek a candidate matrix [math]\displaystyle{ G }[/math] of order [math]\displaystyle{ m \times n }[/math] such that for all [math]\displaystyle{ y \in \mathcal R(A), }[/math]

[math]\displaystyle{ AGy = y. }[/math][4]

That is, [math]\displaystyle{ x=Gy }[/math] is a solution of the linear system [math]\displaystyle{ Ax = y }[/math]. Equivalently, we need a matrix [math]\displaystyle{ G }[/math] of order [math]\displaystyle{ m\times n }[/math] such that

[math]\displaystyle{ AGA = A. }[/math]

Hence we can define the generalized inverse as follows: Given an [math]\displaystyle{ m \times n }[/math] matrix [math]\displaystyle{ A }[/math], an [math]\displaystyle{ n \times m }[/math] matrix [math]\displaystyle{ G }[/math] is said to be a generalized inverse of [math]\displaystyle{ A }[/math] if [math]\displaystyle{ AGA = A. }[/math][1][2][3] The matrix [math]\displaystyle{ A^{-1} }[/math] has been termed a regular inverse of [math]\displaystyle{ A }[/math] by some authors.[5]
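
As a small numerical illustration of this motivation (a sketch assuming NumPy, taking [math]\displaystyle{ G }[/math] to be the Moore–Penrose inverse purely for concreteness), for a right-hand side in the column space of [math]\displaystyle{ A }[/math] the vector [math]\displaystyle{ x = Gy }[/math] does solve the system:

```python
# Sketch, assuming NumPy: for a consistent system Ax = y (y in the column
# space of A), x = G y is a solution whenever A G A = A. Here G is the
# Moore-Penrose inverse, one concrete choice of generalized inverse.
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # 2 x 3, rank 2
y = A @ np.array([1., 1., 1.])      # guarantees y lies in the column space
G = np.linalg.pinv(A)               # any g-inverse would do

x = G @ y
print(np.allclose(A @ x, y))        # True: x solves the system
```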

Types

Important types of generalized inverse include:

  • One-sided inverse (right inverse or left inverse)
    • Right inverse: If the matrix [math]\displaystyle{ A }[/math] has dimensions [math]\displaystyle{ n \times m }[/math] and [math]\displaystyle{ \textrm{rank} (A) = n }[/math], then there exists an [math]\displaystyle{ m \times n }[/math] matrix [math]\displaystyle{ A_{\mathrm{R}}^{-1} }[/math] called the right inverse of [math]\displaystyle{ A }[/math] such that [math]\displaystyle{ A A_{\mathrm{R}}^{-1} = I_n }[/math], where [math]\displaystyle{ I_n }[/math] is the [math]\displaystyle{ n \times n }[/math] identity matrix.
    • Left inverse: If the matrix [math]\displaystyle{ A }[/math] has dimensions [math]\displaystyle{ n \times m }[/math] and [math]\displaystyle{ \textrm{rank} (A) = m }[/math], then there exists an [math]\displaystyle{ m \times n }[/math] matrix [math]\displaystyle{ A_{\mathrm{L}}^{-1} }[/math] called the left inverse of [math]\displaystyle{ A }[/math] such that [math]\displaystyle{ A_{\mathrm{L}}^{-1} A = I_m }[/math], where [math]\displaystyle{ I_m }[/math] is the [math]\displaystyle{ m \times m }[/math] identity matrix.[6]
  • Bott–Duffin inverse
  • Drazin inverse
  • Moore–Penrose inverse

Some generalized inverses are defined and classified based on the Penrose conditions:

  1. [math]\displaystyle{ A A^\mathrm{g} A = A }[/math]
  2. [math]\displaystyle{ A^\mathrm{g} A A^\mathrm{g}= A^\mathrm{g} }[/math]
  3. [math]\displaystyle{ (A A^\mathrm{g})^* = A A^\mathrm{g} }[/math]
  4. [math]\displaystyle{ (A^\mathrm{g} A)^* = A^\mathrm{g} A, }[/math]

where [math]\displaystyle{ {}^* }[/math] denotes conjugate transpose. If [math]\displaystyle{ A^\mathrm{g} }[/math] satisfies the first condition, then it is a generalized inverse of [math]\displaystyle{ A }[/math]. If it satisfies the first two conditions, then it is a reflexive generalized inverse of [math]\displaystyle{ A }[/math]. If it satisfies all four conditions, then it is the pseudoinverse of [math]\displaystyle{ A }[/math], which is denoted by [math]\displaystyle{ A^+ }[/math] and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose.[2][7][8][9][10][11] It is convenient to define an [math]\displaystyle{ I }[/math]-inverse of [math]\displaystyle{ A }[/math] as an inverse that satisfies the subset [math]\displaystyle{ I \subset \{1, 2, 3, 4\} }[/math] of the Penrose conditions listed above. Relations, such as [math]\displaystyle{ A^{(1, 4)} A A^{(1, 3)} = A^+ }[/math], can be established between these different classes of [math]\displaystyle{ I }[/math]-inverses.[1]
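
The conditions can be tested mechanically. The following sketch (assuming NumPy) defines a hypothetical helper, penrose_conditions, that reports which of the four conditions a candidate inverse satisfies:

```python
# Sketch (assuming NumPy): classify a candidate inverse G by which of the
# four Penrose conditions it satisfies. penrose_conditions is a hypothetical
# helper written here for illustration only.
import numpy as np

def penrose_conditions(A, G, tol=1e-10):
    return (
        np.allclose(A @ G @ A, A, atol=tol),              # (1)
        np.allclose(G @ A @ G, G, atol=tol),              # (2)
        np.allclose((A @ G).conj().T, A @ G, atol=tol),   # (3)
        np.allclose((G @ A).conj().T, G @ A, atol=tol),   # (4)
    )

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
print(penrose_conditions(A, np.linalg.pinv(A)))  # (True, True, True, True)
```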

When [math]\displaystyle{ A }[/math] is non-singular, any generalized inverse [math]\displaystyle{ A^\mathrm{g} }[/math] equals [math]\displaystyle{ A^{-1} }[/math] and is therefore unique. For a singular [math]\displaystyle{ A }[/math], some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined.

Examples

Reflexive generalized inverse

Let

[math]\displaystyle{ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \quad G = \begin{bmatrix} -\frac{5}{3} & \frac{2}{3} & 0 \\[4pt] \frac{4}{3} & -\frac{1}{3} & 0 \\[4pt] 0 & 0 & 0 \end{bmatrix}. }[/math]

Since [math]\displaystyle{ \det(A) = 0 }[/math], [math]\displaystyle{ A }[/math] is singular and has no regular inverse. However, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ G }[/math] satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, [math]\displaystyle{ G }[/math] is a reflexive generalized inverse of [math]\displaystyle{ A }[/math].
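
This can be confirmed numerically; a short sketch assuming NumPy:

```python
# Sketch (assuming NumPy): the matrices A and G from the example above
# satisfy Penrose conditions (1) and (2) but not (3) or (4).
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
G = np.array([[-5/3,  2/3, 0.],
              [ 4/3, -1/3, 0.],
              [ 0.,   0.,  0.]])

print(np.allclose(A @ G @ A, A))          # (1) True
print(np.allclose(G @ A @ G, G))          # (2) True
print(np.allclose((A @ G).T, A @ G))      # (3) False
print(np.allclose((G @ A).T, G @ A))      # (4) False
```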

One-sided inverse

Let

[math]\displaystyle{ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad A_\mathrm{R}^{-1} = \begin{bmatrix} -\frac{17}{18} & \frac{8}{18} \\[4pt] -\frac{2}{18} & \frac{2}{18} \\[4pt] \frac{13}{18} & -\frac{4}{18} \end{bmatrix}. }[/math]

Since [math]\displaystyle{ A }[/math] is not square, [math]\displaystyle{ A }[/math] has no regular inverse. However, [math]\displaystyle{ A_\mathrm{R}^{-1} }[/math] is a right inverse of [math]\displaystyle{ A }[/math]. The matrix [math]\displaystyle{ A }[/math] has no left inverse.
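
A quick numerical check (a sketch assuming NumPy):

```python
# Sketch (assuming NumPy): A_R is a right inverse of the 2 x 3 matrix A,
# i.e. A @ A_R is the 2 x 2 identity. No left inverse exists because A has
# rank 2, which is less than its number of columns.
import numpy as np

A   = np.array([[1., 2., 3.],
                [4., 5., 6.]])
A_R = np.array([[-17,  8],
                [ -2,  2],
                [ 13, -4]]) / 18

print(np.allclose(A @ A_R, np.eye(2)))    # True: right inverse
```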

Inverse of other semigroups (or rings)

In any semigroup (or ring, since a ring is a semigroup under multiplication), an element b is a generalized inverse of an element a if and only if [math]\displaystyle{ a \cdot b \cdot a = a }[/math].

The generalized inverses of the element 3 in the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math] are 3, 7, and 11, since in the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math]:

[math]\displaystyle{ 3 \cdot 3 \cdot 3 = 3 }[/math]
[math]\displaystyle{ 3 \cdot 7 \cdot 3 = 3 }[/math]
[math]\displaystyle{ 3 \cdot 11 \cdot 3 = 3 }[/math]

The generalized inverses of the element 4 in the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math] are 1, 4, 7, and 10, since in the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math]:

[math]\displaystyle{ 4 \cdot 1 \cdot 4 = 4 }[/math]
[math]\displaystyle{ 4 \cdot 4 \cdot 4 = 4 }[/math]
[math]\displaystyle{ 4 \cdot 7 \cdot 4 = 4 }[/math]
[math]\displaystyle{ 4 \cdot 10 \cdot 4 = 4 }[/math]

If an element a in a semigroup (or ring) has an inverse, that inverse must be the only generalized inverse of a: multiplying [math]\displaystyle{ a \cdot b \cdot a = a }[/math] on the left and right by [math]\displaystyle{ a^{-1} }[/math] gives [math]\displaystyle{ b = a^{-1} }[/math]. This is the case for the invertible elements 1, 5, 7, and 11 in the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math].

In the ring [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math], every element is a generalized inverse of 0; however, 2 has no generalized inverse, since there is no b in [math]\displaystyle{ \mathbb{Z}/12\mathbb{Z} }[/math] such that [math]\displaystyle{ 2 \cdot b \cdot 2 = 2 }[/math]: the product [math]\displaystyle{ 2 \cdot b \cdot 2 = 4b }[/math] is always a multiple of 4 modulo 12 and therefore never equals 2.
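
These statements are easy to confirm by exhaustive search; a short plain-Python sketch, using a hypothetical helper generalized_inverses_mod written here for illustration:

```python
# Sketch: brute-force search for generalized inverses in the ring Z/nZ,
# i.e. all b with (a * b * a) % n == a % n.
def generalized_inverses_mod(a, n=12):
    return [b for b in range(n) if (a * b * a) % n == a % n]

print(generalized_inverses_mod(3))   # [3, 7, 11]
print(generalized_inverses_mod(4))   # [1, 4, 7, 10]
print(generalized_inverses_mod(2))   # []  -- 2 has no generalized inverse
print(generalized_inverses_mod(0))   # all of 0..11 -- every element works for 0
```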

Construction

The following characterizations are easy to verify; a numerical sketch of the first three appears after the list:

  • A right inverse of a non-square matrix [math]\displaystyle{ A }[/math] is given by [math]\displaystyle{ A_\mathrm{R}^{-1} = A^{\intercal} \left( A A^{\intercal} \right)^{-1} }[/math], provided [math]\displaystyle{ A }[/math] has full row rank.[6]
  • A left inverse of a non-square matrix [math]\displaystyle{ A }[/math] is given by [math]\displaystyle{ A_\mathrm{L}^{-1} = \left(A^{\intercal} A \right)^{-1} A^{\intercal} }[/math], provided [math]\displaystyle{ A }[/math] has full column rank.[6]
  • If [math]\displaystyle{ A = BC }[/math] is a rank factorization, then [math]\displaystyle{ G = C_\mathrm{R}^{-1} B_\mathrm{L}^{-1} }[/math] is a g-inverse of [math]\displaystyle{ A }[/math], where [math]\displaystyle{ C_\mathrm{R}^{-1} }[/math] is a right inverse of [math]\displaystyle{ C }[/math] and [math]\displaystyle{ B_\mathrm{L}^{-1} }[/math] is a left inverse of [math]\displaystyle{ B }[/math].
  • If [math]\displaystyle{ A = P \begin{bmatrix}I_r & 0 \\ 0 & 0 \end{bmatrix} Q }[/math] for any non-singular matrices [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math], then [math]\displaystyle{ G = Q^{-1} \begin{bmatrix}I_r & U \\ W & V \end{bmatrix} P^{-1} }[/math] is a generalized inverse of [math]\displaystyle{ A }[/math] for arbitrary [math]\displaystyle{ U, V }[/math] and [math]\displaystyle{ W }[/math].
  • Let [math]\displaystyle{ A }[/math] be of rank [math]\displaystyle{ r }[/math]. Without loss of generality, let [math]\displaystyle{ A = \begin{bmatrix}B & C\\ D & E\end{bmatrix}, }[/math] where [math]\displaystyle{ B_{r \times r} }[/math] is a non-singular submatrix of [math]\displaystyle{ A }[/math]. Then [math]\displaystyle{ G = \begin{bmatrix} B^{-1} & 0\\ 0 & 0 \end{bmatrix} }[/math] is a generalized inverse of [math]\displaystyle{ A }[/math] if and only if [math]\displaystyle{ E=DB^{-1}C }[/math].
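
The numerical sketch below (assuming NumPy) illustrates the first three constructions: the explicit right- and left-inverse formulas and a g-inverse obtained from a rank factorization.

```python
# Sketch (assuming NumPy): right/left inverse formulas and a g-inverse
# built from a rank factorization M = B @ C.
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])                 # full row rank (2 x 3)
A_R = A.T @ np.linalg.inv(A @ A.T)           # right inverse
print(np.allclose(A @ A_R, np.eye(2)))       # True

# Rank factorization of the singular matrix from the earlier example:
# M = B @ C with B of full column rank and C of full row rank.
M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
B = M[:, :2]                                 # [[1,2],[4,5],[7,8]]
C = np.array([[1., 0., -1.],
              [0., 1.,  2.]])
print(np.allclose(B @ C, M))                 # True

B_L = np.linalg.inv(B.T @ B) @ B.T           # left inverse of B
C_R = C.T @ np.linalg.inv(C @ C.T)           # right inverse of C
G = C_R @ B_L
print(np.allclose(M @ G @ M, M))             # True: G is a g-inverse of M
```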

Uses

Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the n × m linear system

[math]\displaystyle{ Ax = b }[/math],

with vector [math]\displaystyle{ x }[/math] of unknowns and vector [math]\displaystyle{ b }[/math] of constants, all solutions are given by

[math]\displaystyle{ x = A^\mathrm{g}b + \left[I - A^\mathrm{g}A\right]w }[/math],

parametric on the arbitrary vector [math]\displaystyle{ w }[/math], where [math]\displaystyle{ A^\mathrm{g} }[/math] is any generalized inverse of [math]\displaystyle{ A }[/math]. Solutions exist if and only if [math]\displaystyle{ A^\mathrm{g}b }[/math] is a solution, that is, if and only if [math]\displaystyle{ AA^\mathrm{g}b = b }[/math]. If A has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.[12]
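
A sketch of this recipe (assuming NumPy, and using the Moore–Penrose inverse as one admissible choice of [math]\displaystyle{ A^\mathrm{g} }[/math]):

```python
# Sketch (assuming NumPy): all solutions of a consistent system Ax = b are
# A^g b + (I - A^g A) w for arbitrary w, where A^g is any generalized inverse.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
b = A @ np.array([1., 0., 1.])          # consistent by construction
Ag = np.linalg.pinv(A)                  # one choice of generalized inverse

print(np.allclose(A @ (Ag @ b), b))     # solvability test: A A^g b == b

for _ in range(3):                      # sample a few members of the family
    w = rng.standard_normal(3)
    x = Ag @ b + (np.eye(3) - Ag @ A) @ w
    print(np.allclose(A @ x, b))        # True each time
```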

Generalized inverses of matrices

The generalized inverses of matrices can be characterized as follows. Let [math]\displaystyle{ A \in \mathbb{R}^{m \times n} }[/math], and

[math]\displaystyle{ A = U \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^\operatorname{T} }[/math]

be its singular-value decomposition. Then for any generalized inverse [math]\displaystyle{ A^g }[/math], there exist[1] matrices [math]\displaystyle{ X }[/math], [math]\displaystyle{ Y }[/math], and [math]\displaystyle{ Z }[/math] such that

[math]\displaystyle{ A^g = V \begin{bmatrix} \Sigma_1^{-1} & X \\ Y & Z \end{bmatrix} U^\operatorname{T}. }[/math]

Conversely, any matrix of this form, for any choice of [math]\displaystyle{ X }[/math], [math]\displaystyle{ Y }[/math], and [math]\displaystyle{ Z }[/math], is a generalized inverse of [math]\displaystyle{ A }[/math].[1] The [math]\displaystyle{ \{1,2\} }[/math]-inverses are exactly those for which [math]\displaystyle{ Z = Y \Sigma_1 X }[/math], the [math]\displaystyle{ \{1,3\} }[/math]-inverses are exactly those for which [math]\displaystyle{ X = 0 }[/math], and the [math]\displaystyle{ \{1,4\} }[/math]-inverses are exactly those for which [math]\displaystyle{ Y = 0 }[/math]. In particular, the pseudoinverse is given by [math]\displaystyle{ X = Y = Z = 0 }[/math]:

[math]\displaystyle{ A^+ = V \begin{bmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^\operatorname{T}. }[/math]
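
The following sketch (assuming NumPy) builds members of this family for a rank-2 matrix by filling the blocks [math]\displaystyle{ X }[/math], [math]\displaystyle{ Y }[/math], and [math]\displaystyle{ Z }[/math] with random entries, and recovers the pseudoinverse when all three blocks are zero:

```python
# Sketch (assuming NumPy): build generalized inverses of a rank-2 matrix
# from its SVD by choosing the X, Y, Z blocks arbitrarily.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
U, s, Vt = np.linalg.svd(A)
r = 2                                     # rank of A
S1_inv = np.diag(1.0 / s[:r])             # Sigma_1^{-1}, r x r

X = rng.standard_normal((r, 3 - r))       # arbitrary blocks
Y = rng.standard_normal((3 - r, r))
Z = rng.standard_normal((3 - r, 3 - r))

G = Vt.T @ np.block([[S1_inv, X],
                     [Y,      Z]]) @ U.T
print(np.allclose(A @ G @ A, A))          # True for any X, Y, Z

# X = Y = Z = 0 recovers the Moore-Penrose inverse:
G0 = Vt.T @ np.block([[S1_inv,            np.zeros((r, 3 - r))],
                      [np.zeros((3 - r, r)), np.zeros((3 - r, 3 - r))]]) @ U.T
print(np.allclose(G0, np.linalg.pinv(A)))  # True
```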

Transformation consistency properties

In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, [math]\displaystyle{ A^+, }[/math] satisfies the following definition of consistency with respect to transformations involving unitary matrices U and V:

[math]\displaystyle{ (UAV)^+ = V^* A^+ U^* }[/math].

The Drazin inverse, [math]\displaystyle{ A^\mathrm{D} }[/math], satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix S:

[math]\displaystyle{ \left(SAS^{-1}\right)^\mathrm{D} = S A^\mathrm{D} S^{-1} }[/math].

The unit-consistent (UC) inverse,[13] [math]\displaystyle{ A^\mathrm{U}, }[/math] satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices D and E:

[math]\displaystyle{ (DAE)^\mathrm{U} = E^{-1} A^\mathrm{U} D^{-1} }[/math].

The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
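
As a numerical illustration of the first of these properties, the following sketch (assuming NumPy, with random orthogonal matrices obtained from QR factorizations) checks the unitary-consistency identity for the Moore–Penrose inverse:

```python
# Sketch (assuming NumPy): check (U A V)^+ = V* A^+ U* for the Moore-Penrose
# inverse, using real orthogonal U and V (so the conjugate transpose is just
# the transpose).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal 3 x 3
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal 4 x 4

lhs = np.linalg.pinv(U @ A @ V)
rhs = V.T @ np.linalg.pinv(A) @ U.T
print(np.allclose(lhs, rhs))                       # True
```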

Citations

  1. Ben-Israel & Greville 2003, pp. 2, 7
  2. Nakamura 1991, pp. 41–42
  3. Rao & Mitra 1971, pp. vii, 20
  4. Rao & Mitra 1971, p. 24
  5. Rao & Mitra 1971, pp. 19–20
  6. Rao & Mitra 1971, p. 19
  7. Rao & Mitra 1971, pp. 20, 28, 50–51
  8. Ben-Israel & Greville 2003, p. 7
  9. Campbell & Meyer 1991, p. 10
  10. James 1978, p. 114
  11. Nakamura 1991, p. 42
  12. James 1978, pp. 109–110
  13. Uhlmann 2018
