Diagonally dominant matrix
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry is greater than or equal to the sum of the magnitudes of all the other (off-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if
- [math]\displaystyle{ |a_{ii}| \geq \sum_{j\neq i} |a_{ij}| \quad\text{for all } i \, }[/math]
where [math]\displaystyle{ a_{ij} }[/math] denotes the entry in the ith row and jth column.
This definition uses a weak inequality, and is therefore sometimes called weak diagonal dominance. If a strict inequality (>) is used, this is called strict diagonal dominance. The unqualified term diagonal dominance can refer to either strict or weak diagonal dominance, depending on the context.[1]
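The row condition above translates directly into code. The following is a minimal sketch (the function name `is_diagonally_dominant` is chosen here for illustration), with a flag to switch between the weak and strict inequality:

```python
def is_diagonally_dominant(A, strict=False):
    """Check (row) diagonal dominance of a square matrix given as a list of rows.

    Weak dominance requires |a_ii| >= sum_{j != i} |a_ij| in every row;
    strict dominance replaces >= with >.
    """
    for i, row in enumerate(A):
        off_diag = sum(abs(v) for j, v in enumerate(row) if j != i)
        if strict:
            if not abs(row[i]) > off_diag:
                return False
        elif not abs(row[i]) >= off_diag:
            return False
    return True
```

A matrix that satisfies every row condition with equality is weakly but not strictly diagonally dominant, and the flag distinguishes the two cases.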
Variations
The definition in the first paragraph sums entries across each row. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down each column, this is called column diagonal dominance.
Any strictly diagonally dominant matrix is trivially a weakly chained diagonally dominant matrix. Weakly chained diagonally dominant matrices are nonsingular and include the family of irreducibly diagonally dominant matrices. These are irreducible matrices that are weakly diagonally dominant, but strictly diagonally dominant in at least one row.
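Column diagonal dominance of a matrix is the same as row diagonal dominance of its transpose, which gives a one-line reduction in code. A sketch (the function name is illustrative):

```python
def is_column_diagonally_dominant(A, strict=False):
    """A is column diagonally dominant iff its transpose is row diagonally dominant."""
    At = [list(col) for col in zip(*A)]  # transpose: columns become rows
    for i, row in enumerate(At):
        off_diag = sum(abs(v) for j, v in enumerate(row) if j != i)
        ok = abs(row[i]) > off_diag if strict else abs(row[i]) >= off_diag
        if not ok:
            return False
    return True
```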
Examples
The matrix
- [math]\displaystyle{ A = \begin{bmatrix} 3 & -2 & 1\\ 1 & 3 & 2\\ -1 & 2 & 4\end{bmatrix} }[/math]
is diagonally dominant because
- [math]\displaystyle{ |a_{11}| \ge |a_{12}| + |a_{13}| }[/math] since [math]\displaystyle{ |+3| \ge |-2| + |+1| }[/math]
- [math]\displaystyle{ |a_{22}| \ge |a_{21}| + |a_{23}| }[/math] since [math]\displaystyle{ |+3| \ge |+1| + |+2| }[/math]
- [math]\displaystyle{ |a_{33}| \ge |a_{31}| + |a_{32}| }[/math] since [math]\displaystyle{ |+4| \ge |-1| + |+2| }[/math].
The matrix
- [math]\displaystyle{ B = \begin{bmatrix} -2 & 2 & 1\\ 1 & 3 & 2\\ 1 & -2 & 0\end{bmatrix} }[/math]
is not diagonally dominant because
- [math]\displaystyle{ |b_{11}| \lt |b_{12}| + |b_{13}| }[/math] since [math]\displaystyle{ |-2| \lt |+2| + |+1| }[/math]
- [math]\displaystyle{ |b_{22}| \ge |b_{21}| + |b_{23}| }[/math] since [math]\displaystyle{ |+3| \ge |+1| + |+2| }[/math]
- [math]\displaystyle{ |b_{33}| \lt |b_{31}| + |b_{32}| }[/math] since [math]\displaystyle{ |+0| \lt |+1| + |-2| }[/math].
That is, the first and third rows fail to satisfy the diagonal dominance condition.
The matrix
- [math]\displaystyle{ C = \begin{bmatrix} -4 & 2 & 1\\ 1 & 6 & 2\\ 1 & -2 & 5\end{bmatrix} }[/math]
is strictly diagonally dominant because
- [math]\displaystyle{ |c_{11}| \gt |c_{12}| + |c_{13}| }[/math] since [math]\displaystyle{ |-4| \gt |+2| + |+1| }[/math]
- [math]\displaystyle{ |c_{22}| \gt |c_{21}| + |c_{23}| }[/math] since [math]\displaystyle{ |+6| \gt |+1| + |+2| }[/math]
- [math]\displaystyle{ |c_{33}| \gt |c_{31}| + |c_{32}| }[/math] since [math]\displaystyle{ |+5| \gt |+1| + |-2| }[/math].
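The three worked examples can be checked mechanically. A minimal sketch (the helper `row_dominance` is named here for illustration):

```python
def row_dominance(A, strict=False):
    """Return True if every row satisfies the (weak or strict) dominance condition."""
    cmp = (lambda d, s: d > s) if strict else (lambda d, s: d >= s)
    return all(cmp(abs(row[i]), sum(abs(v) for j, v in enumerate(row) if j != i))
               for i, row in enumerate(A))

A = [[3, -2, 1], [1, 3, 2], [-1, 2, 4]]
B = [[-2, 2, 1], [1, 3, 2], [1, -2, 0]]
C = [[-4, 2, 1], [1, 6, 2], [1, -2, 5]]

print(row_dominance(A))                # True  (weakly diagonally dominant)
print(row_dominance(B))                # False (rows 1 and 3 fail)
print(row_dominance(C, strict=True))   # True  (strictly diagonally dominant)
```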
Applications and properties
The following results follow directly from Gershgorin's circle theorem, which itself has a very short proof.
A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix[2]) is non-singular.
A Hermitian diagonally dominant matrix [math]\displaystyle{ A }[/math] with real non-negative diagonal entries is positive semidefinite. This follows from the eigenvalues being real, and Gershgorin's circle theorem. If the symmetry requirement is eliminated, such a matrix is not necessarily positive semidefinite. For example, consider
- [math]\displaystyle{ \begin{pmatrix}-2&2&1\end{pmatrix}\begin{pmatrix} 1&1&0\\ 1&1&0\\ 1&0&1\end{pmatrix}\begin{pmatrix}-2\\2\\1\end{pmatrix}\lt 0. }[/math]
However, the real parts of its eigenvalues remain non-negative by Gershgorin's circle theorem.
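The counterexample above can be verified by direct arithmetic: each row of the matrix is weakly diagonally dominant with non-negative diagonal, yet the quadratic form is negative for the given vector.

```python
M = [[1, 1, 0],
     [1, 1, 0],
     [1, 0, 1]]   # weakly diagonally dominant, non-negative diagonal, not symmetric
x = [-2, 2, 1]

# Compute x^T M x step by step.
Mx = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]  # = [0, 0, -1]
q = sum(x[i] * Mx[i] for i in range(3))
print(q)  # -1, so x^T M x < 0 and M is not positive semidefinite
```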
Similarly, a Hermitian strictly diagonally dominant matrix with real positive diagonal entries is positive definite.
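One practical way to see positive definiteness is that a Cholesky factorization succeeds. The sketch below uses a hand-rolled Cholesky routine on a symmetric, strictly diagonally dominant matrix with positive diagonal (the matrix `S` is an assumed example, not from the text):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T; raise ValueError if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# Symmetric, strictly diagonally dominant, positive diagonal => positive definite.
S = [[4.0, 1.0, 1.0],
     [1.0, 3.0, -1.0],
     [1.0, -1.0, 5.0]]
L = cholesky(S)  # succeeds without raising, confirming positive definiteness
```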
No (partial) pivoting is necessary for a strictly column diagonally dominant matrix when performing Gaussian elimination (LU factorization).
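A sketch of Doolittle LU factorization with no pivoting illustrates this: for a strictly column diagonally dominant matrix, the pivots `U[k][k]` stay nonzero throughout elimination, so the division never fails. The example matrix C from above happens to be strictly column diagonally dominant as well.

```python
def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting; returns (L, U) with A = L U.

    For strictly column diagonally dominant A, every pivot U[k][k] is nonzero,
    so no row interchanges are needed.
    """
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # safe: pivot is nonzero
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

C = [[-4.0, 2.0, 1.0], [1.0, 6.0, 2.0], [1.0, -2.0, 5.0]]
L, U = lu_no_pivot(C)
```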
The Jacobi and Gauss–Seidel methods for solving a linear system converge if the matrix is strictly (or irreducibly) diagonally dominant.
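A minimal sketch of the Jacobi iteration shows this convergence in practice (the system below is an assumed example with exact solution (1, 1)):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k).

    Converges from any starting vector when A is strictly diagonally dominant.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant
b = [5.0, 7.0]                 # exact solution is x = (1, 1)
x = jacobi(A, b)               # converges to approximately [1.0, 1.0]
```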
Many matrices that arise in finite element methods are diagonally dominant.
A slight variation on the idea of diagonal dominance is used to prove that the pairing on diagrams without loops in the Temperley–Lieb algebra is nondegenerate.[3] For a matrix with polynomial entries, one sensible definition of diagonal dominance is that the highest power of [math]\displaystyle{ q }[/math] appearing in each row appears only on the diagonal. (The evaluations of such a matrix at large values of [math]\displaystyle{ q }[/math] are diagonally dominant in the above sense.)
Notes
References
- Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations. ISBN 0-8018-5414-8.
- Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis (Paperback ed.). Cambridge University Press. ISBN 0-521-38632-2.
External links
- PlanetMath: Diagonal dominance definition
- PlanetMath: Properties of diagonally dominant matrices
- Mathworld
Original source: https://en.wikipedia.org/wiki/Diagonally dominant matrix.