Tridiagonal matrix

From HandWiki
Short description: Matrix with nonzero elements on the main diagonal and the diagonals above and below it

In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal (also called the lower diagonal, the first diagonal below the main diagonal), and the superdiagonal (the upper diagonal, the first diagonal above it). For example, the following matrix is tridiagonal:

[math]\displaystyle{ \begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ \end{pmatrix}. }[/math]

The determinant of a tridiagonal matrix is given by the continuant of its elements.[1]

An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

Properties

A tridiagonal matrix is a matrix that is both upper and lower Hessenberg.[2] In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, the dimension of the matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies ak,k+1 ak+1,k > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change of basis matrix; hence its eigenvalues are real. If the strict inequality is relaxed to ak,k+1 ak+1,k ≥ 0, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.[3]

The set of all n × n tridiagonal matrices forms a (3n − 2)-dimensional vector space.

Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

Determinant

Main page: Continuant (mathematics)

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.[4] Write f1 = |a1| = a1 (i.e., f1 is the determinant of the 1 by 1 matrix consisting only of a1), and let

[math]\displaystyle{ f_n = \begin{vmatrix} a_1 & b_1 \\ c_1 & a_2 & b_2 \\ & c_2 & \ddots & \ddots \\ & & \ddots & \ddots & b_{n-1} \\ & & & c_{n-1} & a_n \end{vmatrix}. }[/math]

The sequence (fi) is called the continuant and satisfies the recurrence relation

[math]\displaystyle{ f_n = a_n f_{n-1} - c_{n-1}b_{n-1}f_{n-2} }[/math]

with initial values f0 = 1 and f−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
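The recurrence translates directly into code; the following sketch (the function name is illustrative) computes the determinant in O(n) time and O(1) extra space:

```python
def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via the continuant recurrence
    f_k = a_k * f_{k-1} - c_{k-1} * b_{k-1} * f_{k-2}, with f_0 = 1, f_{-1} = 0.

    a: diagonal a_1..a_n; b: superdiagonal b_1..b_{n-1}; c: subdiagonal c_1..c_{n-1}.
    """
    f_prev2, f_prev1 = 0.0, 1.0  # f_{-1}, f_0
    for k in range(len(a)):
        f_prev2, f_prev1 = f_prev1, (
            a[k] * f_prev1 - (c[k - 1] * b[k - 1] * f_prev2 if k > 0 else 0.0)
        )
    return f_prev1
```

For the 4 × 4 example at the top of the article, `tridiag_det([1, 4, 3, 3], [4, 1, 4], [3, 2, 1])` returns −46.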

Inversion

The inverse of a non-singular tridiagonal matrix T

[math]\displaystyle{ T = \begin{pmatrix} a_1 & b_1 \\ c_1 & a_2 & b_2 \\ & c_2 & \ddots & \ddots \\ & & \ddots & \ddots & b_{n-1} \\ & & & c_{n-1} & a_n \end{pmatrix} }[/math]

is given by

[math]\displaystyle{ (T^{-1})_{ij} = \begin{cases} (-1)^{i+j}b_i \cdots b_{j-1} \theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i \lt j\\ \theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i = j\\ (-1)^{i+j}c_j \cdots c_{i-1} \theta_{j-1} \phi_{i+1}/\theta_n & \text{ if } i \gt j\\ \end{cases} }[/math]

where the θi satisfy the recurrence relation

[math]\displaystyle{ \theta_i = a_i \theta_{i-1} - b_{i-1}c_{i-1}\theta_{i-2} \qquad i=2,3,\ldots,n }[/math]

with initial conditions θ0 = 1, θ1 = a1 and the ϕi satisfy

[math]\displaystyle{ \phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2} \qquad i=n-1,\ldots,1 }[/math]

with initial conditions ϕn+1 = 1 and ϕn = an.[5][6]
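These recurrences can be transcribed directly; the sketch below (the function name is illustrative) computes every entry of the inverse. As written, the off-diagonal products cost O(n) per entry, so the whole inverse takes O(n³) work; prefix products would reduce this.

```python
def tridiag_inverse(a, b, c):
    """Entrywise inverse of a nonsingular tridiagonal matrix from the theta/phi
    recurrences (a: diagonal, b: superdiagonal, c: subdiagonal, 0-indexed lists)."""
    n = len(a)
    theta = [0.0] * (n + 1)          # theta[0..n], theta_0 = 1, theta_1 = a_1
    theta[0] = 1.0
    theta[1] = a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = [0.0] * (n + 2)            # phi[1..n+1], phi_{n+1} = 1, phi_n = a_n
    phi[n + 1] = 1.0
    phi[n] = a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):        # 1-indexed, mirroring the formula
        for j in range(1, n + 1):
            if i == j:
                inv[i - 1][j - 1] = theta[i - 1] * phi[j + 1] / theta[n]
            elif i < j:
                prod = 1.0
                for k in range(i, j):            # b_i ... b_{j-1}
                    prod *= b[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[i - 1] * phi[j + 1] / theta[n]
            else:
                prod = 1.0
                for k in range(j, i):            # c_j ... c_{i-1}
                    prod *= c[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[j - 1] * phi[i + 1] / theta[n]
    return inv
```

Note that θn is exactly the determinant fn from the previous section, so the formula presupposes a nonsingular matrix.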

Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal[7] or Toeplitz matrices[8] and for the general case as well.[9][10]

In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.[11]

Solution of linear system

Main page: Tridiagonal matrix algorithm

A system of equations Ax = b for [math]\displaystyle{ b\in \R^n }[/math] can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm (Thomas algorithm), requiring only O(n) operations.[12]
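A minimal sketch of the algorithm (forward elimination followed by back substitution; it uses no pivoting, so it assumes no zero pivot arises, which is guaranteed for example when the matrix is diagonally dominant):

```python
def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T with diagonal a (length n),
    superdiagonal b (length n-1), subdiagonal c (length n-1)."""
    n = len(a)
    if n == 1:
        return [d[0] / a[0]]
    cp = [0.0] * (n - 1)   # modified superdiagonal
    dp = [0.0] * n         # modified right-hand side
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):                       # forward elimination
        m = a[i] - c[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = b[i] / m
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Only the three coefficient arrays are stored and each is swept once, which is the source of the O(n) operation count.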

Eigenvalues

When a tridiagonal matrix is also Toeplitz, with constant diagonal a, superdiagonal b, and subdiagonal c, there is a simple closed-form expression for its eigenvalues, namely:[13][14]

[math]\displaystyle{ a + 2 \sqrt{bc} \cos \left (\frac{k\pi}{n+1} \right ), \qquad k=1, \ldots, n. }[/math]
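Here a is the constant diagonal, b the constant superdiagonal, and c the constant subdiagonal. The formula can be checked against the characteristic polynomial det(T − λI), itself evaluated with the same continuant recurrence used for the determinant. This sketch (function names illustrative) assumes bc ≥ 0 so the square root is real:

```python
import math

def toeplitz_tridiag_eigenvalues(a, b, c, n):
    """Eigenvalues a + 2*sqrt(b*c)*cos(k*pi/(n+1)), k = 1..n, of the n x n
    tridiagonal Toeplitz matrix with diagonal a, superdiagonal b, subdiagonal c.
    Assumes b*c >= 0 so the square root is real."""
    return [a + 2.0 * math.sqrt(b * c) * math.cos(k * math.pi / (n + 1))
            for k in range(1, n + 1)]

def char_poly(a, b, c, n, lam):
    """det(T - lam*I) for the same matrix, via the continuant recurrence."""
    f_prev2, f_prev1 = 0.0, 1.0
    for _ in range(n):
        f_prev2, f_prev1 = f_prev1, (a - lam) * f_prev1 - b * c * f_prev2
    return f_prev1
```

For the discrete Laplacian case a = 2, b = c = −1, each value returned by the closed form is a root of the characteristic polynomial to machine precision.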

A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero.[15] Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring [math]\displaystyle{ O(n^2) }[/math] operations for a matrix of size [math]\displaystyle{ n\times n }[/math], although fast algorithms exist which (without parallel computation) require only [math]\displaystyle{ O(n\log n) }[/math].[16]

An unreduced symmetric tridiagonal matrix is one whose off-diagonal elements are all non-zero. Its eigenvalues are distinct, and its eigenvectors are unique up to a scale factor and mutually orthogonal.[17]

Similarity to symmetric tridiagonal matrix

For a nonsymmetric tridiagonal matrix, one can compute the eigendecomposition using a similarity transformation. Given a real nonsymmetric tridiagonal matrix

[math]\displaystyle{ T = \begin{pmatrix} a_1 & b_1 \\ c_1 & a_2 & b_2 \\ & c_2 & \ddots & \ddots \\ & & \ddots & \ddots & b_{n-1} \\ & & & c_{n-1} & a_n \end{pmatrix} }[/math]

where [math]\displaystyle{ b_i \neq c_i }[/math]. Assume that each product of off-diagonal entries is strictly positive, [math]\displaystyle{ b_i c_i \gt 0 }[/math], and define a transformation matrix [math]\displaystyle{ D }[/math] by

[math]\displaystyle{ D := \operatorname{diag}(\delta_1 , \dots, \delta_n) \quad \text{for} \quad \delta_i := \begin{cases} 1 & , \, i=1 \\ \sqrt{\frac{c_{i-1} \dots c_1}{b_{i-1} \dots b_1}} & , \, i=2,\dots,n \,. \end{cases} }[/math]

The similarity transformation [math]\displaystyle{ D^{-1} T D }[/math] then yields a symmetric tridiagonal matrix [math]\displaystyle{ J }[/math]:[18]

[math]\displaystyle{ J:=D^{-1} T D = \begin{pmatrix} a_1 & \sgn b_1 \, \sqrt{b_1 c_1} \\ \sgn b_1 \, \sqrt{b_1 c_1} & a_2 & \sgn b_2 \, \sqrt{b_2 c_2} \\ & \sgn b_2 \, \sqrt{b_2 c_2} & \ddots & \ddots \\ & & \ddots & \ddots & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} \\ & & & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} & a_n \end{pmatrix} \,. }[/math]

Note that [math]\displaystyle{ T }[/math] and [math]\displaystyle{ J }[/math] have the same eigenvalues.
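Computing D and J needs only the ratios and products of the off-diagonal entries. A sketch (the function name is illustrative; it assumes every product b_i c_i is strictly positive, as above):

```python
import math

def symmetrize_tridiagonal(a, b, c):
    """Return (diagonal, off-diagonal) of the symmetric tridiagonal J = D^{-1} T D,
    where T has diagonal a, superdiagonal b, subdiagonal c, and b_i * c_i > 0."""
    n = len(a)
    delta = [1.0]                        # delta_1 = 1
    for i in range(1, n):
        # delta_i = sqrt((c_{i-1}...c_1) / (b_{i-1}...b_1)), built incrementally
        delta.append(delta[-1] * math.sqrt(c[i - 1] / b[i - 1]))
    # b_i * delta_{i+1} / delta_i simplifies to sgn(b_i) * sqrt(b_i * c_i),
    # which also equals c_i * delta_i / delta_{i+1}, so J is symmetric.
    off = [b[i] * delta[i + 1] / delta[i] for i in range(n - 1)]
    return list(a), off
```

Since the characteristic polynomial of a tridiagonal matrix depends on the off-diagonal entries only through the products b_i c_i (see the continuant recurrence above), this replacement visibly preserves the spectrum.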

Computer programming

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.[19]

A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.

Applications

The discretization in space of the one-dimensional diffusion or heat equation

[math]\displaystyle{ \frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2} }[/math]

using second-order central finite differences results in

[math]\displaystyle{ \begin{pmatrix} \frac{\partial u_{1}(t)}{\partial t} \\ \frac{\partial u_{2}(t)}{\partial t} \\ \vdots \\ \frac{\partial u_{N}(t)}{\partial t} \end{pmatrix} = \frac{\alpha}{\Delta x^2} \begin{pmatrix} -2 & 1 & 0 & \ldots & 0 \\ 1 & -2 & 1 & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & 1 & -2 & 1 \\ 0 & \ldots & 0 & 1 & -2 \end{pmatrix} \begin{pmatrix} u_{1}(t) \\ u_{2}(t) \\ \vdots \\ u_{N}(t) \\ \end{pmatrix} }[/math]

with discretization constant [math]\displaystyle{ \Delta x }[/math]. The matrix is tridiagonal with [math]\displaystyle{ a_{i}=-2 }[/math] and [math]\displaystyle{ b_{i}=c_{i}=1 }[/math]. Note: no boundary conditions are used here.
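As a runnable illustration (the grid values, α, Δx, and Δt below are arbitrary, and taking the values outside the grid to be zero adds homogeneous Dirichlet boundaries that the equation above leaves unspecified), one explicit Euler step in time applies the tridiagonal matrix as a three-point stencil:

```python
def heat_step_explicit(u, alpha, dx, dt):
    """One explicit Euler step of du/dt = alpha * d2u/dx2 using the
    second-order central difference (the tridiagonal matrix above).
    Values outside the grid are taken as zero (homogeneous Dirichlet,
    an assumption added for this sketch)."""
    n = len(u)
    r = alpha * dt / dx ** 2
    return [u[i] + r * ((u[i - 1] if i > 0 else 0.0)
                        - 2.0 * u[i]
                        + (u[i + 1] if i < n - 1 else 0.0))
            for i in range(n)]
```

For the explicit scheme, stability requires r = αΔt/Δx² ≤ 1/2; implicit schemes avoid this restriction at the cost of solving a tridiagonal system per step, which the Thomas algorithm handles in O(N).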

Notes

  1. Thomas Muir (1960). A treatise on the theory of determinants. Dover Publications. pp. 516–525. https://archive.org/details/treatiseontheory0000muir. 
  2. Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 28. ISBN 0521386322. 
  3. Horn & Johnson, page 174
  4. El-Mikkawy, M. E. A. (2004). "On the inverse of a general tridiagonal matrix". Applied Mathematics and Computation 150 (3): 669–679. doi:10.1016/S0096-3003(03)00298-4. 
  5. Da Fonseca, C. M. (2007). "On the eigenvalues of some tridiagonal matrices". Journal of Computational and Applied Mathematics 200: 283–286. doi:10.1016/j.cam.2005.08.047. 
  6. Usmani, R. A. (1994). "Inversion of a tridiagonal jacobi matrix". Linear Algebra and its Applications 212-213: 413–414. doi:10.1016/0024-3795(94)90414-6. 
  7. Hu, G. Y.; O'Connell, R. F. (1996). "Analytical inversion of symmetric tridiagonal matrices". Journal of Physics A: Mathematical and General 29 (7): 1511. doi:10.1088/0305-4470/29/7/020. 
  8. Huang, Y.; McColl, W. F. (1997). "Analytical inversion of general tridiagonal matrices". Journal of Physics A: Mathematical and General 30 (22): 7919. doi:10.1088/0305-4470/30/22/026. 
  9. Mallik, R. K. (2001). "The inverse of a tridiagonal matrix". Linear Algebra and its Applications 325: 109–139. doi:10.1016/S0024-3795(00)00262-7. 
  10. Kılıç, E. (2008). "Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions". Applied Mathematics and Computation 197: 345–357. doi:10.1016/j.amc.2007.07.046. 
  11. Raf Vandebril; Marc Van Barel; Nicola Mastronardi (2008). Matrix Computations and Semiseparable Matrices. Volume I: Linear Systems. JHU Press. Theorem 1.38, p. 41. ISBN 978-0-8018-8714-7. 
  12. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). The Johns Hopkins University Press. ISBN 0-8018-5414-8. 
  13. Noschese, S.; Pasquini, L.; Reichel, L. (2013). "Tridiagonal Toeplitz matrices: Properties and novel applications". Numerical Linear Algebra with Applications 20 (2): 302. doi:10.1002/nla.1811. 
  14. This can also be written as [math]\displaystyle{ a + 2 \sqrt{bc} \cos(k \pi / {(n+1)}) }[/math] because [math]\displaystyle{ \cos(x) = -\cos(\pi-x) }[/math], as is done in: Kulkarni, D.; Schmidt, D.; Tsui, S. K. (1999). "Eigenvalues of tridiagonal pseudo-Toeplitz matrices". Linear Algebra and its Applications 297: 63. doi:10.1016/S0024-3795(99)00114-7. https://hal.archives-ouvertes.fr/hal-01461924/file/KST.pdf. 
  15. Parlett, B.N. (1980). The Symmetric Eigenvalue Problem. Prentice Hall, Inc.. 
  16. Coakley, E.S.; Rokhlin, V. (2012). "A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices". Applied and Computational Harmonic Analysis 34 (3): 379–414. doi:10.1016/j.acha.2012.06.003. 
  17. Dhillon, Inderjit Singh. A New O(n 2 ) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem. p. 8. http://www.cs.utexas.edu/~inderjit/public_papers/thesis.pdf. 
  18. "www.math.hkbu.edu.hk math lecture". http://www.math.hkbu.edu.hk/ICM/LecturesAndSeminars/08OctMaterials/1/Slide3.pdf. 
  19. Eidelman, Yuli; Gohberg, Israel; Gemignani, Luca (2007-01-01). "On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms" (in en). Linear Algebra and its Applications 420 (1): 86–101. doi:10.1016/j.laa.2006.06.028. ISSN 0024-3795. https://www.sciencedirect.com/science/article/pii/S0024379506003041. 
