Hankel matrix
In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example, [math]\displaystyle{ \qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}. }[/math]
More generally, a Hankel matrix is any [math]\displaystyle{ n \times n }[/math] matrix [math]\displaystyle{ A }[/math] of the form
[math]\displaystyle{ A = \begin{bmatrix} a_0 & a_1 & a_2 & \ldots & a_{n-1} \\ a_1 & a_2 & & &\vdots \\ a_2 & & & & a_{2n-4} \\ \vdots & & & a_{2n-4} & a_{2n-3} \\ a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}. }[/math]
In terms of components, if the [math]\displaystyle{ (i,j) }[/math] element of [math]\displaystyle{ A }[/math] is denoted by [math]\displaystyle{ A_{ij} }[/math], then, assuming [math]\displaystyle{ i \le j }[/math], we have [math]\displaystyle{ A_{i,j} = A_{i+k,j-k} }[/math] for all [math]\displaystyle{ k = 0,\ldots,j-i }[/math]; equivalently, each entry depends only on the index sum [math]\displaystyle{ i+j }[/math].
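As a quick numerical illustration of the definition, the following sketch builds a small Hankel matrix with `scipy.linalg.hankel` (the sequence `a` here is arbitrary) and checks that each entry depends only on the index sum:

```python
import numpy as np
from scipy.linalg import hankel

# Sequence a_0, ..., a_{2n-2} defining an n x n Hankel matrix (n = 4 here).
a = np.arange(7)  # a_k = k, purely illustrative

# hankel(c, r) takes c as the first column and r as the last row.
A = hankel(a[:4], a[3:])

# Each entry depends only on the index sum: A[i, j] == a[i + j].
for i in range(4):
    for j in range(4):
        assert A[i, j] == a[i + j]
```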
Properties
- Any Hankel matrix is symmetric.
- Let [math]\displaystyle{ J_n }[/math] be the [math]\displaystyle{ n \times n }[/math] exchange matrix. If [math]\displaystyle{ H }[/math] is an [math]\displaystyle{ m \times n }[/math] Hankel matrix, then [math]\displaystyle{ H = T J_n }[/math], where [math]\displaystyle{ T }[/math] is an [math]\displaystyle{ m \times n }[/math] Toeplitz matrix.[1]
- The Hilbert matrix is an example of a Hankel matrix.
- The determinant of a Hankel matrix is called a catalecticant.
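Two of these properties can be verified numerically. The sketch below (with an arbitrary illustrative sequence) checks that a square Hankel matrix is symmetric and that right-multiplication by the exchange matrix [math]\displaystyle{ J_n }[/math], which reverses the columns, yields a Toeplitz matrix:

```python
import numpy as np
from scipy.linalg import hankel, toeplitz

a = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # illustrative sequence
H = hankel(a[:4], a[3:])  # 4 x 4 Hankel matrix, H[i, j] = a[i + j]

# A square Hankel matrix is symmetric.
assert np.array_equal(H, H.T)

# The exchange matrix J (reversed identity) reverses columns on the right:
# H = T J is equivalent to T = H J, and T is Toeplitz.
J = np.fliplr(np.eye(4))
T = H @ J
# T is constant along descending diagonals, i.e. a Toeplitz matrix.
assert np.array_equal(T, toeplitz(T[:, 0], T[0, :]))
```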
Hankel operator
Given a formal Laurent series
- [math]\displaystyle{ f(z) = \sum_{n=-\infty}^N a_n z^n }[/math]
the corresponding Hankel operator is defined as[2]
- [math]\displaystyle{ H_f : \mathbf{C}[z] \to z^{-1} \mathbf{C}[[z^{-1}]]. }[/math]
This takes a polynomial [math]\displaystyle{ g \in \mathbf{C}[z] }[/math] and sends it to the product [math]\displaystyle{ fg }[/math], but discards all powers of [math]\displaystyle{ z }[/math] with a non-negative exponent, so as to give an element in [math]\displaystyle{ z^{-1} \mathbf{C}[[z^{-1}]] }[/math], the space of formal power series with strictly negative exponents. The map [math]\displaystyle{ H_f }[/math] is in a natural way [math]\displaystyle{ \mathbf{C}[z] }[/math]-linear, and its matrix with respect to the elements [math]\displaystyle{ 1, z, z^2, \dots \in \mathbf{C}[z] }[/math] and [math]\displaystyle{ z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf{C}[[z^{-1}]] }[/math] is the Hankel matrix
- [math]\displaystyle{ \begin{bmatrix} a_{-1} & a_{-2} & \ldots \\ a_{-2} & a_{-3} & \ldots \\ a_{-3} & a_{-4} & \ldots \\ \vdots & \vdots & \ddots \end{bmatrix}. }[/math]
Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite if and only if [math]\displaystyle{ f }[/math] is a rational function; that is, a fraction of two polynomials
- [math]\displaystyle{ f(z) = \frac{p(z)}{q(z)}. }[/math]
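Kronecker's theorem can be illustrated with a concrete rational function. For [math]\displaystyle{ f(z) = 1/(z-c) = \sum_{k \ge 1} c^{k-1} z^{-k} }[/math] (a geometric expansion, stated here as an assumption of the sketch), the associated Hankel matrix has entries [math]\displaystyle{ c^{i+j-1} }[/math] and rank one:

```python
import numpy as np

# Kronecker's theorem, illustrated: for the rational function
#   f(z) = 1/(z - c) = sum_{k >= 1} c^{k-1} z^{-k},
# the Hankel matrix [a_{-(i+j)}] has entries c^{i+j-1}, hence rank 1.
c = 0.5
n = 8  # finite truncation of the (infinite) Hankel matrix
H = np.array([[c ** (i + j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

assert np.linalg.matrix_rank(H) == 1
```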
Approximations
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.
Note that the matrix [math]\displaystyle{ A }[/math] does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix, which can be accomplished using AAK theory.
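For a finite Hankel matrix, the truncated SVD gives the best approximation in the spectral norm (by the Eckart–Young theorem), though the result is generally not itself a Hankel matrix; producing the best Hankel approximant is what AAK theory addresses. A minimal sketch, using random illustrative data:

```python
import numpy as np
from scipy.linalg import hankel

# Low-rank approximation of a finite Hankel matrix via truncated SVD.
# The truncated-SVD approximant is spectral-norm optimal (Eckart-Young)
# but is generally NOT Hankel; the best Hankel approximant is the
# subject of AAK theory.
rng = np.random.default_rng(0)
a = rng.standard_normal(15)   # arbitrary data sequence
A = hankel(a[:8], a[7:])      # 8 x 8 Hankel matrix

U, s, Vt = np.linalg.svd(A)
r = 3                         # target rank
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Spectral-norm error of the rank-r truncation equals sigma_{r+1}.
err = np.linalg.norm(A - A_r, 2)
assert np.isclose(err, s[r])
```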
Hankel matrix transform
The Hankel matrix transform, or simply Hankel transform, of a sequence [math]\displaystyle{ b_k }[/math] is the sequence of the determinants of the Hankel matrices formed from [math]\displaystyle{ b_k }[/math]. Given an integer [math]\displaystyle{ n\gt 0 }[/math], define the corresponding [math]\displaystyle{ n\times n }[/math] Hankel matrix [math]\displaystyle{ B_n }[/math] as having the matrix elements [math]\displaystyle{ [B_n]_{i,j}=b_{i+j} }[/math] for [math]\displaystyle{ 0 \le i,j \lt n }[/math]. Then the sequence [math]\displaystyle{ h_n }[/math] given by
- [math]\displaystyle{ h_n = \det B_n }[/math]
is the Hankel transform of the sequence [math]\displaystyle{ b_k. }[/math] The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes
- [math]\displaystyle{ c_n = \sum_{k=0}^n {n \choose k} b_k }[/math]
as the binomial transform of the sequence [math]\displaystyle{ b_n }[/math], and [math]\displaystyle{ C_n }[/math] denotes the Hankel matrix formed from [math]\displaystyle{ c_k }[/math], then one has [math]\displaystyle{ \det B_n = \det C_n. }[/math]
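Both facts are easy to check numerically. A classic example is the sequence of Catalan numbers, whose Hankel transform is the all-ones sequence; the sketch below verifies this and the binomial-transform invariance (the sequence `b` is an arbitrary test sequence):

```python
import numpy as np
from math import comb

def hankel_det(b, n):
    """Determinant of the n x n Hankel matrix [B_n]_{i,j} = b[i+j] (0-indexed)."""
    B = np.array([[b[i + j] for j in range(n)] for i in range(n)], dtype=float)
    return np.linalg.det(B)

# Catalan numbers C_0, C_1, ...: their Hankel transform is all ones.
catalan = [comb(2 * k, k) // (k + 1) for k in range(9)]
for n in range(1, 5):
    assert round(hankel_det(catalan, n)) == 1

# Invariance under the binomial transform: det B_n == det C_n.
b = [1, 2, 5, 13, 35, 96, 267, 750, 2123]   # arbitrary test sequence
c = [sum(comb(m, k) * b[k] for k in range(m + 1)) for m in range(len(b))]
for n in range(1, 5):
    assert round(hankel_det(b, n)) == round(hankel_det(c, n))
```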
Applications of Hankel matrices
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[4] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
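The SVD-based realization step can be sketched in the style of the Ho–Kalman algorithm. The following is a minimal noise-free SISO illustration (the "true" system matrices here are assumptions used only to generate data, not part of the cited method's presentation): the Hankel matrix of impulse-response samples is factored by SVD into observability and controllability factors, from which A, B, and C are recovered.

```python
import numpy as np
from scipy.linalg import hankel

# "True" system, used only to generate impulse-response data for the demo.
A_true = np.array([[0.8, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Impulse response (Markov parameters) h_k = C A^k B.
h = np.array([(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
              for k in range(12)])

H = hankel(h[:6], h[5:11])     # Hankel matrix of Markov parameters
Hs = hankel(h[1:7], h[6:12])   # shifted Hankel matrix

U, s, Vt = np.linalg.svd(H)
r = 2                          # system order = numerical rank of H
Or = U[:, :r] * np.sqrt(s[:r])             # observability factor
Cr = np.sqrt(s[:r])[:, None] * Vt[:r, :]   # controllability factor

A_id = np.linalg.pinv(Or) @ Hs @ np.linalg.pinv(Cr)
B_id = Cr[:, :1]
C_id = Or[:1, :]

# The identified realization reproduces the impulse response.
h_id = [(C_id @ np.linalg.matrix_power(A_id, k) @ B_id).item()
        for k in range(12)]
assert np.allclose(h, h_id)
```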
Method of moments for polynomial distributions
The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[5]
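As an illustrative sketch (not the cited paper's exact algorithm): for a polynomial density [math]\displaystyle{ p(x) = \sum_j c_j x^j }[/math] on [math]\displaystyle{ [0,1] }[/math], the moments satisfy [math]\displaystyle{ m_i = \int_0^1 x^i p(x)\,dx = \sum_j c_j/(i+j+1) }[/math], so the moment equations form a linear system whose matrix [math]\displaystyle{ [1/(i+j+1)] }[/math] is the Hilbert matrix, a Hankel matrix. Inverting it recovers the coefficients (the coefficients below are hypothetical):

```python
import numpy as np
from scipy.linalg import hilbert

# For p(x) = sum_j c_j x^j on [0, 1], the moments are m = H c,
# where H[i, j] = 1/(i + j + 1) is the Hilbert (a Hankel) matrix.
c_true = np.array([0.5, 2.0, -1.5, 1.0])  # hypothetical coefficients
H = hilbert(4)                            # H[i, j] = 1/(i + j + 1)
m = H @ c_true                            # moments of p

# Solving the Hankel system recovers the polynomial coefficients.
c_recovered = np.linalg.solve(H, m)
assert np.allclose(c_recovered, c_true)
```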
Positive Hankel matrices and the Hamburger moment problem
By the solution of the Hamburger moment problem, an infinite Hankel matrix [math]\displaystyle{ [b_{i+j}]_{i,j \ge 0} }[/math] is positive semidefinite if and only if [math]\displaystyle{ b_k }[/math] is the moment sequence [math]\displaystyle{ b_k = \int x^k \, d\mu(x) }[/math] of some positive measure [math]\displaystyle{ \mu }[/math] on the real line.
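As a quick numerical check of the connection between moment sequences and positive semidefinite Hankel matrices, the sketch below uses the moments of the standard normal distribution (zero for odd order, double factorials for even order):

```python
import numpy as np

# Moments of the standard normal: m_k = 0 for odd k, (k-1)!! for even k.
m = [1, 0, 1, 0, 3, 0, 15, 0, 105]  # m_0 .. m_8

# The Hankel matrix of a genuine moment sequence is positive semidefinite.
H = np.array([[m[i + j] for j in range(5)] for i in range(5)], dtype=float)

eigvals = np.linalg.eigvalsh(H)
assert np.all(eigvals >= -1e-10)    # positive semidefinite
```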
See also
- Cauchy matrix
- Jacobi operator
- Toeplitz matrix, an "upside down" (that is, row-reversed) Hankel matrix
- Vandermonde matrix
Notes
- ↑ Yasuda, M. (2003). "A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices". SIAM J. Matrix Anal. Appl. 25 (3): 601–605. doi:10.1137/S0895479802418835.
- ↑ Fuhrmann 2012, §8.3
- ↑ Aoki, Masanao (1983). "Prediction of Time Series". Notes on Economic Time Series Analysis : System Theoretic Perspectives. New York: Springer. pp. 38–47. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA38.
- ↑ Aoki, Masanao (1983). "Rank determination of Hankel matrices". Notes on Economic Time Series Analysis : System Theoretic Perspectives. New York: Springer. pp. 67–68. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA67.
- ↑ J. Munkhammar, L. Mattsson, J. Rydén (2017) "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573
References
- Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM).
- Fuhrmann, Paul A. (2012). A polynomial approach to linear algebra. Universitext (2 ed.). New York, NY: Springer. doi:10.1007/978-1-4614-0338-8. ISBN 978-1-4614-0337-1.
- Victor Y. Pan (2001). Structured matrices and polynomials: unified superfast algorithms. Birkhäuser. ISBN 0817642404.
- J.R. Partington (1988). An introduction to Hankel operators. LMS Student Texts. 13. Cambridge University Press. ISBN 0-521-36791-3.
Original source: https://en.wikipedia.org/wiki/Hankel_matrix.