Hankel matrix
In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.: [math]\displaystyle{ \qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}. }[/math]
More generally, a Hankel matrix is any [math]\displaystyle{ n \times n }[/math] matrix [math]\displaystyle{ A }[/math] of the form
[math]\displaystyle{ A = \begin{bmatrix} a_{0} & a_{1} & a_{2} & \ldots & \ldots &a_{n-1} \\ a_{1} & a_2 & & & &\vdots \\ a_{2} & & & & & \vdots \\ \vdots & & & & & a_{2n-4}\\ \vdots & & & & a_{2n-4}& a_{2n-3} \\ a_{n-1} & \ldots & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}. }[/math]
In terms of the components, if the [math]\displaystyle{ i,j }[/math] element of [math]\displaystyle{ A }[/math] is denoted [math]\displaystyle{ A_{i,j} }[/math], and assuming [math]\displaystyle{ i\le j }[/math], then we have [math]\displaystyle{ A_{i,j} = A_{i+k,j-k} }[/math] for all [math]\displaystyle{ k = 0,...,j-i. }[/math]
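This componentwise description can be checked directly in code. The following is a minimal NumPy sketch (SciPy also provides a library constructor, scipy.linalg.hankel) that builds an [math]\displaystyle{ n \times n }[/math] Hankel matrix from a generating sequence [math]\displaystyle{ a_0, \ldots, a_{2n-2} }[/math]; the sequence used is an arbitrary illustration:

```python
import numpy as np

def hankel_from_sequence(a, n):
    """Build the n x n Hankel matrix A with A[i, j] = a[i + j]."""
    a = np.asarray(a)
    assert len(a) >= 2 * n - 1, "need a_0 ... a_{2n-2}"
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

A = hankel_from_sequence(range(9), 5)  # a_k = k, for illustration

# Constant skew-diagonals: A[i, j] == A[i+k, j-k] whenever both indices
# stay in range, since every entry depends only on i + j.
assert A[0, 3] == A[1, 2] == A[2, 1] == A[3, 0]
```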
Properties
- Any square Hankel matrix is symmetric: [math]\displaystyle{ A_{i,j} = A_{j,i} }[/math], since each entry depends only on [math]\displaystyle{ i+j }[/math].
- Let [math]\displaystyle{ J_n }[/math] be the [math]\displaystyle{ n \times n }[/math] exchange matrix. If [math]\displaystyle{ H }[/math] is an [math]\displaystyle{ m \times n }[/math] Hankel matrix, then [math]\displaystyle{ H = T J_n }[/math] where [math]\displaystyle{ T }[/math] is an [math]\displaystyle{ m \times n }[/math] Toeplitz matrix.
- The Hilbert matrix is an example of a Hankel matrix.
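These properties can be verified numerically. The sketch below uses an arbitrary 4 × 4 example; note that right-multiplying by the exchange matrix simply reverses the columns, turning constant anti-diagonals into constant diagonals:

```python
import numpy as np

n = 4
a = np.arange(2 * n - 1)  # generating sequence a_0 ... a_{2n-2}
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# Symmetry: H[i, j] = a[i + j] = H[j, i].
assert np.array_equal(H, H.T)

# Multiplying by the exchange matrix J_n on the right reverses the
# columns, giving a Toeplitz matrix: constant along each diagonal.
J = np.fliplr(np.eye(n, dtype=int))
T = H @ J
for i in range(n - 1):
    for j in range(n - 1):
        assert T[i, j] == T[i + 1, j + 1]

# The Hilbert matrix, with entries 1/(i + j + 1), is a Hankel matrix:
# each entry depends only on i + j.
hilbert = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
assert np.array_equal(hilbert, hilbert.T)
```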
Hankel operator
A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix [math]\displaystyle{ A = (A_{i,j})_{i,j \ge 1} }[/math] must satisfy [math]\displaystyle{ A_{i,j} = A_{i-1,j+1} }[/math] for all [math]\displaystyle{ i \gt 1 }[/math] and [math]\displaystyle{ j \ge 1 }[/math]. Consequently, every entry [math]\displaystyle{ A_{i,j} }[/math] depends only on [math]\displaystyle{ i+j }[/math].
Given a sequence [math]\displaystyle{ \alpha }[/math] and the Hankel matrix [math]\displaystyle{ A }[/math] it generates, with [math]\displaystyle{ A_{i,j} = \alpha_{i+j} }[/math], the corresponding Hankel operator [math]\displaystyle{ H_\alpha }[/math] is defined by [math]\displaystyle{ H_\alpha(u) = Au }[/math].
We are often interested in Hankel operators [math]\displaystyle{ H_\alpha: \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) \rightarrow \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }[/math] over the Hilbert space [math]\displaystyle{ \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }[/math], the space of square-summable complex sequences. For any [math]\displaystyle{ u \in \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }[/math], we have
[math]\displaystyle{ \|u\|_{\ell^{2}}^{2} = \sum_{n=0}^{\infty}\left|u_{n}\right|^{2}. }[/math]
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.
Note that the matrix [math]\displaystyle{ A }[/math] does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors do not apply directly. Moreover, we require the approximation itself to be a Hankel matrix; this constrained approximation problem is addressed by AAK theory (after Adamyan, Arov, and Krein).
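The unconstrained SVD step can be sketched as follows for a finite Hankel matrix (the size, truncation rank, and random generating sequence are illustrative assumptions). By the Eckart–Young theorem, the truncated SVD attains the smallest possible spectral-norm error among rank-[math]\displaystyle{ k }[/math] matrices, but the result is generally not itself Hankel:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
a = rng.standard_normal(2 * n - 1)             # random generating sequence
A = np.array([[a[i + j] for j in range(n)] for i in range(n)])

U, s, Vt = np.linalg.svd(A)
k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximant (2-norm)

# Eckart-Young: the spectral-norm error equals the (k+1)-th singular value.
err = np.linalg.norm(A - A_k, ord=2)
assert np.isclose(err, s[k])

# A_k is generally NOT a Hankel matrix; the best approximation within the
# Hankel class is the harder problem treated by AAK theory.
```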
The determinant of a Hankel matrix is called a catalecticant.
Hankel matrix transform
The Hankel matrix transform, or simply Hankel transform, produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence [math]\displaystyle{ \{h_n\}_{n\ge 0} }[/math] is the Hankel transform of the sequence [math]\displaystyle{ \{b_n\}_{n\ge 0} }[/math] when [math]\displaystyle{ h_n = \det (b_{i+j-2})_{1 \le i,j \le n+1}. }[/math]
The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes
[math]\displaystyle{ c_n = \sum_{k=0}^n {n \choose k} b_k }[/math]
as the binomial transform of the sequence [math]\displaystyle{ \{b_n\} }[/math], then one has
[math]\displaystyle{ \det (b_{i+j-2})_{1 \le i,j \le n+1} = \det (c_{i+j-2})_{1 \le i,j \le n+1}. }[/math]
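Both facts can be checked numerically. The sketch below uses the Catalan numbers, a classical example whose Hankel transform is the all-ones sequence; determinants are computed in floating point and rounded, which is adequate at these small sizes:

```python
import numpy as np
from math import comb

def hankel_transform(b, N):
    """Return h_0 ... h_{N-1}, where h_n = det(b_{i+j}), 0 <= i, j <= n."""
    return [round(np.linalg.det([[b[i + j] for j in range(n + 1)]
                                 for i in range(n + 1)]))
            for n in range(N)]

# Catalan numbers: their Hankel transform is identically 1.
catalan = [comb(2 * n, n) // (n + 1) for n in range(9)]
assert hankel_transform(catalan, 4) == [1, 1, 1, 1]

# Invariance: the binomial transform of the sequence has the same
# Hankel transform.
c = [sum(comb(n, k) * catalan[k] for k in range(n + 1)) for n in range(9)]
assert hankel_transform(c, 4) == hankel_transform(catalan, 4)
```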
Applications of Hankel matrices
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[2] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[3] Hankel matrices formed from a signal have also proved useful for the decomposition of non-stationary signals and for time-frequency representation.
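A minimal sketch of such an SVD-based realization is given below, in the style of the Ho–Kalman construction; the 2-state test system, the block size, and the assumption that the true order is known are illustrative choices, not taken from the cited references. The Hankel matrix of impulse-response (Markov) parameters [math]\displaystyle{ h_k = CA^kB }[/math] factors into observability and controllability factors, from which A, B, and C are read off:

```python
import numpy as np

# Illustrative 2-state system (assumed, for demonstration only).
A_true = np.array([[0.5, 0.2], [0.0, -0.3]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(10)]

m = 4                                         # Hankel block size (assumed)
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])

U, s, Vt = np.linalg.svd(H)
r = 2                                         # true system order (assumed known)
O = U[:, :r] * np.sqrt(s[:r])                 # extended observability factor
Ctr = np.sqrt(s[:r])[:, None] * Vt[:r, :]     # extended controllability factor

C_est = O[:1, :]                              # first row of O is C
B_est = Ctr[:, :1]                            # first column of Ctr is B
A_est = np.linalg.pinv(O[:-1, :]) @ O[1:, :]  # shift-invariance of O gives A

# The recovered (A, B, C) reproduce the Markov parameters (up to similarity).
h_est = [(C_est @ np.linalg.matrix_power(A_est, k) @ B_est).item()
         for k in range(10)]
assert np.allclose(h, h_est)
```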
Method of moments for polynomial distributions
The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[4]
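As an illustrative sketch (a simplified setup, not the exact scheme of reference [4]): if a density on [0, 1] is approximated by a polynomial [math]\displaystyle{ \sum_i a_i x^i }[/math], its moments [math]\displaystyle{ m_k = \int_0^1 x^k f(x)\,dx = \sum_i a_i/(i+k+1) }[/math] form a linear system whose matrix is Hankel (here, the Hilbert matrix), and inverting it recovers the weight parameters:

```python
import numpy as np

d = 3                                        # polynomial degree (assumed)
coef_true = np.array([0.5, 3.0, -4.0, 2.0])  # illustrative coefficients a_0..a_3

# Hankel (Hilbert) matrix of the moment equations: H[k, i] = 1 / (i + k + 1).
H = np.array([[1.0 / (i + k + 1) for i in range(d + 1)]
              for k in range(d + 1)])
m = H @ coef_true                            # moments m_0 ... m_3

coef = np.linalg.solve(H, m)                 # invert the Hankel system
assert np.allclose(coef, coef_true)
```

Note that Hilbert-type Hankel matrices are notoriously ill-conditioned, so this direct inversion is only practical at low polynomial degrees.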
Positive Hankel matrices and the Hamburger moment problem
An infinite Hankel matrix [math]\displaystyle{ (a_{i+j})_{i,j \ge 0} }[/math] is positive semidefinite if and only if the sequence [math]\displaystyle{ (a_n)_{n \ge 0} }[/math] is the moment sequence of some positive measure on the real line; this characterization is the content of the Hamburger moment problem.
See also
- Toeplitz matrix, an "upside down" (i.e., row-reversed) Hankel matrix
- Cauchy matrix
- Vandermonde matrix
Notes
- Yasuda, M. (2003). "A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices". SIAM J. Matrix Anal. Appl. 25 (3): 601–605. doi:10.1137/S0895479802418835.
- Aoki, Masanao (1983). "Prediction of Time Series". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 38–47. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA38.
- Aoki, Masanao (1983). "Rank determination of Hankel matrices". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 67–68. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA67.
- J. Munkhammar, L. Mattsson, J. Rydén (2017). "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573
References
- Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM).
- Victor Y. Pan (2001). Structured matrices and polynomials: unified superfast algorithms. Birkhäuser. ISBN 0817642404.
- J.R. Partington (1988). An introduction to Hankel operators. LMS Student Texts. 13. Cambridge University Press. ISBN 0-521-36791-3.
- P. Jain and R.B. Pachori, An iterative approach for decomposition of multi-component non-stationary signals based on eigenvalue decomposition of the Hankel matrix, Journal of the Franklin Institute, vol. 352, issue 10, pp. 4017–4044, October 2015.
- P. Jain and R.B. Pachori, Event-based method for instantaneous fundamental frequency estimation from voiced speech based on eigenvalue decomposition of Hankel matrix, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 22. issue 10, pp. 1467–1482, October 2014.
- R.R. Sharma and R.B. Pachori, Time-frequency representation using IEVDHM-HT with application to classification of epileptic EEG signals, IET Science, Measurement & Technology, vol. 12, issue 01, pp. 72–82, January 2018.
Original source: https://en.wikipedia.org/wiki/Hankel matrix.