Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.: [math]\displaystyle{ \qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}. }[/math]

More generally, a Hankel matrix is any [math]\displaystyle{ n \times n }[/math] matrix [math]\displaystyle{ A }[/math] of the form

[math]\displaystyle{ A = \begin{bmatrix} a_{0} & a_{1} & a_{2} & \ldots & \ldots &a_{n-1} \\ a_{1} & a_2 & & & &\vdots \\ a_{2} & & & & & \vdots \\ \vdots & & & & & a_{2n-4}\\ \vdots & & & & a_{2n-4}& a_{2n-3} \\ a_{n-1} & \ldots & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}. }[/math]

In terms of components, if the [math]\displaystyle{ i,j }[/math] element of [math]\displaystyle{ A }[/math] is denoted by [math]\displaystyle{ A_{ij} }[/math], then for [math]\displaystyle{ i\le j }[/math] we have [math]\displaystyle{ A_{i,j} = A_{i+k,j-k} }[/math] for all [math]\displaystyle{ k = 0,\ldots,j-i. }[/math]
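The defining pattern is easy to reproduce numerically. The sketch below (in Python, with `hankel_from_sequence` as an illustrative helper, not a library routine) builds an [math]\displaystyle{ n \times n }[/math] Hankel matrix from the [math]\displaystyle{ 2n-1 }[/math] values [math]\displaystyle{ a_0, \ldots, a_{2n-2} }[/math] and checks the skew-diagonal condition:

```python
import numpy as np

def hankel_from_sequence(a):
    """Build the n x n Hankel matrix A with A[i, j] = a[i + j],
    given a sequence a of length 2n - 1 (a_0, ..., a_{2n-2})."""
    n = (len(a) + 1) // 2
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

a = list(range(9))                 # a_0, ..., a_8 gives a 5 x 5 matrix
A = hankel_from_sequence(a)

# Each entry depends only on i + j, so A[i, j] == A[i + k, j - k].
assert all(A[i, j] == A[i + 1, j - 1]
           for i in range(4) for j in range(1, 5))
```

SciPy users can obtain the same construction with `scipy.linalg.hankel`.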

Properties

  • A Hankel matrix is a symmetric matrix.
  • Let [math]\displaystyle{ J_n }[/math] be the [math]\displaystyle{ n \times n }[/math] exchange matrix. If [math]\displaystyle{ H }[/math] is an [math]\displaystyle{ m \times n }[/math] Hankel matrix, then [math]\displaystyle{ H = T J_n }[/math] where [math]\displaystyle{ T }[/math] is an [math]\displaystyle{ m \times n }[/math] Toeplitz matrix.
    • If [math]\displaystyle{ T }[/math] is real symmetric, then [math]\displaystyle{ H = T J_n }[/math] will have the same eigenvalues as [math]\displaystyle{ T }[/math] up to sign.[1]
  • The Hilbert matrix is an example of a Hankel matrix.
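The relation [math]\displaystyle{ H = T J_n }[/math] and the eigenvalue statement above can be illustrated numerically; the Toeplitz matrix below is an arbitrary choice for the sketch:

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(n))                       # exchange matrix J_n

# An (arbitrary) real symmetric Toeplitz matrix with first column t:
t = np.array([3.0, 1.0, 0.5, 0.2])
T = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

H = T @ J                                      # H = T J_n

# H is Hankel: constant along each antidiagonal.
assert all(H[i, j] == H[i + 1, j - 1]
           for i in range(n - 1) for j in range(1, n))

# For symmetric Toeplitz T (which is centrosymmetric), H is symmetric
# and its eigenvalues agree with those of T up to sign.
eigH = np.sort(np.abs(np.linalg.eigvalsh(H)))
eigT = np.sort(np.abs(np.linalg.eigvalsh(T)))
assert np.allclose(eigH, eigT)
```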

Hankel operator

A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix [math]\displaystyle{ A }[/math] must satisfy, for all rows [math]\displaystyle{ i }[/math] and columns [math]\displaystyle{ j }[/math], [math]\displaystyle{ A_{i,j} = A_{i-1,j+1} }[/math]. Note that every entry [math]\displaystyle{ A_{i,j} }[/math] depends only on [math]\displaystyle{ i+j }[/math].

Given a Hankel matrix [math]\displaystyle{ A }[/math], the corresponding Hankel operator [math]\displaystyle{ H_\alpha }[/math] is defined by [math]\displaystyle{ H_\alpha(u)= Au }[/math].

We are often interested in Hankel operators [math]\displaystyle{ H_\alpha: \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) \rightarrow \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }[/math] over the Hilbert space [math]\displaystyle{ \ell^{2} }[/math] of square-summable complex sequences. For any [math]\displaystyle{ u \in \ell^{2} }[/math], we have

[math]\displaystyle{ \|u\|_{\ell^{2}}^{2} = \sum_{n=0}^{\infty}\left|u_{n}\right|^{2}. }[/math]

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix [math]\displaystyle{ A }[/math] does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix; AAK theory (Adamyan–Arov–Krein) shows that the best low-rank approximation of a Hankel operator can again be taken to be a Hankel operator.
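For a finite Hankel matrix, the SVD-based approximation can be sketched as follows (the random sequence is just an illustration); note that the truncated SVD is generally not itself Hankel, which is why the AAK machinery is needed in the operator setting:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(19)                  # a_0, ..., a_18 -> 10 x 10 Hankel
A = np.array([[a[i + j] for j in range(10)] for i in range(10)])

U, s, Vt = np.linalg.svd(A)
k = 3
A_k = U[:, :k] * s[:k] @ Vt[:k]              # best rank-k approximant (2-norm)

# By the Eckart-Young theorem, the spectral-norm error of the best
# rank-k approximation is the (k+1)-th singular value.
err = np.linalg.norm(A - A_k, 2)
assert np.isclose(err, s[k])
```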

The determinant of a Hankel matrix is called a catalecticant.

Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence [math]\displaystyle{ \{h_n\}_{n\ge 0} }[/math] is the Hankel transform of the sequence [math]\displaystyle{ \{b_n\}_{n\ge 0} }[/math] when [math]\displaystyle{ h_n = \det (b_{i+j-2})_{1 \le i,j \le n+1}. }[/math]
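For example, the Hankel transform of the Catalan numbers is the all-ones sequence, a well-known identity. A Python sketch (using floating-point determinants, which are accurate enough at these small sizes):

```python
import numpy as np
from math import comb

def hankel_transform(b, N):
    """Return h_0, ..., h_{N-1} with h_n = det (b_{i+j})_{0 <= i, j <= n}."""
    return [round(np.linalg.det(np.array(
        [[b[i + j] for j in range(n + 1)] for i in range(n + 1)], dtype=float)))
        for n in range(N)]

# Catalan numbers 1, 1, 2, 5, 14, 42, ...
catalan = [comb(2 * n, n) // (n + 1) for n in range(11)]

print(hankel_transform(catalan, 6))          # [1, 1, 1, 1, 1, 1]
```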

The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

[math]\displaystyle{ c_n = \sum_{k=0}^n {n \choose k} b_k }[/math]

as the binomial transform of the sequence [math]\displaystyle{ \{b_n\} }[/math], then one has

[math]\displaystyle{ \det (b_{i+j-2})_{1 \le i,j \le n+1} = \det (c_{i+j-2})_{1 \le i,j \le n+1}. }[/math]
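This invariance can be checked numerically on a test sequence (the sequence below is an arbitrary illustration):

```python
import numpy as np
from math import comb

def hankel_det(seq, n):
    """det (seq_{i+j})_{0 <= i, j <= n}."""
    M = np.array([[seq[i + j] for j in range(n + 1)]
                  for i in range(n + 1)], dtype=float)
    return np.linalg.det(M)

b = [1, 2, 4, 7, 13, 24, 44, 81, 149]        # an arbitrary test sequence
c = [sum(comb(n, k) * b[k] for k in range(n + 1)) for n in range(len(b))]

# The Hankel determinants of b and of its binomial transform c coincide.
for n in range(4):
    assert np.isclose(hankel_det(b, n), hankel_det(c, n))
```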

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[2] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[3] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
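A minimal Ho–Kalman-style sketch of this realization procedure, assuming a known toy system whose Markov parameters [math]\displaystyle{ h_k = CA^{k}B }[/math] serve as the "output data" (the system and all names are illustrative, not a library API):

```python
import numpy as np

# A toy state-space system (an assumption for this sketch), with
# Markov parameters h_k = C A^k B taken as the observed output data.
A_true = np.array([[0.9, 0.5], [0.0, 0.4]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

def markov(A, B, C, K):
    return [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(K)]

h = markov(A_true, B_true, C_true, 10)

# Hankel matrix of the Markov parameters: H[i, j] = h_{i+j}.
m = 4
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])

# Split a balanced SVD factorization of H into observability and
# controllability factors, then read off C, B, A.
U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-8 * s[0]))             # numerical rank = system order
O = U[:, :r] * np.sqrt(s[:r])                # observability factor
Ctr = np.sqrt(s[:r])[:, None] * Vt[:r]       # controllability factor

C_est = O[:1, :]
B_est = Ctr[:, :1]
A_est = np.linalg.pinv(O[:-1]) @ O[1:]       # shift-invariance of O

# The recovered realization reproduces the original Markov parameters.
assert np.allclose(markov(A_est, B_est, C_est, 10), h)
```

The recovered [math]\displaystyle{ (A, B, C) }[/math] agrees with the true system only up to a change of state coordinates, which is all a realization can determine.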

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[4]
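A hedged sketch of this idea: if a density on [math]\displaystyle{ [0,1] }[/math] is approximated by a polynomial [math]\displaystyle{ f(x) = \sum_i a_i x^i }[/math], matching the moments [math]\displaystyle{ m_k = \int_0^1 x^k f(x)\,dx = \sum_i a_i/(i+k+1) }[/math] leads to a Hankel (Hilbert-type) linear system for the coefficients. The example density below is an arbitrary choice:

```python
import numpy as np

# Hankel system for polynomial moment matching on [0, 1]:
#   sum_i a_i / (i + k + 1) = m_k,  k = 0, ..., d.
d = 2
H = np.array([[1.0 / (i + k + 1) for i in range(d + 1)]
              for k in range(d + 1)])        # Hilbert matrix, which is Hankel

# Moments of the (assumed) true density f(x) = 6 x (1 - x):
m = np.array([6.0 / (k + 2) - 6.0 / (k + 3) for k in range(d + 1)])

a = np.linalg.solve(H, m)
assert np.allclose(a, [0.0, 6.0, -6.0])      # recovers f exactly
```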

Positive Hankel matrices and the Hamburger moment problem

A Hankel matrix of moments [math]\displaystyle{ (m_{i+j})_{i,j \ge 0} }[/math] is positive semidefinite at every finite order exactly when [math]\displaystyle{ \{m_n\} }[/math] is the moment sequence of some non-negative measure on the real line; this characterization is the content of the Hamburger moment problem.

References

  1. Yasuda, M. (2003). "A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices". SIAM J. Matrix Anal. Appl. 25 (3): 601–605. doi:10.1137/S0895479802418835. 
  2. Aoki, Masanao (1983). "Prediction of Time Series". Notes on Economic Time Series Analysis : System Theoretic Perspectives. New York: Springer. pp. 38–47. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA38. 
  3. Aoki, Masanao (1983). "Rank determination of Hankel matrices". Notes on Economic Time Series Analysis : System Theoretic Perspectives. New York: Springer. pp. 67–68. ISBN 0-387-12696-1. https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA67. 
  4. Munkhammar, J.; Mattsson, L.; Rydén, J. (2017). "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12 (4): e0174573. doi:10.1371/journal.pone.0174573. https://doi.org/10.1371/journal.pone.0174573