# Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.: $\displaystyle{ \qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}. }$

More generally, a Hankel matrix is any $\displaystyle{ n \times n }$ matrix $\displaystyle{ A }$ of the form

$\displaystyle{ A = \begin{bmatrix} a_{0} & a_{1} & a_{2} & \ldots & \ldots &a_{n-1} \\ a_{1} & a_2 & & & &\vdots \\ a_{2} & & & & & \vdots \\ \vdots & & & & & a_{2n-4}\\ \vdots & & & & a_{2n-4}& a_{2n-3} \\ a_{n-1} & \ldots & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}. }$

In terms of the components, if the $\displaystyle{ i,j }$ element of $\displaystyle{ A }$ is denoted by $\displaystyle{ A_{ij} }$, then, assuming $\displaystyle{ i\le j }$, we have $\displaystyle{ A_{i,j} = A_{i+k,j-k} }$ for all $\displaystyle{ k = 0, \ldots, j-i. }$
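Such a matrix can be built directly from its defining sequence; a minimal numpy sketch (`hankel_from_sequence` is an illustrative helper, not a standard API — SciPy offers a comparable `scipy.linalg.hankel`):

```python
import numpy as np

def hankel_from_sequence(a, n):
    """Build the n x n Hankel matrix whose (i, j) entry is a[i + j].

    The sequence a must supply at least 2n - 1 entries a_0, ..., a_{2n-2}.
    """
    a = np.asarray(a)
    return a[np.add.outer(np.arange(n), np.arange(n))]

# The 5 x 5 example above, with the letters a, ..., i encoded as 0, ..., 8:
A = hankel_from_sequence(np.arange(9), 5)
# Each ascending skew-diagonal is constant: A[i, j] == A[i + k, j - k].
assert all(A[1, 3] == A[1 + k, 3 - k] for k in range(4))
```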

## Properties

• Every Hankel matrix is a symmetric matrix.
• Let $\displaystyle{ J_n }$ be the $\displaystyle{ n \times n }$ exchange matrix. If $\displaystyle{ H }$ is an $\displaystyle{ m \times n }$ Hankel matrix, then $\displaystyle{ H = T J_n }$, where $\displaystyle{ T }$ is an $\displaystyle{ m \times n }$ Toeplitz matrix.
• If $\displaystyle{ T }$ is real symmetric, then $\displaystyle{ H = T J_n }$ has the same eigenvalues as $\displaystyle{ T }$ up to sign.
• The Hilbert matrix is an example of a Hankel matrix.
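These properties can be checked numerically; a small numpy sketch, with an illustrative real symmetric Toeplitz matrix:

```python
import numpy as np

n = 4
J = np.fliplr(np.eye(n))             # exchange matrix J_n

# A real symmetric Toeplitz matrix T: entries depend only on |i - j|.
t = np.array([4.0, 1.0, 0.5, 0.25])
T = t[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

H = T @ J                            # flipping T's columns yields a Hankel matrix
# Hankel property: constant ascending skew-diagonals.
assert all(H[i, j] == H[i + 1, j - 1] for i in range(n - 1) for j in range(1, n))

# Eigenvalues of H agree with those of T up to sign, so the absolute
# values coincide (both matrices are symmetric here).
eig_T = np.sort(np.abs(np.linalg.eigvalsh(T)))
eig_H = np.sort(np.abs(np.linalg.eigvalsh(H)))
assert np.allclose(eig_T, eig_H)
```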

## Hankel operator

A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix $\displaystyle{ A = (A_{i,j})_{i,j \ge 1} }$ must satisfy $\displaystyle{ A_{i,j} = A_{i+1,j-1} }$ for all rows $\displaystyle{ i \ge 1 }$ and columns $\displaystyle{ j \ge 2 }$. Equivalently, every entry $\displaystyle{ A_{i,j} }$ depends only on $\displaystyle{ i+j }$.

Given a Hankel matrix $\displaystyle{ A }$ whose entries satisfy $\displaystyle{ A_{i,j} = \alpha_{i+j} }$ for some sequence $\displaystyle{ \alpha }$, the corresponding Hankel operator $\displaystyle{ H_\alpha }$ is defined by $\displaystyle{ H_\alpha(u) = Au }$.

We are often interested in Hankel operators $\displaystyle{ H_\alpha: \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) \rightarrow \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }$ over the Hilbert space $\displaystyle{ \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }$, the space of square-summable complex sequences. For any $\displaystyle{ u \in \ell^{2}\left(\mathbb{Z}^{+} \cup\{0\}\right) }$, we have

$\displaystyle{ \|u\|_{\ell^{2}}^{2} = \sum_{n=0}^{\infty}\left|u_{n}\right|^{2}. }$

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.
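As a sketch of this idea (the sequence and the target rank below are illustrative), a truncated SVD yields the best low-rank approximation in the spectral norm:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(19)                        # a_0 ... a_{2n-2}, n = 10
A = a[np.add.outer(np.arange(10), np.arange(10))]  # 10 x 10 Hankel matrix

U, s, Vt = np.linalg.svd(A)
k = 3                                              # target rank
A_k = U[:, :k] * s[:k] @ Vt[:k]                    # best rank-k approximation

# Eckart-Young: the spectral-norm error is the first discarded singular value.
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```

Note that the truncated approximation `A_k` is in general no longer a Hankel matrix.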

Note that the matrix $\displaystyle{ A }$ does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors do not apply directly. If we additionally require the approximation itself to be a Hankel matrix, the best such approximation is characterized by AAK theory.

The determinant of a Hankel matrix is called a catalecticant.

## Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence $\displaystyle{ \{h_n\}_{n\ge 0} }$ is the Hankel transform of the sequence $\displaystyle{ \{b_n\}_{n\ge 0} }$ when $\displaystyle{ h_n = \det (b_{i+j-2})_{1 \le i,j \le n+1}. }$

The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$\displaystyle{ c_n = \sum_{k=0}^n {n \choose k} b_k }$

as the binomial transform of the sequence $\displaystyle{ \{b_n\} }$, then one has

$\displaystyle{ \det (b_{i+j-2})_{1 \le i,j \le n+1} = \det (c_{i+j-2})_{1 \le i,j \le n+1}. }$

## Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization. The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
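A minimal sketch of such an SVD-based realization, in the style of the Ho–Kalman algorithm (the 2-state system below is purely illustrative, used only to generate an impulse response):

```python
import numpy as np

# Illustrative 2-state SISO system used to generate "output data".
A_true = np.array([[0.9, 0.5], [0.0, 0.4]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
g = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(12)]                      # Markov parameters g_k = C A^k B

m = 5
idx = np.add.outer(np.arange(m), np.arange(m))
H  = np.array(g)[idx]                          # Hankel matrix  H[i,j]  = g_{i+j}
Hs = np.array(g)[idx + 1]                      # shifted Hankel Hs[i,j] = g_{i+j+1}

U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-8 * s[0]))               # numerical rank = state dimension
S_half = np.sqrt(s[:r])
O  = U[:, :r] * S_half                         # observability factor,  H = O @ Ct
Ct = S_half[:, None] * Vt[:r]                  # controllability factor

# Since Hs = O A Ct, recover A via the pseudoinverses of the factors.
A = (U[:, :r] / S_half).T @ Hs @ (Vt[:r].T / S_half)
B = Ct[:, :1]
C = O[:1, :]

# The realization reproduces the original Markov parameters.
g_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(12)]
assert np.allclose(g, g_hat)
```

The recovered `(A, B, C)` agrees with the original system only up to a change of state-space coordinates, which is all a realization requires.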

### Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.
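For instance, if a density on $\displaystyle{ [0,1] }$ is approximated by a polynomial $\displaystyle{ w(x) = \sum_j c_j x^j }$, its moments $\displaystyle{ m_k = \int_0^1 x^k w(x)\,dx = \sum_j c_j/(k+j+1) }$ give a linear system whose matrix is the Hilbert matrix, a Hankel matrix. A small numpy sketch with a hypothetical target density:

```python
import numpy as np

d = 3                                                        # number of coefficients
Hb = 1.0 / (np.add.outer(np.arange(d), np.arange(d)) + 1.0)  # Hilbert (Hankel) matrix

# Moments of the illustrative target density w(x) = 6x(1 - x):
# m_k = ∫_0^1 x^k · 6x(1 - x) dx = 6/(k + 2) - 6/(k + 3).
m = np.array([6.0 / (k + 2) - 6.0 / (k + 3) for k in range(d)])

c = np.linalg.solve(Hb, m)               # invert the Hankel moment matrix
assert np.allclose(c, [0.0, 6.0, -6.0])  # recovers w(x) = 6x - 6x^2
```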