# Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example, $\displaystyle{ \qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}. }$

More generally, a Hankel matrix is any $\displaystyle{ n \times n }$ matrix $\displaystyle{ A }$ of the form

$\displaystyle{ A = \begin{bmatrix} a_0 & a_1 & a_2 & \ldots & a_{n-1} \\ a_1 & a_2 & & &\vdots \\ a_2 & & & & a_{2n-4} \\ \vdots & & & a_{2n-4} & a_{2n-3} \\ a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}. }$

In terms of the components, if the $\displaystyle{ (i,j) }$ element of $\displaystyle{ A }$ is denoted by $\displaystyle{ A_{i,j} }$, then, assuming $\displaystyle{ i \le j }$, we have $\displaystyle{ A_{i,j} = A_{i+k,j-k} }$ for all $\displaystyle{ k = 0, \ldots, j-i. }$
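As a sketch of the definition, the following snippet (assuming NumPy, with an arbitrary example sequence) builds an $\displaystyle{ n \times n }$ Hankel matrix entrywise and checks the identity $\displaystyle{ A_{i,j} = A_{i+k,j-k} }$ along every skew-diagonal:

```python
import numpy as np

# Build an n x n Hankel matrix A with A[i, j] = a[i + j] from a
# sequence a_0, ..., a_{2n-2} (the sequence here is just an example).
n = 4
a = np.arange(2 * n - 1)              # [0, 1, ..., 6]
A = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# Verify the defining identity A[i, j] == A[i+k, j-k] for k = 0, ..., j-i.
for i in range(n):
    for j in range(i, n):
        assert all(A[i, j] == A[i + k, j - k] for k in range(j - i + 1))
```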

## Properties

• Any Hankel matrix is symmetric.
• Let $\displaystyle{ J_n }$ be the $\displaystyle{ n \times n }$ exchange matrix. If $\displaystyle{ H }$ is an $\displaystyle{ m \times n }$ Hankel matrix, then $\displaystyle{ H = T J_n }$ where $\displaystyle{ T }$ is an $\displaystyle{ m \times n }$ Toeplitz matrix.
• If $\displaystyle{ T }$ is real symmetric, then $\displaystyle{ H = T J_n }$ will have the same eigenvalues as $\displaystyle{ T }$ up to sign.[1]
• The Hilbert matrix is an example of a Hankel matrix.
• The determinant of a Hankel matrix is called a catalecticant.
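The first three properties can be checked numerically. The sketch below (assuming NumPy; the random entries are illustrative) builds a Toeplitz matrix $\displaystyle{ T }$, forms $\displaystyle{ H = T J_n }$, and verifies that $\displaystyle{ H }$ is Hankel and symmetric, and that a symmetric $\displaystyle{ T }$ yields the same eigenvalues up to sign:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
t = rng.standard_normal(2 * n - 1)    # entries t_{-(n-1)}, ..., t_{n-1}

# Toeplitz matrix T[i, j] = t[i - j + n - 1] (constant along diagonals).
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])

# Exchange matrix J_n (the "flip" permutation).
J = np.fliplr(np.eye(n))

# H = T J_n reverses the columns of T, turning diagonals into
# anti-diagonals, so H is Hankel -- and every Hankel matrix is symmetric.
H = T @ J
assert np.allclose(H, np.fliplr(T))
assert np.allclose(H, H.T)

# If T is real symmetric (here: the symmetric part of T, still Toeplitz),
# then T J_n has the same eigenvalues as T up to sign.
S = (T + T.T) / 2
ev_S = np.sort(np.abs(np.linalg.eigvalsh(S)))
ev_H = np.sort(np.abs(np.linalg.eigvalsh(S @ J)))
assert np.allclose(ev_S, ev_H)
```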

## Hankel operator

Given a formal Laurent series

$\displaystyle{ f(z) = \sum_{n=-\infty}^N a_n z^n }$

the corresponding Hankel operator is defined as[2]

$\displaystyle{ H_f : \mathbf C[z] \to z^{-1} \mathbf C[[z^{-1}]]. }$

This takes a polynomial $\displaystyle{ g \in \mathbf C[z] }$ and sends it to the product $\displaystyle{ fg }$, but discards all powers of $\displaystyle{ z }$ with a non-negative exponent, so as to give an element in $\displaystyle{ z^{-1} \mathbf C[[z^{-1}]] }$, the formal power series with strictly negative exponents. The map $\displaystyle{ H_f }$ is in a natural way $\displaystyle{ \mathbf C[z] }$-linear, and its matrix with respect to the elements $\displaystyle{ 1, z, z^2, \dots \in \mathbf C[z] }$ and $\displaystyle{ z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf C[[z^{-1}]] }$ is the Hankel matrix

$\displaystyle{ \begin{bmatrix} a_1 & a_2 & \ldots \\ a_2 & a_3 & \ldots \\ a_3 & a_4 & \ldots \\ \vdots & \vdots & \ddots \end{bmatrix}. }$

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if $\displaystyle{ f }$ is a rational function; that is, a fraction of two polynomials

$\displaystyle{ f(z) = \frac{p(z)}{q(z)}. }$
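Kronecker's theorem can be illustrated numerically (an example, not a proof; NumPy assumed). For the rational function $\displaystyle{ f(z) = 1/(z - r) }$ the Laurent coefficients form the geometric sequence $\displaystyle{ a_n = r^{n-1} }$, and the resulting Hankel matrix has rank 1:

```python
import numpy as np

# For f(z) = 1/(z - r) = sum_{n>=1} r^(n-1) z^(-n), the coefficients
# a_n = r^(n-1) are geometric.  r and N below are example choices.
r = 0.5
N = 8
a = np.array([r ** (k - 1) for k in range(1, 2 * N)])   # a_1, a_2, ...
H = np.array([[a[i + j] for j in range(N)] for i in range(N)])

# Geometric coefficients make every row a multiple of the first,
# so the Hankel matrix has finite rank (here, rank 1), as Kronecker's
# theorem predicts for a rational f.
assert np.linalg.matrix_rank(H) == 1
```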

## Approximations

We are often interested in approximating a Hankel operator, possibly by one of lower rank. To measure the error of such an approximation, we can use the spectral norm (operator 2-norm). This suggests the singular value decomposition as a technique for approximating the action of the operator.

Note that the matrix $\displaystyle{ A }$ does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We may also require that the approximation itself be a Hankel matrix; AAK theory shows that a best approximation of a given rank can be achieved by a Hankel operator.
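A minimal sketch of the SVD-based approximation for a finite Hankel matrix, assuming NumPy and an arbitrary random sequence; note that the truncated SVD is generally not itself Hankel, which is precisely the structured approximation problem AAK theory addresses:

```python
import numpy as np

# Random finite Hankel matrix (example data).
rng = np.random.default_rng(1)
n = 6
a = rng.standard_normal(2 * n - 1)
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# Best rank-k approximation in the spectral norm via truncated SVD.
U, s, Vt = np.linalg.svd(H)
k = 2                                   # target rank (example choice)
H_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the spectral-norm error of the best rank-k
# approximation equals the (k+1)-th singular value.
err = np.linalg.norm(H - H_k, ord=2)
assert np.isclose(err, s[k])
```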

## Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, of a sequence $\displaystyle{ b_k }$ is the sequence of the determinants of the Hankel matrices formed from $\displaystyle{ b_k }$. Given an integer $\displaystyle{ n > 0 }$, define the corresponding $\displaystyle{ n\times n }$ Hankel matrix $\displaystyle{ B_n }$ as having the matrix elements $\displaystyle{ [B_n]_{i,j}=b_{i+j}. }$ Then, the sequence $\displaystyle{ h_n }$ given by

$\displaystyle{ h_n = \det B_n }$

is the Hankel transform of the sequence $\displaystyle{ b_k. }$ The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$\displaystyle{ c_n = \sum_{k=0}^n {n \choose k} b_k }$

as the binomial transform of the sequence $\displaystyle{ b_n }$, then one has $\displaystyle{ \det B_n = \det C_n, }$ where $\displaystyle{ C_n }$ is the Hankel matrix formed from the sequence $\displaystyle{ c_k }$.
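The invariance under the binomial transform can be checked numerically for any particular sequence; the sequence below is an arbitrary example (NumPy assumed):

```python
import numpy as np
from math import comb

# Example sequence (arbitrary values, long enough for n up to 4).
b = [1, 1, 2, 5, 14, 33, 71, 120]

def hankel_det(seq, n):
    """det of the n x n Hankel matrix with entries seq[i + j] (0-indexed)."""
    B = np.array([[seq[i + j] for j in range(n)] for i in range(n)], float)
    return np.linalg.det(B)

# Binomial transform c_m = sum_k C(m, k) b_k.
c = [sum(comb(m, k) * b[k] for k in range(m + 1)) for m in range(len(b))]

# The Hankel transform of b and of its binomial transform c agree.
for n in range(1, 5):                   # needs seq[0 .. 2n-2]
    assert np.isclose(hankel_det(b, n), hankel_det(c, n))
```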

## Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[4] A Hankel matrix formed from a signal has also been found useful for the decomposition of non-stationary signals and for time-frequency representation.
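The SVD-based realization can be sketched as follows. This is a Ho-Kalman-style construction under simplifying assumptions (known system order, noise-free data); the system matrices `A_true`, `B_true`, `C_true` are illustrative choices, not from the text:

```python
import numpy as np

# Illustrative second-order system generating the output data.
A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Markov parameters h_k = C A^k B (the impulse response).
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(12)]

# Hankel matrix of the data, and its one-step-shifted companion.
m = 5
H  = np.array([[h[i + j]     for j in range(m)] for i in range(m)])
H2 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

# Factor H = O * Ctr via the SVD (balanced factors).
U, s, Vt = np.linalg.svd(H)
r = 2                                   # system order (assumed known here)
Sr = np.diag(np.sqrt(s[:r]))
O   = U[:, :r] @ Sr                     # observability factor
Ctr = Sr @ Vt[:r, :]                    # controllability factor

# Recover (A, B, C) up to similarity from the factorization.
A_hat = np.linalg.pinv(O) @ H2 @ np.linalg.pinv(Ctr)
B_hat = Ctr[:, :1]
C_hat = O[:1, :]

# The realization reproduces the Markov parameters.
h_hat = [(C_hat @ np.linalg.matrix_power(A_hat, k) @ B_hat).item()
         for k in range(12)]
assert np.allclose(h, h_hat)
```

The recovered $(\hat A, \hat B, \hat C)$ differ from the originals by a similarity transformation, but they generate the same input-output behavior, which is all a realization is required to do.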

### Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[5]