# Hamburger moment problem

In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (m0, m1, m2, ...), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that

$\displaystyle{ m_n = \int_{-\infty}^\infty x^n\,d \mu(x) \text{ ?} }$

In other words, an affirmative answer to the problem means that (m0, m1, m2, ...) is the sequence of moments of some positive Borel measure μ.
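As a concrete illustration (a minimal numerical sketch, not part of the classical treatment), the standard Gaussian measure has moments m2k = (2k − 1)!! and vanishing odd moments; the defining integrals can be approximated directly:

```python
import math

# Sketch: moments of the standard Gaussian measure
# dmu(x) = (2*pi)**(-1/2) * exp(-x**2/2) dx, approximated with a
# trapezoidal rule on [-20, 20] (the tails beyond are negligible).
# Expected values: m_0, m_2, m_4 ~ 1, 1, 3 and odd moments ~ 0.
def moment(n, lo=-20.0, hi=20.0, steps=100_000):
    h = (hi - lo) / steps
    f = lambda x: x ** n * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, steps))
    return s * h

moments = [moment(n) for n in range(6)]
```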

The Stieltjes moment problem, Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by $\displaystyle{ [0,+\infty) }$ (Stieltjes and Vorobyev; Vorobyev formulates the problem in terms of matrix theory) or by a bounded interval (Hausdorff).

## Characterization

The Hamburger moment problem is solvable (that is, (mn) is a sequence of moments) if and only if the corresponding Hankel kernel on the nonnegative integers

$\displaystyle{ A = \left(\begin{matrix} m_0 & m_1 & m_2 & \cdots \\ m_1 & m_2 & m_3 & \cdots \\ m_2 & m_3 & m_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{matrix}\right) }$

is positive semi-definite, i.e.,

$\displaystyle{ \sum_{j,k\ge0}m_{j+k}c_j\overline{c_k}\ge0 }$

for every sequence (cj)j ≥ 0 of complex numbers with finite support (i.e. cj = 0 except for finitely many values of j).

For the "only if" part of the claims simply note that

$\displaystyle{ \sum_{j,k\ge0}m_{j+k}c_j \overline{c_k} = \int_{-\infty}^\infty \left|\sum_{j\geq 0} c_j x^j\right|^2\,d \mu(x) }$

which is non-negative if $\displaystyle{ \mu }$ is non-negative.
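The criterion can be checked numerically for a sequence that is already known to be a moment sequence. A minimal sketch, using the Gaussian moments m2k = (2k − 1)!!: the quadratic form above should be nonnegative (up to rounding) for every finitely supported c.

```python
import random

# Sketch of the positivity criterion: for the Gaussian moment sequence
# m_{2k} = (2k-1)!!, m_{2k+1} = 0, the Hankel quadratic form
# sum_{j,k} m_{j+k} c_j c_k must be nonnegative for every finitely
# supported c (real c is used here, so conjugation is omitted).
def dfact(n):  # double factorial, with the convention (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

m = [0 if n % 2 else dfact(n - 1) for n in range(12)]

def hankel_form(c):
    return sum(m[j + k] * c[j] * c[k]
               for j in range(len(c)) for k in range(len(c)))

random.seed(0)
ok = all(hankel_form([random.uniform(-1, 1) for _ in range(6)]) > -1e-9
         for _ in range(100))
```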

We sketch an argument for the converse. Let Z+ denote the nonnegative integers and F0(Z+) the family of complex-valued sequences with finite support. The positive semi-definite Hankel kernel A induces a (possibly degenerate) sesquilinear product on F0(Z+); taking the quotient by its null space and completing gives a Hilbert space

$\displaystyle{ (\mathcal{H}, \langle\; , \; \rangle) }$

whose typical element is an equivalence class denoted by [f].

Let en be the element in F0(Z+) defined by en(m) = δnm. One notices that

$\displaystyle{ \langle [e_{n+1}], [e_m] \rangle = A_{m,n+1} = m_{m+n+1} = \langle [e_n], [e_{m+1}]\rangle. }$

Therefore, the "shift" operator T on $\displaystyle{ \mathcal{H} }$, with T[en] = [en + 1], is symmetric.

On the other hand, the desired expression

$\displaystyle{ m_n = \int_{-\infty}^\infty x^n\,d \mu(x) }$

suggests that μ is the spectral measure of a self-adjoint operator. (More precisely, μ is the spectral measure for an operator $\displaystyle{ \overline{T} }$ defined below, evaluated at the vector [1] (Reed & Simon 1975).) If we can find a "function model" in which the symmetric operator T acts as multiplication by x, then the spectral resolution of a self-adjoint extension of T proves the claim.

A function model is given by the natural isomorphism from F0(Z+) to the family of polynomials in a single real variable with complex coefficients: for n ≥ 0, identify en with xn. In the model, the operator T is multiplication by x, a densely defined symmetric operator. It can be shown that T always has self-adjoint extensions. Let $\displaystyle{ \overline{T} }$ be one of them and let μ be its spectral measure. Then

$\displaystyle{ \langle \overline{T}^n [1], [1] \rangle = \int x^n d \mu(x). }$

On the other hand,

$\displaystyle{ \langle \overline{T}^n [1], [1] \rangle = \langle T^n [e_0], [e_0] \rangle = m_n. }$
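The symmetry of T in the function model can be made concrete. In the following sketch a polynomial is a finite coefficient list, the inner product is the one induced by the moment sequence (here the Gaussian moments, as an assumed example), and T shifts coefficients:

```python
import random

# Sketch of the function model: a polynomial is a coefficient list,
# <p, q> = sum_{j,k} m_{j+k} p_j q_k, and T (multiplication by x)
# shifts the coefficients. Symmetry <T p, q> = <p, T q> then reduces
# to the same sum of terms m_{j+k+1} appearing on both sides.
def dfact(n):
    return 1 if n <= 0 else n * dfact(n - 2)

m = [0 if n % 2 else dfact(n - 1) for n in range(12)]  # Gaussian moments

def inner(p, q):
    return sum(m[j + k] * p[j] * q[k]
               for j in range(len(p)) for k in range(len(q)))

shift = lambda p: [0] + p  # T p = x * p(x)

random.seed(1)
p = [random.randint(-5, 5) for _ in range(5)]
q = [random.randint(-5, 5) for _ in range(5)]
symmetric = inner(shift(p), q) == inner(p, shift(q))
```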

For an alternative proof of existence that uses only Stieltjes integrals, see,[1] in particular Theorem 3.2.

### Uniqueness of solutions

The solutions form a convex set, so the problem has either infinitely many solutions or a unique solution.

Consider the (n + 1) × (n + 1) Hankel matrix

$\displaystyle{ \Delta_n = \left[\begin{matrix} m_0 & m_1 & m_2 & \cdots & m_{n} \\ m_1 & m_2 & m_3 & \cdots & m_{n+1} \\ m_2 & m_3 & m_4 & \cdots & m_{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{n} & m_{n+1} & m_{n+2} & \cdots & m_{2n} \end{matrix}\right]. }$

Positivity of A means that for each n, det(Δn) ≥ 0. If det(Δn) = 0 for some n, then

$\displaystyle{ (\mathcal{H}, \langle \; , \; \rangle) }$

is finite-dimensional and T is self-adjoint. So in this case the solution to the Hamburger moment problem is unique and μ, being the spectral measure of T, has finite support.
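The degenerate case can be illustrated with a measure chosen for this sketch (not from the sources above): two point masses, for which det(Δ1) > 0 but det(Δ2) = 0, computed in exact rational arithmetic.

```python
from fractions import Fraction

# Sketch of the degenerate case: a measure with two-point support,
# mass 1/2 at x = 1 and mass 1/2 at x = -2. Then det(Delta_1) > 0 but
# det(Delta_2) = 0, so the moment problem is determinate.
w = Fraction(1, 2)
support = [1, -2]
m = [sum(w * Fraction(x) ** n for x in support) for n in range(5)]

def det(a):  # cofactor expansion; fine for tiny matrices
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def hankel(k):  # the (k+1) x (k+1) matrix Delta_k
    return [[m[i + j] for j in range(k + 1)] for i in range(k + 1)]

d1, d2 = det(hankel(1)), det(hankel(2))
```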

More generally, the solution is unique if there are constants C and D such that for all n, $\displaystyle{ |m_n| \leq CD^n n! }$ (Reed & Simon 1975). This follows from the more general Carleman's condition.

There are examples where the solution is not unique; see e.g.[2]
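One standard non-unique example (not necessarily the one in the cited reference) is the log-normal density f: for |a| ≤ 1, the perturbed density f(x)(1 + a sin(2π ln x)) on (0, ∞) has the same moments as f. A numerical sketch, substituting y = ln x so that each moment becomes a Gaussian-type integral:

```python
import math

# After y = ln x, the n-th moment of the perturbed log-normal density
# f(x) * (1 + a*sin(2*pi*ln x)) becomes the integral over R of
# exp(n*y - y*y/2) * (1 + a*sin(2*pi*y)) / sqrt(2*pi); the a-dependent
# part integrates to (essentially) zero for every integer n.
def trapezoid(g, lo, hi, steps):
    h = (hi - lo) / steps
    s = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, steps))
    return s * h

def moment(n, a):
    g = lambda y: (math.exp(n * y - y * y / 2)
                   * (1 + a * math.sin(2 * math.pi * y)) / math.sqrt(2 * math.pi))
    return trapezoid(g, -15.0, 15.0, 100_000)

# Relative agreement of the first few moments for a = 0 and a = 1/2.
same = all(abs(moment(n, 0.5) - moment(n, 0.0)) < 1e-6 * math.exp(n * n / 2)
           for n in range(4))
```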

## Further results

One can see that the Hamburger moment problem is intimately related to orthogonal polynomials on the real line. The Gram–Schmidt procedure gives a basis of orthogonal polynomials in which the operator $\displaystyle{ \overline{T} }$ has a tridiagonal Jacobi matrix representation. This in turn leads to a tridiagonal model of positive Hankel kernels.
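A minimal sketch of this in exact rational arithmetic, using the Gaussian moments as an assumed example (for which Gram–Schmidt yields the monic probabilists' Hermite polynomials): multiplication by x is tridiagonal in the resulting orthogonal basis.

```python
from fractions import Fraction

# Gaussian moments m_{2k} = (2k-1)!!, odd moments 0, as exact rationals.
def dfact(n):
    return 1 if n <= 0 else n * dfact(n - 2)

m = [Fraction(0) if n % 2 else Fraction(dfact(n - 1)) for n in range(14)]

def inner(p, q):  # <p, q> = sum m_{j+k} p_j q_k on coefficient lists
    return sum(m[j + k] * p[j] * q[k]
               for j in range(len(p)) for k in range(len(q)))

# Gram-Schmidt on 1, x, x^2, ... gives monic orthogonal polynomials
# (here the probabilists' Hermite polynomials, e.g. p_3 = x^3 - 3x).
ps = []
for n in range(6):
    p = [Fraction(0)] * n + [Fraction(1)]  # the monomial x^n
    for q in ps:
        c = inner(p, q) / inner(q, q)
        p = [a - c * b for a, b in zip(p, q + [Fraction(0)] * (len(p) - len(q)))]
    ps.append(p)

shift = lambda p: [Fraction(0)] + p  # multiplication by x
# Tridiagonality: <x p_j, p_k> = 0 whenever |j - k| >= 2.
tridiag = all(inner(shift(ps[j]), ps[k]) == 0
              for j in range(6) for k in range(6) if abs(j - k) >= 2)
```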

An explicit calculation of the Cayley transform of T shows the connection with what is called the Nevanlinna class of analytic functions on the upper half plane. Passing to the non-commutative setting, this motivates Krein's formula which parametrizes the extensions of partial isometries.

The cumulative distribution function and the probability density function can often be found by applying the inverse Laplace transform to the moment generating function

$\displaystyle{ m(t) = \sum_{n=0}^{\infty} m_n\frac{t^n}{n!}, }$

provided that this series converges.
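For instance (a sketch with the Gaussian moments, whose moment generating function is e^{t²/2}), truncating the series already gives a good approximation:

```python
import math

# Partial sums of m(t) = sum_n m_n t^n / n! for the Gaussian moments
# m_{2k} = (2k-1)!!; the series converges to exp(t**2 / 2).
def dfact(n):
    return 1 if n <= 0 else n * dfact(n - 2)

def mgf(t, terms=40):
    return sum((0 if n % 2 else dfact(n - 1)) * t ** n / math.factorial(n)
               for n in range(terms))

close = all(abs(mgf(t) - math.exp(t * t / 2)) < 1e-9
            for t in (0.0, 0.5, 1.0, 2.0))
```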

## References

• Chihara, T.S. (1978), An Introduction to Orthogonal Polynomials, Gordon and Breach, Science Publishers, ISBN 0-677-04150-0
• Reed, Michael; Simon, Barry (1975), Fourier Analysis, Self-Adjointness, Methods of Modern Mathematical Physics, 2, Academic Press, pp. 145, 205, ISBN 0-12-585002-6
• Shohat, J. A.; Tamarkin, J. D. (1943), The Problem of Moments, New York: American Mathematical Society, ISBN 0-8218-1501-6.

1. Chihara 1978, p. 56.
2. Chihara 1978, p. 73.