Extensions of symmetric operators

In functional analysis, one is interested in extensions of symmetric operators acting on a Hilbert space. Of particular importance is the existence, and sometimes explicit constructions, of self-adjoint extensions. This problem arises, for example, when one needs to specify domains of self-adjointness for formal expressions of observables in quantum mechanics. Other applications of solutions to this problem can be seen in various moment problems.

This article discusses a few related problems of this type. The unifying theme is that each problem has an operator-theoretic characterization which gives a corresponding parametrization of solutions. More specifically, finding self-adjoint extensions, with various requirements, of symmetric operators is equivalent to finding unitary extensions of suitable partial isometries.

Symmetric operators

Let [math]\displaystyle{ H }[/math] be a Hilbert space. A linear operator [math]\displaystyle{ A }[/math] acting on [math]\displaystyle{ H }[/math] with dense domain [math]\displaystyle{ \operatorname{dom}(A) }[/math] is symmetric if

[math]\displaystyle{ \langle Ax, y\rangle = \langle x, A y\rangle, \quad \forall x,y\in\operatorname{dom}(A). }[/math]

If [math]\displaystyle{ \operatorname{dom}(A) = H }[/math], the Hellinger-Toeplitz theorem says that [math]\displaystyle{ A }[/math] is a bounded operator, in which case [math]\displaystyle{ A }[/math] is self-adjoint and the extension problem is trivial. In general, a symmetric operator is self-adjoint when the domain of its adjoint, [math]\displaystyle{ \operatorname{dom}(A^*) }[/math], is contained in [math]\displaystyle{ \operatorname{dom}(A) }[/math]; since symmetry already gives [math]\displaystyle{ \operatorname{dom}(A) \subseteq \operatorname{dom}(A^*) }[/math], this forces [math]\displaystyle{ A = A^* }[/math].

When dealing with unbounded operators, it is often desirable to be able to assume that the operator in question is closed. In the present context, it is a convenient fact that every symmetric operator [math]\displaystyle{ A }[/math] is closable. That is, [math]\displaystyle{ A }[/math] has a smallest closed extension, called the closure of [math]\displaystyle{ A }[/math]. This follows from the symmetry assumption: [math]\displaystyle{ A \subseteq A^* }[/math], and the adjoint of a densely defined operator is always closed, so [math]\displaystyle{ A }[/math] admits a closed extension. Since [math]\displaystyle{ A }[/math] and its closure have the same closed extensions, it can always be assumed that the symmetric operator of interest is closed.

In the next section, a symmetric operator will be assumed to be densely defined and closed.

Self-adjoint extensions of symmetric operators

If an operator [math]\displaystyle{ A }[/math] on the Hilbert space [math]\displaystyle{ H }[/math] is symmetric, when does it have self-adjoint extensions? An operator that has a unique self-adjoint extension is said to be essentially self-adjoint; equivalently, an operator is essentially self-adjoint if its closure (the operator whose graph is the closure of the graph of [math]\displaystyle{ A }[/math]) is self-adjoint. In general, a symmetric operator could have many self-adjoint extensions or none at all. Thus, we would like a classification of its self-adjoint extensions.

The first basic criterion for essential self-adjointness is the following:[1]

Theorem —  If [math]\displaystyle{ A }[/math] is a symmetric operator on [math]\displaystyle{ H }[/math], then [math]\displaystyle{ A }[/math] is essentially self-adjoint if and only if the ranges of the operators [math]\displaystyle{ A-i }[/math] and [math]\displaystyle{ A+i }[/math] are dense in [math]\displaystyle{ H }[/math].

Equivalently, [math]\displaystyle{ A }[/math] is essentially self-adjoint if and only if the operators [math]\displaystyle{ A^* \pm i }[/math] have trivial kernels.[2] That is to say, [math]\displaystyle{ A }[/math] fails to be essentially self-adjoint if and only if [math]\displaystyle{ A^* }[/math] has an eigenvector with eigenvalue [math]\displaystyle{ i }[/math] or [math]\displaystyle{ -i }[/math].

Another way of looking at the issue is provided by the Cayley transform of a self-adjoint operator and the deficiency indices.[3]

Theorem — Suppose [math]\displaystyle{ A }[/math] is a symmetric operator. Then there is a unique partially defined linear operator [math]\displaystyle{ W(A) : \operatorname{ran}(A + i) \to \operatorname{ran}(A - i) }[/math] such that [math]\displaystyle{ W(A)(Ax + ix) = Ax - ix, \quad x \in \operatorname{dom}(A). }[/math]

[math]\displaystyle{ W(A) }[/math] is isometric on its domain. Moreover, [math]\displaystyle{ \operatorname{ran}(1-W(A)) }[/math] is dense in [math]\displaystyle{ H }[/math].

Conversely, for any partially defined operator [math]\displaystyle{ U }[/math] which is isometric on its (not necessarily closed) domain and such that [math]\displaystyle{ \operatorname{ran}(1-U) }[/math] is dense in [math]\displaystyle{ H }[/math], there is a (unique) densely defined symmetric operator

[math]\displaystyle{ S(U) : \operatorname{ran}(1 - U) \to \operatorname{ran}(1 + U) }[/math]

such that

[math]\displaystyle{ S(U)(x - Ux) = i(x + U x), \quad x \in \operatorname{dom}(U). }[/math]

The mappings [math]\displaystyle{ W }[/math] and [math]\displaystyle{ S }[/math] are inverses of each other, i.e., [math]\displaystyle{ S(W(A))=A }[/math].

The mapping [math]\displaystyle{ A \mapsto W(A) }[/math] is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings [math]\displaystyle{ W }[/math] and [math]\displaystyle{ S }[/math] are monotone: This means that if [math]\displaystyle{ B }[/math] is a symmetric operator that extends the densely defined symmetric operator [math]\displaystyle{ A }[/math], then [math]\displaystyle{ W(B) }[/math] extends [math]\displaystyle{ W(A) }[/math], and similarly for [math]\displaystyle{ S }[/math].

Theorem — A necessary and sufficient condition for [math]\displaystyle{ A }[/math] to be self-adjoint is that its Cayley transform [math]\displaystyle{ W(A) }[/math] be unitary on [math]\displaystyle{ H }[/math].
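
For a bounded self-adjoint operator, for instance a Hermitian matrix acting on a finite-dimensional Hilbert space, this theorem can be checked directly. The following minimal sketch (assuming NumPy; the matrix is randomly generated for illustration) computes the Cayley transform of a Hermitian matrix, verifies that it is unitary, and recovers the operator via the inverse transform:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian matrix plays the role of a (bounded, everywhere defined)
# self-adjoint operator A on H = C^4.
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2
I = np.eye(n)

# Cayley transform W(A) = (A - i)(A + i)^{-1}, i.e. W(A)(Ax + ix) = Ax - ix.
W = (A - 1j * I) @ np.linalg.inv(A + 1j * I)

print(np.allclose(W.conj().T @ W, I))            # True: W(A) is unitary

# Inverse transform S(W): from S(W)(x - Wx) = i(x + Wx) one gets A = i(1 + W)(1 - W)^{-1}.
A_back = 1j * (I + W) @ np.linalg.inv(I - W)
print(np.allclose(A_back, A))                    # True: the inverse transform recovers A
```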

This immediately gives us a necessary and sufficient condition for [math]\displaystyle{ A }[/math] to have a self-adjoint extension, as follows:

Theorem — A necessary and sufficient condition for [math]\displaystyle{ A }[/math] to have a self-adjoint extension is that [math]\displaystyle{ W(A) }[/math] have a unitary extension to [math]\displaystyle{ H }[/math].

A partially defined isometric operator [math]\displaystyle{ V }[/math] on a Hilbert space [math]\displaystyle{ H }[/math] has a unique isometric extension to the norm closure of [math]\displaystyle{ \operatorname{dom}(V) }[/math]. A partially defined isometric operator with closed domain is called a partial isometry.

Define the deficiency subspaces of A by

[math]\displaystyle{ \begin{align} K_+ &= \operatorname{ran}(A+i)^{\perp}\\ K_- &= \operatorname{ran}(A-i)^{\perp} \end{align} }[/math]

In this language, the description of the self-adjoint extension problem given by the theorem can be restated as follows: a symmetric operator [math]\displaystyle{ A }[/math] has self-adjoint extensions if and only if the deficiency subspaces [math]\displaystyle{ K_{+} }[/math] and [math]\displaystyle{ K_{-} }[/math] have the same dimension.[4]

The deficiency indices of a partial isometry [math]\displaystyle{ V }[/math] are defined as the dimension of the orthogonal complements of the domain and range:

[math]\displaystyle{ \begin{align} n_+(V) &= \dim \operatorname{dom}(V)^\perp \\ n_-(V) &= \dim \operatorname{ran}(V)^\perp \end{align} }[/math]

Theorem — A partial isometry [math]\displaystyle{ V }[/math] has a unitary extension if and only if the deficiency indices are identical. Moreover, [math]\displaystyle{ V }[/math] has a unique unitary extension if and only if the deficiency indices are both zero.
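
As a finite-dimensional illustration of this theorem (a sketch assuming NumPy; the operator [math]\displaystyle{ V }[/math] below is chosen purely for illustration), a partial isometry with equal deficiency indices can be extended to a unitary by pairing orthonormal bases of the orthogonal complements of its domain and range:

```python
import numpy as np

# Partial isometry V on C^3 with dom(V) = span{e1, e2}, acting by
# V e1 = e2, V e2 = e3.  Encode it as a matrix that vanishes on dom(V)^perp.
V = np.zeros((3, 3))
V[1, 0] = 1.0   # e1 -> e2
V[2, 1] = 1.0   # e2 -> e3

# Orthonormal bases of dom(V)^perp and ran(V)^perp (one vector each, so the
# deficiency indices n_+(V) = n_-(V) = 1 agree and a unitary extension exists).
dom_perp = np.array([[0.0], [0.0], [1.0]])   # span{e3}
ran_perp = np.array([[1.0], [0.0], [0.0]])   # span{e1}

# Extend V by sending the basis of dom(V)^perp onto that of ran(V)^perp.
U = V + ran_perp @ dom_perp.T                # additionally e3 -> e1

print(np.allclose(U[:, :2], V[:, :2]))           # U agrees with V on dom(V)
print(np.allclose(U @ dom_perp, ran_perp))       # the new basis vector is mapped as intended
print(np.allclose(U.conj().T @ U, np.eye(3)))    # True: U is unitary
```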

We see that there is a bijection between symmetric extensions of an operator and isometric extensions of its Cayley transform. The symmetric extension is self-adjoint if and only if the corresponding isometric extension is unitary.

A symmetric operator has a unique self-adjoint extension if and only if both its deficiency indices are zero. Such an operator is said to be essentially self-adjoint. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for non-negative symmetric operators (or more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension, and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator), so the issue of essential self-adjointness for these operators is less critical.

Suppose [math]\displaystyle{ A }[/math] is symmetric and densely defined. Then any symmetric extension of [math]\displaystyle{ A }[/math] is a restriction of [math]\displaystyle{ A^* }[/math]. Indeed, if [math]\displaystyle{ A\subseteq B }[/math] and [math]\displaystyle{ B }[/math] is symmetric, then [math]\displaystyle{ B \subseteq A^* }[/math] by the definition of [math]\displaystyle{ \operatorname{dom}(A^*) }[/math]. This observation leads to the von Neumann formula:[5]

Theorem —  Suppose [math]\displaystyle{ A }[/math] is a densely defined symmetric operator, with domain [math]\displaystyle{ \operatorname{dom}(A) }[/math]. Let [math]\displaystyle{ N_\pm = \operatorname{ran}(A \pm i)^\perp }[/math] be its deficiency subspaces. Then [math]\displaystyle{ N_\pm = \operatorname{ker}(A^* \mp i), }[/math] and [math]\displaystyle{ \operatorname{dom}\left(A^*\right) = \operatorname{dom}\left(\overline{A}\right) \oplus N_+ \oplus N_-, }[/math] where the decomposition is orthogonal relative to the graph inner product of [math]\displaystyle{ \operatorname{dom}(A^*) }[/math]: [math]\displaystyle{ \langle \xi \mid \eta \rangle_\text{graph} = \langle \xi \mid \eta \rangle + \left\langle A^* \xi \mid A^* \eta \right\rangle . }[/math]

Example

Consider the Hilbert space [math]\displaystyle{ L^2([0,1]) }[/math]. On the subspace of absolutely continuous functions that vanish at the endpoints (and whose derivative lies in [math]\displaystyle{ L^2([0,1]) }[/math]), define the operator [math]\displaystyle{ A }[/math] by

[math]\displaystyle{ A f = i \frac{d}{dx} f. }[/math]

Integration by parts shows [math]\displaystyle{ A }[/math] is symmetric. Its adjoint [math]\displaystyle{ A^* }[/math] is the same operator with [math]\displaystyle{ \operatorname{dom}(A^*) }[/math] being the absolutely continuous functions with no boundary condition. We will see that extending A amounts to modifying the boundary conditions, thereby enlarging [math]\displaystyle{ \operatorname{dom}(A) }[/math] and reducing [math]\displaystyle{ \operatorname{dom}(A^*) }[/math], until the two coincide.
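
The symmetry claim can be checked symbolically on concrete elements of [math]\displaystyle{ \operatorname{dom}(A) }[/math]. The sketch below assumes SymPy and uses two arbitrarily chosen polynomials vanishing at both endpoints:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Two functions in dom(A): absolutely continuous on [0,1] and vanishing at 0 and 1.
f = x * (1 - x)
g = x**2 * (1 - x)

Af = sp.I * sp.diff(f, x)
Ag = sp.I * sp.diff(g, x)

# <Af, g> and <f, Ag> in L^2([0,1]), with the inner product conjugate-linear
# in the second argument.
lhs = sp.integrate(Af * sp.conjugate(g), (x, 0, 1))
rhs = sp.integrate(f * sp.conjugate(Ag), (x, 0, 1))

print(lhs, rhs)                  # both equal -I/60
print(sp.simplify(lhs - rhs))    # 0, i.e. <Af, g> = <f, Ag>
```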

Direct calculation shows that [math]\displaystyle{ K_+ }[/math] and [math]\displaystyle{ K_- }[/math] are one-dimensional subspaces given by

[math]\displaystyle{ \begin{align} K_+ &= \operatorname{span} \{\phi_+ = c_+ \, e^x \}\\ K_- &= \operatorname{span}\{ \phi_- = c_- \, e^{-x} \} \end{align} }[/math]

where [math]\displaystyle{ c_\pm }[/math] are normalizing constants, chosen so that [math]\displaystyle{ \|\phi_+\| = \|\phi_-\| = 1 }[/math]. The self-adjoint extensions [math]\displaystyle{ A_\alpha }[/math] of [math]\displaystyle{ A }[/math] are parametrized by the circle group [math]\displaystyle{ \mathbb T = \{\alpha \in \mathbb C : |\alpha| = 1 \} }[/math]. For each unitary transformation [math]\displaystyle{ U_\alpha : K_- \to K_+ }[/math] defined by

[math]\displaystyle{ U_\alpha (\phi_-) =\alpha \phi_+ }[/math]

there corresponds an extension [math]\displaystyle{ A_\alpha }[/math] with domain

[math]\displaystyle{ \operatorname{dom}(A_{\alpha}) = \{ f + \beta (\alpha \phi_{-} - \phi_+) | f \in \operatorname{dom}(A) , \; \beta \in \mathbb{C} \}. }[/math]

If [math]\displaystyle{ f \in \operatorname{dom}(A_\alpha) }[/math], then [math]\displaystyle{ f }[/math] is absolutely continuous and

[math]\displaystyle{ \left|\frac{f(0)}{f(1)}\right| = \left|\frac{e\alpha -1}{\alpha - e}\right| = 1. }[/math]

Conversely, if [math]\displaystyle{ f }[/math] is absolutely continuous and [math]\displaystyle{ f(0)=\gamma f(1) }[/math] for some [math]\displaystyle{ \gamma \in \mathbb{T} }[/math], then [math]\displaystyle{ f }[/math] lies in [math]\displaystyle{ \operatorname{dom}(A_\alpha) }[/math] for the corresponding [math]\displaystyle{ \alpha \in \mathbb{T} }[/math], namely the one satisfying [math]\displaystyle{ \gamma = (e\alpha - 1)/(\alpha - e) }[/math].
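
The boundary relation can also be checked numerically. The sketch below (assuming NumPy) uses the normalized deficiency vectors and evaluates the boundary values of an element of [math]\displaystyle{ \operatorname{dom}(A_\alpha) }[/math] that is not in [math]\displaystyle{ \operatorname{dom}(A) }[/math]:

```python
import numpy as np

e = np.e
# Normalized deficiency vectors: phi_+ = c_+ e^x, phi_- = c_- e^{-x}, with
# ||phi_+|| = ||phi_-|| = 1 in L^2([0,1]).
c_plus = np.sqrt(2.0 / (e**2 - 1.0))        # since the integral of e^{2x} over [0,1] is (e^2 - 1)/2
c_minus = e * c_plus                        # since the integral of e^{-2x} over [0,1] is (e^2 - 1)/(2 e^2)

for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    alpha = np.exp(1j * theta)
    # Boundary values of g = alpha*phi_- - phi_+, the part of dom(A_alpha) not
    # already in dom(A); any f in dom(A) vanishes at 0 and 1 and does not contribute.
    g0 = alpha * c_minus - c_plus           # g(0)
    g1 = alpha * c_minus / e - c_plus * e   # g(1)
    predicted = (e * alpha - 1.0) / (alpha - e)
    print(np.isclose(abs(g0 / g1), 1.0), np.isclose(g0 / g1, predicted))   # True True
```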

The self-adjoint operators [math]\displaystyle{ A_\alpha }[/math] are instances of the momentum operator in quantum mechanics.

Self-adjoint extension on a larger space

Every partial isometry can be extended, on a possibly larger space, to a unitary operator. Consequently, every symmetric operator has a self-adjoint extension, on a possibly larger space.

Positive symmetric operators

A symmetric operator [math]\displaystyle{ A }[/math] is called positive if

[math]\displaystyle{ \langle A x, x\rangle\ge 0, \quad \forall x\in \operatorname{dom}(A). }[/math]

It is known that for every such [math]\displaystyle{ A }[/math], one has [math]\displaystyle{ \operatorname{dim}K_+ = \operatorname{dim}K_- }[/math]. Therefore, every positive symmetric operator has self-adjoint extensions. The more interesting question in this direction is whether [math]\displaystyle{ A }[/math] has positive self-adjoint extensions.

For two positive self-adjoint operators [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], we put [math]\displaystyle{ A\leq B }[/math] if

[math]\displaystyle{ (A + 1)^{-1} \ge (B + 1)^{-1} }[/math]

in the sense of bounded operators.

Structure of 2 × 2 matrix contractions

While the extension problem for general symmetric operators is essentially that of extending partial isometries to unitaries, for positive symmetric operators the question becomes one of extending contractions: by "filling out" certain unknown entries of a 2 × 2 self-adjoint contraction, we obtain the positive self-adjoint extensions of a positive symmetric operator.

Before stating the relevant result, we first fix some terminology. For a contraction [math]\displaystyle{ \Gamma }[/math], acting on [math]\displaystyle{ H }[/math], we define its defect operators by

[math]\displaystyle{ \begin{align} &D_{ \Gamma }\; = (1 - \Gamma^*\Gamma )^{\frac{1}{2}}\\ &D_{\Gamma^*} = (1 - \Gamma \Gamma^*)^{\frac{1}{2}} \end{align} }[/math]

The defect spaces of [math]\displaystyle{ \Gamma }[/math] are

[math]\displaystyle{ \begin{align} &\mathcal{D}_{\Gamma}\; = \operatorname{ran}( D_{\Gamma} )\\ &\mathcal{D}_{\Gamma^*} = \operatorname{ran}( D_{\Gamma^*}) \end{align} }[/math]

The defect operators indicate the non-unitarity of [math]\displaystyle{ \Gamma }[/math], while the defect spaces ensure uniqueness of the parametrization below. Using this machinery, one can explicitly describe the structure of general matrix contractions. We will only need the 2 × 2 case. Every 2 × 2 contraction [math]\displaystyle{ \Gamma }[/math] can be uniquely expressed as

[math]\displaystyle{ \Gamma = \begin{bmatrix} \Gamma_1 & D_{\Gamma_1 ^*} \Gamma_2\\ \Gamma_3 D_{\Gamma_1} & - \Gamma_3 \Gamma_1^* \Gamma_2 + D_{\Gamma_3 ^*} \Gamma_4 D_{\Gamma_2} \end{bmatrix} }[/math]

where each [math]\displaystyle{ \Gamma_i }[/math] is a contraction.
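
The parametrization can be exercised in the simplest case, where every block is a scalar. The following sketch (assuming NumPy; note that for scalars [math]\displaystyle{ D_{\Gamma^*} = D_\Gamma }[/math]) draws four contractions from the unit disc, assembles the 2 × 2 matrix above, and checks that its operator norm does not exceed one:

```python
import numpy as np

rng = np.random.default_rng(1)

def defect(g):
    """Scalar defect operator D_g = (1 - |g|^2)^{1/2} of a contraction g in C."""
    return np.sqrt(1.0 - abs(g) ** 2)

for _ in range(5):
    # Four independent scalar contractions Gamma_1, ..., Gamma_4 (|Gamma_i| <= 1).
    g1, g2, g3, g4 = (rng.uniform(0, 1) * np.exp(2j * np.pi * rng.uniform())
                      for _ in range(4))
    # For scalars D_{Gamma_1^*} = D_{Gamma_1} and D_{Gamma_3^*} = D_{Gamma_3}.
    Gamma = np.array([
        [g1,              defect(g1) * g2],
        [g3 * defect(g1), -g3 * np.conj(g1) * g2 + defect(g3) * g4 * defect(g2)],
    ])
    print(np.linalg.norm(Gamma, 2) <= 1.0 + 1e-12)   # True: Gamma is a contraction
```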

Extensions of positive symmetric operators

The Cayley transform for general symmetric operators can be adapted to this special case. For every non-negative number [math]\displaystyle{ a }[/math],

[math]\displaystyle{ \left|\frac{a-1}{a+1}\right| \le 1. }[/math]

This suggests we assign to every positive symmetric operator [math]\displaystyle{ A }[/math] a contraction

[math]\displaystyle{ C_A : \operatorname{ran}(A + 1) \rightarrow \operatorname{ran}(A-1) \subset H }[/math]

defined by

[math]\displaystyle{ C_A (A+1)x = (A-1)x, \quad \mbox{i.e.,} \quad C_A = (A-1)(A+1)^{-1}, }[/math]

which has the matrix representation

[math]\displaystyle{ C_A = \begin{bmatrix} \Gamma_1 \\ \Gamma_3 D_{\Gamma_1} \end{bmatrix} : \operatorname{ran}(A+1) \rightarrow \begin{matrix} \operatorname{ran}(A+1) \\ \oplus \\ \operatorname{ran}(A+1)^{\perp} \end{matrix}. }[/math]

It is easily verified that the [math]\displaystyle{ \Gamma_1 }[/math] entry, i.e. the compression of [math]\displaystyle{ C_A }[/math] to [math]\displaystyle{ \operatorname{ran}(A+1)=\operatorname{dom}(C_A) }[/math], is self-adjoint. The operator [math]\displaystyle{ A }[/math] can be written as

[math]\displaystyle{ A = (1+ C_A)(1 - C_A)^{-1} \, }[/math]

with [math]\displaystyle{ \operatorname{dom}(A)=\operatorname{ran}(C_A -1) }[/math]. If [math]\displaystyle{ \tilde{C} }[/math] is a contraction that extends [math]\displaystyle{ C_A }[/math] and its projection onto its domain is self-adjoint, then it is clear that its inverse Cayley transform

[math]\displaystyle{ \tilde{A} = ( 1 + \tilde{C} ) ( 1 - \tilde{C} )^{-1} }[/math]

defined on [math]\displaystyle{ \operatorname{ran}( 1 - \tilde{C}) }[/math] is a positive symmetric extension of [math]\displaystyle{ A }[/math]. The symmetric property follows from its projection onto its own domain being self-adjoint and positivity follows from contractivity. The converse is also true: given a positive symmetric extension of [math]\displaystyle{ A }[/math], its Cayley transform is a contraction satisfying the stated "partial" self-adjoint property.

Theorem — The positive symmetric extensions of [math]\displaystyle{ A }[/math] are in one-to-one correspondence with the extensions of its Cayley transform where, if [math]\displaystyle{ C }[/math] is such an extension, we require [math]\displaystyle{ C }[/math] projected onto [math]\displaystyle{ \operatorname{dom}(C) }[/math] be self-adjoint.

The unitarity criterion of the Cayley transform is replaced by self-adjointness for positive operators.

Theorem — A symmetric positive operator [math]\displaystyle{ A }[/math] is self-adjoint if and only if its Cayley transform is a self-adjoint contraction defined on all of [math]\displaystyle{ H }[/math], i.e. when [math]\displaystyle{ \operatorname{ran}(A +1) = H }[/math].
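
In finite dimensions a positive symmetric operator is simply a positive semidefinite matrix, already self-adjoint, so the theorem can be observed directly: its Cayley transform is an everywhere-defined self-adjoint contraction, and the inverse transform recovers the operator. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
M = rng.standard_normal((n, n))
A = M @ M.T                     # positive semidefinite, hence self-adjoint
I = np.eye(n)

# Cayley transform C_A = (A - 1)(A + 1)^{-1}.
C = (A - I) @ np.linalg.inv(A + I)

print(np.allclose(C, C.T))                              # self-adjoint
print(np.linalg.norm(C, 2) <= 1.0)                      # a contraction, defined on all of H
print(np.allclose((I + C) @ np.linalg.inv(I - C), A))   # inverse transform recovers A
```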

Therefore, finding a self-adjoint extension of a positive symmetric operator becomes a "matrix completion problem". Specifically, we need to embed the column contraction [math]\displaystyle{ C_A }[/math] into a 2 × 2 self-adjoint contraction. This can always be done, and the structure of such contractions gives a parametrization of all possible extensions.

By the preceding subsection, all self-adjoint contractive extensions of [math]\displaystyle{ C_A }[/math] take the form

[math]\displaystyle{ \tilde{C}(\Gamma_4) = \begin{bmatrix} \Gamma_1 & D_{\Gamma_1} \Gamma_3 ^* \\ \Gamma_3 D_{\Gamma_1} & - \Gamma_3 \Gamma_1 \Gamma_3^* + D_{\Gamma_3^*} \Gamma_4 D_{\Gamma_3^*} \end{bmatrix}. }[/math]

So the self-adjoint positive extensions of [math]\displaystyle{ A }[/math] are in bijective correspondence with the self-adjoint contractions [math]\displaystyle{ \Gamma_4 }[/math] on the defect space [math]\displaystyle{ \mathcal{D}_{\Gamma_3^*} }[/math] of [math]\displaystyle{ \Gamma_3 }[/math]. The contractions [math]\displaystyle{ \tilde{C}(-1) }[/math] and [math]\displaystyle{ \tilde{C}(1) }[/math] give rise to positive extensions [math]\displaystyle{ A_0 }[/math] and [math]\displaystyle{ A_{\infty} }[/math] respectively. These are the smallest and largest positive extensions of [math]\displaystyle{ A }[/math] in the sense that

[math]\displaystyle{ A_0 \leq B \leq A_{\infty} }[/math]

for any positive self-adjoint extension [math]\displaystyle{ B }[/math] of [math]\displaystyle{ A }[/math]. The operator [math]\displaystyle{ A_\infty }[/math] is the Friedrichs extension of [math]\displaystyle{ A }[/math] and [math]\displaystyle{ A_0 }[/math] is the von Neumann-Krein extension of [math]\displaystyle{ A }[/math].
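
A scalar toy computation (a sketch assuming NumPy, with made-up block values; here [math]\displaystyle{ \mathbb{C}^2 }[/math] stands in for [math]\displaystyle{ \operatorname{ran}(A+1) \oplus \operatorname{ran}(A+1)^\perp }[/math]) makes the ordering visible: increasing [math]\displaystyle{ \Gamma_4 }[/math] increases the completion [math]\displaystyle{ \tilde{C}(\Gamma_4) }[/math], and since [math]\displaystyle{ (\tilde{A}+1)^{-1} = (1-\tilde{C})/2 }[/math], this is exactly the resolvent ordering defined above.

```python
import numpy as np

# Scalar data for the column contraction C_A = [Gamma_1; Gamma_3 D_{Gamma_1}].
# Gamma_1 must be a self-adjoint contraction, here a real number in [-1, 1].
g1, g3 = 0.3, 0.8
d1 = np.sqrt(1.0 - g1**2)       # D_{Gamma_1}
d3 = np.sqrt(1.0 - g3**2)       # D_{Gamma_3^*} = D_{Gamma_3} for scalars

def C_tilde(g4):
    """Self-adjoint contractive completion of the column, parametrized by g4 in [-1, 1]."""
    return np.array([[g1,       d1 * g3],
                     [g3 * d1, -g3 * g1 * g3 + d3 * g4 * d3]])

def is_psd(M, tol=1e-12):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

for g4 in (-0.5, 0.0, 0.7):
    C = C_tilde(g4)
    print(np.linalg.norm(C, 2) <= 1.0 + 1e-12,   # each completion is a contraction
          is_psd(C - C_tilde(-1.0)),             # C_tilde(-1) <= C_tilde(g4)
          is_psd(C_tilde(1.0) - C))              # C_tilde(g4) <= C_tilde(1), matching A_0 <= A~ <= A_infty
```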

Similar results can be obtained for accretive operators.

Notes

  1. Hall 2013 Theorem 9.21
  2. Hall 2013 Corollary 9.22
  3. Rudin 1991, pp. 356–357, §13.17.
  4. Jørgensen, Kornelson & Shuman 2011, p. 85.
  5. Akhiezer 1981, p. 354.

References

  • Akhiezer, Naum Ilʹich (1981). Theory of Linear Operators in Hilbert Space. Boston: Pitman. ISBN 0-273-08496-8. 
  • A. Alonso and B. Simon, The Birman-Krein-Vishik theory of self-adjoint extensions of semibounded operators. J. Operator Theory 4 (1980), 251-270.
  • Gr. Arsene and A. Gheondea, Completing matrix contractions, J. Operator Theory 7 (1982), 179-189.
  • N. Dunford and J.T. Schwartz, Linear Operators, Part II, Interscience, 1958.
  • Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, ISBN 978-1461471158 
  • Jørgensen, Palle E. T.; Kornelson, Keri A.; Shuman, Karen L. (2011). Iterated Function Systems, Moments, and Transformations of Infinite Matrices. Providence, RI: American Mathematical Society. ISBN 978-0-8218-5248-4. 
  • Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Vol 1: Functional analysis. Academic Press. ISBN 978-0-12-585050-6. 
  • Reed, M.; Simon, B. (1972), Methods of Modern Mathematical Physics: Vol 2: Fourier Analysis, Self-Adjointness, Academic Press 
  • Rudin, Walter (1991). Functional Analysis. Boston, Mass.: McGraw-Hill Science, Engineering & Mathematics. ISBN 978-0-07-054236-5.