Functional calculus

From HandWiki

In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.)

If [math]\displaystyle{ f }[/math] is a function, say a numerical function of a real number, and [math]\displaystyle{ M }[/math] is an operator, there is no particular reason why the expression [math]\displaystyle{ f(M) }[/math] should make sense. If it does, then we are no longer using [math]\displaystyle{ f }[/math] on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning. This passes nearly unnoticed, though, when we talk about 'squaring a matrix', which is the case of [math]\displaystyle{ f(x) = x^2 }[/math] with [math]\displaystyle{ M }[/math] an [math]\displaystyle{ n\times n }[/math] matrix. The idea of a functional calculus is to give a principled approach to this kind of overloading of the notation.
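As a small illustration of this overloading (a sketch using NumPy; the particular matrix is an arbitrary choice for this example), applying [math]\displaystyle{ f(x) = x^2 }[/math] to a matrix means the matrix product, not the entrywise square:

```python
import numpy as np

# 'Squaring a matrix': f(x) = x^2 applied to M means the matrix
# product M @ M, not the entrywise square -- the notation f(M)
# overloads f beyond its original numerical domain.
M = np.array([[1, 1],
              [0, 1]])
f_of_M = M @ M        # [[1, 2], [0, 1]]
entrywise = M ** 2    # [[1, 1], [0, 1]] -- a different operation
```

The two results differ already for this [math]\displaystyle{ 2\times 2 }[/math] matrix, which is why a principled convention is needed.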

The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilate an operator [math]\displaystyle{ T }[/math]. This family is an ideal in the ring of polynomials. Furthermore, it is a nontrivial ideal: let [math]\displaystyle{ n }[/math] be the finite dimension of the algebra of matrices; then the [math]\displaystyle{ n+1 }[/math] matrices [math]\displaystyle{ \{I, T, T^2, \ldots, T^n \} }[/math] cannot be linearly independent in an [math]\displaystyle{ n }[/math]-dimensional space. So [math]\displaystyle{ \sum_{i=0}^n \alpha_i T^i = 0 }[/math] for some scalars [math]\displaystyle{ \alpha_i }[/math], not all equal to 0. This implies that the nonzero polynomial [math]\displaystyle{ \sum_{i=0}^n \alpha_i x^i }[/math] lies in the ideal. Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial [math]\displaystyle{ m }[/math]. Multiplying by a unit if necessary, we can choose [math]\displaystyle{ m }[/math] to be monic. When this is done, the polynomial [math]\displaystyle{ m }[/math] is precisely the minimal polynomial of [math]\displaystyle{ T }[/math]. This polynomial gives deep information about [math]\displaystyle{ T }[/math]. For instance, a scalar [math]\displaystyle{ \alpha }[/math] is an eigenvalue of [math]\displaystyle{ T }[/math] if and only if [math]\displaystyle{ \alpha }[/math] is a root of [math]\displaystyle{ m }[/math]. Also, [math]\displaystyle{ m }[/math] can sometimes be used to calculate the exponential of [math]\displaystyle{ T }[/math] efficiently.
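The linear-dependence argument above can be turned into a direct computation. The following sketch (NumPy-based; the helper name and tolerance are choices of this example, not a standard API) finds the minimal polynomial as the first power of [math]\displaystyle{ T }[/math] that depends linearly on the lower powers, and checks that its roots are the eigenvalues:

```python
import numpy as np

def minimal_polynomial(T, tol=1e-8):
    """Monic coefficients c_0, ..., c_k (lowest degree first) of the
    minimal polynomial of T, found as the first linear dependence
    among the vectorized powers I, T, T^2, ...  Illustrative only:
    floating point, so `tol` decides what counts as dependence."""
    n = T.shape[0]
    powers = [np.eye(n)]
    for k in range(1, n * n + 1):  # n*n = dimension of the matrix algebra
        powers.append(powers[-1] @ T)
        # Try to write T^k as a combination of the lower powers.
        A = np.column_stack([P.ravel() for P in powers[:-1]])
        b = powers[-1].ravel()
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        if np.linalg.norm(A @ coef - b) < tol:
            # T^k - sum_i coef[i] T^i = 0, so m(x) = x^k - sum_i coef[i] x^i.
            return np.concatenate([-coef, [1.0]])
    raise RuntimeError("no linear dependence found")

T = np.array([[2., 0., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
coeffs = minimal_polynomial(T)    # [6., -5., 1.]:  m(x) = x^2 - 5x + 6
roots = np.roots(coeffs[::-1])    # 2 and 3, exactly the eigenvalues of T
```

Note that the repeated eigenvalue 2 appears only once as a root of [math]\displaystyle{ m }[/math], so [math]\displaystyle{ m }[/math] has degree 2 even though the characteristic polynomial of this [math]\displaystyle{ 3\times 3 }[/math] matrix has degree 3.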

The polynomial calculus is not as informative in the infinite-dimensional case. Consider the unilateral shift together with the polynomial calculus: the ideal defined above is now trivial, since no nonzero polynomial annihilates the shift. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be.
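For a diagonalizable operator, the spectral point of view amounts to applying [math]\displaystyle{ f }[/math] to the eigenvalues. A sketch for a symmetric matrix with [math]\displaystyle{ f = \exp }[/math] (NumPy; the matrix is an arbitrary example):

```python
import numpy as np

# For a symmetric (hence diagonalizable) matrix, define f(T) spectrally:
# f(T) = P diag(f(lambda_1), ..., f(lambda_n)) P^T, here with f = exp.
T = np.array([[2., 1.],
              [1., 2.]])                    # eigenvalues 1 and 3
evals, P = np.linalg.eigh(T)
exp_T = P @ np.diag(np.exp(evals)) @ P.T    # matrix exponential of T
```

The same `exp_T` is obtained from the power series [math]\displaystyle{ \sum_k T^k/k! }[/math], which is one way to see that the spectral definition extends the polynomial calculus consistently.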
