Operator (mathematics)
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space (possibly, and sometimes required to be, the same space). There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly (for example, in the case of an integral operator), and may be extended so as to act on related objects (an operator that acts on functions may also act on differential equations whose solutions are functions that satisfy the equation). See Operator for other examples.
The most basic operators are linear maps, which act on vector spaces. The term linear operator commonly refers to a linear map whose domain and range are the same space, for example from [math]\displaystyle{ \ \mathbb{R}^n\ }[/math] to [math]\displaystyle{ \ \mathbb{R}^n ~. }[/math][1][2][lower-alpha 1] Such operators often preserve properties, such as continuity. For example, differentiation and indefinite integration are linear operators; operators that are built from them are called differential operators, integral operators or integro-differential operators.
The word operator is also used to denote the symbol of a mathematical operation. This usage is related to the meaning of "operator" in computer programming; see Operator (computer programming).
Linear operators
The most common kind of operator encountered is the linear operator. Let U and V be vector spaces over a field K . A mapping [math]\displaystyle{ \ \operatorname{A} : U \to V\ }[/math] is linear if [math]\displaystyle{ \ \operatorname{A}\left( \alpha \mathbf{x} + \beta \mathbf{y} \right) = \alpha \operatorname{A} \mathbf{x} + \beta \operatorname{A} \mathbf{y}\ }[/math] for all x and y in U, and for all α, β in K . This means that a linear operator preserves the vector space operations, in the sense that it does not matter whether the operator is applied before or after addition and scalar multiplication. In more technical words, linear operators are morphisms between vector spaces.

In the finite-dimensional case, linear operators can be represented by matrices in the following way. Let K be a field, and let [math]\displaystyle{ U }[/math] and V be finite-dimensional vector spaces over K . Select a basis [math]\displaystyle{ \ \mathbf{u}_1, \ldots, \mathbf{u}_n\ }[/math] in U and [math]\displaystyle{ \ \mathbf{v}_1, \ldots, \mathbf{v}_m\ }[/math] in V . Let [math]\displaystyle{ \ \mathbf{x} = x^i \mathbf{u}_i }[/math] be an arbitrary vector in [math]\displaystyle{ U }[/math] (using the Einstein summation convention), and let [math]\displaystyle{ \ \operatorname{A}: U \to V\ }[/math] be a linear operator. Then [math]\displaystyle{ \ \operatorname{A}\mathbf{x} = x^i \operatorname{A}\mathbf{u}_i = x^i \left( \operatorname{A}\mathbf{u}_i \right)^j \mathbf{v}_j ~. }[/math] The array [math]\displaystyle{ \ a_i^j \equiv \left( \operatorname{A}\mathbf{u}_i \right)^j\ , }[/math] with all [math]\displaystyle{ \ a_i^j\in K\ , }[/math] is the matrix form of the operator [math]\displaystyle{ \operatorname{A} }[/math] in the fixed basis [math]\displaystyle{ \ \{ \mathbf{u}_i \}_{i=1}^n ~. }[/math] The tensor [math]\displaystyle{ \ a_i^j\ }[/math] does not depend on the choice of [math]\displaystyle{ \ x\ , }[/math] and [math]\displaystyle{ \; \operatorname{A}\mathbf{x} = \mathbf{y}\; }[/math] if and only if [math]\displaystyle{ \ a_i^j x^i = y^j ~. }[/math] Thus, in fixed bases, n-by-m matrices are in bijective correspondence with linear operators from [math]\displaystyle{ \ U\ }[/math] to [math]\displaystyle{ \ V ~. }[/math]
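This correspondence between operators and matrices can be made concrete numerically. The following is a minimal sketch in Python with NumPy, using a hypothetical operator A : ℝ³ → ℝ² chosen only for illustration; each column of the matrix is obtained by applying A to a basis vector, exactly as in the construction above.

```python
import numpy as np

# Hypothetical operator A: R^3 -> R^2 given by A(x, y, z) = (x + 2z, 3y - z).
def A(v):
    x, y, z = v
    return np.array([x + 2*z, 3*y - z])

# Matrix of A in the standard bases: column i is A applied to the
# i-th standard basis vector of R^3.
basis = np.eye(3)
M = np.column_stack([A(e) for e in basis])

v = np.array([1.0, -2.0, 0.5])
print(np.allclose(M @ v, A(v)))        # True: the matrix reproduces the action of A

# Linearity check: A(a*x + b*y) == a*A(x) + b*A(y)
x, y = np.random.rand(3), np.random.rand(3)
a, b = 2.0, -3.0
print(np.allclose(A(a*x + b*y), a*A(x) + b*A(y)))  # True
```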
Important concepts directly related to operators between finite-dimensional vector spaces include rank, determinant, inverse operator, and eigenspace.
Linear operators also play an important role in the infinite-dimensional case. The concepts of rank and determinant cannot be extended to infinite-dimensional matrices, which is why very different techniques are employed when studying linear operators (and operators in general) in the infinite-dimensional case. The study of linear operators in the infinite-dimensional case is known as functional analysis (so called because various classes of functions form interesting examples of infinite-dimensional vector spaces).
Sequences of real numbers, or more generally sequences of vectors in any vector space, themselves form an infinite-dimensional vector space. The most important cases are sequences of real or complex numbers, and these spaces, together with their linear subspaces, are known as sequence spaces. Operators on these spaces are known as sequence transformations.
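As a small illustration (not drawn from the sources cited here), the left-shift map is a standard sequence transformation; a short NumPy sketch on truncated sequences shows that it is linear.

```python
import numpy as np

# The (left) shift is a classic sequence transformation:
# S(x_0, x_1, x_2, ...) = (x_1, x_2, x_3, ...).
# Finite truncations of sequences are used here for illustration.
def shift(x):
    return np.append(x[1:], 0.0)

x = np.array([1.0, 0.5, 0.25, 0.125])
y = np.array([2.0, -1.0, 0.0, 3.0])

# Linearity: S(a*x + b*y) == a*S(x) + b*S(y)
a, b = 3.0, -2.0
print(np.allclose(shift(a*x + b*y), a*shift(x) + b*shift(y)))  # True
```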
Bounded linear operators on a Banach space form a Banach algebra with respect to the standard operator norm. The theory of Banach algebras develops a very general concept of spectrum that elegantly generalizes the theory of eigenspaces.
Bounded operators
Let U and V be two vector spaces over the same ordered field (for example, [math]\displaystyle{ \ \mathbb{R}\ }[/math]), each equipped with a norm. Then a linear operator from U to V is called bounded if there exists c > 0 such that [math]\displaystyle{ \ \|\operatorname{A}\mathbf{x}\|_V \leq c\ \|\mathbf{x}\|_U\ }[/math] for every x in U . Bounded operators form a vector space. On this vector space we can introduce a norm that is compatible with the norms of U and V: [math]\displaystyle{ \ \|\operatorname{A}\| = \inf\{\ c : \|\operatorname{A}\mathbf{x}\|_V \leq c\ \|\mathbf{x}\|_U \} ~. }[/math] In the case of operators from U to itself it can be shown that
- [math]\displaystyle{ \ \|\operatorname{A}\operatorname{B}\| \leq \|\operatorname{A}\| \cdot \|\operatorname{B}\| ~. }[/math] [lower-alpha 2]
A unital normed algebra that is complete with respect to its norm and satisfies this property is called a Banach algebra. It is possible to generalize spectral theory to such algebras. C*-algebras, which are Banach algebras with some additional structure, play an important role in quantum mechanics.
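For matrices regarded as bounded operators between finite-dimensional Euclidean spaces, the operator norm induced by the 2-norms is the largest singular value, and both the boundedness inequality and the submultiplicative property above can be checked numerically. A minimal NumPy sketch, with randomly chosen matrices for illustration only:

```python
import numpy as np

# For matrices between finite-dimensional Euclidean spaces, the operator norm
# induced by the 2-norms is the largest singular value (the spectral norm).
def op_norm(M):
    return np.linalg.norm(M, 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Boundedness:  ||A x|| <= ||A|| ||x||
print(np.linalg.norm(A @ x) <= op_norm(A) * np.linalg.norm(x) + 1e-12)   # True

# Submultiplicativity:  ||A B|| <= ||A|| ||B||
print(op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-12)                 # True
```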
Examples
Analysis (calculus)
From the point of view of functional analysis, calculus is the study of two linear operators: the differential operator [math]\displaystyle{ \frac{\ \mathrm{d}\ }{ \mathrm{d} t } }[/math], and the Volterra operator [math]\displaystyle{ \ \int_0^t ~. }[/math]
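On a discretized interval, both operators become matrices, which makes their linearity explicit. The following sketch assumes a uniform grid and simple finite-difference and Riemann-sum approximations, and is for illustration only:

```python
import numpy as np

# Discretize functions on [0, 1] by sampling; differentiation and the
# Volterra operator (integration from 0 to t) then become matrices.
n = 200
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

# Central differences for d/dt (valid away from the endpoints).
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
# Riemann sums approximating the Volterra operator f -> integral of f from 0 to t.
V = np.tril(np.ones((n, n))) * h

f = np.sin(2 * np.pi * t)
df = D @ f          # approximately 2*pi*cos(2*pi*t) in the interior
F = V @ f           # approximately (1 - cos(2*pi*t)) / (2*pi)

print(np.allclose(df[1:-1], (2*np.pi*np.cos(2*np.pi*t))[1:-1], atol=1e-2))  # True
print(np.allclose(F, (1 - np.cos(2*np.pi*t)) / (2*np.pi), atol=1e-2))       # True
```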
Fundamental analysis operators on scalar and vector fields
Three operators are key to vector calculus:
- Grad (gradient) (with operator symbol [math]\displaystyle{ \ \boldsymbol\nabla\ }[/math]) assigns to every point of a scalar field a vector that points in the direction of the greatest rate of change of the field and whose norm measures the magnitude of that greatest rate of change.
- Div (divergence) (with operator symbol [math]\displaystyle{ \ \boldsymbol{\nabla \cdot}\ }[/math]) is a vector operator that measures a vector field's divergence from, or convergence towards, a given point.
- Curl (with operator symbol [math]\displaystyle{ \ \boldsymbol{\nabla \!\times}\ }[/math]) is a vector operator that measures a vector field's curling (winding around, rotating around) trend about a given point.
Because these operators extend from vector calculus to physics, engineering, and tensor spaces, grad, div, and curl are often associated with tensor calculus as well as with vector calculus.[3]
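As a numerical illustration (the scalar field f(x, y) = x² + y² is chosen arbitrarily), grad and div can be approximated with NumPy's finite-difference gradient; applying div after grad yields the Laplacian:

```python
import numpy as np

# Sample the scalar field f(x, y) = x**2 + y**2 on a grid; grad and div are
# then approximated by finite differences.
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
f = X**2 + Y**2

fx, fy = np.gradient(f, x, y, edge_order=2)          # grad f = (2x, 2y)
print(np.allclose(fx, 2*X), np.allclose(fy, 2*Y))    # True True

# div(grad f) is the Laplacian, which equals 4 everywhere for this field.
div = np.gradient(fx, x, axis=0, edge_order=2) + np.gradient(fy, y, axis=1, edge_order=2)
print(np.allclose(div, 4.0))                         # True
```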
Geometry
In geometry, additional structures on vector spaces are sometimes studied. Operators that map such vector spaces to themselves bijectively are very useful in these studies; they naturally form groups under composition.
For example, bijective operators preserving the structure of a vector space are precisely the invertible linear operators. They form the general linear group under composition. However, they do not form a vector space under operator addition: for example, both the identity and its negative are invertible (bijective), but their sum, the zero operator, is not.
Operators preserving the Euclidean metric on such a space form the isometry group, and those that fix the origin form a subgroup known as the orthogonal group. Operators in the orthogonal group that also preserve the orientation of vector tuples form the special orthogonal group, or the group of rotations.
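A small NumPy sketch (the rotation angle is arbitrary) illustrates these points: a rotation matrix preserves the Euclidean metric and orientation, compositions of invertible operators remain invertible, and yet the sum of the identity and its negative is not invertible.

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: orthogonal, det = +1

print(np.allclose(Q.T @ Q, np.eye(2)))     # preserves the Euclidean metric
print(np.isclose(np.linalg.det(Q), 1.0))   # preserves orientation

# Invertible operators are closed under composition but not under addition:
I = np.eye(2)
print(np.linalg.det(Q @ Q) != 0)           # the composition is again invertible
print(np.linalg.det(I + (-I)) == 0)        # identity + (-identity) = 0 is not invertible
```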
Probability theory
Operators also arise in probability theory: expectation, variance, and covariance name both numerical statistics and the operators that produce them. Indeed, every covariance is essentially a dot product: every variance is the dot product of a vector with itself, and thus is a quadratic norm; every standard deviation is a norm (the square root of the quadratic norm); the cosine corresponding to this dot product is the Pearson correlation coefficient; and the expected value is essentially an integral operator (used to measure weighted shapes in the space).
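A brief NumPy sketch (with synthetic data, for illustration only) makes these identifications concrete: the sample covariance is a scaled dot product of centered vectors, and the Pearson correlation is the cosine of the angle between them.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)

# Center the samples; the covariance is then a (scaled) dot product.
xc, yc = x - x.mean(), y - y.mean()
cov = xc @ yc / (len(x) - 1)
print(np.isclose(cov, np.cov(x, y)[0, 1]))             # matches NumPy's covariance

# The Pearson correlation is the cosine of the angle between the centered vectors.
cos_angle = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
print(np.isclose(cos_angle, np.corrcoef(x, y)[0, 1]))  # matches NumPy's correlation
```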
Fourier series and Fourier transform
The Fourier transform is useful in applied mathematics, particularly physics and signal processing. It is another integral operator; it is useful mainly because it converts a function on one (temporal) domain to a function on another (frequency) domain in a way that is effectively invertible. No information is lost, as there is an inverse transform operator. In the simple case of periodic functions, this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves:[math]\displaystyle{ \ f(t) = \frac{\ a_0\ }{2} + \sum_{n=1}^{\infty}\ \left[ a_n \cos ( \omega\ n\ t ) + b_n \sin ( \omega\ n\ t ) \right]\ }[/math] The tuple ( a0, a1, b1, a2, b2, ... ) is in fact an element of an infinite-dimensional vector space ℓ2 , and thus Fourier series is a linear operator.
When dealing with a general function [math]\displaystyle{ \ \mathbb{R} \to \mathbb{C}\ , }[/math] the transform takes on an integral form:
- [math]\displaystyle{ \ f(t) = {1 \over \sqrt{2 \pi}} \int_{-\infty}^{+\infty}{\ g( \omega )\ e^{ i\ \omega\ t }\ \mathrm{d}\ \omega } ~. }[/math]
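As a finite-dimensional stand-in for the Fourier transform, the discrete Fourier transform in NumPy illustrates both the invertibility and the linearity of the operator (the sample signal is chosen arbitrarily):

```python
import numpy as np

# Sample a signal and apply the discrete analogue of the Fourier transform.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
f = np.sin(2*np.pi*5*t) + 0.5*np.cos(2*np.pi*12*t)

g = np.fft.fft(f)                       # to the frequency domain
f_back = np.fft.ifft(g)                 # and back again

print(np.allclose(f, f_back.real))      # True: no information is lost

# The transform is a linear operator.
h = np.cos(2*np.pi*3*t)
print(np.allclose(np.fft.fft(2*f - 3*h), 2*np.fft.fft(f) - 3*np.fft.fft(h)))  # True
```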
Laplace transform
The Laplace transform is another integral operator; it is used to simplify the process of solving differential equations.
Given f = f(t) , it is defined by:[math]\displaystyle{ \ F(s) = \mathcal{L}\{f\}(s) =\int_0^\infty\ e^{-s\ t}\ f(t)\ \mathrm{d}\ t ~. }[/math]
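The defining integral can also be evaluated numerically. The following sketch uses SciPy quadrature and the illustrative choice f(t) = e^(−2t), whose transform is 1/(s + 2); it is an approximation rather than a symbolic computation.

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate F(s) = integral from 0 to infinity of e^(-s t) f(t) dt
# for the illustrative choice f(t) = e^(-2t); the closed form is F(s) = 1/(s + 2).
f = lambda t: np.exp(-2*t)

def laplace(f, s):
    value, _ = quad(lambda t: np.exp(-s*t) * f(t), 0, np.inf)
    return value

for s in (1.0, 3.0, 10.0):
    print(np.isclose(laplace(f, s), 1.0/(s + 2)))   # True
```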
Footnotes
- ↑ (1) A linear transformation from V to V is called a linear operator on V.
The set of all linear operators on V is denoted ℒ(V) . A linear operator on a real vector space is called a real operator and a linear operator on a complex vector space is called a complex operator. ... We should also mention that some authors use the term linear operator for any linear transformation from V to W. ...
- (2) endomorphism for linear operator ...
- (6) automorphism for bijective linear operator.
- — Roman (2008)[2]
- ↑ In this expression, the raised dot merely represents multiplication in whatever scalar field is used with V .
See also
- Function
- Operator algebra
- List of operators
References
- ↑ Rudin, Walter (1976). "Chapter 9: Functions of several variables". Principles of Mathematical Analysis (3rd ed.). McGraw-Hill. p. 207. ISBN 0-07-054235-X. "Linear transformations of X into X are often called linear operators on X ."
- ↑ 2.0 2.1 Roman, Steven (2008). "Chapter 2: Linear Transformations". Advanced Linear Algebra (3rd ed.). Springer. p. 59. ISBN 978-0-387-72828-5.
- ↑ Schey, H.M. (2005). Div, Grad, Curl, and All That. New York, NY: W.W. Norton. ISBN 0-393-92516-1.