Distribution (mathematics)

Distributions, also known as Schwartz distributions or generalized functions, are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.

Distributions are widely used in the theory of partial differential equations, where it may be easier to establish the existence of distributional solutions (weak solutions) than classical solutions, or where appropriate classical solutions may not exist. Distributions are also important in physics and engineering where many problems naturally lead to differential equations whose solutions or initial conditions are singular, such as the Dirac delta function.

A function [math]\displaystyle{ f }[/math] is normally thought of as acting on the points in the function domain by "sending" a point [math]\displaystyle{ x }[/math] in the domain to the point [math]\displaystyle{ f(x). }[/math] Instead of acting on points, distribution theory reinterprets functions such as [math]\displaystyle{ f }[/math] as acting on test functions in a certain way. In applications to physics and engineering, test functions are usually infinitely differentiable complex-valued (or real-valued) functions with compact support that are defined on some given non-empty open subset [math]\displaystyle{ U \subseteq \R^n }[/math]. (Bump functions are examples of test functions.) The set of all such test functions forms a vector space that is denoted by [math]\displaystyle{ C_c^\infty(U) }[/math] or [math]\displaystyle{ \mathcal{D}(U). }[/math]

Most commonly encountered functions, including all continuous maps [math]\displaystyle{ f : \R \to \R }[/math] if using [math]\displaystyle{ U := \R, }[/math] can be canonically reinterpreted as acting via "integration against a test function." Explicitly, this means that such a function [math]\displaystyle{ f }[/math] "acts on" a test function [math]\displaystyle{ \psi \in \mathcal{D}(\R) }[/math] by "sending" it to the number [math]\displaystyle{ \int_\R f \, \psi \, dx, }[/math] which is often denoted by [math]\displaystyle{ D_f(\psi). }[/math] This new action [math]\displaystyle{ \psi \mapsto D_f(\psi) }[/math] of [math]\displaystyle{ f }[/math] defines a scalar-valued map [math]\displaystyle{ D_f : \mathcal{D}(\R) \to \Complex, }[/math] whose domain is the space of test functions [math]\displaystyle{ \mathcal{D}(\R). }[/math] This functional [math]\displaystyle{ D_f }[/math] turns out to have the two defining properties of what is known as a distribution on [math]\displaystyle{ U = \R }[/math]: it is linear, and it is also continuous when [math]\displaystyle{ \mathcal{D}(\R) }[/math] is given a certain topology called the canonical LF topology. The action (the integration [math]\displaystyle{ \psi \mapsto \int_\R f \, \psi \, dx }[/math]) of this distribution [math]\displaystyle{ D_f }[/math] on a test function [math]\displaystyle{ \psi }[/math] can be interpreted as a weighted average of the distribution on the support of the test function, even if the values of the distribution at a single point are not well-defined. Distributions like [math]\displaystyle{ D_f }[/math] that arise from functions in this way are prototypical examples of distributions, but there exist many distributions that cannot be defined by integration against any function. Examples of the latter include the Dirac delta function and distributions defined to act by integration of test functions [math]\displaystyle{ \psi \mapsto \int_U \psi d \mu }[/math] against certain measures [math]\displaystyle{ \mu }[/math] on [math]\displaystyle{ U. }[/math] Nonetheless, it is still always possible to reduce any arbitrary distribution down to a simpler family of related distributions that do arise via such actions of integration.
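As a purely illustrative numerical sketch of this action (assuming NumPy and SciPy are available; the names psi and D_f mirror the notation above, but the code itself is ours and not part of the theory):

```python
import numpy as np
from scipy.integrate import quad

def psi(x):
    """A standard bump test function: smooth, supported on [-1, 1], zero outside."""
    return np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0

def D_f(f, test, support=(-1.0, 1.0)):
    """Action of the distribution induced by f on a test function:
    D_f(test) = integral of f * test over the support of the test function."""
    value, _ = quad(lambda x: f(x) * test(x), *support)
    return value

# The continuous function f(x) = x**2, reinterpreted as acting on the test function psi:
print(D_f(lambda x: x**2, psi))
```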

More generally, a distribution on [math]\displaystyle{ U }[/math] is by definition a linear functional on [math]\displaystyle{ C_c^\infty(U) }[/math] that is continuous when [math]\displaystyle{ C_c^\infty(U) }[/math] is given a topology called the canonical LF topology. This leads to the space of (all) distributions on [math]\displaystyle{ U }[/math], usually denoted by [math]\displaystyle{ \mathcal{D}'(U) }[/math] (note the prime), which by definition is the space of all distributions on [math]\displaystyle{ U }[/math] (that is, it is the continuous dual space of [math]\displaystyle{ C_c^\infty(U) }[/math]); it is these distributions that are the main focus of this article.

Definitions of the appropriate topologies on spaces of test functions and distributions are given in the article on spaces of test functions and distributions. This article is primarily concerned with the definition of distributions, together with their properties and some important examples.

History

The practical use of distributions can be traced back to the use of Green's functions in the 1830s to solve ordinary differential equations, but was not formalized until much later. According to Kolmogorov and Fomin, generalized functions originated in the work of Sergei Sobolev (1936) on second-order hyperbolic partial differential equations, and the ideas were developed in somewhat extended form by Laurent Schwartz in the late 1940s. According to his autobiography, Schwartz introduced the term "distribution" by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on. Gårding (1997) comments that although the ideas in the transformative book by Schwartz (1951) were not entirely new, it was Schwartz's broad attack and conviction that distributions would be useful almost everywhere in analysis that made the difference.

Notation

The following notation will be used throughout this article:

  • [math]\displaystyle{ n }[/math] is a fixed positive integer and [math]\displaystyle{ U }[/math] is a fixed non-empty open subset of Euclidean space [math]\displaystyle{ \R^n. }[/math]
  • [math]\displaystyle{ \N = \{0, 1, 2, \ldots\} }[/math] denotes the natural numbers.
  • [math]\displaystyle{ k }[/math] will denote a non-negative integer or [math]\displaystyle{ \infty. }[/math]
  • If [math]\displaystyle{ f }[/math] is a function then [math]\displaystyle{ \operatorname{Dom}(f) }[/math] will denote its domain and the support of [math]\displaystyle{ f, }[/math] denoted by [math]\displaystyle{ \operatorname{supp}(f), }[/math] is defined to be the closure of the set [math]\displaystyle{ \{x \in \operatorname{Dom}(f): f(x) \neq 0\} }[/math] in [math]\displaystyle{ \operatorname{Dom}(f). }[/math]
  • For two functions [math]\displaystyle{ f, g : U \to \Complex, }[/math] the following notation defines a canonical pairing: [math]\displaystyle{ \langle f, g\rangle := \int_U f(x) g(x) \,dx. }[/math]
  • A multi-index of size [math]\displaystyle{ n }[/math] is an element in [math]\displaystyle{ \N^n }[/math] (given that [math]\displaystyle{ n }[/math] is fixed, if the size of multi-indices is omitted then the size should be assumed to be [math]\displaystyle{ n }[/math]). The length of a multi-index [math]\displaystyle{ \alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n }[/math] is defined as [math]\displaystyle{ \alpha_1+\cdots+\alpha_n }[/math] and denoted by [math]\displaystyle{ |\alpha|. }[/math] Multi-indices are particularly useful when dealing with functions of several variables; in particular, we introduce the following notations for a given multi-index [math]\displaystyle{ \alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n }[/math]: [math]\displaystyle{ \begin{align} x^\alpha &= x_1^{\alpha_1} \cdots x_n^{\alpha_n} \\ \partial^\alpha &= \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots \partial x_n^{\alpha_n}} \end{align} }[/math] We also introduce a partial order on all multi-indices by [math]\displaystyle{ \beta \ge \alpha }[/math] if and only if [math]\displaystyle{ \beta_i \ge \alpha_i }[/math] for all [math]\displaystyle{ 1 \le i\le n. }[/math] When [math]\displaystyle{ \beta \ge \alpha }[/math] we define their multi-index binomial coefficient as: [math]\displaystyle{ \binom{\beta}{\alpha} := \binom{\beta_1}{\alpha_1} \cdots \binom{\beta_n}{\alpha_n}. }[/math]
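A small Python sketch of this notation (illustration only; the helper names are ours):

```python
from math import comb, prod

def length(alpha):
    """|alpha| = alpha_1 + ... + alpha_n."""
    return sum(alpha)

def monomial(x, alpha):
    """x^alpha = x_1**alpha_1 * ... * x_n**alpha_n."""
    return prod(xi**ai for xi, ai in zip(x, alpha))

def multi_binomial(beta, alpha):
    """binom(beta, alpha) = binom(beta_1, alpha_1) * ... * binom(beta_n, alpha_n), for beta >= alpha."""
    assert all(b >= a for b, a in zip(beta, alpha)), "requires beta >= alpha componentwise"
    return prod(comb(b, a) for b, a in zip(beta, alpha))

# Example with n = 3:
alpha, beta, x = (1, 0, 2), (2, 1, 3), (2.0, 5.0, 3.0)
print(length(alpha))                # 3
print(monomial(x, alpha))           # 2.0**1 * 5.0**0 * 3.0**2 = 18.0
print(multi_binomial(beta, alpha))  # 2 * 1 * 3 = 6
```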

Definitions of test functions and distributions

In this section, some basic notions and definitions needed to define real-valued distributions on U are introduced. Further discussion of the topologies on the spaces of test functions and distributions is given in the article on spaces of test functions and distributions.

Notation:
  1. Let [math]\displaystyle{ k \in \{0, 1, 2, \ldots, \infty\}. }[/math]
  2. Let [math]\displaystyle{ C^k(U) }[/math] denote the vector space of all k-times continuously differentiable real or complex-valued functions on U.
  3. For any compact subset [math]\displaystyle{ K \subseteq U, }[/math] let [math]\displaystyle{ C^k(K) }[/math] and [math]\displaystyle{ C^k(K;U) }[/math] both denote the vector space of all those functions [math]\displaystyle{ f \in C^k(U) }[/math] such that [math]\displaystyle{ \operatorname{supp}(f) \subseteq K. }[/math]
    • If [math]\displaystyle{ f \in C^k(K) }[/math] then the domain of [math]\displaystyle{ f }[/math] is U and not K. So although [math]\displaystyle{ C^k(K) }[/math] depends on both K and U, only K is typically indicated. The justification for this common practice is detailed below. The notation [math]\displaystyle{ C^k(K;U) }[/math] will only be used when the notation [math]\displaystyle{ C^k(K) }[/math] risks being ambiguous.
    • Every [math]\displaystyle{ C^k(K) }[/math] contains the constant 0 map, even if [math]\displaystyle{ K = \varnothing. }[/math]
  4. Let [math]\displaystyle{ C_c^k(U) }[/math] denote the set of all [math]\displaystyle{ f \in C^k(U) }[/math] such that [math]\displaystyle{ f \in C^k(K) }[/math] for some compact subset K of U.
    • Equivalently, [math]\displaystyle{ C_c^k(U) }[/math] is the set of all [math]\displaystyle{ f \in C^k(U) }[/math] such that [math]\displaystyle{ f }[/math] has compact support.
    • [math]\displaystyle{ C_c^k(U) }[/math] is equal to the union of all [math]\displaystyle{ C^k(K) }[/math] as [math]\displaystyle{ K \subseteq U }[/math] ranges over all compact subsets of [math]\displaystyle{ U. }[/math]
    • If [math]\displaystyle{ f }[/math] is a real-valued function on [math]\displaystyle{ U }[/math], then [math]\displaystyle{ f }[/math] is an element of [math]\displaystyle{ C_c^k(U) }[/math] if and only if [math]\displaystyle{ f }[/math] is a [math]\displaystyle{ C^k }[/math] bump function. Every real-valued test function on [math]\displaystyle{ U }[/math] is also a complex-valued test function on [math]\displaystyle{ U. }[/math]
For example, the bump function [math]\displaystyle{ (x,y) \in \R^2 \mapsto \Psi(r), }[/math] where [math]\displaystyle{ r = \left(x^2 + y^2\right)^\frac{1}{2} }[/math] and [math]\displaystyle{ \Psi(r) = e^{-\frac{1}{1 - r^2}}\cdot\mathbf{1}_{\{|r|\lt 1\}}, }[/math] is a test function on [math]\displaystyle{ \R^2 }[/math] and an element of [math]\displaystyle{ C^\infty_c\left(\R^2\right). }[/math] Its support is the closed unit disk in [math]\displaystyle{ \R^2: }[/math] it is non-zero on the open unit disk and equal to 0 everywhere outside of it.
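A minimal Python sketch of this particular bump function (illustration only; NumPy assumed available):

```python
import numpy as np

def Psi(x, y):
    """Psi(r) = exp(-1/(1 - r^2)) for r < 1 and 0 otherwise, with r^2 = x^2 + y^2.
    Its support is the closed unit disk in R^2."""
    r2 = x**2 + y**2
    return np.exp(-1.0 / (1.0 - r2)) if r2 < 1.0 else 0.0

print(Psi(0.0, 0.0))   # exp(-1) ~ 0.3679, the maximum value, attained at the origin
print(Psi(0.5, 0.5))   # strictly positive inside the open unit disk
print(Psi(1.0, 0.0))   # 0.0 on the unit circle and everywhere outside it
```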

For all [math]\displaystyle{ j, k \in \{0, 1, 2, \ldots, \infty\} }[/math] and any compact subsets [math]\displaystyle{ K }[/math] and [math]\displaystyle{ L }[/math] of [math]\displaystyle{ U }[/math], we have: [math]\displaystyle{ \begin{align} C^k(K) &\subseteq C^k_c(U) \subseteq C^k(U) \\ C^k(K) &\subseteq C^k(L) && \text{if } K \subseteq L \\ C^k(K) &\subseteq C^j(K) && \text{if } j \le k \\ C_c^k(U) &\subseteq C^j_c(U) && \text{if } j \le k \\ C^k(U) &\subseteq C^j(U) && \text{if } j \le k \\ \end{align} }[/math]

Definition: Elements of [math]\displaystyle{ C_c^\infty(U) }[/math] are called test functions on U and [math]\displaystyle{ C_c^\infty(U) }[/math] is called the space of test functions on U. We will use both [math]\displaystyle{ \mathcal{D}(U) }[/math] and [math]\displaystyle{ C_c^\infty(U) }[/math] to denote this space.

Distributions on U are continuous linear functionals on [math]\displaystyle{ C_c^\infty(U) }[/math] when this vector space is endowed with a particular topology called the canonical LF-topology. The following proposition states two necessary and sufficient conditions for the continuity of a linear functional on [math]\displaystyle{ C_c^\infty(U) }[/math] that are often straightforward to verify.

Proposition: A linear functional T on [math]\displaystyle{ C_c^\infty(U) }[/math] is continuous, and therefore a distribution, if and only if any of the following equivalent conditions is satisfied:

  1. For every compact subset [math]\displaystyle{ K\subseteq U }[/math] there exist constants [math]\displaystyle{ C\gt 0 }[/math] and [math]\displaystyle{ N\in \N }[/math] (dependent on [math]\displaystyle{ K }[/math]) such that for all [math]\displaystyle{ f \in C_c^\infty(U) }[/math] with support contained in [math]\displaystyle{ K }[/math],[1][2] [math]\displaystyle{ |T(f)| \leq C \sup \{|\partial^\alpha f(x)|: x \in U, |\alpha| \leq N\}. }[/math]
  2. For every compact subset [math]\displaystyle{ K\subseteq U }[/math] and every sequence [math]\displaystyle{ \{f_i\}_{i=1}^\infty }[/math] in [math]\displaystyle{ C_c^\infty(U) }[/math] whose supports are contained in [math]\displaystyle{ K }[/math], if [math]\displaystyle{ \{\partial^\alpha f_i\}_{i=1}^\infty }[/math] converges uniformly to zero on [math]\displaystyle{ U }[/math] for every multi-index [math]\displaystyle{ \alpha }[/math], then [math]\displaystyle{ T(f_i) \to 0. }[/math]
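For example, the Dirac delta functional [math]\displaystyle{ \delta : C_c^\infty(\R) \to \Complex }[/math] defined by [math]\displaystyle{ \delta(f) := f(0) }[/math] is linear and satisfies the first condition with [math]\displaystyle{ C = 1 }[/math] and [math]\displaystyle{ N = 0 }[/math] for every compact [math]\displaystyle{ K \subseteq \R, }[/math] since [math]\displaystyle{ |\delta(f)| = |f(0)| \leq \sup \{|f(x)| : x \in \R\}; }[/math] it is therefore a distribution on [math]\displaystyle{ \R. }[/math]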

Topology on C^k(U)

We now introduce the seminorms that will define the topology on [math]\displaystyle{ C^k(U). }[/math] Different authors sometimes use different families of seminorms so we list the most common families below. However, the resulting topology is the same no matter which family is used.

Suppose [math]\displaystyle{ k \in \{0, 1, 2, \ldots, \infty\} }[/math] and [math]\displaystyle{ K }[/math] is an arbitrary compact subset of [math]\displaystyle{ U. }[/math] Suppose [math]\displaystyle{ i }[/math] is an integer such that [math]\displaystyle{ 0 \leq i \leq k }[/math][note 1] and [math]\displaystyle{ p }[/math] is a multi-index with length [math]\displaystyle{ |p| \leq k. }[/math] For [math]\displaystyle{ K \neq \varnothing, }[/math] define:

[math]\displaystyle{ \begin{alignat}{4} \text{ (1) }\ & s_{p,K}(f) &&:= \sup_{x_0 \in K} \left| \partial^p f(x_0) \right| \\[4pt] \text{ (2) }\ & q_{i,K}(f) &&:= \sup_{|p| \leq i} \left(\sup_{x_0 \in K} \left| \partial^p f(x_0) \right|\right) = \sup_{|p| \leq i} \left(s_{p, K}(f)\right) \\[4pt] \text{ (3) }\ & r_{i,K}(f) &&:= \sup_{\stackrel{|p| \leq i}{x_0 \in K}} \left| \partial^p f(x_0) \right| \\[4pt] \text{ (4) }\ & t_{i,K}(f) &&:= \sup_{x_0 \in K} \left(\sum_{|p| \leq i} \left| \partial^p f(x_0) \right|\right) \end{alignat} }[/math]

while for [math]\displaystyle{ K = \varnothing, }[/math] define all the functions above to be the constant 0 map.

All of the functions above are non-negative [math]\displaystyle{ \R }[/math]-valued[note 2] seminorms on [math]\displaystyle{ C^k(U). }[/math] As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology.

Each of the following sets of seminorms [math]\displaystyle{ \begin{alignat}{4} A ~:= \quad &\{q_{i,K} &&: \;K \text{ compact and } \;&&i \in \N \text{ satisfies } \;&&0 \leq i \leq k\} \\ B ~:= \quad &\{r_{i,K} &&: \;K \text{ compact and } \;&&i \in \N \text{ satisfies } \;&&0 \leq i \leq k\} \\ C ~:= \quad &\{t_{i,K} &&: \;K \text{ compact and } \;&&i \in \N \text{ satisfies } \;&&0 \leq i \leq k\} \\ D ~:= \quad &\{s_{p,K} &&: \;K \text{ compact and } \;&&p \in \N^n \text{ satisfies } \;&&|p| \leq k\} \end{alignat} }[/math] generate the same locally convex vector topology on [math]\displaystyle{ C^k(U) }[/math] (so for example, the topology generated by the seminorms in [math]\displaystyle{ A }[/math] is equal to the topology generated by those in [math]\displaystyle{ C }[/math]).
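The following SymPy/NumPy sketch (illustration only: the supremum over [math]\displaystyle{ K }[/math] is approximated by a maximum over a finite grid, and the helper names are ours) evaluates the seminorms [math]\displaystyle{ s_{p,K} }[/math] and [math]\displaystyle{ q_{i,K} }[/math] for a concrete smooth function with [math]\displaystyle{ K = [0,1] \times [0,1] }[/math]:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x) * sp.cos(y)          # a smooth function on U = R^2

def s_pK(f, p, grid):
    """Approximate s_{p,K}(f) = sup over K of |d^p f| by a max over a finite grid covering K."""
    expr = f
    for _ in range(p[0]):
        expr = sp.diff(expr, x)
    for _ in range(p[1]):
        expr = sp.diff(expr, y)
    dpf = sp.lambdify((x, y), expr, 'numpy')
    X, Y = grid
    return float(np.max(np.abs(dpf(X, Y))))

def q_iK(f, i, grid):
    """q_{i,K}(f) = sup of s_{p,K}(f) over all multi-indices p with |p| <= i."""
    return max(s_pK(f, (a, b), grid)
               for a in range(i + 1) for b in range(i + 1) if a + b <= i)

# K = [0, 1] x [0, 1], sampled on a 101 x 101 grid:
K = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
print(s_pK(f, (1, 0), K))   # sup over K of |cos(x) cos(y)|, approximately 1
print(q_iK(f, 2, K))        # the largest of the seminorms s_{p,K}(f) with |p| <= 2
```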

The vector space [math]\displaystyle{ C^k(U) }[/math] is endowed with the locally convex topology induced by any one of the four families [math]\displaystyle{ A, B, C, D }[/math] of seminorms described above. This topology is also equal to the vector topology induced by all of the seminorms in [math]\displaystyle{ A \cup B \cup C \cup D. }[/math]

With this topology, [math]\displaystyle{ C^k(U) }[/math] becomes a locally convex Fréchet space that is not normable. Every element of [math]\displaystyle{ A \cup B \cup C \cup D }[/math] is a continuous seminorm on [math]\displaystyle{ C^k(U). }[/math] Under this topology, a net [math]\displaystyle{ (f_i)_{i\in I} }[/math] in [math]\displaystyle{ C^k(U) }[/math] converges to [math]\displaystyle{ f \in C^k(U) }[/math] if and only if for every multi-index [math]\displaystyle{ p }[/math] with [math]\displaystyle{ |p|\lt k + 1 }[/math] and every compact [math]\displaystyle{ K, }[/math] the net of partial derivatives [math]\displaystyle{ \left(\partial^p f_i\right)_{i \in I} }[/math] converges uniformly to [math]\displaystyle{ \partial^p f }[/math] on [math]\displaystyle{ K. }[/math][3] For any [math]\displaystyle{ k \in \{0, 1, 2, \ldots, \infty\}, }[/math] any (von Neumann) bounded subset of [math]\displaystyle{ C^{k+1}(U) }[/math] is a relatively compact subset of [math]\displaystyle{ C^k(U). }[/math][4] In particular, a subset of [math]\displaystyle{ C^\infty(U) }[/math] is bounded if and only if it is bounded in [math]\displaystyle{ C^i(U) }[/math] for all [math]\displaystyle{ i \in \N. }[/math][4] The space [math]\displaystyle{ C^k(U) }[/math] is a Montel space if and only if [math]\displaystyle{ k = \infty. }[/math][5]

A subset [math]\displaystyle{ W }[/math] of [math]\displaystyle{ C^\infty(U) }[/math] is open in this topology if and only if there exists [math]\displaystyle{ i\in \N }[/math] such that [math]\displaystyle{ W }[/math] is open when [math]\displaystyle{ C^\infty(U) }[/math] is endowed with the subspace topology induced on it by [math]\displaystyle{ C^i(U). }[/math]

Topology on C^k(K)

As before, fix [math]\displaystyle{ k \in \{0, 1, 2, \ldots, \infty\}. }[/math] Recall that if [math]\displaystyle{ K }[/math] is any compact subset of [math]\displaystyle{ U }[/math] then [math]\displaystyle{ C^k(K) \subseteq C^k(U). }[/math]

Assumption: For any compact subset [math]\displaystyle{ K \subseteq U, }[/math] we will henceforth assume that [math]\displaystyle{ C^k(K) }[/math] is endowed with the subspace topology it inherits from the Fréchet space [math]\displaystyle{ C^k(U). }[/math]

If [math]\displaystyle{ k }[/math] is finite then [math]\displaystyle{ C^k(K) }[/math] is a Banach space[6] with a topology that can be defined by the norm [math]\displaystyle{ r_K(f) := \sup_{|p| \leq k} \left( \sup_{x_0 \in K} \left|\partial^p f(x_0)\right| \right). }[/math] And when [math]\displaystyle{ k = 2, }[/math] then [math]\displaystyle{ C^k(K) }[/math] is even a Hilbert space.[6]

Trivial extensions and independence of C^k(K)'s topology from U

Suppose [math]\displaystyle{ U }[/math] is an open subset of [math]\displaystyle{ \R^n }[/math] and [math]\displaystyle{ K \subseteq U }[/math] is a compact subset. By definition, elements of [math]\displaystyle{ C^k(K) }[/math] are functions with domain [math]\displaystyle{ U }[/math] (in symbols, [math]\displaystyle{ C^k(K) \subseteq C^k(U) }[/math]), so the space [math]\displaystyle{ C^k(K) }[/math] and its topology depend on [math]\displaystyle{ U; }[/math] to make this dependence on the open set [math]\displaystyle{ U }[/math] clear, temporarily denote [math]\displaystyle{ C^k(K) }[/math] by [math]\displaystyle{ C^k(K;U). }[/math] Importantly, changing the set [math]\displaystyle{ U }[/math] to a different open subset [math]\displaystyle{ U' }[/math] (with [math]\displaystyle{ K \subseteq U' }[/math]) will change the set [math]\displaystyle{ C^k(K) }[/math] from [math]\displaystyle{ C^k(K;U) }[/math] to [math]\displaystyle{ C^k(K;U'), }[/math][note 3] so that elements of [math]\displaystyle{ C^k(K) }[/math] will be functions with domain [math]\displaystyle{ U' }[/math] instead of [math]\displaystyle{ U. }[/math] Despite [math]\displaystyle{ C^k(K) }[/math] depending on the open set ([math]\displaystyle{ U \text{ or } U' }[/math]), the standard notation for [math]\displaystyle{ C^k(K) }[/math] makes no mention of it. This is justified because, as this subsection will now explain, the space [math]\displaystyle{ C^k(K;U) }[/math] is canonically identified as a subspace of [math]\displaystyle{ C^k(K;U') }[/math] (both algebraically and topologically).

It is enough to explain how to canonically identify [math]\displaystyle{ C^k(K; U) }[/math] with [math]\displaystyle{ C^k(K; U') }[/math] when one of [math]\displaystyle{ U }[/math] and [math]\displaystyle{ U' }[/math] is a subset of the other. The reason is that if [math]\displaystyle{ V }[/math] and [math]\displaystyle{ W }[/math] are arbitrary open subsets of [math]\displaystyle{ \R^n }[/math] containing [math]\displaystyle{ K }[/math] then the open set [math]\displaystyle{ U := V \cap W }[/math] also contains [math]\displaystyle{ K, }[/math] so that each of [math]\displaystyle{ C^k(K; V) }[/math] and [math]\displaystyle{ C^k(K; W) }[/math] is canonically identified with [math]\displaystyle{ C^k(K; V \cap W) }[/math] and now by transitivity, [math]\displaystyle{ C^k(K; V) }[/math] is thus identified with [math]\displaystyle{ C^k(K; W). }[/math] So assume [math]\displaystyle{ U \subseteq V }[/math] are open subsets of [math]\displaystyle{ \R^n }[/math] containing [math]\displaystyle{ K. }[/math]

Given [math]\displaystyle{ f \in C_c^k(U), }[/math] its trivial extension to [math]\displaystyle{ V }[/math] is the function [math]\displaystyle{ F : V \to \Complex }[/math] defined by: [math]\displaystyle{ F(x) = \begin{cases} f(x) & x \in U, \\ 0 & \text{otherwise}. \end{cases} }[/math] This trivial extension belongs to [math]\displaystyle{ C^k(V) }[/math] (because [math]\displaystyle{ f \in C_c^k(U) }[/math] has compact support) and it will be denoted by [math]\displaystyle{ I(f) }[/math] (that is, [math]\displaystyle{ I(f) := F }[/math]). The assignment [math]\displaystyle{ f \mapsto I(f) }[/math] thus induces a map [math]\displaystyle{ I : C_c^k(U) \to C^k(V) }[/math] that sends a function in [math]\displaystyle{ C_c^k(U) }[/math] to its trivial extension on [math]\displaystyle{ V. }[/math] This map is a linear injection and for every compact subset [math]\displaystyle{ K \subseteq U }[/math] (where [math]\displaystyle{ K }[/math] is also a compact subset of [math]\displaystyle{ V }[/math] since [math]\displaystyle{ K \subseteq U \subseteq V }[/math]), [math]\displaystyle{ \begin{alignat}{4} I\left(C^k(K; U)\right) &~=~ C^k(K; V) \qquad \text{ and thus } \\ I\left(C_c^k(U)\right) &~\subseteq~ C_c^k(V). \end{alignat} }[/math] If [math]\displaystyle{ I }[/math] is restricted to [math]\displaystyle{ C^k(K; U) }[/math] then the following induced linear map is a homeomorphism (linear homeomorphisms are called TVS-isomorphisms): [math]\displaystyle{ \begin{alignat}{4} \,& C^k(K; U) && \to \,&& C^k(K;V) \\ & f && \mapsto\,&& I(f) \\ \end{alignat} }[/math] and thus the next map is a topological embedding: [math]\displaystyle{ \begin{alignat}{4} \,& C^k(K; U) && \to \,&& C^k(V) \\ & f && \mapsto\,&& I(f). \\ \end{alignat} }[/math] Using the injection [math]\displaystyle{ I : C_c^k(U) \to C^k(V) }[/math] the vector space [math]\displaystyle{ C_c^k(U) }[/math] is canonically identified with its image in [math]\displaystyle{ C_c^k(V) \subseteq C^k(V). }[/math] Because [math]\displaystyle{ C^k(K; U) \subseteq C_c^k(U), }[/math] through this identification, [math]\displaystyle{ C^k(K; U) }[/math] can also be considered as a subset of [math]\displaystyle{ C^k(V). }[/math] Thus the topology on [math]\displaystyle{ C^k(K;U) }[/math] is independent of the open subset [math]\displaystyle{ U }[/math] of [math]\displaystyle{ \R^n }[/math] that contains [math]\displaystyle{ K, }[/math][7] which justifies the practice of writing [math]\displaystyle{ C^k(K) }[/math] instead of [math]\displaystyle{ C^k(K; U). }[/math]

Canonical LF topology

Main page: Spaces of test functions and distributions

Recall that [math]\displaystyle{ C_c^k(U) }[/math] denotes the set of all functions in [math]\displaystyle{ C^k(U) }[/math] that have compact support in [math]\displaystyle{ U; }[/math] note that [math]\displaystyle{ C_c^k(U) }[/math] is the union of all [math]\displaystyle{ C^k(K) }[/math] as [math]\displaystyle{ K }[/math] ranges over all compact subsets of [math]\displaystyle{ U. }[/math] Moreover, for each [math]\displaystyle{ k,\, C_c^k(U) }[/math] is a dense subset of [math]\displaystyle{ C^k(U). }[/math] The special case when [math]\displaystyle{ k = \infty }[/math] gives us the space of test functions.

[math]\displaystyle{ C_c^\infty(U) }[/math] is called the space of test functions on [math]\displaystyle{ U }[/math] and it may also be denoted by [math]\displaystyle{ \mathcal{D}(U). }[/math] Unless indicated otherwise, it is endowed with a topology called the canonical LF topology, whose definition is given in the article: Spaces of test functions and distributions.

The canonical LF-topology is not metrizable and importantly, it is strictly finer than the subspace topology that [math]\displaystyle{ C^\infty(U) }[/math] induces on [math]\displaystyle{ C_c^\infty(U). }[/math] However, the canonical LF-topology does make [math]\displaystyle{ C_c^\infty(U) }[/math] into a complete reflexive nuclear[8] Montel[9] bornological barrelled Mackey space; the same is true of its strong dual space (that is, the space of all distributions with its usual topology). The canonical LF-topology can be defined in various ways.

Distributions

As discussed earlier, continuous linear functionals on [math]\displaystyle{ C_c^\infty(U) }[/math] are known as distributions on [math]\displaystyle{ U. }[/math] Other equivalent definitions are described below.

By definition, a distribution on [math]\displaystyle{ U }[/math] is a continuous linear functional on [math]\displaystyle{ C_c^\infty(U). }[/math] Said differently, a distribution on [math]\displaystyle{ U }[/math] is an element of the continuous dual space of [math]\displaystyle{ C_c^\infty(U) }[/math] when [math]\displaystyle{ C_c^\infty(U) }[/math] is endowed with its canonical LF topology.

There is a canonical duality pairing between a distribution [math]\displaystyle{ T }[/math] on [math]\displaystyle{ U }[/math] and a test function [math]\displaystyle{ f \in C_c^\infty(U), }[/math] which is denoted using angle brackets by [math]\displaystyle{ \begin{cases} \mathcal{D}'(U) \times C_c^\infty(U) \to \R \\ (T, f) \mapsto \langle T, f \rangle := T(f) \end{cases} }[/math]

One interprets this notation as the distribution [math]\displaystyle{ T }[/math] acting on the test function [math]\displaystyle{ f }[/math] to give a scalar, or symmetrically as the test function [math]\displaystyle{ f }[/math] acting on the distribution [math]\displaystyle{ T. }[/math]
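For instance, the Dirac distribution [math]\displaystyle{ \delta_{x_0} }[/math] (defined below) pairs with a test function by evaluating it, [math]\displaystyle{ \langle \delta_{x_0}, f \rangle = f(x_0), }[/math] while a distribution [math]\displaystyle{ D_g }[/math] induced by a locally integrable function [math]\displaystyle{ g }[/math] pairs with it by integration, [math]\displaystyle{ \langle D_g, f \rangle = \int_U g f \, dx. }[/math]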

Characterizations of distributions

Proposition. If [math]\displaystyle{ T }[/math] is a linear functional on [math]\displaystyle{ C_c^\infty(U) }[/math] then the following are equivalent:

  1. T is a distribution;
  2. T is continuous;
  3. T is continuous at the origin;
  4. T is uniformly continuous;
  5. T is a bounded operator;
  6. T is sequentially continuous;
    • explicitly, for every sequence [math]\displaystyle{ \left(f_i\right)_{i=1}^\infty }[/math] in [math]\displaystyle{ C_c^\infty(U) }[/math] that converges in [math]\displaystyle{ C_c^\infty(U) }[/math] to some [math]\displaystyle{ f \in C_c^\infty(U), }[/math] [math]\displaystyle{ \lim_{i \to \infty} T\left(f_i\right) = T(f); }[/math][note 4]
  7. T is sequentially continuous at the origin; in other words, T maps null sequences[note 5] to null sequences;
    • explicitly, for every sequence [math]\displaystyle{ \left(f_i\right)_{i=1}^\infty }[/math] in [math]\displaystyle{ C_c^\infty(U) }[/math] that converges in [math]\displaystyle{ C_c^\infty(U) }[/math] to the origin (such a sequence is called a null sequence), [math]\displaystyle{ \lim_{i \to \infty} T\left(f_i\right) = 0; }[/math]
    • a null sequence is by definition any sequence that converges to the origin;
  8. T maps null sequences to bounded subsets;
    • explicitly, for every sequence [math]\displaystyle{ \left(f_i\right)_{i=1}^\infty }[/math] in [math]\displaystyle{ C_c^\infty(U) }[/math] that converges in [math]\displaystyle{ C_c^\infty(U) }[/math] to the origin, the sequence [math]\displaystyle{ \left(T\left(f_i\right)\right)_{i=1}^\infty }[/math] is bounded;
  9. T maps Mackey convergent null sequences to bounded subsets;
    • explicitly, for every Mackey convergent null sequence [math]\displaystyle{ \left(f_i\right)_{i=1}^\infty }[/math] in [math]\displaystyle{ C_c^\infty(U), }[/math] the sequence [math]\displaystyle{ \left(T\left(f_i\right)\right)_{i=1}^\infty }[/math] is bounded;
    • a sequence [math]\displaystyle{ f_{\bullet} = \left(f_i\right)_{i=1}^\infty }[/math] is said to be Mackey convergent to the origin if there exists a divergent sequence [math]\displaystyle{ r_{\bullet} = \left(r_i\right)_{i=1}^\infty \to \infty }[/math] of positive real numbers such that the sequence [math]\displaystyle{ \left(r_i f_i\right)_{i=1}^\infty }[/math] is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin (in the usual sense);
  10. The kernel of T is a closed subspace of [math]\displaystyle{ C_c^\infty(U); }[/math]
  11. The graph of T is closed;
  12. There exists a continuous seminorm [math]\displaystyle{ g }[/math] on [math]\displaystyle{ C_c^\infty(U) }[/math] such that [math]\displaystyle{ |T| \leq g; }[/math]
  13. There exists a constant [math]\displaystyle{ C \gt 0 }[/math] and a finite subset [math]\displaystyle{ \{g_1, \ldots, g_m\} \subseteq \mathcal{P} }[/math] (where [math]\displaystyle{ \mathcal{P} }[/math] is any collection of continuous seminorms that defines the canonical LF topology on [math]\displaystyle{ C_c^\infty(U) }[/math]) such that [math]\displaystyle{ |T| \leq C(g_1 + \cdots + g_m); }[/math][note 6]
  14. For every compact subset [math]\displaystyle{ K\subseteq U }[/math] there exist constants [math]\displaystyle{ C\gt 0 }[/math] and [math]\displaystyle{ N\in \N }[/math] such that for all [math]\displaystyle{ f \in C^\infty(K), }[/math][1] [math]\displaystyle{ |T(f)| \leq C \sup \{|\partial^\alpha f(x)| : x \in U, |\alpha|\leq N\}; }[/math]
  15. For every compact subset [math]\displaystyle{ K\subseteq U }[/math] there exist constants [math]\displaystyle{ C_K\gt 0 }[/math] and [math]\displaystyle{ N_K\in \N }[/math] such that for all [math]\displaystyle{ f \in C_c^\infty(U) }[/math] with support contained in [math]\displaystyle{ K, }[/math][10] [math]\displaystyle{ |T(f)| \leq C_K \sup \{|\partial^\alpha f(x)| : x \in K, |\alpha|\leq N_K\}; }[/math]
  16. For any compact subset [math]\displaystyle{ K\subseteq U }[/math] and any sequence [math]\displaystyle{ \{f_i\}_{i=1}^\infty }[/math] in [math]\displaystyle{ C^\infty(K), }[/math] if [math]\displaystyle{ \{\partial^p f_i\}_{i=1}^\infty }[/math] converges uniformly to zero for all multi-indices [math]\displaystyle{ p, }[/math] then [math]\displaystyle{ T(f_i) \to 0; }[/math]

Topology on the space of distributions and its relation to the weak-* topology

The set of all distributions on [math]\displaystyle{ U }[/math] is the continuous dual space of [math]\displaystyle{ C_c^\infty(U), }[/math] which when endowed with the strong dual topology is denoted by [math]\displaystyle{ \mathcal{D}'(U). }[/math] Importantly, unless indicated otherwise, the topology on [math]\displaystyle{ \mathcal{D}'(U) }[/math] is the strong dual topology; if the topology is instead the weak-* topology then this will be indicated. Neither topology is metrizable although unlike the weak-* topology, the strong dual topology makes [math]\displaystyle{ \mathcal{D}'(U) }[/math] into a complete nuclear space, to name just a few of its desirable properties.

Neither [math]\displaystyle{ C_c^\infty(U) }[/math] nor its strong dual [math]\displaystyle{ \mathcal{D}'(U) }[/math] is a sequential space and so neither of their topologies can be fully described by sequences (in other words, defining only what sequences converge in these spaces is not enough to fully/correctly define their topologies). However, a sequence in [math]\displaystyle{ \mathcal{D}'(U) }[/math] converges in the strong dual topology if and only if it converges in the weak-* topology (this leads many authors to use pointwise convergence to define the convergence of a sequence of distributions; this is fine for sequences but this is not guaranteed to extend to the convergence of nets of distributions because a net may converge pointwise but fail to converge in the strong dual topology). More information about the topology that [math]\displaystyle{ \mathcal{D}'(U) }[/math] is endowed with can be found in the article on spaces of test functions and distributions and the articles on polar topologies and dual systems.

A linear map from [math]\displaystyle{ \mathcal{D}'(U) }[/math] into another locally convex topological vector space (such as any normed space) is continuous if and only if it is sequentially continuous at the origin. However, this is no longer guaranteed if the map is not linear or for maps valued in more general topological spaces (for example, that are not also locally convex topological vector spaces). The same is true of maps from [math]\displaystyle{ C_c^\infty(U) }[/math] (more generally, this is true of maps from any locally convex bornological space).

Localization of distributions

There is no way to define the value of a distribution in [math]\displaystyle{ \mathcal{D}'(U) }[/math] at a particular point of U. However, as is the case with functions, distributions on U restrict to give distributions on open subsets of U. Furthermore, distributions are locally determined in the sense that a distribution on all of U can be assembled from distributions on the sets of an open cover of U, provided these satisfy some compatibility conditions on the overlaps. Such a structure is known as a sheaf.

Extensions and restrictions to an open subset

Let [math]\displaystyle{ V \subseteq U }[/math] be open subsets of [math]\displaystyle{ \R^n. }[/math] Every function [math]\displaystyle{ f \in \mathcal{D}(V) }[/math] can be extended by zero from its domain V to a function on U by setting it equal to [math]\displaystyle{ 0 }[/math] on the complement [math]\displaystyle{ U \setminus V. }[/math] This extension is a smooth compactly supported function called the trivial extension of [math]\displaystyle{ f }[/math] to [math]\displaystyle{ U }[/math] and it will be denoted by [math]\displaystyle{ E_{VU} (f). }[/math] This assignment [math]\displaystyle{ f \mapsto E_{VU} (f) }[/math] defines the trivial extension operator [math]\displaystyle{ E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U), }[/math] which is a continuous injective linear map. It is used to canonically identify [math]\displaystyle{ \mathcal{D}(V) }[/math] as a vector subspace of [math]\displaystyle{ \mathcal{D}(U) }[/math] (although not as a topological subspace). Its transpose (explained below) [math]\displaystyle{ \rho_{VU} := {}^{t}E_{VU} : \mathcal{D}'(U) \to \mathcal{D}'(V), }[/math] is called the restriction to [math]\displaystyle{ V }[/math] of distributions in [math]\displaystyle{ U }[/math][11] and as the name suggests, the image [math]\displaystyle{ \rho_{VU}(T) }[/math] of a distribution [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] under this map is a distribution on [math]\displaystyle{ V }[/math] called the restriction of [math]\displaystyle{ T }[/math] to [math]\displaystyle{ V. }[/math] The defining condition of the restriction [math]\displaystyle{ \rho_{VU}(T) }[/math] is: [math]\displaystyle{ \langle \rho_{VU} T, \phi \rangle = \langle T, E_{VU} \phi \rangle \quad \text{ for all } \phi \in \mathcal{D}(V). }[/math] If [math]\displaystyle{ V \neq U }[/math] then the (continuous injective linear) trivial extension map [math]\displaystyle{ E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U) }[/math] is not a topological embedding (in other words, if this linear injection were used to identify [math]\displaystyle{ \mathcal{D}(V) }[/math] as a subset of [math]\displaystyle{ \mathcal{D}(U) }[/math] then [math]\displaystyle{ \mathcal{D}(V) }[/math]'s topology would be strictly finer than the subspace topology that [math]\displaystyle{ \mathcal{D}(U) }[/math] induces on it; importantly, it would not be a topological subspace since that requires equality of topologies) and its range is also not dense in its codomain [math]\displaystyle{ \mathcal{D}(U). }[/math][11] Consequently, if [math]\displaystyle{ V \neq U }[/math] then the restriction mapping is neither injective nor surjective.[11] A distribution [math]\displaystyle{ S \in \mathcal{D}'(V) }[/math] is said to be extendible to U if it belongs to the range of the transpose of [math]\displaystyle{ E_{VU} }[/math] and it is called extendible if it is extendible to [math]\displaystyle{ \R^n. }[/math][11]

Unless [math]\displaystyle{ U = V, }[/math] the restriction to V is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of V. For instance, if [math]\displaystyle{ U = \R }[/math] and [math]\displaystyle{ V = (0, 2), }[/math] then the distribution [math]\displaystyle{ T(x) = \sum_{n=1}^\infty n \, \delta\left(x-\frac{1}{n}\right) }[/math] is in [math]\displaystyle{ \mathcal{D}'(V) }[/math] but admits no extension to [math]\displaystyle{ \mathcal{D}'(U). }[/math]
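The following Python sketch illustrates this example numerically (illustration only, not a proof; the helper names are ours): for a test function compactly supported inside V = (0, 2) the pairing with T is a finite sum, whereas the analogous partial sums for a test function on R whose support contains 0 grow without bound.

```python
import numpy as np

def bump(x, center, width):
    """A smooth bump supported on [center - width, center + width]."""
    t = (x - center) / width
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1 else 0.0

def T_partial(phi, terms):
    """Partial sums of T(phi) = sum over n >= 1 of n * phi(1/n)."""
    return sum(n * phi(1.0 / n) for n in range(1, terms + 1))

# A test function compactly supported inside V = (0, 2): the pairing is a finite sum.
phi_V = lambda x: bump(x, center=1.0, width=0.5)   # support [0.5, 1.5]; only n = 1 contributes
print(T_partial(phi_V, 10_000))

# A test function on U = R whose support contains 0: the partial sums grow without bound,
# reflecting that T admits no extension to a distribution on all of R.
phi_U = lambda x: bump(x, center=0.0, width=0.5)   # phi_U(1/n) -> phi_U(0) = exp(-1) > 0
print(T_partial(phi_U, 1_000), T_partial(phi_U, 10_000))
```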

Gluing and distributions that vanish in a set

Theorem[12] — Let [math]\displaystyle{ (U_i)_{i \in I} }[/math] be a collection of open subsets of [math]\displaystyle{ \R^n. }[/math] For each [math]\displaystyle{ i \in I, }[/math] let [math]\displaystyle{ T_i \in \mathcal{D}'(U_i) }[/math] and suppose that for all [math]\displaystyle{ i, j \in I, }[/math] the restriction of [math]\displaystyle{ T_i }[/math] to [math]\displaystyle{ U_i \cap U_j }[/math] is equal to the restriction of [math]\displaystyle{ T_j }[/math] to [math]\displaystyle{ U_i \cap U_j }[/math] (note that both restrictions are elements of [math]\displaystyle{ \mathcal{D}'(U_i \cap U_j) }[/math]). Then there exists a unique [math]\displaystyle{ T \in \mathcal{D}'(\bigcup_{i \in I} U_i) }[/math] such that for all [math]\displaystyle{ i \in I, }[/math] the restriction of T to [math]\displaystyle{ U_i }[/math] is equal to [math]\displaystyle{ T_i. }[/math]

Let V be an open subset of U. A distribution [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] is said to vanish in V if for all [math]\displaystyle{ f \in \mathcal{D}(U) }[/math] such that [math]\displaystyle{ \operatorname{supp}(f) \subseteq V }[/math] we have [math]\displaystyle{ Tf = 0. }[/math] T vanishes in V if and only if the restriction of T to V is equal to 0, or equivalently, if and only if T lies in the kernel of the restriction map [math]\displaystyle{ \rho_{VU}. }[/math]

Corollary[12] — Let [math]\displaystyle{ (U_i)_{i \in I} }[/math] be a collection of open subsets of [math]\displaystyle{ \R^n }[/math] and let [math]\displaystyle{ T \in \mathcal{D}'(\bigcup_{i \in I} U_i). }[/math] [math]\displaystyle{ T = 0 }[/math] if and only if for each [math]\displaystyle{ i \in I, }[/math] the restriction of T to [math]\displaystyle{ U_i }[/math] is equal to 0.

Corollary[12] — The union of all open subsets of U in which a distribution T vanishes is an open subset of U in which T vanishes.

Support of a distribution

This last corollary implies that for every distribution T on U, there exists a unique largest open subset V of U such that T vanishes in V (and does not vanish in any open subset of U that is not contained in V); the complement in U of this unique largest open subset is called the support of T.[12] Thus [math]\displaystyle{ \operatorname{supp}(T) = U \setminus \bigcup \{V \mid \rho_{VU}T = 0\}. }[/math]

If [math]\displaystyle{ f }[/math] is a locally integrable function on U and if [math]\displaystyle{ D_f }[/math] is its associated distribution, then the support of [math]\displaystyle{ D_f }[/math] is the smallest closed subset of U in the complement of which [math]\displaystyle{ f }[/math] is almost everywhere equal to 0.[12] If [math]\displaystyle{ f }[/math] is continuous, then the support of [math]\displaystyle{ D_f }[/math] is equal to the closure of the set of points in U at which [math]\displaystyle{ f }[/math] does not vanish.[12] The support of the distribution associated with the Dirac measure at a point [math]\displaystyle{ x_0 }[/math] is the set [math]\displaystyle{ \{x_0\}. }[/math][12] If the support of a test function [math]\displaystyle{ f }[/math] does not intersect the support of a distribution T then [math]\displaystyle{ Tf = 0. }[/math] A distribution T is 0 if and only if its support is empty. If [math]\displaystyle{ f \in C^\infty(U) }[/math] is identically 1 on some open set containing the support of a distribution T then [math]\displaystyle{ f T = T. }[/math] If the support of a distribution T is compact then it has finite order and there is a constant [math]\displaystyle{ C }[/math] and a non-negative integer [math]\displaystyle{ N }[/math] such that:[7] [math]\displaystyle{ |T \phi| \leq C\|\phi\|_N := C \sup \left\{\left|\partial^\alpha \phi(x)\right| : x \in U, |\alpha| \leq N \right\} \quad \text{ for all } \phi \in \mathcal{D}(U). }[/math]

If T has compact support, then it has a unique extension to a continuous linear functional [math]\displaystyle{ \widehat{T} }[/math] on [math]\displaystyle{ C^\infty(U) }[/math]; this functional can be defined by [math]\displaystyle{ \widehat{T} (f) := T(\psi f), }[/math] where [math]\displaystyle{ \psi \in \mathcal{D}(U) }[/math] is any function that is identically 1 on an open set containing the support of T.[7]

If [math]\displaystyle{ S, T \in \mathcal{D}'(U) }[/math] and [math]\displaystyle{ \lambda \neq 0 }[/math] then [math]\displaystyle{ \operatorname{supp}(S + T) \subseteq \operatorname{supp}(S) \cup \operatorname{supp}(T) }[/math] and [math]\displaystyle{ \operatorname{supp}(\lambda T) = \operatorname{supp}(T). }[/math] Thus, distributions with support in a given subset [math]\displaystyle{ A \subseteq U }[/math] form a vector subspace of [math]\displaystyle{ \mathcal{D}'(U). }[/math][13] Furthermore, if [math]\displaystyle{ P }[/math] is a differential operator in U, then for all distributions T on U and all [math]\displaystyle{ f \in C^\infty(U) }[/math] we have [math]\displaystyle{ \operatorname{supp} (P(x, \partial) T) \subseteq \operatorname{supp}(T) }[/math] and [math]\displaystyle{ \operatorname{supp}(fT) \subseteq \operatorname{supp}(f) \cap \operatorname{supp}(T). }[/math][13]

Distributions with compact support

Support in a point set and Dirac measures

For any [math]\displaystyle{ x \in U, }[/math] let [math]\displaystyle{ \delta_x \in \mathcal{D}'(U) }[/math] denote the distribution induced by the Dirac measure at [math]\displaystyle{ x. }[/math] For any [math]\displaystyle{ x_0 \in U }[/math] and distribution [math]\displaystyle{ T \in \mathcal{D}'(U), }[/math] the support of T is contained in [math]\displaystyle{ \{x_0\} }[/math] if and only if T is a finite linear combination of derivatives of the Dirac measure at [math]\displaystyle{ x_0. }[/math][14] If in addition the order of T is [math]\displaystyle{ \leq k }[/math] then there exist constants [math]\displaystyle{ \alpha_p }[/math] such that:[15] [math]\displaystyle{ T = \sum_{|p| \leq k} \alpha_p \partial^p \delta_{x_0}. }[/math]

Said differently, if T has support at a single point [math]\displaystyle{ \{P\}, }[/math] then T is in fact a finite linear combination of distributional derivatives of the [math]\displaystyle{ \delta }[/math] function at P. That is, there exists an integer m and complex constants [math]\displaystyle{ a_\alpha }[/math] such that [math]\displaystyle{ T = \sum_{|\alpha|\leq m} a_\alpha \partial^\alpha(\tau_P\delta) }[/math] where [math]\displaystyle{ \tau_P }[/math] is the translation operator.

Distribution with compact support

Theorem[7] — Suppose T is a distribution on U with compact support K. There exists a continuous function [math]\displaystyle{ f }[/math] defined on U and a multi-index p such that [math]\displaystyle{ T = \partial^p f, }[/math] where the derivatives are understood in the sense of distributions. That is, for all test functions [math]\displaystyle{ \phi }[/math] on U, [math]\displaystyle{ T \phi = (-1)^{|p|} \int_{U} f(x) (\partial^p \phi)(x) \, dx. }[/math]
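A standard concrete instance of this theorem (on U = ℝ) is the Dirac distribution [math]\displaystyle{ \delta, }[/math] whose support is [math]\displaystyle{ \{0\} }[/math]: it equals the second distributional derivative of the continuous ramp function [math]\displaystyle{ x_+(x) := \max(x, 0), }[/math] since [math]\displaystyle{ \int_\R x_+ \, \phi'' \, dx = \phi(0) }[/math] for every test function [math]\displaystyle{ \phi. }[/math] The SymPy sketch below checks this identity for one particular compactly supported test function (a polynomial bump of our choosing, which is only C^2 rather than smooth, but that suffices for this check):

```python
import sympy as sp

x = sp.symbols('x')
# A compactly supported test function: phi(x) = (1 - x^2)^3 on [-1, 1], extended by 0 outside.
phi = (1 - x**2)**3

# <d^2/dx^2 (x_+), phi> := (-1)^2 * integral of max(x, 0) * phi''(x) dx.
# Since phi vanishes outside [-1, 1] and max(x, 0) = x on (0, 1), this reduces to:
val = sp.integrate(x * sp.diff(phi, x, 2), (x, 0, 1))
print(val)              # 1
print(phi.subs(x, 0))   # 1  ->  the second distributional derivative of x_+ is the Dirac delta
```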

Distributions of finite order with support in an open subset

Theorem[7] — Suppose T is a distribution on U with compact support K and let V be an open subset of U containing K. Since every distribution with compact support has finite order, take N to be the order of T and define [math]\displaystyle{ P:=\{0,1,\ldots, N+2\}^n. }[/math] There exists a family of continuous functions [math]\displaystyle{ (f_p)_{p\in P} }[/math] defined on U with support in V such that [math]\displaystyle{ T = \sum_{p \in P} \partial^p f_p, }[/math] where the derivatives are understood in the sense of distributions. That is, for all test functions [math]\displaystyle{ \phi }[/math] on U, [math]\displaystyle{ T \phi = \sum_{p \in P} (-1)^{|p|} \int_U f_p(x) (\partial^p \phi)(x) \, dx. }[/math]

Global structure of distributions

The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of [math]\displaystyle{ \mathcal{D}(U) }[/math] (or the Schwartz space [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary.

Distributions as sheaves

Theorem[16] — Let T be a distribution on U. There exists a sequence [math]\displaystyle{ (T_i)_{i=1}^\infty }[/math] in [math]\displaystyle{ \mathcal{D}'(U) }[/math] such that each Ti has compact support and every compact subset [math]\displaystyle{ K \subseteq U }[/math] intersects the support of only finitely many [math]\displaystyle{ T_i, }[/math] and the sequence of partial sums [math]\displaystyle{ (S_j)_{j=1}^\infty, }[/math] defined by [math]\displaystyle{ S_j := T_1 + \cdots + T_j, }[/math] converges in [math]\displaystyle{ \mathcal{D}'(U) }[/math] to T; in other words we have: [math]\displaystyle{ T = \sum_{i=1}^\infty T_i. }[/math] Recall that a sequence converges in [math]\displaystyle{ \mathcal{D}'(U) }[/math] (with its strong dual topology) if and only if it converges pointwise.
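For example, on [math]\displaystyle{ U = \R }[/math] the Dirac comb [math]\displaystyle{ T = \sum_{n \in \Z} \delta_n }[/math] is visibly of this form: after enumerating [math]\displaystyle{ \Z }[/math] as a sequence, each summand [math]\displaystyle{ \delta_n }[/math] has compact support [math]\displaystyle{ \{n\}, }[/math] every compact subset of [math]\displaystyle{ \R }[/math] meets only finitely many of these supports, and the partial sums converge to [math]\displaystyle{ T }[/math] in [math]\displaystyle{ \mathcal{D}'(\R). }[/math]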

Decomposition of distributions as sums of derivatives of continuous functions

By combining the above results, one may express any distribution on U as the sum of a series of distributions with compact support, where each of these distributions can in turn be written as a finite sum of distributional derivatives of continuous functions on U. In other words, for arbitrary [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] we can write: [math]\displaystyle{ T = \sum_{i=1}^\infty \sum_{p \in P_i} \partial^p f_{ip}, }[/math] where [math]\displaystyle{ P_1, P_2, \ldots }[/math] are finite sets of multi-indices and the functions [math]\displaystyle{ f_{ip} }[/math] are continuous.

Theorem[17] — Let T be a distribution on U. For every multi-index p there exists a continuous function [math]\displaystyle{ g_p }[/math] on U such that

  1. any compact subset K of U intersects the support of only finitely many [math]\displaystyle{ g_p, }[/math] and
  2. [math]\displaystyle{ T = \sum\nolimits_p \partial^p g_p. }[/math]

Moreover, if T has finite order, then one can choose [math]\displaystyle{ g_p }[/math] in such a way that only finitely many of them are non-zero.

Note that the infinite sum above is well-defined as a distribution. The value of T for a given [math]\displaystyle{ f \in \mathcal{D}(U) }[/math] can be computed using the finitely many [math]\displaystyle{ g_p }[/math] whose supports intersect the support of [math]\displaystyle{ f. }[/math]

Operations on distributions

Many operations that are defined on smooth functions with compact support can also be defined for distributions. In general, if [math]\displaystyle{ A:\mathcal{D}(U)\to\mathcal{D}(U) }[/math] is a linear map that is continuous with respect to the weak topology, then it is not always possible to extend [math]\displaystyle{ A }[/math] to a map [math]\displaystyle{ A': \mathcal{D}'(U)\to \mathcal{D}'(U) }[/math] by classic extension theorems of topology or linear functional analysis.[note 7] The "distributional" extension of the above linear continuous operator A is possible if and only if A admits a Schwartz adjoint, that is, another linear continuous operator B of the same type such that [math]\displaystyle{ \langle Af,g\rangle = \langle f,Bg\rangle }[/math] for every pair of test functions. In that case, B is unique and the extension A' is the transpose of the Schwartz adjoint B.[18]

Preliminaries: Transpose of a linear operator

Main page: Transpose of a linear map

Operations on distributions and spaces of distributions are often defined using the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well-known in functional analysis.[19] For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general, the transpose of a continuous linear map [math]\displaystyle{ A : X \to Y }[/math] is the linear map [math]\displaystyle{ {}^{t}A : Y' \to X' \qquad \text{ defined by } \qquad {}^{t}A(y') := y' \circ A, }[/math] or equivalently, it is the unique map satisfying [math]\displaystyle{ \langle y', A(x)\rangle = \left\langle {}^{t}A (y'), x \right\rangle }[/math] for all [math]\displaystyle{ x \in X }[/math] and all [math]\displaystyle{ y' \in Y' }[/math] (the prime symbol in [math]\displaystyle{ y' }[/math] does not denote a derivative of any kind; it merely indicates that [math]\displaystyle{ y' }[/math] is an element of the continuous dual space [math]\displaystyle{ Y' }[/math]). Since [math]\displaystyle{ A }[/math] is continuous, the transpose [math]\displaystyle{ {}^{t}A : Y' \to X' }[/math] is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details).

In the context of distributions, the characterization of the transpose can be refined slightly. Let [math]\displaystyle{ A : \mathcal{D}(U) \to \mathcal{D}(U) }[/math] be a continuous linear map. Then by definition, the transpose of [math]\displaystyle{ A }[/math] is the unique linear operator [math]\displaystyle{ {}^tA : \mathcal{D}'(U) \to \mathcal{D}'(U) }[/math] that satisfies: [math]\displaystyle{ \langle {}^{t}A(T), \phi \rangle = \langle T, A(\phi) \rangle \quad \text{ for all } \phi \in \mathcal{D}(U) \text{ and all } T \in \mathcal{D}'(U). }[/math]

Since [math]\displaystyle{ \mathcal{D}(U) }[/math] is dense in [math]\displaystyle{ \mathcal{D}'(U) }[/math] (here, [math]\displaystyle{ \mathcal{D}(U) }[/math] actually refers to the set of distributions [math]\displaystyle{ \left\{D_\psi : \psi \in \mathcal{D}(U)\right\} }[/math]) it is sufficient that the defining equality hold for all distributions of the form [math]\displaystyle{ T = D_\psi }[/math] where [math]\displaystyle{ \psi \in \mathcal{D}(U). }[/math] Explicitly, this means that a continuous linear map [math]\displaystyle{ B : \mathcal{D}'(U) \to \mathcal{D}'(U) }[/math] is equal to [math]\displaystyle{ {}^{t}A }[/math] if and only if the condition below holds: [math]\displaystyle{ \langle B(D_\psi), \phi \rangle = \langle {}^{t}A(D_\psi), \phi \rangle \quad \text{ for all } \phi, \psi \in \mathcal{D}(U) }[/math] where the right-hand side equals [math]\displaystyle{ \langle {}^{t}A(D_\psi), \phi \rangle = \langle D_\psi, A(\phi) \rangle = \langle \psi, A(\phi) \rangle = \int_U \psi \cdot A(\phi) \,dx. }[/math]

Differential operators

Differentiation of distributions

Let [math]\displaystyle{ A : \mathcal{D}(U) \to \mathcal{D}(U) }[/math] be the partial derivative operator [math]\displaystyle{ \tfrac{\partial}{\partial x_k}. }[/math] To extend [math]\displaystyle{ A }[/math] we compute its transpose: [math]\displaystyle{ \begin{align} \langle {}^{t}A(D_\psi), \phi \rangle &= \int_U \psi (A\phi) \,dx && \text{(See above.)} \\ &= \int_U \psi \frac{\partial\phi}{\partial x_k} \, dx \\[4pt] &= -\int_U \phi \frac{\partial\psi}{\partial x_k}\, dx && \text{(integration by parts)} \\[4pt] &= -\left\langle \frac{\partial\psi}{\partial x_k}, \phi \right\rangle \\[4pt] &= -\langle A \psi, \phi \rangle = \langle - A \psi, \phi \rangle \end{align} }[/math]

Therefore [math]\displaystyle{ {}^{t}A = -A. }[/math] Thus, the partial derivative of [math]\displaystyle{ T }[/math] with respect to the coordinate [math]\displaystyle{ x_k }[/math] is defined by the formula [math]\displaystyle{ \left\langle \frac{\partial T}{\partial x_k}, \phi \right\rangle = - \left\langle T, \frac{\partial \phi}{\partial x_k} \right\rangle \qquad \text{ for all } \phi \in \mathcal{D}(U). }[/math]

With this definition, every distribution is infinitely differentiable, and the derivative in the direction [math]\displaystyle{ x_k }[/math] is a linear operator on [math]\displaystyle{ \mathcal{D}'(U). }[/math]

More generally, if [math]\displaystyle{ \alpha }[/math] is an arbitrary multi-index, then the partial derivative [math]\displaystyle{ \partial^\alpha T }[/math] of the distribution [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] is defined by [math]\displaystyle{ \langle \partial^\alpha T, \phi \rangle = (-1)^{|\alpha|} \langle T, \partial^\alpha \phi \rangle \qquad \text{ for all } \phi \in \mathcal{D}(U). }[/math]
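As a standard illustration, the distributional derivative of the Heaviside step function H (which is locally integrable and hence induces a distribution) is the Dirac delta: [math]\displaystyle{ \langle H', \phi \rangle = -\langle H, \phi' \rangle = -\int_0^\infty \phi'(x) \, dx = \phi(0). }[/math] The SymPy sketch below checks this identity for one concrete compactly supported test function (a polynomial bump of our choosing, only C^1 rather than smooth, but enough for this first-derivative check):

```python
import sympy as sp

x = sp.symbols('x')
# A compactly supported test function: phi(x) = (1 - x^2)^2 on [-1, 1], extended by 0 outside.
phi = (1 - x**2)**2

# <H', phi> := -<H, phi'>.  Since phi vanishes outside [-1, 1], the integral over (0, oo)
# reduces to an integral over (0, 1):
lhs = -sp.integrate(sp.diff(phi, x), (x, 0, 1))
print(lhs)              # 1
print(phi.subs(x, 0))   # 1  ->  <H', phi> = phi(0) = <delta, phi>
```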

Differentiation of distributions is a continuous operator on [math]\displaystyle{ \mathcal{D}'(U); }[/math] this is an important and desirable property that is not shared by most other notions of differentiation.

If [math]\displaystyle{ T }[/math] is a distribution in [math]\displaystyle{ \R }[/math] then [math]\displaystyle{ \lim_{x \to 0} \frac{T - \tau_x T}{x} = T'\in \mathcal{D}'(\R), }[/math] where [math]\displaystyle{ T' }[/math] is the derivative of [math]\displaystyle{ T }[/math] and [math]\displaystyle{ \tau_x }[/math] is a translation by [math]\displaystyle{ x; }[/math] thus the derivative of [math]\displaystyle{ T }[/math] may be viewed as a limit of quotients.[20]

Differential operators acting on smooth functions

A linear differential operator in [math]\displaystyle{ U }[/math] with smooth coefficients acts on the space of smooth functions on [math]\displaystyle{ U. }[/math] Given such an operator [math]\displaystyle{ P := \sum_\alpha c_\alpha \partial^\alpha, }[/math] we would like to define a continuous linear map, [math]\displaystyle{ D_P }[/math] that extends the action of [math]\displaystyle{ P }[/math] on [math]\displaystyle{ C^\infty(U) }[/math] to distributions on [math]\displaystyle{ U. }[/math] In other words, we would like to define [math]\displaystyle{ D_P }[/math] such that the following diagram commutes: [math]\displaystyle{ \begin{matrix} \mathcal{D}'(U) & \stackrel{D_P}{\longrightarrow} & \mathcal{D}'(U) \\[2pt] \uparrow & & \uparrow \\[2pt] C^\infty(U) & \stackrel{P}{\longrightarrow} & C^\infty(U) \end{matrix} }[/math] where the vertical maps are given by assigning [math]\displaystyle{ f \in C^\infty(U) }[/math] its canonical distribution [math]\displaystyle{ D_f \in \mathcal{D}'(U), }[/math] which is defined by: [math]\displaystyle{ D_f(\phi) = \langle f, \phi \rangle := \int_U f(x) \phi(x) \,dx \quad \text{ for all } \phi \in \mathcal{D}(U). }[/math] With this notation, the diagram commuting is equivalent to: [math]\displaystyle{ D_{P(f)} = D_PD_f \qquad \text{ for all } f \in C^\infty(U). }[/math]

To find [math]\displaystyle{ D_P, }[/math] the transpose [math]\displaystyle{ {}^{t} P : \mathcal{D}'(U) \to \mathcal{D}'(U) }[/math] of the continuous induced map [math]\displaystyle{ P : \mathcal{D}(U)\to \mathcal{D}(U) }[/math] defined by [math]\displaystyle{ \phi \mapsto P(\phi) }[/math] is considered in the lemma below. This leads to the following differential operator on [math]\displaystyle{ U, }[/math] called the formal transpose of [math]\displaystyle{ P }[/math] and denoted by [math]\displaystyle{ P_* }[/math] to avoid confusion with the transpose map [math]\displaystyle{ {}^{t}P }[/math]; it is defined by [math]\displaystyle{ P_* := \sum_\alpha b_\alpha \partial^\alpha \quad \text{ where } \quad b_\alpha := \sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \partial^{\beta-\alpha} c_\beta. }[/math]

Lemma — Let [math]\displaystyle{ P }[/math] be a linear differential operator with smooth coefficients in [math]\displaystyle{ U. }[/math] Then for all [math]\displaystyle{ f \in C^\infty(U) }[/math] and all [math]\displaystyle{ \phi \in \mathcal{D}(U) }[/math] we have [math]\displaystyle{ \left\langle {}^{t}P(D_f), \phi \right\rangle = \left\langle D_{P_*(f)}, \phi \right\rangle, }[/math] which is equivalent to: [math]\displaystyle{ {}^{t}P(D_f) = D_{P_*(f)}. }[/math]

Proof

As discussed above, for any [math]\displaystyle{ \phi \in \mathcal{D}(U), }[/math] the transpose may be calculated by: [math]\displaystyle{ \begin{align} \left\langle {}^{t}P(D_f), \phi \right\rangle &= \int_U f(x) P(\phi)(x) \,dx \\ &= \int_U f(x) \left[\sum\nolimits_\alpha c_\alpha(x) (\partial^\alpha \phi)(x) \right] \,dx \\ &= \sum\nolimits_\alpha \int_U f(x) c_\alpha(x) (\partial^\alpha \phi)(x) \,dx \\ &= \sum\nolimits_\alpha (-1)^{|\alpha|} \int_U \phi(x) (\partial^\alpha(c_\alpha f))(x) \,d x \end{align} }[/math]

For the last line we used integration by parts combined with the fact that [math]\displaystyle{ \phi }[/math] and therefore all the functions [math]\displaystyle{ f (x)c_\alpha (x) \partial^\alpha \phi(x) }[/math] have compact support.[note 8] Continuing the calculation above, for all [math]\displaystyle{ \phi \in \mathcal{D}(U): }[/math] [math]\displaystyle{ \begin{align} \left\langle {}^{t}P(D_f), \phi \right\rangle &=\sum\nolimits_\alpha (-1)^{|\alpha|} \int_U \phi(x) (\partial^\alpha(c_\alpha f))(x) \,dx && \text{As shown above} \\[4pt] &= \int_U \phi(x) \sum\nolimits_\alpha (-1)^{|\alpha|} (\partial^\alpha(c_\alpha f))(x)\,dx \\[4pt] &= \int_U \phi(x) \sum_\alpha \left[\sum_{\gamma \le \alpha} \binom{\alpha}{\gamma} (\partial^{\gamma}c_\alpha)(x) (\partial^{\alpha-\gamma}f)(x) \right] \,dx && \text{Leibniz rule}\\ &= \int_U \phi(x) \left[\sum_\alpha \sum_{\gamma \le \alpha} (-1)^{|\alpha|} \binom{\alpha}{\gamma} (\partial^{\gamma}c_\alpha)(x) (\partial^{\alpha-\gamma}f)(x)\right] \,dx \\ &= \int_U \phi(x) \left[ \sum_\alpha \left[ \sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \left(\partial^{\beta-\alpha}c_{\beta}\right)(x) \right] (\partial^\alpha f)(x)\right] \,dx && \text{Grouping terms by derivatives of } f \\ &= \int_U \phi(x) \left[\sum\nolimits_\alpha b_\alpha(x) (\partial^\alpha f)(x) \right] \, dx && b_\alpha:=\sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \partial^{\beta-\alpha}c_{\beta} \\ &= \left\langle \left(\sum\nolimits_\alpha b_\alpha \partial^\alpha \right) (f), \phi \right\rangle \end{align} }[/math]

The Lemma combined with the fact that the formal transpose of the formal transpose is the original differential operator, that is, [math]\displaystyle{ P_{**}= P, }[/math][21] enables us to arrive at the correct definition: the formal transpose induces the (continuous) canonical linear operator [math]\displaystyle{ P_* : C_c^\infty(U) \to C_c^\infty(U) }[/math] defined by [math]\displaystyle{ \phi \mapsto P_*(\phi). }[/math] We claim that the transpose of this map, [math]\displaystyle{ {}^{t}P_* : \mathcal{D}'(U) \to \mathcal{D}'(U), }[/math] can be taken as [math]\displaystyle{ D_P. }[/math] To see this, for every [math]\displaystyle{ \phi \in \mathcal{D}(U), }[/math] compute its action on a distribution of the form [math]\displaystyle{ D_f }[/math] with [math]\displaystyle{ f \in C^\infty(U) }[/math]:

[math]\displaystyle{ \begin{align} \left\langle {}^{t}P_*\left(D_f\right),\phi \right\rangle &= \left\langle D_{P_{**}(f)}, \phi \right\rangle && \text{Using Lemma above with } P_* \text{ in place of } P\\ &= \left\langle D_{P(f)}, \phi \right\rangle && P_{**} = P \end{align} }[/math]

We call the continuous linear operator [math]\displaystyle{ D_P := {}^{t}P_* : \mathcal{D}'(U) \to \mathcal{D}'(U) }[/math] the differential operator on distributions extending [math]\displaystyle{ P }[/math].[21] Its action on an arbitrary distribution [math]\displaystyle{ S }[/math] is defined via: [math]\displaystyle{ D_P(S)(\phi) = S\left(P_*(\phi)\right) \quad \text{ for all } \phi \in \mathcal{D}(U). }[/math]
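
The identity [math]\displaystyle{ \left\langle D_f, P(\phi) \right\rangle = \left\langle D_{P_*(f)}, \phi \right\rangle }[/math] behind this construction can also be checked symbolically for a first-order operator. The sketch below (Python with SymPy; the operator [math]\displaystyle{ P = c_0 + c_1 \partial }[/math] is a hypothetical example) verifies that [math]\displaystyle{ f\,P(\phi) - \phi\,P_*(f) }[/math] is an exact derivative, so it integrates to zero against every compactly supported [math]\displaystyle{ \phi. }[/math]

import sympy as sp

x = sp.symbols('x')
f, phi, c0, c1 = (sp.Function(name)(x) for name in ('f', 'phi', 'c0', 'c1'))

# P = c0 + c1 * d/dx acting on a test function
P_phi = c0 * phi + c1 * sp.diff(phi, x)

# formal transpose from the b_alpha formula (order 1): P_*(f) = c0*f - d/dx(c1*f)
Pstar_f = c0 * f - sp.diff(c1 * f, x)

# f*P(phi) - phi*P_*(f) equals d/dx(c1*f*phi), an exact derivative
print(sp.simplify(f * P_phi - phi * Pstar_f - sp.diff(c1 * f * phi, x)))   # 0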

If [math]\displaystyle{ (T_i)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] then for every multi-index [math]\displaystyle{ \alpha, (\partial^\alpha T_i)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ \partial^\alpha T \in \mathcal{D}'(U). }[/math]

Multiplication of distributions by smooth functions

A differential operator of order 0 is just multiplication by a smooth function. Conversely, if [math]\displaystyle{ f }[/math] is a smooth function then [math]\displaystyle{ P := f(x) }[/math] is a differential operator of order 0, whose formal transpose is itself (that is, [math]\displaystyle{ P_* = P }[/math]). The induced differential operator [math]\displaystyle{ D_P : \mathcal{D}'(U) \to \mathcal{D}'(U) }[/math] maps a distribution [math]\displaystyle{ T }[/math] to a distribution denoted by [math]\displaystyle{ fT := D_P(T). }[/math] We have thus defined the multiplication of a distribution by a smooth function.

We now give an alternative presentation of the multiplication of a distribution [math]\displaystyle{ T }[/math] on [math]\displaystyle{ U }[/math] by a smooth function [math]\displaystyle{ m : U \to \R. }[/math] The product [math]\displaystyle{ mT }[/math] is defined by [math]\displaystyle{ \langle mT, \phi \rangle = \langle T, m\phi \rangle \qquad \text{ for all } \phi \in \mathcal{D}(U). }[/math]

This definition coincides with the transpose definition since if [math]\displaystyle{ M : \mathcal{D}(U) \to \mathcal{D}(U) }[/math] is the operator of multiplication by the function [math]\displaystyle{ m }[/math] (that is, [math]\displaystyle{ (M\phi)(x) = m(x)\phi(x) }[/math]), then [math]\displaystyle{ \int_U (M \phi)(x) \psi(x)\,dx = \int_U m(x) \phi(x) \psi(x)\,d x = \int_U \phi(x) m(x) \psi(x) \,d x = \int_U \phi(x) (M \psi)(x)\,d x, }[/math] so that [math]\displaystyle{ {}^tM = M. }[/math]

Under multiplication by smooth functions, [math]\displaystyle{ \mathcal{D}'(U) }[/math] is a module over the ring [math]\displaystyle{ C^\infty(U). }[/math] With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, some unusual identities also arise. For example, if [math]\displaystyle{ \delta }[/math] is the Dirac delta distribution on [math]\displaystyle{ \R, }[/math] then [math]\displaystyle{ m \delta = m(0) \delta, }[/math] and if [math]\displaystyle{ \delta' }[/math] is the derivative of the delta distribution, then [math]\displaystyle{ m\delta' = m(0) \delta' - m' \delta = m(0) \delta' - m'(0) \delta. }[/math]
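
The identity [math]\displaystyle{ m\delta' = m(0) \delta' - m'(0) \delta }[/math] follows directly from the definition of multiplication, since [math]\displaystyle{ \langle m\delta', \phi \rangle = \langle \delta', m\phi \rangle = -(m\phi)'(0). }[/math] The following sketch (Python with SymPy, using generic smooth [math]\displaystyle{ m }[/math] and [math]\displaystyle{ \phi }[/math]) checks it symbolically.

import sympy as sp

x = sp.symbols('x')
m, phi = sp.Function('m')(x), sp.Function('phi')(x)

# <m*delta', phi> = <delta', m*phi> = -(m*phi)'(0)
lhs = -sp.diff(m * phi, x).subs(x, 0)
# <m(0)*delta' - m'(0)*delta, phi> = -m(0)*phi'(0) - m'(0)*phi(0)
rhs = -m.subs(x, 0) * sp.diff(phi, x).subs(x, 0) - sp.diff(m, x).subs(x, 0) * phi.subs(x, 0)
print(sp.simplify(lhs - rhs))   # 0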

The bilinear multiplication map [math]\displaystyle{ C^\infty(\R^n) \times \mathcal{D}'(\R^n) \to \mathcal{D}'\left(\R^n\right) }[/math] given by [math]\displaystyle{ (f,T) \mapsto fT }[/math] is not continuous; it is, however, hypocontinuous.[22]

Example. The product of any distribution [math]\displaystyle{ T }[/math] with the function that is identically 1 on [math]\displaystyle{ U }[/math] is equal to [math]\displaystyle{ T. }[/math]

Example. Suppose [math]\displaystyle{ (f_i)_{i=1}^\infty }[/math] is a sequence of test functions on [math]\displaystyle{ U }[/math] that converges to the constant function [math]\displaystyle{ 1 \in C^\infty(U). }[/math] For any distribution [math]\displaystyle{ T }[/math] on [math]\displaystyle{ U, }[/math] the sequence [math]\displaystyle{ (f_i T)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ T \in \mathcal{D}'(U). }[/math][23]

If [math]\displaystyle{ (T_i)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] and [math]\displaystyle{ (f_i)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ f \in C^\infty(U) }[/math] then [math]\displaystyle{ (f_i T_i)_{i=1}^\infty }[/math] converges to [math]\displaystyle{ fT \in \mathcal{D}'(U). }[/math]

Problem of multiplying distributions

It is easy to define the product of a distribution with a smooth function, or more generally the product of two distributions whose singular supports are disjoint.[24] With more effort, it is possible to define a well-behaved product of several distributions provided their wave front sets at each point are compatible. A limitation of the theory of distributions (and hyperfunctions) is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as was proved by Laurent Schwartz in the 1950s. For example, let [math]\displaystyle{ \operatorname{p.v.} \frac{1}{x} }[/math] be the distribution obtained as the Cauchy principal value [math]\displaystyle{ \left(\operatorname{p.v.} \frac{1}{x}\right)(\phi) = \lim_{\varepsilon\to 0^+} \int_{|x| \geq \varepsilon} \frac{\phi(x)}{x}\, dx \quad \text{ for all } \phi \in \mathcal{S}(\R). }[/math]

If [math]\displaystyle{ \delta }[/math] is the Dirac delta distribution then [math]\displaystyle{ (\delta \times x) \times \operatorname{p.v.} \frac{1}{x} = 0 }[/math] but [math]\displaystyle{ \delta \times \left(x \times \operatorname{p.v.} \frac{1}{x}\right) = \delta, }[/math] so the product of a distribution by a smooth function (which is always well-defined) cannot be extended to an associative product on the space of distributions.
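
The principal-value distribution used in this example can also be evaluated numerically: under symmetric truncation the odd singularity cancels, so the truncated integrals converge even though [math]\displaystyle{ \phi(x)/x }[/math] need not be absolutely integrable. The sketch below (Python with SciPy; the Schwartz function [math]\displaystyle{ \phi }[/math] and the outer cutoff are arbitrary choices) illustrates this; for this particular [math]\displaystyle{ \phi }[/math] the limit is [math]\displaystyle{ \sqrt{\pi}. }[/math]

import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-x**2) * (1.0 + x)    # a hypothetical Schwartz test function

def pv(eps):
    # symmetric truncation of phi(x)/x over |x| >= eps; tails beyond |x| = 50 are negligible
    left, _ = quad(lambda x: phi(x) / x, -50.0, -eps)
    right, _ = quad(lambda x: phi(x) / x, eps, 50.0)
    return left + right

for eps in (1e-1, 1e-3, 1e-5):
    print(eps, pv(eps))    # converges to sqrt(pi), about 1.7725, as eps -> 0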

Thus, nonlinear problems cannot be posed in general, and hence cannot be solved, within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences; here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) causal perturbation theory. This does not solve the problem in other situations, and many other interesting theories are nonlinear, such as the Navier–Stokes equations of fluid dynamics.

Several not entirely satisfactory[citation needed] theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today.

Inspired by Lyons' rough path theory,[25] Martin Hairer proposed a consistent way of multiplying distributions with certain structures (regularity structures[26]), available in many examples from stochastic analysis, notably stochastic partial differential equations. See also Gubinelli–Imkeller–Perkowski (2015) for a related development based on Bony's paraproduct from Fourier analysis.

Composition with a smooth function

Let [math]\displaystyle{ T }[/math] be a distribution on [math]\displaystyle{ U. }[/math] Let [math]\displaystyle{ V }[/math] be an open set in [math]\displaystyle{ \R^n }[/math] and let [math]\displaystyle{ F : V \to U }[/math] be a smooth map. If [math]\displaystyle{ F }[/math] is a submersion then it is possible to define [math]\displaystyle{ T \circ F \in \mathcal{D}'(V). }[/math]

This is the composition of the distribution [math]\displaystyle{ T }[/math] with [math]\displaystyle{ F }[/math], and is also called the pullback of [math]\displaystyle{ T }[/math] along [math]\displaystyle{ F }[/math], sometimes written [math]\displaystyle{ F^\sharp : T \mapsto F^\sharp T = T \circ F. }[/math]

The pullback is often denoted [math]\displaystyle{ F^*, }[/math] although this notation should not be confused with the use of '*' to denote the adjoint of a linear mapping.

The condition that [math]\displaystyle{ F }[/math] be a submersion is equivalent to the requirement that the Jacobian derivative [math]\displaystyle{ d F(x) }[/math] of [math]\displaystyle{ F }[/math] is a surjective linear map for every [math]\displaystyle{ x \in V. }[/math] A necessary (but not sufficient) condition for extending [math]\displaystyle{ F^{\#} }[/math] to distributions is that [math]\displaystyle{ F }[/math] be an open mapping.[27] The inverse function theorem ensures that a submersion satisfies this condition.

If [math]\displaystyle{ F }[/math] is a submersion, then [math]\displaystyle{ F^{\#} }[/math] is defined on distributions by finding the transpose map. The uniqueness of this extension is guaranteed since [math]\displaystyle{ F^{\#} }[/math] is a continuous linear operator on [math]\displaystyle{ \mathcal{D}(U). }[/math] Existence, however, requires using the change of variables formula, the inverse function theorem (locally), and a partition of unity argument.[28]

In the special case when [math]\displaystyle{ F }[/math] is a diffeomorphism from an open subset [math]\displaystyle{ V }[/math] of [math]\displaystyle{ \R^n }[/math] onto an open subset [math]\displaystyle{ U }[/math] of [math]\displaystyle{ \R^n }[/math] change of variables under the integral gives: [math]\displaystyle{ \int_V \phi\circ F(x) \psi(x)\,dx = \int_U \phi(x) \psi \left(F^{-1}(x) \right) \left|\det dF^{-1}(x) \right|\,dx. }[/math]

In this particular case, then, [math]\displaystyle{ F^{\#} }[/math] is defined by the transpose formula: [math]\displaystyle{ \left\langle F^\sharp T, \phi \right\rangle = \left\langle T, \left|\det d(F^{-1}) \right|\phi\circ F^{-1} \right\rangle. }[/math]
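
In one dimension the change-of-variables identity above can be tested numerically. The sketch below (Python with SciPy; the diffeomorphism [math]\displaystyle{ F(x) = x^3 + x }[/math] and the two Gaussian test functions are hypothetical choices, and [math]\displaystyle{ F^{-1} }[/math] is obtained by root finding) compares the two sides of the identity.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

F = lambda x: x**3 + x                       # strictly increasing, hence a diffeomorphism of R
dF = lambda x: 3 * x**2 + 1                  # |det dF| in one dimension
Finv = lambda y: brentq(lambda x: F(x) - y, -10.0, 10.0)

phi = lambda x: np.exp(-x**2)                # test function on U
psi = lambda x: np.exp(-(x - 1)**2)          # test function on V

lhs, _ = quad(lambda x: phi(F(x)) * psi(x), -10.0, 10.0)
rhs, _ = quad(lambda y: phi(y) * psi(Finv(y)) / dF(Finv(y)), -10.0, 10.0)
print(lhs, rhs)   # the two integrals agree; tails outside [-10, 10] are negligible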

Convolution

Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions. Recall that if [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] are functions on [math]\displaystyle{ \R^n }[/math] then we denote by [math]\displaystyle{ f\ast g }[/math] the convolution of [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g, }[/math] defined at [math]\displaystyle{ x \in \R^n }[/math] to be the integral [math]\displaystyle{ (f \ast g)(x) := \int_{\R^n} f(x-y) g(y) \,dy = \int_{\R^n} f(y)g(x-y) \,dy }[/math] provided that the integral exists. If [math]\displaystyle{ 1 \leq p, q, r \leq \infty }[/math] are such that [math]\displaystyle{ \frac{1}{r} = \frac{1}{p} + \frac{1}{q} - 1 }[/math] then for any functions [math]\displaystyle{ f \in L^p(\R^n) }[/math] and [math]\displaystyle{ g \in L^q(\R^n) }[/math] we have [math]\displaystyle{ f \ast g \in L^r(\R^n) }[/math] and [math]\displaystyle{ \|f\ast g\|_{L^r} \leq \|f\|_{L^p} \|g\|_{L^q}. }[/math][29] If [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] are continuous functions on [math]\displaystyle{ \R^n, }[/math] at least one of which has compact support, then [math]\displaystyle{ \operatorname{supp}(f \ast g) \subseteq \operatorname{supp} (f) + \operatorname{supp} (g) }[/math] and if [math]\displaystyle{ A\subseteq \R^n }[/math] then the value of [math]\displaystyle{ f\ast g }[/math] on [math]\displaystyle{ A }[/math] does not depend on the values of [math]\displaystyle{ f }[/math] outside of the Minkowski sum [math]\displaystyle{ A -\operatorname{supp} (g) = \{a-s : a\in A, s\in \operatorname{supp}(g)\}. }[/math][29]
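
A discrete Riemann-sum approximation makes the support inclusion [math]\displaystyle{ \operatorname{supp}(f \ast g) \subseteq \operatorname{supp}(f) + \operatorname{supp}(g) }[/math] visible. The sketch below (Python with NumPy; the indicator functions, grid spacing, and threshold are ad-hoc choices) convolves indicators of [math]\displaystyle{ [0,1] }[/math] and [math]\displaystyle{ [2,3] }[/math] and reads off where the result is nonzero.

import numpy as np

dx = 0.01
x = np.arange(-5, 5, dx)
f = ((0 <= x) & (x <= 1)).astype(float)      # supp f contained in [0, 1]
g = ((2 <= x) & (x <= 3)).astype(float)      # supp g contained in [2, 3]

conv = np.convolve(f, g, mode='full') * dx   # Riemann-sum approximation of f*g
xc = 2 * x[0] + dx * np.arange(len(conv))    # grid on which the full convolution lives

support = xc[conv > 1e-12]
print(support.min(), support.max())          # about 2 and 4, i.e. within supp f + supp g = [2, 4]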

Importantly, if [math]\displaystyle{ g \in L^1(\R^n) }[/math] has compact support then for any [math]\displaystyle{ 0 \leq k \leq \infty, }[/math] the convolution map [math]\displaystyle{ f \mapsto f \ast g }[/math] is continuous when considered as the map [math]\displaystyle{ C^k(\R^n) \to C^k(\R^n) }[/math] or as the map [math]\displaystyle{ C_c^k(\R^n) \to C_c^k(\R^n). }[/math][29]

Translation and symmetry

Given [math]\displaystyle{ a \in \R^n, }[/math] the translation operator [math]\displaystyle{ \tau_a }[/math] sends [math]\displaystyle{ f : \R^n \to \Complex }[/math] to [math]\displaystyle{ \tau_a f : \R^n \to \Complex, }[/math] defined by [math]\displaystyle{ \tau_a f(y) = f(y-a). }[/math] This can be extended by the transpose to distributions in the following way: given a distribution [math]\displaystyle{ T, }[/math] the translation of [math]\displaystyle{ T }[/math] by [math]\displaystyle{ a }[/math] is the distribution [math]\displaystyle{ \tau_a T : \mathcal{D}(\R^n) \to \Complex }[/math] defined by [math]\displaystyle{ \tau_a T(\phi) := \left\langle T, \tau_{-a} \phi \right\rangle. }[/math][30][31]

Given [math]\displaystyle{ f : \R^n \to \Complex, }[/math] define the function [math]\displaystyle{ \tilde{f} : \R^n \to \Complex }[/math] by [math]\displaystyle{ \tilde{f}(x) := f(-x). }[/math] Given a distribution [math]\displaystyle{ T, }[/math] let [math]\displaystyle{ \tilde{T} : \mathcal{D}(\R^n) \to \Complex }[/math] be the distribution defined by [math]\displaystyle{ \tilde{T}(\phi) := T \left(\tilde{\phi}\right). }[/math] The operator [math]\displaystyle{ T \mapsto \tilde{T} }[/math] is called the symmetry with respect to the origin.[30]

Convolution of a test function with a distribution

Convolution with [math]\displaystyle{ f \in \mathcal{D}(\R^n) }[/math] defines a linear map: [math]\displaystyle{ \begin{alignat}{4} C_f : \,& \mathcal{D}(\R^n) && \to \,&& \mathcal{D}(\R^n) \\ & g && \mapsto\,&& f \ast g \\ \end{alignat} }[/math] which is continuous with respect to the canonical LF space topology on [math]\displaystyle{ \mathcal{D}(\R^n). }[/math]

Convolution of [math]\displaystyle{ f }[/math] with a distribution [math]\displaystyle{ T \in \mathcal{D}'(\R^n) }[/math] can be defined by taking the transpose of [math]\displaystyle{ C_f }[/math] relative to the duality pairing of [math]\displaystyle{ \mathcal{D}(\R^n) }[/math] with the space [math]\displaystyle{ \mathcal{D}'(\R^n) }[/math] of distributions.[32] If [math]\displaystyle{ f, g, \phi \in \mathcal{D}(\R^n), }[/math] then by Fubini's theorem [math]\displaystyle{ \langle C_fg, \phi \rangle = \int_{\R^n}\phi(x)\int_{\R^n}f(x-y) g(y) \,dy \,dx = \left\langle g,C_{\tilde{f}}\phi \right\rangle. }[/math]

Extending by continuity, the convolution of [math]\displaystyle{ f }[/math] with a distribution [math]\displaystyle{ T }[/math] is defined by [math]\displaystyle{ \langle f \ast T, \phi \rangle = \left\langle T, \tilde{f} \ast \phi \right\rangle, \quad \text{ for all } \phi \in \mathcal{D}(\R^n). }[/math]

An alternative way to define the convolution of a test function [math]\displaystyle{ f }[/math] and a distribution [math]\displaystyle{ T }[/math] is to use the translation operator [math]\displaystyle{ \tau_a. }[/math] The convolution of the compactly supported function [math]\displaystyle{ f }[/math] and the distribution [math]\displaystyle{ T }[/math] is then the function defined for each [math]\displaystyle{ x \in \R^n }[/math] by [math]\displaystyle{ (f \ast T)(x) = \left\langle T, \tau_x \tilde{f} \right\rangle. }[/math]
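
For [math]\displaystyle{ T = \delta', }[/math] the derivative of the Dirac distribution, this formula gives [math]\displaystyle{ (f \ast \delta')(x) = \left\langle \delta', \tau_x \tilde{f} \right\rangle = f'(x). }[/math] The sketch below (Python with SymPy; a Gaussian stands in for a compactly supported [math]\displaystyle{ f, }[/math] which is a simplifying assumption) checks this symbolically.

import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(-x**2)                            # stands in for a smooth test function

# tau_x(f~)(y) = f(x - y); pairing with delta' gives -(d/dy f(x - y)) evaluated at y = 0
pairing = -sp.diff(f.subs(x, x - y), y).subs(y, 0)
print(sp.simplify(pairing - sp.diff(f, x)))  # 0, i.e. f * delta' = f'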

It can be shown that the convolution of a smooth, compactly supported function and a distribution is a smooth function. If the distribution [math]\displaystyle{ T }[/math] has compact support, and if [math]\displaystyle{ f }[/math] is a polynomial (resp. an exponential function, an analytic function, the restriction of an entire analytic function on [math]\displaystyle{ \Complex^n }[/math] to [math]\displaystyle{ \R^n, }[/math] the restriction of an entire function of exponential type in [math]\displaystyle{ \Complex^n }[/math] to [math]\displaystyle{ \R^n }[/math]), then the same is true of [math]\displaystyle{ T \ast f. }[/math][30] If the distribution [math]\displaystyle{ T }[/math] has compact support as well, then [math]\displaystyle{ f\ast T }[/math] is a compactly supported function, and the Titchmarsh convolution theorem (Hörmander 1983) implies that: [math]\displaystyle{ \operatorname{ch}(\operatorname{supp}(f \ast T)) = \operatorname{ch}(\operatorname{supp}(f)) + \operatorname{ch} (\operatorname{supp}(T)) }[/math] where [math]\displaystyle{ \operatorname{ch} }[/math] denotes the convex hull and [math]\displaystyle{ \operatorname{supp} }[/math] denotes the support.

Convolution of a smooth function with a distribution

Let [math]\displaystyle{ f \in C^\infty(\R^n) }[/math] and [math]\displaystyle{ T \in \mathcal{D}'(\R^n) }[/math] and assume that at least one of [math]\displaystyle{ f }[/math] and [math]\displaystyle{ T }[/math] has compact support. The convolution of [math]\displaystyle{ f }[/math] and [math]\displaystyle{ T, }[/math] denoted by [math]\displaystyle{ f \ast T }[/math] or by [math]\displaystyle{ T \ast f, }[/math] is the smooth function:[30] [math]\displaystyle{ \begin{alignat}{4} f \ast T : \,& \R^n && \to \,&& \Complex \\ & x && \mapsto\,&& \left\langle T, \tau_x \tilde{f} \right\rangle \\ \end{alignat} }[/math] satisfying: [math]\displaystyle{ \begin{align} &\operatorname{supp}(f \ast T) \subseteq \operatorname{supp}(f)+ \operatorname{supp}(T) \\[6pt] &\text{ for all } p \in \N^n: \quad \begin{cases}\partial^p \left\langle T, \tau_x \tilde{f} \right\rangle = \left\langle T, \partial^p \tau_x \tilde{f} \right\rangle \\ \partial^p (T \ast f) = (\partial^p T) \ast f = T \ast (\partial^p f). \end{cases} \end{align} }[/math]

Let [math]\displaystyle{ M }[/math] be the map [math]\displaystyle{ f \mapsto T \ast f }[/math]. If [math]\displaystyle{ T }[/math] is a distribution, then [math]\displaystyle{ M }[/math] is continuous as a map [math]\displaystyle{ \mathcal{D}(\R^n) \to C^\infty(\R^n) }[/math]. If [math]\displaystyle{ T }[/math] also has compact support, then [math]\displaystyle{ M }[/math] is also continuous as the map [math]\displaystyle{ C^\infty(\R^n) \to C^\infty(\R^n) }[/math] and continuous as the map [math]\displaystyle{ \mathcal{D}(\R^n) \to \mathcal{D}(\R^n). }[/math][30]

If [math]\displaystyle{ L : \mathcal{D}(\R^n) \to C^\infty(\R^n) }[/math] is a continuous linear map such that [math]\displaystyle{ L \partial^\alpha \phi = \partial^\alpha L \phi }[/math] for all [math]\displaystyle{ \alpha }[/math] and all [math]\displaystyle{ \phi \in \mathcal{D}(\R^n) }[/math] then there exists a distribution [math]\displaystyle{ T \in \mathcal{D}'(\R^n) }[/math] such that [math]\displaystyle{ L \phi = T \ast \phi }[/math] for all [math]\displaystyle{ \phi \in \mathcal{D}(\R^n). }[/math][7]

Example.[7] Let [math]\displaystyle{ H }[/math] be the Heaviside function on [math]\displaystyle{ \R. }[/math] For any [math]\displaystyle{ \phi \in \mathcal{D}(\R), }[/math] [math]\displaystyle{ (H \ast \phi)(x) = \int_{-\infty}^x \phi(t) \, dt. }[/math]
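
This identity can be checked numerically at a few points. The sketch below (Python with SciPy; the rapidly decaying [math]\displaystyle{ \phi }[/math] is a stand-in for a compactly supported test function) compares [math]\displaystyle{ \left\langle H, \tau_x \tilde{\phi} \right\rangle }[/math] with the cumulative integral [math]\displaystyle{ \int_{-\infty}^x \phi(t)\,dt. }[/math]

import numpy as np
from scipy.integrate import quad

phi = lambda t: np.exp(-t**2)                # stands in for a test function

def H_conv_phi(x):
    # (H * phi)(x) = <H, tau_x(phi~)> = integral of phi(x - t) over t in (0, oo)
    val, _ = quad(lambda t: phi(x - t), 0.0, np.inf)
    return val

for x in (-1.0, 0.0, 2.0):
    direct, _ = quad(phi, -np.inf, x)        # the cumulative integral of phi up to x
    print(x, H_conv_phi(x), direct)          # the last two columns agree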

Let [math]\displaystyle{ \delta }[/math] be the Dirac measure at 0 and let [math]\displaystyle{ \delta' }[/math] be its derivative as a distribution. Then [math]\displaystyle{ \delta' \ast H = \delta }[/math] and [math]\displaystyle{ 1 \ast \delta' = 0. }[/math] Importantly, the associative law fails to hold: [math]\displaystyle{ 1 = 1 \ast \delta = 1 \ast (\delta' \ast H ) \neq (1 \ast \delta') \ast H = 0 \ast H = 0. }[/math]

Convolution of distributions

It is also possible to define the convolution of two distributions [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] on [math]\displaystyle{ \R^n, }[/math] provided one of them has compact support. Informally, to define [math]\displaystyle{ S \ast T }[/math] where [math]\displaystyle{ T }[/math] has compact support, the idea is to extend the definition of the convolution [math]\displaystyle{ \,\ast\, }[/math] to a linear operation on distributions so that the associativity formula [math]\displaystyle{ S \ast (T \ast \phi) = (S \ast T) \ast \phi }[/math] continues to hold for all test functions [math]\displaystyle{ \phi. }[/math][33]

It is also possible to provide a more explicit characterization of the convolution of distributions.[32] Suppose that [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] are distributions and that [math]\displaystyle{ S }[/math] has compact support. Then the linear maps [math]\displaystyle{ \begin{alignat}{9} \bullet \ast \tilde{S} : \,& \mathcal{D}(\R^n) && \to \,&& \mathcal{D}(\R^n) && \quad \text{ and } \quad && \bullet \ast \tilde{T} : \,&& \mathcal{D}(\R^n) && \to \,&& C^\infty(\R^n) \\ & f && \mapsto\,&& f \ast \tilde{S} && && && f && \mapsto\,&& f \ast \tilde{T} \\ \end{alignat} }[/math] are continuous. The transposes of these maps: [math]\displaystyle{ {}^{t}\left(\bullet \ast \tilde{S}\right) : \mathcal{D}'(\R^n) \to \mathcal{D}'(\R^n) \qquad {}^{t}\left(\bullet \ast \tilde{T}\right) : \mathcal{E}'(\R^n) \to \mathcal{D}'(\R^n) }[/math] are consequently continuous and it can also be shown that[30] [math]\displaystyle{ {}^{t}\left(\bullet \ast \tilde{S}\right)(T) = {}^{t}\left(\bullet \ast \tilde{T}\right)(S). }[/math]

This common value is called the convolution of [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] and it is a distribution that is denoted by [math]\displaystyle{ S \ast T }[/math] or [math]\displaystyle{ T \ast S. }[/math] It satisfies [math]\displaystyle{ \operatorname{supp} (S \ast T) \subseteq \operatorname{supp}(S) + \operatorname{supp}(T). }[/math][30] If [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] are two distributions, at least one of which has compact support, then for any [math]\displaystyle{ a \in \R^n, }[/math] [math]\displaystyle{ \tau_a(S \ast T) = \left(\tau_a S\right) \ast T = S \ast \left(\tau_a T\right). }[/math][30] If [math]\displaystyle{ T }[/math] is a distribution in [math]\displaystyle{ \R^n }[/math] and if [math]\displaystyle{ \delta }[/math] is a Dirac measure then [math]\displaystyle{ T \ast \delta = T = \delta \ast T }[/math];[30] thus [math]\displaystyle{ \delta }[/math] is the identity element of the convolution operation. Moreover, if [math]\displaystyle{ f }[/math] is a function then [math]\displaystyle{ f \ast \delta^{\prime} = f^{\prime} = \delta^{\prime} \ast f }[/math] where now the associativity of convolution implies that [math]\displaystyle{ f^{\prime} \ast g = g^{\prime} \ast f }[/math] for all functions [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g. }[/math]

Suppose that it is [math]\displaystyle{ T }[/math] that has compact support. For [math]\displaystyle{ \phi \in \mathcal{D}(\R^n) }[/math] consider the function [math]\displaystyle{ \psi(x) = \langle T, \tau_{-x} \phi \rangle. }[/math]

It can be readily shown that this defines a smooth function of [math]\displaystyle{ x, }[/math] which moreover has compact support. The convolution of [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] is defined by [math]\displaystyle{ \langle S \ast T, \phi \rangle = \langle S, \psi \rangle. }[/math]

This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense: for every multi-index [math]\displaystyle{ \alpha, }[/math] [math]\displaystyle{ \partial^\alpha(S \ast T) = (\partial^\alpha S) \ast T = S \ast (\partial^\alpha T). }[/math]

The convolution of a finite number of distributions, all of which (except possibly one) have compact support, is associative.[30]

This definition of convolution remains valid under less restrictive assumptions about [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T. }[/math][34]

The convolution of distributions with compact support induces a continuous bilinear map [math]\displaystyle{ \mathcal{E}' \times \mathcal{E}' \to \mathcal{E}' }[/math] defined by [math]\displaystyle{ (S,T) \mapsto S * T, }[/math] where [math]\displaystyle{ \mathcal{E}' }[/math] denotes the space of distributions with compact support.[22] However, the convolution map as a function [math]\displaystyle{ \mathcal{E}' \times \mathcal{D}' \to \mathcal{D}' }[/math] is not continuous[22] although it is separately continuous.[35] The convolution maps [math]\displaystyle{ \mathcal{D}(\R^n) \times \mathcal{D}' \to \mathcal{D}' }[/math] and [math]\displaystyle{ \mathcal{D}(\R^n) \times \mathcal{D}' \to C^\infty(\R^n) }[/math] given by [math]\displaystyle{ (f, T) \mapsto f * T }[/math] both fail to be continuous.[22] Each of these non-continuous maps is, however, separately continuous and hypocontinuous.[22]

Convolution versus multiplication

In general, regularity is required for multiplication products, and locality is required for convolution products. It is expressed in the following extension of the Convolution Theorem which guarantees the existence of both convolution and multiplication products. Let [math]\displaystyle{ F(\alpha) = f \in \mathcal{O}'_C }[/math] be a rapidly decreasing tempered distribution or, equivalently, [math]\displaystyle{ F(f) = \alpha \in \mathcal{O}_M }[/math] be an ordinary (slowly growing, smooth) function within the space of tempered distributions and let [math]\displaystyle{ F }[/math] be the normalized (unitary, ordinary frequency) Fourier transform.[36] Then, according to (Schwartz 1951), [math]\displaystyle{ F(f * g) = F(f) \cdot F(g) \qquad \text{ and } \qquad F(\alpha \cdot g) = F(\alpha) * F(g) }[/math] hold within the space of tempered distributions.[37][38][39] In particular, these equations become the Poisson Summation Formula if [math]\displaystyle{ g \equiv \operatorname{\text{Ш}} }[/math] is the Dirac Comb.[40] The space of all rapidly decreasing tempered distributions is also called the space of convolution operators [math]\displaystyle{ \mathcal{O}'_C }[/math] and the space of all ordinary functions within the space of tempered distributions is also called the space of multiplication operators [math]\displaystyle{ \mathcal{O}_M. }[/math] More generally, [math]\displaystyle{ F(\mathcal{O}'_C) = \mathcal{O}_M }[/math] and [math]\displaystyle{ F(\mathcal{O}_M) = \mathcal{O}'_C. }[/math][41][42] A particular case is the Paley-Wiener-Schwartz Theorem which states that [math]\displaystyle{ F(\mathcal{E}') = \operatorname{PW} }[/math] and [math]\displaystyle{ F(\operatorname{PW} ) = \mathcal{E}'. }[/math] This is because [math]\displaystyle{ \mathcal{E}' \subseteq \mathcal{O}'_C }[/math] and [math]\displaystyle{ \operatorname{PW} \subseteq \mathcal{O}_M. }[/math] In other words, compactly supported tempered distributions [math]\displaystyle{ \mathcal{E}' }[/math] belong to the space of convolution operators [math]\displaystyle{ \mathcal{O}'_C }[/math] and Paley-Wiener functions [math]\displaystyle{ \operatorname{PW}, }[/math] better known as bandlimited functions, belong to the space of multiplication operators [math]\displaystyle{ \mathcal{O}_M. }[/math][43]

For example, let [math]\displaystyle{ g \equiv \operatorname{\text{Ш}} \in \mathcal{S}' }[/math] be the Dirac comb and [math]\displaystyle{ f \equiv \delta \in \mathcal{E}' }[/math] be the Dirac delta; then [math]\displaystyle{ \alpha \equiv 1 \in \operatorname{PW} }[/math] is the function that is constantly one and both equations yield the Dirac-comb identity. Another example is to let [math]\displaystyle{ g }[/math] be the Dirac comb and [math]\displaystyle{ f \equiv \operatorname{rect} \in \mathcal{E}' }[/math] be the rectangular function; then [math]\displaystyle{ \alpha \equiv \operatorname{sinc} \in \operatorname{PW} }[/math] is the sinc function and both equations yield the Classical Sampling Theorem for suitable [math]\displaystyle{ \operatorname{rect} }[/math] functions. More generally, if [math]\displaystyle{ g }[/math] is the Dirac comb and [math]\displaystyle{ f \in \mathcal{S} \subseteq \mathcal{O}'_C \cap \mathcal{O}_M }[/math] is a smooth window function (Schwartz function), for example, the Gaussian, then [math]\displaystyle{ \alpha \in \mathcal{S} }[/math] is another smooth window function (Schwartz function). They are known as mollifiers, especially in the theory of partial differential equations, or as regularizers in physics because they allow turning generalized functions into regular functions.
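
A discrete, periodic analogue of the multiplicative formula [math]\displaystyle{ F(f * g) = F(f) \cdot F(g) }[/math] can be checked with the fast Fourier transform. The sketch below (Python with NumPy; the random signals and their length are arbitrary choices) computes a circular convolution directly from its definition and compares its discrete Fourier transform with the pointwise product of the transforms; this finite-dimensional analogue illustrates, but of course does not prove, the tempered-distribution statement above.

import numpy as np

rng = np.random.default_rng(0)
N = 128
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution computed directly from its definition
conv = np.array([sum(f[m] * g[(k - m) % N] for m in range(N)) for k in range(N)])

# discrete convolution theorem: DFT of the circular convolution equals DFT(f) * DFT(g)
print(np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g)))   # True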

Tensor products of distributions

Let [math]\displaystyle{ U \subseteq \R^m }[/math] and [math]\displaystyle{ V \subseteq \R^n }[/math] be open sets. Assume all vector spaces to be over the field [math]\displaystyle{ \mathbb{F}, }[/math] where [math]\displaystyle{ \mathbb{F}=\R }[/math] or [math]\displaystyle{ \Complex. }[/math] For [math]\displaystyle{ f \in \mathcal{D}(U \times V) }[/math] define for every [math]\displaystyle{ u \in U }[/math] and every [math]\displaystyle{ v \in V }[/math] the following functions: [math]\displaystyle{ \begin{alignat}{9} f_u : \,& V && \to \,&& \mathbb{F} && \quad \text{ and } \quad && f^v : \,&& U && \to \,&& \mathbb{F} \\ & y && \mapsto\,&& f(u, y) && && && x && \mapsto\,&& f(x, v) \\ \end{alignat} }[/math]

Given [math]\displaystyle{ S \in \mathcal{D}^{\prime}(U) }[/math] and [math]\displaystyle{ T \in \mathcal{D}^{\prime}(V), }[/math] define the following functions: [math]\displaystyle{ \begin{alignat}{9} \langle S, f^{\bullet}\rangle : \,& V && \to \,&& \mathbb{F} && \quad \text{ and } \quad && \langle T, f_{\bullet}\rangle : \,&& U && \to \,&& \mathbb{F} \\ & v && \mapsto\,&& \langle S, f^v \rangle && && && u && \mapsto\,&& \langle T, f_u \rangle \\ \end{alignat} }[/math] where [math]\displaystyle{ \langle T, f_{\bullet}\rangle \in \mathcal{D}(U) }[/math] and [math]\displaystyle{ \langle S, f^{\bullet}\rangle \in \mathcal{D}(V). }[/math] These definitions associate every [math]\displaystyle{ S \in \mathcal{D}'(U) }[/math] and [math]\displaystyle{ T \in \mathcal{D}'(V) }[/math] with the (respective) continuous linear map: [math]\displaystyle{ \begin{alignat}{9} \,&& \mathcal{D}(U \times V) & \to \,&& \mathcal{D}(V) && \quad \text{ and } \quad && \,& \mathcal{D}(U \times V) && \to \,&& \mathcal{D}(U) \\ && f \ & \mapsto\,&& \langle S, f^{\bullet} \rangle && && & f \ && \mapsto\,&& \langle T, f_{\bullet} \rangle \\ \end{alignat} }[/math]

Moreover, if either [math]\displaystyle{ S }[/math] (resp. [math]\displaystyle{ T }[/math]) has compact support then it also induces a continuous linear map of [math]\displaystyle{ C^\infty(U \times V) \to C^\infty(V) }[/math] (resp. [math]\displaystyle{ C^\infty(U \times V) \to C^\infty(U) }[/math]).[44]

Fubini's theorem for distributions[44] — Let [math]\displaystyle{ S \in \mathcal{D}'(U) }[/math] and [math]\displaystyle{ T \in \mathcal{D}'(V). }[/math] If [math]\displaystyle{ f \in \mathcal{D}(U \times V) }[/math] then [math]\displaystyle{ \langle S, \langle T, f_{\bullet} \rangle \rangle = \langle T, \langle S, f^{\bullet} \rangle \rangle. }[/math]

The tensor product of [math]\displaystyle{ S \in \mathcal{D}'(U) }[/math] and [math]\displaystyle{ T \in \mathcal{D}'(V), }[/math] denoted by [math]\displaystyle{ S \otimes T }[/math] or [math]\displaystyle{ T \otimes S, }[/math] is the distribution in [math]\displaystyle{ U \times V }[/math] defined by:[44] [math]\displaystyle{ (S \otimes T)(f) := \langle S, \langle T, f_{\bullet} \rangle \rangle = \langle T, \langle S, f^{\bullet}\rangle \rangle. }[/math]
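
For distributions given by point evaluations, the tensor product and the Fubini statement reduce to evaluating a test function at a point. The sketch below (Python with SymPy; the points and the test function are hypothetical choices) applies [math]\displaystyle{ \delta_a \otimes \delta_b }[/math] to a function of two variables in both iterated orders.

import sympy as sp

x, y = sp.symbols('x y')
a, b = sp.Rational(1, 2), sp.Rational(-1, 3)
f = sp.sin(x * y) + x**2 * y                 # a smooth test function of (x, y)

# S = delta_a acts in the x variable, T = delta_b acts in the y variable
inner_then_outer = f.subs(y, b).subs(x, a)   # <S, <T, f_.>>
outer_then_inner = f.subs(x, a).subs(y, b)   # <T, <S, f^.>>
print(inner_then_outer == outer_then_inner, inner_then_outer)   # True, both equal f(a, b)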

Spaces of distributions

For all [math]\displaystyle{ 0 \lt k \lt \infty }[/math] and all [math]\displaystyle{ 1 \lt p \lt \infty, }[/math] every one of the following canonical injections is continuous and has an image (also called the range) that is a dense subset of its codomain: [math]\displaystyle{ \begin{matrix} C_c^\infty(U) & \to & C_c^k(U) & \to & C_c^0(U) & \to & L_c^\infty(U) & \to & L_c^p(U) & \to & L_c^1(U) \\ \downarrow & &\downarrow && \downarrow \\ C^\infty(U) & \to & C^k(U) & \to & C^0(U) \\{} \end{matrix} }[/math] where the topologies on [math]\displaystyle{ L_c^q(U) }[/math] ([math]\displaystyle{ 1 \leq q \leq \infty }[/math]) are defined as direct limits of the spaces [math]\displaystyle{ L_c^q(K) }[/math] in a manner analogous to how the topologies on [math]\displaystyle{ C_c^k(U) }[/math] were defined (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in its codomain.[45]

Suppose that [math]\displaystyle{ X }[/math] is one of the spaces [math]\displaystyle{ C_c^k(U) }[/math] (for [math]\displaystyle{ k \in \{0, 1, \ldots, \infty\} }[/math]) or [math]\displaystyle{ L^p_c(U) }[/math] (for [math]\displaystyle{ 1 \leq p \leq \infty }[/math]) or [math]\displaystyle{ L^p(U) }[/math] (for [math]\displaystyle{ 1 \leq p \lt \infty }[/math]). Because the canonical injection [math]\displaystyle{ \operatorname{In}_X : C_c^\infty(U) \to X }[/math] is a continuous injection whose image is dense in the codomain, this map's transpose [math]\displaystyle{ {}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U) = \left(C_c^\infty(U)\right)'_b }[/math] is a continuous injection. This injective transpose map thus allows the continuous dual space [math]\displaystyle{ X' }[/math] of [math]\displaystyle{ X }[/math] to be identified with a certain vector subspace of the space [math]\displaystyle{ \mathcal{D}'(U) }[/math] of all distributions (specifically, it is identified with the image of this transpose map). This transpose map is continuous but it is not necessarily a topological embedding. A linear subspace of [math]\displaystyle{ \mathcal{D}'(U) }[/math] carrying a locally convex topology that is finer than the subspace topology induced on it by [math]\displaystyle{ \mathcal{D}'(U) = \left(C_c^\infty(U)\right)'_b }[/math] is called a space of distributions.[46] Almost all of the spaces of distributions mentioned in this article arise in this way (for example, tempered distribution, restrictions, distributions of order [math]\displaystyle{ \leq }[/math] some integer, distributions induced by a positive Radon measure, distributions induced by an [math]\displaystyle{ L^p }[/math]-function, etc.) and any representation theorem about the continuous dual space of [math]\displaystyle{ X }[/math] may, through the transpose [math]\displaystyle{ {}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U), }[/math] be transferred directly to elements of the space [math]\displaystyle{ \operatorname{Im} \left({}^{t}\operatorname{In}_X\right). }[/math]

Radon measures

The inclusion map [math]\displaystyle{ \operatorname{In} : C_c^\infty(U) \to C_c^0(U) }[/math] is a continuous injection whose image is dense in its codomain, so the transpose [math]\displaystyle{ {}^{t}\operatorname{In} : (C_c^0(U))'_b \to \mathcal{D}'(U) = (C_c^\infty(U))'_b }[/math] is also a continuous injection.

The continuous dual space [math]\displaystyle{ (C_c^0(U))'_b }[/math] can be identified with the space of Radon measures: there is a one-to-one correspondence between continuous linear functionals [math]\displaystyle{ T \in (C_c^0(U))'_b }[/math] and integration against Radon measures; that is,

  • if [math]\displaystyle{ T \in (C_c^0(U))'_b }[/math] then there exists a Radon measure [math]\displaystyle{ \mu }[/math] on U such that for all [math]\displaystyle{ f \in C_c^0(U), T(f) = \int_U f \, d\mu, }[/math] and
  • if [math]\displaystyle{ \mu }[/math] is a Radon measure on U then the linear functional on [math]\displaystyle{ C_c^0(U) }[/math] defined by sending [math]\displaystyle{ f \in C_c^0(U) }[/math] to [math]\displaystyle{ \int_U f \, d\mu }[/math] is continuous.

Through the injection [math]\displaystyle{ {}^{t}\operatorname{In} : (C_c^0(U))'_b \to \mathcal{D}'(U), }[/math] every Radon measure becomes a distribution on U. If [math]\displaystyle{ f }[/math] is a locally integrable function on U then the distribution [math]\displaystyle{ \phi \mapsto \int_U f(x) \phi(x) \, dx }[/math] is a Radon measure; so Radon measures form a large and important space of distributions.

The following structure theorem shows that every Radon measure, viewed as a distribution, can be written as a sum of derivatives of locally [math]\displaystyle{ L^\infty }[/math] functions on U:

Theorem.[47] — Suppose [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] is a Radon measure, where [math]\displaystyle{ U \subseteq \R^n, }[/math] let [math]\displaystyle{ V \subseteq U }[/math] be a neighborhood of the support of [math]\displaystyle{ T, }[/math] and let [math]\displaystyle{ I = \{p \in \N^n : |p| \leq n\}. }[/math] There exists a family [math]\displaystyle{ f=(f_p)_{p\in I} }[/math] of locally [math]\displaystyle{ L^\infty }[/math] functions on U such that [math]\displaystyle{ \operatorname{supp} f_p \subseteq V }[/math] for every [math]\displaystyle{ p\in I, }[/math] and [math]\displaystyle{ T = \sum_{p\in I} \partial^p f_p. }[/math] Furthermore, [math]\displaystyle{ T }[/math] is also equal to a finite sum of derivatives of continuous functions on [math]\displaystyle{ U, }[/math] where each derivative has order [math]\displaystyle{ \leq 2 n. }[/math]

Positive Radon measures

A linear functional [math]\displaystyle{ T }[/math] on a space of functions is called positive if whenever a function [math]\displaystyle{ f }[/math] that belongs to the domain of [math]\displaystyle{ T }[/math] is non-negative (that is, [math]\displaystyle{ f }[/math] is real-valued and [math]\displaystyle{ f \geq 0 }[/math]) then [math]\displaystyle{ T(f) \geq 0. }[/math] One may show that every positive linear functional on [math]\displaystyle{ C_c^0(U) }[/math] is necessarily continuous (that is, necessarily a Radon measure).[48] Lebesgue measure is an example of a positive Radon measure.

Locally integrable functions as distributions

One particularly important class of Radon measures consists of those induced by locally integrable functions. A function [math]\displaystyle{ f : U \to \R }[/math] is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions that includes all continuous functions and all [math]\displaystyle{ L^p }[/math] functions. The topology on [math]\displaystyle{ \mathcal{D}(U) }[/math] is defined in such a fashion that any locally integrable function [math]\displaystyle{ f }[/math] yields a continuous linear functional on [math]\displaystyle{ \mathcal{D}(U) }[/math] – that is, an element of [math]\displaystyle{ \mathcal{D}'(U) }[/math] – denoted here by [math]\displaystyle{ T_f, }[/math] whose value on the test function [math]\displaystyle{ \phi }[/math] is given by the Lebesgue integral: [math]\displaystyle{ \langle T_f, \phi \rangle = \int_U f \phi\,dx. }[/math]
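
Local integrability is exactly what makes this pairing finite even when [math]\displaystyle{ f }[/math] is unbounded. The sketch below (Python with SciPy; the choice [math]\displaystyle{ f(x) = \ln|x| }[/math] and the bump test function are illustrative) evaluates [math]\displaystyle{ \langle T_f, \phi \rangle }[/math] for a locally integrable function with a singularity at the origin.

import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(abs(x)) if x != 0.0 else 0.0                     # locally integrable, unbounded at 0
phi = lambda x: np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0    # bump supported in (-1, 1)

val, _ = quad(lambda x: f(x) * phi(x), -1.0, 1.0, points=[0.0])
print(val)   # a finite (negative) number: <T_f, phi> is well defined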

Conventionally, one abuses notation by identifying [math]\displaystyle{ T_f }[/math] with [math]\displaystyle{ f, }[/math] provided no confusion can arise, and thus the pairing between [math]\displaystyle{ T_f }[/math] and [math]\displaystyle{ \phi }[/math] is often written [math]\displaystyle{ \langle f, \phi \rangle = \langle T_f, \phi \rangle. }[/math]

If [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] are two locally integrable functions, then the associated distributions [math]\displaystyle{ T_f }[/math] and [math]\displaystyle{ T_g }[/math] are equal to the same element of [math]\displaystyle{ \mathcal{D}'(U) }[/math] if and only if [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] are equal almost everywhere (see, for instance, (Hörmander 1983)). Similarly, every Radon measure [math]\displaystyle{ \mu }[/math] on [math]\displaystyle{ U }[/math] defines an element of [math]\displaystyle{ \mathcal{D}'(U) }[/math] whose value on the test function [math]\displaystyle{ \phi }[/math] is [math]\displaystyle{ \int\phi \,d\mu. }[/math] As above, it is conventional to abuse notation and write the pairing between a Radon measure [math]\displaystyle{ \mu }[/math] and a test function [math]\displaystyle{ \phi }[/math] as [math]\displaystyle{ \langle \mu, \phi \rangle. }[/math] Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.

Test functions as distributions

The test functions are themselves locally integrable, and so define distributions. The space of test functions [math]\displaystyle{ C_c^\infty(U) }[/math] is sequentially dense in [math]\displaystyle{ \mathcal{D}'(U) }[/math] with respect to the strong topology on [math]\displaystyle{ \mathcal{D}'(U). }[/math][49] This means that for any [math]\displaystyle{ T \in \mathcal{D}'(U), }[/math] there is a sequence of test functions, [math]\displaystyle{ (\phi_i)_{i=1}^\infty, }[/math] that converges to [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] (in its strong dual topology) when considered as a sequence of distributions. Or equivalently, [math]\displaystyle{ \langle \phi_i, \psi \rangle \to \langle T, \psi \rangle \qquad \text{ for all } \psi \in \mathcal{D}(U). }[/math]
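
A standard concrete instance is a mollifier sequence converging to the Dirac distribution. The sketch below (Python with SciPy; the normalized bump and the test function [math]\displaystyle{ \psi }[/math] are arbitrary choices) shows [math]\displaystyle{ \langle \phi_\varepsilon, \psi \rangle \to \psi(0) = \langle \delta, \psi \rangle }[/math] as [math]\displaystyle{ \varepsilon \to 0, }[/math] so these test functions converge to [math]\displaystyle{ \delta }[/math] in [math]\displaystyle{ \mathcal{D}'(\R). }[/math]

import numpy as np
from scipy.integrate import quad

def bump(t):
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1 else 0.0

Z, _ = quad(bump, -1, 1)                      # normalization so each phi_eps has integral 1

def phi_eps(x, eps):
    # mollifier: test functions concentrating at 0 as eps -> 0
    return bump(x / eps) / (Z * eps)

psi = lambda x: np.cos(x) * np.exp(-x**2)     # psi(0) = 1

for eps in (1.0, 0.1, 0.01):
    val, _ = quad(lambda x: phi_eps(x, eps) * psi(x), -eps, eps)
    print(eps, val)                            # tends to psi(0) = 1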

Distributions with compact support

The inclusion map [math]\displaystyle{ \operatorname{In}: C_c^\infty(U) \to C^\infty(U) }[/math] is a continuous injection whose image is dense in its codomain, so the transpose map [math]\displaystyle{ {}^{t}\operatorname{In}: (C^\infty(U))'_b \to \mathcal{D}'(U) = (C_c^\infty(U))'_b }[/math] is also a continuous injection. Thus the image of the transpose, denoted by [math]\displaystyle{ \mathcal{E}'(U), }[/math] forms a space of distributions.[13]

The elements of [math]\displaystyle{ \mathcal{E}'(U) = (C^\infty(U))'_b }[/math] can be identified as the space of distributions with compact support.[13] Explicitly, if [math]\displaystyle{ T }[/math] is a distribution on U then the following are equivalent,

  • [math]\displaystyle{ T \in \mathcal{E}'(U). }[/math]
  • The support of [math]\displaystyle{ T }[/math] is compact.
  • The restriction of [math]\displaystyle{ T }[/math] to [math]\displaystyle{ C_c^\infty(U), }[/math] when that space is equipped with the subspace topology inherited from [math]\displaystyle{ C^\infty(U) }[/math] (a coarser topology than the canonical LF topology), is continuous.[13]
  • There is a compact subset K of U such that for every test function [math]\displaystyle{ \phi }[/math] whose support is completely outside of K, we have [math]\displaystyle{ T(\phi) = 0. }[/math]

Compactly supported distributions define continuous linear functionals on the space [math]\displaystyle{ C^\infty(U) }[/math]; recall that the topology on [math]\displaystyle{ C^\infty(U) }[/math] is defined such that a sequence of test functions [math]\displaystyle{ \phi_k }[/math] converges to 0 if and only if all derivatives of [math]\displaystyle{ \phi_k }[/math] converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from [math]\displaystyle{ C_c^\infty(U) }[/math] to [math]\displaystyle{ C^\infty(U). }[/math]

Distributions of finite order

Let [math]\displaystyle{ k \in \N. }[/math] The inclusion map [math]\displaystyle{ \operatorname{In}: C_c^\infty(U) \to C_c^k(U) }[/math] is a continuous injection whose image is dense in its codomain, so the transpose [math]\displaystyle{ {}^{t}\operatorname{In}: (C_c^k(U))'_b \to \mathcal{D}'(U) = (C_c^\infty(U))'_b }[/math] is also a continuous injection. Consequently, the image of [math]\displaystyle{ {}^{t}\operatorname{In}, }[/math] denoted by [math]\displaystyle{ \mathcal{D}'^{k}(U), }[/math] forms a space of distributions. The elements of [math]\displaystyle{ \mathcal{D}'^k(U) }[/math] are the distributions of order [math]\displaystyle{ \,\leq k. }[/math][16] The distributions of order [math]\displaystyle{ \,\leq 0, }[/math] which are also called distributions of order 0, are exactly the distributions that are Radon measures (described above).

For [math]\displaystyle{ 0 \neq k \in \N, }[/math] a distribution of order k is a distribution of order [math]\displaystyle{ \,\leq k }[/math] that is not a distribution of order [math]\displaystyle{ \,\leq k - 1 }[/math].[16]

A distribution is said to be of finite order if there is some integer [math]\displaystyle{ k }[/math] such that it is a distribution of order [math]\displaystyle{ \,\leq k, }[/math] and the set of distributions of finite order is denoted by [math]\displaystyle{ \mathcal{D}'^{F}(U). }[/math] Note that if [math]\displaystyle{ k \leq l }[/math] then [math]\displaystyle{ \mathcal{D}'^k(U) \subseteq \mathcal{D}'^l(U) }[/math] so that [math]\displaystyle{ \mathcal{D}'^{F}(U) := \bigcup_{n=0}^\infty \mathcal{D}'^n(U) }[/math] is a vector subspace of [math]\displaystyle{ \mathcal{D}'(U). }[/math][16]

Structure of distributions of finite order

Every distribution with compact support in U is a distribution of finite order.[16] Indeed, every distribution in U is locally a distribution of finite order, in the following sense:[16] If V is an open and relatively compact subset of U and if [math]\displaystyle{ \rho_{VU} }[/math] is the restriction mapping from U to V, then the image of [math]\displaystyle{ \mathcal{D}'(U) }[/math] under [math]\displaystyle{ \rho_{VU} }[/math] is contained in [math]\displaystyle{ \mathcal{D}'^{F}(V). }[/math]

The following structure theorem shows that every distribution of finite order can be written as a sum of derivatives of Radon measures:

Theorem[16] — Suppose [math]\displaystyle{ T \in \mathcal{D}'(U) }[/math] has finite order, say of order [math]\displaystyle{ \,\leq k, }[/math] and let [math]\displaystyle{ I =\{p \in \N^n : |p| \leq k\}. }[/math] Given any open subset V of U containing the support of [math]\displaystyle{ T, }[/math] there is a family of Radon measures in U, [math]\displaystyle{ (\mu_p)_{p \in I}, }[/math] such that for every [math]\displaystyle{ p \in I, \operatorname{supp}(\mu_p) \subseteq V }[/math] and [math]\displaystyle{ T = \sum_{|p| \leq k} \partial^p \mu_p. }[/math]

Example. (Distributions of infinite order) Let [math]\displaystyle{ U := (0, \infty) }[/math] and for every test function [math]\displaystyle{ f, }[/math] let [math]\displaystyle{ S f := \sum_{m=1}^\infty (\partial^m f)\left(\frac{1}{m}\right). }[/math]

Then [math]\displaystyle{ S }[/math] is a distribution of infinite order on U. Moreover, [math]\displaystyle{ S }[/math] cannot be extended to a distribution on [math]\displaystyle{ \R }[/math]; that is, there exists no distribution [math]\displaystyle{ T }[/math] on [math]\displaystyle{ \R }[/math] such that the restriction of [math]\displaystyle{ T }[/math] to U is equal to [math]\displaystyle{ S. }[/math][50]

Tempered distributions and Fourier transform

Defined below are the tempered distributions, which form a subspace of [math]\displaystyle{ \mathcal{D}'(\R^n), }[/math] the space of distributions on [math]\displaystyle{ \R^n. }[/math] This is a proper subspace: while every tempered distribution is a distribution and an element of [math]\displaystyle{ \mathcal{D}'(\R^n), }[/math] the converse is not true. Tempered distributions are useful if one studies the Fourier transform since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in [math]\displaystyle{ \mathcal{D}'(\R^n). }[/math]

Schwartz space

The Schwartz space [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus [math]\displaystyle{ \phi:\R^n\to\R }[/math] is in the Schwartz space provided that any derivative of [math]\displaystyle{ \phi, }[/math] multiplied by any power of [math]\displaystyle{ |x|, }[/math] converges to 0 as [math]\displaystyle{ |x| \to \infty. }[/math] These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] define [math]\displaystyle{ p_{\alpha, \beta}(\phi) = \sup_{x \in \R^n} \left|x^\alpha \partial^\beta \phi(x) \right|. }[/math]

Then [math]\displaystyle{ \phi }[/math] is in the Schwartz space if all the values satisfy [math]\displaystyle{ p_{\alpha, \beta}(\phi) \lt \infty. }[/math]

The family of seminorms [math]\displaystyle{ p_{\alpha,\beta} }[/math] defines a locally convex topology on the Schwartz space. For [math]\displaystyle{ n = 1, }[/math] the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology:[51] [math]\displaystyle{ |f|_{m,k} = \sup_{|p|\le m} \left(\sup_{x \in \R^n} \left\{(1 + |x|)^k \left|(\partial^p f)(x) \right|\right\}\right), \qquad k,m \in \N. }[/math]

Alternatively, one can define a norm on [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] via [math]\displaystyle{ \|\phi\|_k = \max_{|\alpha| + |\beta| \leq k} \sup_{x \in \R^n} \left| x^\alpha \partial^\beta \phi(x)\right|, \qquad k \ge 1. }[/math]

The Schwartz space is a Fréchet space (that is, a complete metrizable locally convex space). Because the Fourier transform changes [math]\displaystyle{ \partial^\alpha }[/math] into multiplication by [math]\displaystyle{ x^\alpha }[/math] and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.

A sequence [math]\displaystyle{ \{f_i\} }[/math] in [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] converges to 0 in [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] if and only if the functions [math]\displaystyle{ (1 + |x|)^k (\partial^p f_i)(x) }[/math] converge to 0 uniformly on the whole of [math]\displaystyle{ \R^n }[/math] for every [math]\displaystyle{ k \in \N }[/math] and every multi-index [math]\displaystyle{ p, }[/math] which implies that such a sequence must converge to zero in [math]\displaystyle{ C^\infty(\R^n). }[/math][52]

[math]\displaystyle{ \mathcal{D}(\R^n) }[/math] is dense in [math]\displaystyle{ \mathcal{S}(\R^n). }[/math] The subset of all analytic Schwartz functions is dense in [math]\displaystyle{ \mathcal{S}(\R^n) }[/math] as well.[53]

The Schwartz space is nuclear, and the tensor product of two maps induces a canonical surjective TVS-isomorphism [math]\displaystyle{ \mathcal{S}(\R^m)\ \widehat{\otimes}\ \mathcal{S}(\R^n) \to \mathcal{S}(\R^{m+n}), }[/math] where [math]\displaystyle{ \widehat{\otimes} }[/math] represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product).[54]

Tempered distributions

The inclusion map [math]\displaystyle{ \operatorname{In}: \mathcal{D}(\R^n) \to \mathcal{S}(\R^n) }[/math] is a continuous injection whose image is dense in its codomain, so the transpose [math]\displaystyle{ {}^{t}\operatorname{In}: (\mathcal{S}(\R^n))'_b \to \mathcal{D}'(\R^n) }[/math] is also a continuous injection. Thus, the image of the transpose map, denoted by [math]\displaystyle{ \mathcal{S}'(\R^n), }[/math] forms a space of distributions.

The space [math]\displaystyle{ \mathcal{S}'(\R^n) }[/math] is called the space of tempered distributions. It is the continuous dual space of the Schwartz space. Equivalently, a distribution [math]\displaystyle{ T }[/math] is a tempered distribution if and only if, for every sequence [math]\displaystyle{ (\phi_m)_{m=1}^\infty }[/math] in [math]\displaystyle{ \mathcal{D}(\R^n), }[/math] [math]\displaystyle{ \left(\text{ for all } \alpha, \beta \in \N^n: \lim_{m\to \infty} p_{\alpha, \beta} (\phi_m) = 0 \right) \Longrightarrow \lim_{m\to \infty} T(\phi_m)=0. }[/math]

The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of [math]\displaystyle{ L^p(\R^n) }[/math] for [math]\displaystyle{ p \geq 1 }[/math] are tempered distributions.

The tempered distributions can also be characterized as slowly growing, meaning that each derivative of [math]\displaystyle{ T }[/math] grows at most as fast as some polynomial. This characterization is dual to the rapidly falling behaviour of the derivatives of a function in the Schwartz space, where each derivative of [math]\displaystyle{ \phi }[/math] decays faster than every inverse power of [math]\displaystyle{ |x|. }[/math] An example of a rapidly falling function is [math]\displaystyle{ |x|^n\exp (-\lambda |x|^\beta) }[/math] for any positive [math]\displaystyle{ n, \lambda, \beta. }[/math]

Fourier transform

To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform [math]\displaystyle{ F : \mathcal{S}(\R^n) \to \mathcal{S}(\R^n) }[/math] is a TVS-automorphism of the Schwartz space, and the Fourier transform is defined to be its transpose [math]\displaystyle{ {}^{t}F : \mathcal{S}'(\R^n) \to \mathcal{S}'(\R^n), }[/math] which (abusing notation) will again be denoted by [math]\displaystyle{ F. }[/math] So the Fourier transform of the tempered distribution [math]\displaystyle{ T }[/math] is defined by [math]\displaystyle{ (FT)(\psi) = T(F \psi) }[/math] for every Schwartz function [math]\displaystyle{ \psi. }[/math] [math]\displaystyle{ FT }[/math] is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that [math]\displaystyle{ F \dfrac{dT}{dx} = ixFT }[/math] and also with convolution: if [math]\displaystyle{ T }[/math] is a tempered distribution and [math]\displaystyle{ \psi }[/math] is a slowly increasing smooth function on [math]\displaystyle{ \R^n, }[/math] [math]\displaystyle{ \psi T }[/math] is again a tempered distribution and [math]\displaystyle{ F(\psi T) = F \psi * FT }[/math] is the convolution of [math]\displaystyle{ FT }[/math] and [math]\displaystyle{ F \psi. }[/math] In particular, the Fourier transform of the constant function equal to 1 is the [math]\displaystyle{ \delta }[/math] distribution.
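
The transpose definition can be compared with the classical transform on a distribution of the form [math]\displaystyle{ D_f. }[/math] The sketch below (Python with SciPy) fixes one convention, [math]\displaystyle{ (F\phi)(\xi) = \int \phi(x) e^{-i\xi x}\,dx, }[/math] restricts to even real functions so that everything stays real, and checks numerically that [math]\displaystyle{ D_f(F\psi) }[/math] agrees with [math]\displaystyle{ \int \hat{f}\,\psi\,dx, }[/math] where [math]\displaystyle{ \hat{f} }[/math] is the classical Fourier transform of [math]\displaystyle{ f }[/math]; the specific [math]\displaystyle{ f }[/math] and [math]\displaystyle{ \psi }[/math] are arbitrary choices.

import numpy as np
from scipy.integrate import quad

def F(phi, xi):
    # (F phi)(xi) with the convention above, reduced to a cosine transform for even real phi
    val, _ = quad(lambda x: phi(x) * np.cos(xi * x), -np.inf, np.inf)
    return val

f = lambda x: np.exp(-x**2 / 2)                             # defines the tempered distribution D_f
fhat = lambda xi: np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)   # classical transform of f, same convention
psi = lambda x: np.exp(-x**2) * (1 + x**2)                  # an even Schwartz test function

lhs, _ = quad(lambda xi: f(xi) * F(psi, xi), -np.inf, np.inf)   # (F D_f)(psi) := D_f(F psi)
rhs, _ = quad(lambda x: fhat(x) * psi(x), -np.inf, np.inf)      # pairing of fhat with psi
print(lhs, rhs)   # agree, so F D_f is the distribution induced by fhat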

Expressing tempered distributions as sums of derivatives

If [math]\displaystyle{ T \in \mathcal{S}'(\R^n) }[/math] is a tempered distribution, then there exist a constant [math]\displaystyle{ C \gt 0 }[/math] and positive integers [math]\displaystyle{ M }[/math] and [math]\displaystyle{ N }[/math] such that for all Schwartz functions [math]\displaystyle{ \phi \in \mathcal{S}(\R^n), }[/math] [math]\displaystyle{ |\langle T, \phi \rangle| \le C\sum\nolimits_{|\alpha|\le N, |\beta|\le M}\sup_{x \in \R^n} \left|x^\alpha \partial^\beta \phi(x) \right|=C\sum\nolimits_{|\alpha|\le N, |\beta|\le M} p_{\alpha, \beta}(\phi). }[/math]

This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function [math]\displaystyle{ F }[/math] and a multi-index [math]\displaystyle{ \alpha }[/math] such that [math]\displaystyle{ T = \partial^\alpha F. }[/math]
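
For instance, in one dimension the Dirac delta admits such a representation with the continuous, slowly increasing ramp function [math]\displaystyle{ F(x) = \max(x, 0) }[/math]: for every Schwartz function [math]\displaystyle{ \phi, }[/math] [math]\displaystyle{ \langle \partial^2 F, \phi \rangle = \langle F, \phi'' \rangle = \int_0^\infty x \phi''(x) \, dx = \big[x \phi'(x)\big]_0^\infty - \int_0^\infty \phi'(x) \, dx = \phi(0), }[/math] so [math]\displaystyle{ \delta = \partial^2 F. }[/math]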

Restriction of distributions to compact sets

If [math]\displaystyle{ T \in \mathcal{D}'(\R^n), }[/math] then for any compact set [math]\displaystyle{ K \subseteq \R^n, }[/math] there exist a continuous function [math]\displaystyle{ F }[/math] compactly supported in [math]\displaystyle{ \R^n }[/math] (possibly supported on a larger set than [math]\displaystyle{ K }[/math] itself) and a multi-index [math]\displaystyle{ \alpha }[/math] such that [math]\displaystyle{ T = \partial^\alpha F }[/math] on [math]\displaystyle{ C_c^\infty(K). }[/math]
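
Continuing the one-dimensional example above (a sketch in which the cutoff function [math]\displaystyle{ \chi }[/math] is chosen for illustration): if [math]\displaystyle{ K \subseteq \R }[/math] is compact and [math]\displaystyle{ \chi \in C_c^\infty(\R) }[/math] equals [math]\displaystyle{ 1 }[/math] on a neighborhood of [math]\displaystyle{ K, }[/math] then [math]\displaystyle{ F(x) = \chi(x) \max(x, 0) }[/math] is continuous and compactly supported, and for every [math]\displaystyle{ \phi \in C_c^\infty(K) }[/math] [math]\displaystyle{ \langle \partial^2 F, \phi \rangle = \int_\R \chi(x) \max(x, 0)\, \phi''(x) \, dx = \int_0^\infty x \phi''(x) \, dx = \phi(0), }[/math] so [math]\displaystyle{ \delta = \partial^2 F }[/math] on [math]\displaystyle{ C_c^\infty(K). }[/math]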

Using holomorphic functions as test functions

The success of the theory led to an investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example, Feynman integrals.


Notes

  1. Note that [math]\displaystyle{ i }[/math] being an integer implies [math]\displaystyle{ i \neq \infty. }[/math] This is sometimes expressed as [math]\displaystyle{ 0 \leq i \lt k + 1. }[/math] Since [math]\displaystyle{ \infty + 1 = \infty, }[/math] the inequality "[math]\displaystyle{ 0 \leq i \lt k + 1 }[/math]" means: [math]\displaystyle{ 0 \leq i \lt \infty }[/math] if [math]\displaystyle{ k = \infty, }[/math] while if [math]\displaystyle{ k \neq \infty }[/math] then it means [math]\displaystyle{ 0 \leq i \leq k. }[/math]
  2. The image of the compact set [math]\displaystyle{ K }[/math] under a continuous [math]\displaystyle{ \R }[/math]-valued map (for example, [math]\displaystyle{ x \mapsto \left|\partial^p f(x)\right| }[/math] for [math]\displaystyle{ x \in U }[/math]) is itself a compact, and thus bounded, subset of [math]\displaystyle{ \R. }[/math] If [math]\displaystyle{ K \neq \varnothing }[/math] then this implies that each of the functions defined above is [math]\displaystyle{ \R }[/math]-valued (that is, none of the supremums above are ever equal to [math]\displaystyle{ \infty }[/math]).
  3. Exactly as with [math]\displaystyle{ C^k(K;U), }[/math] the space [math]\displaystyle{ C^k(K; U') }[/math] is defined to be the vector subspace of [math]\displaystyle{ C^k(U') }[/math] consisting of maps with support contained in [math]\displaystyle{ K }[/math] endowed with the subspace topology it inherits from [math]\displaystyle{ C^k(U') }[/math].
  4. Even though the topology of [math]\displaystyle{ C_c^\infty(U) }[/math] is not metrizable, a linear functional on [math]\displaystyle{ C_c^\infty(U) }[/math] is continuous if and only if it is sequentially continuous.
  5. A null sequence is a sequence that converges to the origin.
  6. If [math]\displaystyle{ \mathcal{P} }[/math] is also directed under the usual function comparison then we can take the finite collection to consist of a single element.
  7. The extension theorem for mappings defined on a subspace [math]\displaystyle{ S }[/math] of a topological vector space [math]\displaystyle{ E }[/math] and taking values in [math]\displaystyle{ E }[/math] itself also works for non-linear mappings, provided they are uniformly continuous. Unfortunately, that is not the situation here: we wish to "extend" a continuous linear map [math]\displaystyle{ A }[/math] from a TVS [math]\displaystyle{ E }[/math] into another TVS [math]\displaystyle{ F }[/math] in order to obtain a continuous linear map from the dual [math]\displaystyle{ E' }[/math] to the dual [math]\displaystyle{ F' }[/math] (note the order of spaces). In general, this is not even an extension problem, because [math]\displaystyle{ E }[/math] is not necessarily a subset of its own dual [math]\displaystyle{ E'. }[/math] Nor is it the classical topological transpose problem, because the transpose of [math]\displaystyle{ A }[/math] goes from [math]\displaystyle{ F' }[/math] to [math]\displaystyle{ E' }[/math] and not from [math]\displaystyle{ E' }[/math] to [math]\displaystyle{ F'. }[/math] This situation instead requires the specific topological properties of the Schwartz spaces [math]\displaystyle{ \mathcal{D}(U) }[/math] and [math]\displaystyle{ \mathcal{D}'(U), }[/math] together with the fundamental concept of the weak (or Schwartz) adjoint of the continuous linear operator [math]\displaystyle{ A. }[/math]
  8. For example, let [math]\displaystyle{ U = \R }[/math] and take [math]\displaystyle{ P }[/math] to be the ordinary derivative for functions of one real variable and assume the support of [math]\displaystyle{ \phi }[/math] to be contained in the finite interval [math]\displaystyle{ (a,b), }[/math] then since [math]\displaystyle{ \operatorname{supp}(\phi) \subseteq (a, b) }[/math] [math]\displaystyle{ \begin{align} \int_\R \phi'(x)f(x)\,dx &= \int_a^b \phi'(x)f(x) \,dx \\ &= \phi(x)f(x)\big\vert_a^b - \int_a^b f'(x) \phi(x) \,d x \\ &= \phi(b)f(b) - \phi(a)f(a) - \int_a^b f'(x) \phi(x) \,d x \\ &=-\int_a^b f'(x) \phi(x) \,d x \end{align} }[/math] where the last equality is because [math]\displaystyle{ \phi(a) = \phi(b) = 0. }[/math]

