Poisson summation formula


In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

Consider an aperiodic function [math]\displaystyle{ s(x) }[/math] with Fourier transform [math]\displaystyle{ S(f) \triangleq \int_{-\infty}^{\infty} s(x)\ e^{-i2\pi fx}\, dx, }[/math] alternatively designated by [math]\displaystyle{ \hat s(f) }[/math] and [math]\displaystyle{ \mathcal{F}\{s\}(f). }[/math]

The basic Poisson summation formula is:

[math]\displaystyle{ \sum_{n=-\infty}^\infty s(n)=\sum_{k=-\infty}^\infty S(k). }[/math]

(Eq.1)
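Eq.1 can be checked numerically. The following Python sketch (an illustration, not part of the article's sources) uses the Gaussian s(x) = exp(-2πx²), whose Fourier transform is S(f) = exp(-πf²/2)/√2; the factor a = 2 is an arbitrary choice that avoids the self-dual case, and both sums converge so quickly that a short truncation suffices:

```python
import math

a = 2.0  # any a > 0; a != 1 avoids the self-dual case

def s(x):
    # Gaussian s(x) = exp(-pi * a * x^2)
    return math.exp(-math.pi * a * x * x)

def S(f):
    # its Fourier transform: S(f) = exp(-pi * f^2 / a) / sqrt(a)
    return math.exp(-math.pi * f * f / a) / math.sqrt(a)

N = 30  # truncation; the Gaussian tails are negligible far before |n| = 30
lhs = sum(s(n) for n in range(-N, N + 1))
rhs = sum(S(k) for k in range(-N, N + 1))
print(lhs, rhs)  # the two sums agree to machine precision
```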

Also consider periodic functions, where parameters [math]\displaystyle{ T\gt 0 }[/math] and [math]\displaystyle{ P\gt 0 }[/math] are in the same units as [math]\displaystyle{ x }[/math]:

[math]\displaystyle{ s_{_P}(x) \triangleq \sum_{n=-\infty}^{\infty} s(x + nP) \quad \text{and} \quad S_{1/T}(f) \triangleq \sum_{k=-\infty}^{\infty} S(f + k/T). }[/math]

Then Eq.1 is a special case (P=1, x=0) of this generalization:[1][2]

[math]\displaystyle{ s_{_P}(x) = \sum_{k=-\infty}^{\infty} \underbrace{\frac{1}{P}\cdot S\left(\frac{k}{P}\right)}_{S[k]}\ e^{i 2\pi \frac{k}{P} x }, }[/math]

(Eq.2)

which is a Fourier series expansion with coefficients that are samples of the function [math]\displaystyle{ S(f). }[/math]  Similarly:

[math]\displaystyle{ S_{1/T}(f) = \sum_{n=-\infty}^{\infty} \underbrace{T\cdot s(nT)}_{s[n]}\ e^{-i 2\pi n Tf}, }[/math]

(Eq.3)

also known as the discrete-time Fourier transform (DTFT).
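Eq.3 can likewise be checked numerically. This sketch uses the self-dual Gaussian s(x) = exp(-πx²), so that S = s; the values of T and f are illustrative choices:

```python
import cmath, math

def s(x):
    # self-dual Gaussian: s(x) = exp(-pi x^2), equal to its own transform S
    return math.exp(-math.pi * x * x)

T, f = 0.5, 0.2
N = 30  # truncation; the Gaussian tails are negligible

# left-hand side of Eq.3: periodized spectrum
lhs = sum(s(f + k / T) for k in range(-N, N + 1))

# right-hand side: DTFT of the weighted samples T*s(nT)
rhs = sum(T * s(n * T) * cmath.exp(-2j * math.pi * n * T * f)
          for n in range(-N, N + 1))

print(lhs, rhs.real)  # equal; the imaginary part of rhs vanishes since s is even
```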

Derivations

A proof may be found in either Pinsky[1] or Zygmund.[2] Eq.2, for instance, holds in the sense that if [math]\displaystyle{ s(x) \in L_1(\mathbb{R}) }[/math], then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. It follows from the dominated convergence theorem that [math]\displaystyle{ s_{_P}(x) }[/math] exists and is finite for almost every [math]\displaystyle{ x }[/math]. Furthermore it follows that [math]\displaystyle{ s_{_P} }[/math] is integrable on any interval of length [math]\displaystyle{ P. }[/math] So it is sufficient to show that the Fourier series coefficients of [math]\displaystyle{ s_{_P}(x) }[/math] are [math]\displaystyle{ \frac{1}{P} S\left(\frac{k}{P}\right). }[/math] Proceeding from the definition of the Fourier coefficients we have:

[math]\displaystyle{ \begin{align} S[k]\ &\triangleq \ \frac{1}{P}\int_0^{P} s_{_P}(x)\cdot e^{-i 2\pi \frac{k}{P} x}\, dx\\ &=\ \frac{1}{P}\int_0^{P} \left(\sum_{n=-\infty}^{\infty} s(x + nP)\right) \cdot e^{-i 2\pi\frac{k}{P} x}\, dx\\ &=\ \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_0^{P} s(x + nP)\cdot e^{-i 2\pi\frac{k}{P} x}\, dx, \end{align} }[/math]

where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables ([math]\displaystyle{ \tau = x + nP }[/math]) this becomes:

[math]\displaystyle{ \begin{align} S[k] = \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} \ \underbrace{e^{i 2\pi k n}}_{1}\,d\tau \ =\ \frac{1}{P} \int_{-\infty}^{\infty} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} d\tau \triangleq \frac{1}{P}\cdot S\left(\frac{k}{P}\right) \end{align} }[/math]
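The coefficient identity just derived, and with it Eq.2, can be checked numerically. This sketch uses the self-dual Gaussian s(x) = exp(-πx²), so S(f) = exp(-πf²); the period P and evaluation point x are arbitrary choices:

```python
import math

def s(x):
    # self-dual Gaussian with Fourier transform S(f) = exp(-pi f^2)
    return math.exp(-math.pi * x * x)

P, x = 1.5, 0.3
N = 30  # truncation; the Gaussian tails are negligible

# left-hand side of Eq.2: periodization of s
lhs = sum(s(x + n * P) for n in range(-N, N + 1))

# right-hand side: Fourier series with coefficients S[k] = S(k/P)/P,
# written in real form since s (hence S) is even and real
rhs = (1 / P) * (1 + 2 * sum(math.exp(-math.pi * (k / P) ** 2) *
                             math.cos(2 * math.pi * k * x / P)
                             for k in range(1, N + 1)))

print(lhs, rhs)  # agree to machine precision
```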

Distributional formulation

These equations can be interpreted in the language of distributions[3][4](§7.2) for a function [math]\displaystyle{ s }[/math] whose derivatives are all rapidly decreasing (see Schwartz function). The Poisson summation formula arises as a particular case of the Convolution Theorem on tempered distributions, using the Dirac comb distribution and its Fourier series:

[math]\displaystyle{ \sum_{n=-\infty}^\infty \delta(x - nT) \equiv \sum_{k=-\infty}^\infty \frac{1}{T}\cdot e^{i 2\pi \frac{k}{T} x} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}\cdot \sum_{k=-\infty}^{\infty} \delta (f - k/T). }[/math]

In other words, the periodization of a Dirac delta [math]\displaystyle{ \delta, }[/math] resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, this is again a Dirac comb, but with reciprocal increments.

For the case [math]\displaystyle{ T = 1, }[/math] Eq.1 readily follows:

[math]\displaystyle{ \begin{align} \sum_{k=-\infty}^\infty S(k) &= \sum_{k=-\infty}^\infty \left(\int_{-\infty}^{\infty} s(x)\ e^{-i 2\pi k x} dx \right) = \int_{-\infty}^{\infty} s(x) \underbrace{\left(\sum_{k=-\infty}^\infty e^{-i 2\pi k x}\right)}_{\sum_{n=-\infty}^\infty \delta(x-n)} dx \\ &= \sum_{n=-\infty}^\infty \left(\int_{-\infty}^{\infty} s(x)\ \delta(x-n)\ dx \right) = \sum_{n=-\infty}^\infty s(n). \end{align} }[/math]

Similarly:

[math]\displaystyle{ \begin{align} \sum_{k=-\infty}^{\infty} S(f + k/T) &= \sum_{k=-\infty}^{\infty} \mathcal{F}\left \{ s(x)\cdot e^{-i 2\pi\frac{k}{T}x}\right \}\\ &= \mathcal{F} \bigg \{s(x)\underbrace{\sum_{k=-\infty}^{\infty} e^{-i 2\pi\frac{k}{T}x}}_{T \sum_{n=-\infty}^{\infty} \delta(x-nT)}\bigg \} = \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(x-nT)\right \}\\ &= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \mathcal{F}\left \{\delta(x-nT)\right \} = \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot e^{-i 2\pi nT f}. \end{align} }[/math]

Or:[5]

[math]\displaystyle{ \begin{align} \sum_{k=-\infty}^{\infty} S(f - k/T) &= S(f) * \sum_{k=-\infty}^{\infty} \delta(f - k/T) \\ &= S(f) * \mathcal{F}\left \{T \sum_{n=-\infty}^{\infty} \delta(x-nT)\right \} \\ &= \mathcal{F}\left \{s(x)\cdot T \sum_{n=-\infty}^{\infty} \delta(x-nT)\right \} = \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(x-nT)\right \} \quad \text{as above}. \end{align} }[/math]

The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as[6] [math]\displaystyle{ 0 \to \Z \to \R \to \R / \Z \to 0. }[/math]

Elliptic interpretation of the Poisson formula

Basic Elliptic Formulas

The number-theoretic form of the Poisson summation formula states that the value of the Jacobi theta function at the complementary elliptic nome, divided by its value at the elliptic nome itself, is exactly equal to the square root of the real period ratio:

[math]\displaystyle{ \frac{\vartheta _{00}\bigl[q'(k)\bigr]}{\vartheta _{00}\bigl[q(k)\bigr]} = \biggl[ \frac{K' (k)}{K(k)} \biggr]^{1/2} }[/math]

Both the nome and the complementary nome, as well as the period ratio, are defined via the complete elliptic integral of the first kind, given by the following integral formulas:

[math]\displaystyle{ K(\varepsilon) =\int_{0}^{1} \frac{2}{\sqrt{(x^2 + 1)^2 - 4\,\varepsilon^2 x^2}} \,\mathrm{d}x }[/math]
[math]\displaystyle{ K'(\varepsilon) =\int_{0}^{1} \frac{2}{\sqrt{(x^2 - 1)^2 + 4\,\varepsilon^2 x^2} } \,\mathrm{d}x }[/math]

Alternatively, the integrals [math]\displaystyle{ K }[/math] and [math]\displaystyle{ K' }[/math] can be defined via the central binomial coefficient; the following series are identical to the integral formulas just given:

[math]\displaystyle{ K(\varepsilon) = \frac{\pi}{2} \biggl\{1 + \biggl[ \sum_{n = 1}^{\infty} \frac{\operatorname{CBC}(n)^2}{16^n} \, \varepsilon^{2n} \biggr]\biggr\} }[/math]
[math]\displaystyle{ K'(\varepsilon) = \frac{\pi}{2} \biggl\{1 + \biggl[ \sum_{n = 1}^{\infty} \frac{\operatorname{CBC}(n)^2}{16^n} \, (1 - \varepsilon^2)^{n} \biggr]\biggr\} }[/math]

The complementary integral [math]\displaystyle{ K' }[/math] is defined as the K-integral of the Pythagorean complementary modulus:

[math]\displaystyle{ K'(k) = K(\sqrt{1 - k^2}\,) }[/math]
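The equivalence of the integral and series definitions of [math]\displaystyle{ K }[/math] can be checked numerically. In this sketch the quadrature rule (composite Simpson) and the truncation depths are illustrative choices:

```python
import math

def K_series(eps, terms=80):
    # K(eps) = (pi/2) * (1 + sum_{n>=1} CBC(n)^2 / 16^n * eps^(2n)),
    # with CBC(n) = C(2n, n) the central binomial coefficient
    total = 1.0
    for n in range(1, terms):
        total += (math.comb(2 * n, n) ** 2 / 16.0 ** n) * eps ** (2 * n)
    return math.pi / 2 * total

def K_integral(eps, m=20000):
    # composite Simpson's rule applied to the first integral formula above
    def g(x):
        return 2.0 / math.sqrt((x * x + 1) ** 2 - 4 * eps * eps * x * x)
    h = 1.0 / m
    acc = g(0.0) + g(1.0)
    for i in range(1, m):
        acc += (4 if i % 2 else 2) * g(i * h)
    return acc * h / 3

eps = 0.5
print(K_series(eps), K_integral(eps))       # the two definitions agree
print(K_series(math.sqrt(1 - eps ** 2)))    # = K'(eps), by the Pythagorean relation
```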

The following Maclaurin series defines the standard Jacobi theta function:

[math]\displaystyle{ \vartheta_{00}(w)= 1 + 2\sum_{n = 1}^{\infty} w^{n^2} = 1 + 2w + 2w^4 + 2w^9 + 2w^{16} + 2w^{25} +\ldots }[/math]

Elliptic Proof

The Jacobi theta function satisfies the identity:

[math]\displaystyle{ \vartheta_{00}\bigl[q(k)\bigr] = \vartheta_{00}\biggl\{ \exp\biggl[ -\pi\,\frac{K'(k)}{K (k)} \biggr] \biggr\} = \sum_{n = -\infty}^{\infty} \exp\biggl[ -n^2 \pi\,\frac{K'(k)}{K (k)} \biggr] = \biggl[ \frac{2}{\pi} K(k) \biggr]^{1/2} }[/math]

Replacing the elliptic modulus [math]\displaystyle{ k }[/math] in the inner function by the Pythagorean complementary modulus [math]\displaystyle{ k' = \sqrt{1 - k^2} }[/math] yields:

[math]\displaystyle{ \vartheta_{00}\bigl[q'(k)\bigr] = \vartheta_{00}\biggl\{ \exp\biggl[ -\pi\,\frac{K(k)}{K '(k)} \biggr] \biggr\} = \sum_{n = -\infty}^{\infty} \exp\biggl[-n^2 \pi\,\frac{K(k)}{K '(k)}\biggr] = \biggl[ \frac{2}{\pi} K'(k) \biggr]^{1/2} }[/math]

This is how the number theory-based Poisson summation formula emerges:

[math]\displaystyle{ \frac{\vartheta _{00}\bigl[q'(k)\bigr]}{\vartheta _{00}\bigl[q(k)\bigr]} = \biggl[ \frac{K' (k)}{K(k)} \biggr]^{1/2} }[/math]

The quotient of the last two formulas directly yields Poisson's formula in the form shown above.

Displaying the result as follows makes the real period ratio explicit at every position:

[math]\displaystyle{ \frac{\vartheta _{00}\bigl\{ \exp\bigl[ -\pi\, K(k) \div K'(k) \bigr] \bigr\}}{\vartheta_{00}\bigl\{ \exp\bigl[ -\pi\, K'(k) \div K(k) \bigr] \bigr\}} = \biggl[ \frac{K' (k)}{K(k)} \biggr]^{1/2} }[/math]

Now, substituting the period ratio by a parameter [math]\displaystyle{ p = K'(k) \div K(k) }[/math] frees Poisson's formula from the elliptic integrals entirely:

[math]\displaystyle{ \frac{\vartheta _{00}\bigl[ \exp(-\pi \div p) \bigr]}{\vartheta _{00}\bigl[ \exp(-\pi \times p) \bigr ]} = \sqrt{p} }[/math]

In this way the result from the previous sections of the article is recovered with the help of elliptic integrals.
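The final, integral-free form lends itself to a direct numerical check. In this sketch the truncation depth and the value p = 2 are arbitrary choices:

```python
import math

def theta00(w, terms=60):
    # Jacobi theta function: 1 + 2 * sum_{n>=1} w^(n^2)
    return 1.0 + 2.0 * sum(w ** (n * n) for n in range(1, terms))

p = 2.0  # any period ratio p > 0 works
ratio = theta00(math.exp(-math.pi / p)) / theta00(math.exp(-math.pi * p))
print(ratio, math.sqrt(p))  # the quotient of theta values equals sqrt(p)
```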

Applicability

Eq.2 holds provided [math]\displaystyle{ s(x) }[/math] is a continuous integrable function which satisfies [math]\displaystyle{ |s(x)| + |S(x)| \le C (1+|x|)^{-1-\delta} }[/math] for some [math]\displaystyle{ C \gt 0,\delta \gt 0 }[/math] and every [math]\displaystyle{ x. }[/math][7][8] Note that such an [math]\displaystyle{ s(x) }[/math] is uniformly continuous; this, together with the decay assumption on [math]\displaystyle{ s }[/math], shows that the series defining [math]\displaystyle{ s_{_P} }[/math] converges uniformly to a continuous function. Eq.2 holds in the strong sense that both sides converge uniformly and absolutely to the same limit.[8]

Eq.2 holds in a pointwise sense under the strictly weaker assumption that [math]\displaystyle{ s }[/math] has bounded variation and[2] [math]\displaystyle{ 2 \cdot s(x)=\lim_{\varepsilon\to 0} s(x+\varepsilon) + \lim_{\varepsilon\to 0} s(x-\varepsilon). }[/math] The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums.

As shown above, Eq.2 holds under the much less restrictive assumption that [math]\displaystyle{ s(x) }[/math] is in [math]\displaystyle{ L^1(\mathbb{R}) }[/math], but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of [math]\displaystyle{ s_{_P}(x). }[/math][2] In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way Eq.2, case [math]\displaystyle{ x=0, }[/math] holds under the less restrictive conditions that [math]\displaystyle{ s(x) }[/math] is integrable and 0 is a point of continuity of [math]\displaystyle{ s_{_P}(x) }[/math]. However Eq.2 may fail to hold even when both [math]\displaystyle{ s }[/math] and [math]\displaystyle{ S }[/math] are integrable and continuous, and the sums converge absolutely.[9]

Applications

Method of images

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on [math]\displaystyle{ \mathbb{R}^2 }[/math] is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions.[7] In one dimension, the resulting solution is called a theta function.

In electrodynamics, the method is also used to accelerate the computation of periodic Green's functions.[10]

Sampling

In the statistical study of time-series, if [math]\displaystyle{ s }[/math] is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function [math]\displaystyle{ s }[/math] is band-limited, meaning that there is some cutoff frequency [math]\displaystyle{ f_o }[/math] such that [math]\displaystyle{ S(f) }[/math] is zero for frequencies exceeding the cutoff: [math]\displaystyle{ S(f)=0 }[/math] for [math]\displaystyle{ |f|\gt f_o. }[/math] For band-limited functions, choosing the sampling rate [math]\displaystyle{ \tfrac{1}{T} \gt 2 f_o }[/math] guarantees that no information is lost: [math]\displaystyle{ S }[/math] can be reconstructed from the sampled values, and then, by Fourier inversion, so can [math]\displaystyle{ s. }[/math] This leads to the Nyquist–Shannon sampling theorem.[1]
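The role of the sampling rate can be illustrated with Eq.3 at f = 0. In the sketch below, the band-limited function sinc²(x), its triangular transform, and the chosen rate are illustrative assumptions; because the shifted spectral copies S(f + k/T) do not overlap, the weighted sample sum reproduces S(0) exactly:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def s(x):
    # s(x) = sinc^2(x) is band-limited: its Fourier transform is the
    # triangle S(f) = max(0, 1 - |f|), which vanishes for |f| > f_o = 1
    return sinc(x) ** 2

T = 0.25    # sampling rate 1/T = 4 > 2*f_o, so the spectral copies do not overlap
N = 200000  # sinc^2 decays like 1/x^2, so the truncated tail is O(1/N)
total = T * sum(s(n * T) for n in range(-N, N + 1))
print(total)  # ~ 1.0 = S(0): every shifted copy S(k/T) with k != 0 vanishes at f = 0
```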

Ewald summation

Computationally, the Poisson summation formula is useful since a slowly converging summation in real space can be converted into a quickly converging equivalent summation in Fourier space.[11] (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation.

Approximations of integrals

The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. Consider an approximation of [math]\displaystyle{ S(0)=\int_{-\infty}^\infty dx \, s(x) }[/math] as [math]\displaystyle{ \delta \sum_{n=-\infty}^\infty s(n \delta) }[/math], where [math]\displaystyle{ \delta \ll 1 }[/math] is the size of the bin. Then, according to Eq.2, this approximation coincides with [math]\displaystyle{ \sum_{k=-\infty}^\infty S(k/ \delta) }[/math]. The error in the approximation can then be bounded as [math]\displaystyle{ \left| \sum_{k \ne 0} S(k/ \delta) \right| \le \sum_{k \ne 0} | S(k/ \delta)| }[/math]. This bound is particularly useful when the Fourier transform of [math]\displaystyle{ s(x) }[/math] decays rapidly, since for [math]\displaystyle{ 1/\delta \gg 1 }[/math] it is evaluated only at large arguments.
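A minimal numerical illustration of this bound (the Gaussian and the bin size are arbitrary choices; δ = 1 is taken large enough that the error is visible above machine precision):

```python
import math

def s(x):
    return math.exp(-x * x)  # integrates to sqrt(pi) over the real line

def S(f):
    # Fourier transform of exp(-x^2): sqrt(pi) * exp(-pi^2 f^2)
    return math.sqrt(math.pi) * math.exp(-math.pi ** 2 * f * f)

delta = 1.0  # bin size of the Riemann sum
riemann = delta * sum(s(n * delta) for n in range(-50, 51))
error = abs(riemann - S(0))   # S(0) is the exact integral
leading = 2 * S(1 / delta)    # dominant term of sum_{k != 0} |S(k/delta)|
print(error, leading)         # the error matches the leading Poisson term
```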

Lattice points in a sphere

The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points in a large Euclidean sphere. It can also be used to show that if an integrable function [math]\displaystyle{ s }[/math] and its Fourier transform [math]\displaystyle{ S }[/math] both have compact support, then [math]\displaystyle{ s = 0. }[/math][1]

Number theory

In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function.[12]

One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians. Put [math]\displaystyle{ q= e^{i\pi \tau } }[/math] for [math]\displaystyle{ \tau }[/math] a complex number in the upper half plane, and define the theta function:

[math]\displaystyle{ \theta ( \tau) = \sum_n q^{n^2}. }[/math]

The relation between [math]\displaystyle{ \theta (-1/\tau) }[/math] and [math]\displaystyle{ \theta (\tau) }[/math] turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing [math]\displaystyle{ s(x)= e^{-\pi x^2} }[/math] and using the fact that [math]\displaystyle{ S(f) = e^{-\pi f ^2}, }[/math] one can conclude:

[math]\displaystyle{ \theta \left({-1\over\tau}\right) = \sqrt{\tau \over i} \theta (\tau), }[/math] which follows by applying Eq.1 to the dilated Gaussian [math]\displaystyle{ s(x/\lambda) = e^{-\pi x^2/\lambda^2} }[/math] and putting [math]\displaystyle{ {1/\lambda} = \sqrt{\tau/i}, }[/math] first for [math]\displaystyle{ \tau }[/math] on the positive imaginary axis and then by analytic continuation.

It follows from this that [math]\displaystyle{ \theta^8 }[/math] has a simple transformation property under [math]\displaystyle{ \tau \mapsto {-1/ \tau} }[/math], and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight squares.

Sphere packings

Cohn & Elkies[13] proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to proofs of the optimal sphere packings in dimensions 8 and 24.

Other

  • Let [math]\displaystyle{ s(t) = e^{-2xt} }[/math] for [math]\displaystyle{ t \geq 0 }[/math] and [math]\displaystyle{ s(t) = 0 }[/math] for [math]\displaystyle{ t \lt 0 }[/math] (taking the average value [math]\displaystyle{ s(0) = \tfrac{1}{2} }[/math] at the jump) to get [math]\displaystyle{ \coth(x) = x\sum_{n \in \Z} \frac{1}{x^2+\pi^2n^2} = \frac{1}{x}+ 2x \sum_{n \in \Z_+} \frac{1}{x^2+\pi^2n^2}. }[/math]
  • It can be used to prove the functional equation for the theta function.
  • Poisson's summation formula appears in Ramanujan's notebooks and can be used to prove some of his formulas; in particular, it can be used to prove one of the formulas in Ramanujan's first letter to Hardy.
  • It can be used to calculate the quadratic Gauss sum.
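The first identity above can be verified numerically. In this sketch the truncation depth N is an arbitrary choice that controls the O(1/N) error of the tail:

```python
import math

x = 1.0
N = 1000000  # the truncated tail contributes about 2*x / (pi^2 * N)
series = 1 / x + 2 * x * sum(1.0 / (x * x + math.pi ** 2 * n * n)
                             for n in range(1, N + 1))
print(series, math.cosh(x) / math.sinh(x))  # both ~ coth(1) = 1.3130...
```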

Generalizations

The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let [math]\displaystyle{ \Lambda }[/math] be the lattice in [math]\displaystyle{ \mathbb{R}^d }[/math] consisting of points with integer coordinates. For a function [math]\displaystyle{ s }[/math] in [math]\displaystyle{ L^1(\mathbb{R}^d) }[/math], consider the series given by summing the translates of [math]\displaystyle{ s }[/math] by elements of [math]\displaystyle{ \Lambda }[/math]:

[math]\displaystyle{ \sum_{\nu\in\Lambda} s(x+\nu). }[/math]

Theorem. For [math]\displaystyle{ s }[/math] in [math]\displaystyle{ L^1(\mathbb{R}^d) }[/math], the above series converges pointwise almost everywhere, and thus defines a [math]\displaystyle{ \Lambda }[/math]-periodic function [math]\displaystyle{ \mathbb{P}s. }[/math] [math]\displaystyle{ \mathbb{P}s }[/math] lies in [math]\displaystyle{ L^1 }[/math] with [math]\displaystyle{ \| \mathbb{P}s \|_1 \le \| s \|_1. }[/math]
Moreover, for all [math]\displaystyle{ \nu }[/math] in [math]\displaystyle{ \Lambda, }[/math] the Fourier coefficient of [math]\displaystyle{ \mathbb{P}s }[/math] at [math]\displaystyle{ \nu }[/math] (Fourier transform on the torus [math]\displaystyle{ \mathbb{R}^d/\Lambda }[/math]) equals [math]\displaystyle{ S(\nu) }[/math] (Fourier transform on [math]\displaystyle{ \mathbb{R}^d }[/math]).

When [math]\displaystyle{ s }[/math] is in addition continuous, and both [math]\displaystyle{ s }[/math] and [math]\displaystyle{ S }[/math] decay sufficiently fast at infinity, then one can "invert" the domain back to [math]\displaystyle{ \mathbb{R}^d }[/math] and make a stronger statement. More precisely, if

[math]\displaystyle{ |s(x)| + |S(x)| \le C (1+|x|)^{-d-\delta} }[/math]

for some C, δ > 0, then[8](VII §2) [math]\displaystyle{ \sum_{\nu\in\Lambda} s(x+\nu) = \sum_{\nu\in\Lambda} S(\nu) e^{i 2\pi x\cdot\nu}, }[/math] where both series converge absolutely and uniformly on Λ. When d = 1 and x = 0, this gives Eq.1 above.
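A two-dimensional sketch of this statement, using the self-dual Gaussian and the integer lattice (the evaluation point is an arbitrary choice):

```python
import cmath, math

def periodization(x, y, N=20):
    # left-hand side: sum of translates of the self-dual Gaussian
    # s(v) = exp(-pi |v|^2) over the integer lattice Z^2
    return sum(math.exp(-math.pi * ((x + m) ** 2 + (y + n) ** 2))
               for m in range(-N, N + 1) for n in range(-N, N + 1))

def fourier_series(x, y, N=20):
    # right-hand side: Fourier series with coefficients S(nu) = exp(-pi |nu|^2)
    total = sum(math.exp(-math.pi * (m * m + n * n)) *
                cmath.exp(2j * math.pi * (x * m + y * n))
                for m in range(-N, N + 1) for n in range(-N, N + 1))
    return total.real  # imaginary parts cancel by symmetry

print(periodization(0.3, 0.6), fourier_series(0.3, 0.6))  # equal
```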

More generally, a version of the statement holds if Λ is replaced by a more general lattice in [math]\displaystyle{ \mathbb{R}^d }[/math]. The dual lattice Λ′ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sum of delta-functions at each point of Λ, and at each point of Λ′, are again Fourier transforms as distributions, subject to correct normalization.

This is applied in the theory of theta functions, and is a possible method in geometry of numbers. In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so that the left-hand side of the summation formula is what is sought and the right-hand side is something that can be attacked by mathematical analysis.

Selberg trace formula

Further generalization to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character.

A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups [math]\displaystyle{ G }[/math] with a discrete subgroup [math]\displaystyle{ \Gamma }[/math] such that [math]\displaystyle{ G/\Gamma }[/math] has finite volume. For example, [math]\displaystyle{ G }[/math] can be the real points of [math]\displaystyle{ SL_n }[/math] and [math]\displaystyle{ \Gamma }[/math] can be the integral points of [math]\displaystyle{ SL_n }[/math]. In this setting, [math]\displaystyle{ G }[/math] plays the role of the real number line in the classical version of Poisson summation, and [math]\displaystyle{ \Gamma }[/math] plays the role of the integers [math]\displaystyle{ n }[/math] that appear in the sum. The generalised version of Poisson summation is called the Selberg Trace Formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of [math]\displaystyle{ G }[/math], and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of [math]\displaystyle{ \Gamma }[/math], and is called "the geometric side."

The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.

Convolution theorem

The Poisson summation formula is a particular case of the convolution theorem on tempered distributions. If one of the two factors is the Dirac comb, one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform, the function that is constantly 1, this yields the Dirac comb identity.

References

  1. Pinsky, M. (2002), Introduction to Fourier Analysis and Wavelets, Brooks Cole, ISBN 978-0-534-37660-4.
  2. Zygmund, Antoni (1968), Trigonometric Series (2nd ed.), Cambridge University Press (published 1988), ISBN 978-0-521-35885-9.
  3. Córdoba, A., "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I 306: 373–376.
  4. Hörmander, L. (1983), The Analysis of Linear Partial Differential Operators I, Grundl. Math. Wissenschaft. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8.
  5. Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999). Discrete-Time Signal Processing (2nd ed.). Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-754920-2. https://archive.org/details/discretetimesign00alan. "samples of the Fourier transform of an aperiodic sequence x[n] can be thought of as DFS coefficients of a periodic sequence obtained through summing periodic replicas of x[n]."
  6. Deitmar, Anton; Echterhoff, Siegfried (2014), Principles of Harmonic Analysis, Universitext (2nd ed.), Springer, doi:10.1007/978-3-319-05792-7, ISBN 978-3-319-05791-0.
  7. Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Pearson Education, pp. 253–257, ISBN 0-13-035399-X.
  8. Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9. https://archive.org/details/introductiontofo0000stei.
  9. Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis (2nd corrected ed.), New York: Dover Publications, ISBN 0-486-63331-4.
  10. Kinayman, Noyan; Aksun, M. I. (1995). "Comparative study of acceleration techniques for integrals and series in electromagnetic problems". Radio Science 30 (6): 1713–1722. doi:10.1029/95RS02060. Bibcode: 1995RaSc...30.1713K.
  11. Woodward, Philipp M. (1953). Probability and Information Theory, with Applications to Radar. Academic Press, p. 36.
  12. Edwards, H. M. (1974). Riemann's Zeta Function. Academic Press, pp. 209–211. ISBN 0-486-41740-9.
  13. Cohn, Henry; Elkies, Noam (2003), "New upper bounds on sphere packings I", Ann. of Math. 157 (2): 689–714, doi:10.4007/annals.2003.157.689.
