# Fourier analysis

Bass guitar time signal of an open-string A note (55 Hz), and its Fourier transform. Fourier analysis reveals the oscillatory components of signals and functions.

In mathematics, Fourier analysis (/ˈfʊrieɪ, -iər/)[1] is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.

The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
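The analysis/synthesis round trip for a musical note can be sketched with NumPy's FFT; the note, sample rate, and overtone structure below are invented purely for illustration:

```python
import numpy as np

# A hypothetical sampled "musical note": 220 Hz fundamental plus one overtone.
fs = 8000                       # sample rate in Hz (an assumed value)
t = np.arange(fs) / fs          # one second of samples
note = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t)

# Fourier analysis: which component frequencies are present?
spectrum = np.fft.rfft(note)
freqs = np.fft.rfftfreq(note.size, d=1/fs)
peaks = freqs[np.abs(spectrum) > 0.1 * np.abs(spectrum).max()]
print(peaks)                    # the component frequencies: 220 Hz and 440 Hz

# Fourier synthesis: rebuild the same sound from those components.
resynth = np.fft.irfft(spectrum, n=note.size)
print(np.allclose(note, resynth))  # True: analysis followed by synthesis is lossless
```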

The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.

To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis.[2][3] Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.[4]

## Applications

Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.

This wide applicability stems from many useful properties of the transforms:

• The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem).
• The transforms are usually invertible.
• The exponential functions are eigenfunctions of differentiation, so this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones.
• By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which provides an efficient way to compute convolution-based operations such as signal filtering and polynomial multiplication.
• The discrete version of the Fourier transform can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.

In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. Because a computer carries out these Fourier calculations rapidly, a computer-operated FT-IR instrument can produce, in a matter of seconds, an infrared absorption pattern comparable to that of a prism instrument.[9]

Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
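A minimal sketch of this DCT-based idea on a single 8×8 block follows; the test block, the discard threshold, and the explicit basis-matrix construction are illustrative only, not JPEG's actual quantization tables:

```python
import numpy as np

# Orthonormal DCT-II basis matrix for an 8-point transform: C @ x is the 1-D DCT of x.
N = 8
n = np.arange(N)
C = np.sqrt(2/N) * np.cos(np.pi * (2*n[None, :] + 1) * n[:, None] / (2*N))
C[0] *= np.sqrt(1/2)            # the k=0 row gets the usual 1/sqrt(2) scaling

block = np.outer(np.cos(2*np.pi*n/8), np.ones(8)) * 100   # a smooth test "image" block

coeffs = C @ block @ C.T          # 2-D DCT of the block (rows, then columns)
coeffs[np.abs(coeffs) < 1.0] = 0  # eliminate weak components ("quantization")
approx = C.T @ coeffs @ C         # inverse 2-D DCT reconstructs the block

print(np.max(np.abs(approx - block)))   # small reconstruction error
```

Because the basis is orthonormal, the reconstruction error is bounded by the energy of the discarded coefficients, which is why dropping weak components costs little visual quality.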

In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.

When a function $\displaystyle{ s(t) }$ is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function $\displaystyle{ S(f) }$ at frequency $\displaystyle{ f }$ represents the amplitude of a frequency component whose initial phase is given by the angle of $\displaystyle{ S(f) }$ (polar coordinates).

Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.

When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.[10]
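One such transform–manipulate–invert chain, removing a narrowband hum from a compound waveform, can be sketched as follows (the sample rate, signal, and interference are invented for illustration):

```python
import numpy as np

# Transform, manipulate simply, reverse: remove a 50 Hz hum from a signal.
fs = 1000                                # sample rate in Hz (assumed)
t = np.arange(2*fs) / fs                 # two seconds of samples
signal = np.sin(2*np.pi*3*t)             # the wanted low-frequency component
hum = 0.8*np.sin(2*np.pi*50*t)           # narrowband interference

S = np.fft.rfft(signal + hum)            # Fourier-transform the compound waveform
freqs = np.fft.rfftfreq(t.size, d=1/fs)
S[np.abs(freqs - 50) < 1] = 0            # zero the narrow band around 50 Hz
cleaned = np.fft.irfft(S, n=t.size)      # reverse the transformation

print(np.max(np.abs(cleaned - signal)))  # the hum is almost entirely removed
```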


## Variants of Fourier analysis

A Fourier transform and 3 variations caused by periodic sampling (at interval T) and/or periodic summation (at interval P) of the underlying time-domain function. The relative computational ease of the DFT sequence and the insight it gives into S(f) make it a popular analysis tool.

### (Continuous) Fourier transform

Main page: Fourier transform

Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t), and the domain of the output (final) function is ordinary frequency, the transform of function s(t) at frequency f is given by the complex number:

$\displaystyle{ S(f) = \int_{-\infty}^{\infty} s(t) \cdot e^{- i2\pi f t} \, dt. }$

Evaluating this quantity for all values of f produces the frequency-domain function. Then s(t) can be represented as a recombination of complex exponentials of all possible frequencies:

$\displaystyle{ s(t) = \int_{-\infty}^{\infty} S(f) \cdot e^{i2\pi f t} \, df, }$

which is the inverse transform formula. The complex number, S(f), conveys both amplitude and phase of frequency f.

See Fourier transform for much more information, including:

• conventions for amplitude normalization and frequency scaling/units
• transform properties
• tabulated transforms of specific functions
• an extension/generalization for functions of multiple dimensions, such as images.
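The transform pair above can be checked numerically. As a sketch, the Gaussian e^(−πt²) is its own Fourier transform; the step size and truncation interval below are assumptions of the approximation:

```python
import numpy as np

# Numerically approximate S(f) = integral of s(t) e^{-i 2 pi f t} dt
# for s(t) = exp(-pi t^2), which is its own Fourier transform.
dt = 0.01
t = np.arange(-10, 10, dt)           # truncated domain; the tails are negligible
s = np.exp(-np.pi * t**2)

def fourier(f):
    # Riemann-sum approximation of the analysis integral at frequency f
    return np.sum(s * np.exp(-2j*np.pi*f*t)) * dt

for f in (0.0, 0.5, 1.0):
    print(f, fourier(f).real, np.exp(-np.pi * f**2))   # the two columns agree
```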

### Fourier series

Main page: Fourier series

The Fourier transform of a periodic function, sP(t), with period P, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

$\displaystyle{ S[k] = \frac{1}{P}\int_{P} s_P(t)\cdot e^{-i2\pi \frac{k}{P} t}\, dt, \quad k\in\Z, }$     (where $\displaystyle{ \int_{P} }$ is the integral over any interval of length P).

The inverse transform, known as Fourier series, is a representation of sP(t) in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:

$\displaystyle{ s_P(t)\ \ =\ \ \mathcal{F}^{-1}\left\{\sum_{k=-\infty}^{+\infty} S[k]\, \delta \left(f-\frac{k}{P}\right)\right\}\ \ =\ \ \sum_{k=-\infty}^\infty S[k]\cdot e^{i2\pi \frac{k}{P} t}. }$

Any sP(t) can be expressed as a periodic summation of another function, s(t):

$\displaystyle{ s_P(t) \,\triangleq\, \sum_{m=-\infty}^\infty s(t-mP), }$

and the coefficients are proportional to samples of S(f) at discrete intervals of 1/P:

$\displaystyle{ S[k] =\frac{1}{P}\cdot S\left(\frac{k}{P}\right). }$[upper-alpha 1]

Note that any s(t) whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering s(t) (and therefore S(f)) from just these samples (i.e. from the Fourier series) is that the non-zero portion of s(t) be confined to a known interval of duration P, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
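The coefficient formula and the resynthesis sum can be sketched numerically for a period-1 square wave; the grid resolution and number of harmonics below are arbitrary choices:

```python
import numpy as np

# Fourier-series sketch: coefficients of a period-P square wave, then resynthesis.
P = 1.0
t = np.linspace(0, P, 1000, endpoint=False)
sP = np.where(t < P/2, 1.0, -1.0)        # one period of a square wave

def coeff(k):
    # S[k] = (1/P) * integral over one period of sP(t) e^{-i 2 pi k t / P}
    return np.mean(sP * np.exp(-2j*np.pi*k*t/P))

# Partial Fourier series: sum of harmonics k = -K..K
K = 50
approx = sum(coeff(k) * np.exp(2j*np.pi*k*t/P) for k in range(-K, K+1)).real

print(abs(coeff(1)))                                  # ~ 2/pi for a square wave
print(np.max(np.abs(approx[100:400] - sP[100:400])))  # small away from the jumps
```

Note the reconstruction converges to the square wave except near its discontinuities, where the familiar Gibbs ripples remain.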

### Discrete-time Fourier transform (DTFT)

Main page: Discrete-time Fourier transform

The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:

$\displaystyle{ S_\frac{1}{T}(f)\ \triangleq\ \underbrace{\sum_{k=-\infty}^{\infty} S\left(f - \frac{k}{T}\right) \equiv \overbrace{\sum_{n=-\infty}^{\infty} s[n] \cdot e^{-i2\pi f n T}}^{\text{Fourier series (DTFT)}}}_{\text{Poisson summation formula}} = \mathcal{F} \left \{ \sum_{n=-\infty}^{\infty} s[n]\ \delta(t-nT)\right \},\, }$

which is known as the DTFT. Thus the DTFT of the s[n] sequence is also the Fourier transform of the modulated Dirac comb function.[upper-alpha 2]

The Fourier series coefficients (and inverse transform) are defined by:

$\displaystyle{ s[n]\ \triangleq\ T \int_\frac{1}{T} S_\frac{1}{T}(f)\cdot e^{i2\pi f nT} \,df = T \underbrace{\int_{-\infty}^{\infty} S(f)\cdot e^{i2\pi f nT} \,df}_{\triangleq\, s(nT)}. }$

Parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, s[n], is proportional to samples of an underlying continuous function, s(t), one can observe a periodic summation of the continuous Fourier transform, S(f). Note that any s(t) with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover S(f) and s(t) exactly. A sufficient condition for perfect recovery is that the non-zero portion of S(f) be confined to a known frequency interval of width 1/T. When that interval is [−1/2T, 1/2T], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.

Another reason to be interested in S1/T(f) is that it often provides insight into the amount of aliasing caused by the sampling process.
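A minimal numerical illustration of this periodicity and the aliasing it reflects (the sample rate and test frequencies are chosen arbitrarily):

```python
import numpy as np

# Aliasing sketch: two different continuous frequencies give identical samples
# when they differ by a multiple of the sampling rate 1/T.
T = 1/100                      # sampling interval (i.e., 100 samples per second)
n = np.arange(32)
x1 = np.cos(2*np.pi*10*n*T)    # a 10 Hz sinusoid, sampled
x2 = np.cos(2*np.pi*110*n*T)   # 110 Hz = 10 Hz + 1/T, sampled

print(np.allclose(x1, x2))     # True: the samples cannot tell them apart

# The DTFT is periodic with period 1/T, reflecting exactly this ambiguity.
def dtft(x, f):
    return np.sum(x * np.exp(-2j*np.pi*f*n*T))

print(np.isclose(dtft(x1, 10.0), dtft(x1, 110.0)))  # True: same value one period apart
```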

Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:

• normalized frequency units
• windowing (finite-length sequences)
• transform properties
• tabulated transforms of specific functions

### Discrete Fourier transform (DFT)

Main page: Discrete Fourier transform

Similar to a Fourier series, the DTFT of a periodic sequence, $\displaystyle{ s_N[n] }$, with period $\displaystyle{ N }$, becomes a Dirac comb function, modulated by a sequence of complex coefficients (see DTFT § Periodic data):

$\displaystyle{ S[k] = \sum_n s_N[n]\cdot e^{-i2\pi \frac{k}{N} n}, \quad k\in\Z, }$     (where Σn is the sum over any sequence of length N).

The S[k] sequence is what is customarily known as the DFT of one cycle of sN. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform, also known as a discrete Fourier series, is given by:

$\displaystyle{ s_N[n] = \frac{1}{N} \sum_{k} S[k]\cdot e^{i2\pi \frac{n}{N}k}, }$   where Σk is the sum over any sequence of length N.

When sN[n] is expressed as a periodic summation of another function:

$\displaystyle{ s_N[n]\, \triangleq\, \sum_{m=-\infty}^{\infty} s[n-mN], }$   and   $\displaystyle{ s[n]\, \triangleq\, s(nT), }$[upper-alpha 3]

the coefficients are proportional to samples of S1/T(f) at discrete intervals of 1/P = 1/NT:

$\displaystyle{ S[k] = \frac{1}{T}\cdot S_\frac{1}{T}\left(\frac{k}{P}\right). }$[upper-alpha 4]

Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, S1/T(f), it can be done by computing the relatively simple DFT of sN[n], as defined above. In most cases, N is chosen equal to the length of the non-zero portion of s[n]. Increasing N, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of S1/T(f). Decreasing N causes overlap (adding) in the time domain (analogous to aliasing), which corresponds to decimation in the frequency domain (see Discrete-time Fourier transform § L=N×I). In most cases of practical interest, the s[n] sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.

The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
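The zero-padding behavior described above can be sketched with NumPy's FFT (the sequence length and padding factor are arbitrary choices):

```python
import numpy as np

# Zero-padding sketch: a longer DFT yields more closely spaced samples of the
# same underlying DTFT of a finite sequence s[n].
rng = np.random.default_rng(0)
s = rng.standard_normal(8)            # a finite-length test sequence

S8 = np.fft.fft(s)                    # 8 samples of one DTFT cycle
S32 = np.fft.fft(s, n=32)             # 32 samples of the same cycle (zero-padded)

# Every 4th sample of the padded DFT coincides with the unpadded DFT,
# because both sample the same continuous DTFT cycle.
print(np.allclose(S32[::4], S8))      # True
```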

• transform properties
• applications
• tabulated transforms of specific functions

### Summary

For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.

It is common in practice for the duration of s(•) to be limited to the period, P or N.  But these formulas do not require that condition.

**s(t) transforms (continuous-time)**

Transform (continuous frequency):

$\displaystyle{ S(f)\, \triangleq\, \int_{-\infty}^{\infty} s(t) \cdot e^{-i2\pi f t} \,dt }$

Transform (discrete frequencies):

$\displaystyle{ \overbrace{\frac{1}{P}\cdot S\left(\frac{k}{P}\right)}^{S[k]}\, \triangleq\, \frac{1}{P} \int_{-\infty}^{\infty} s(t) \cdot e^{-i2\pi \frac{k}{P} t}\,dt \equiv \frac{1}{P} \int_P s_P(t) \cdot e^{-i2\pi \frac{k}{P} t} \,dt }$

Inverse (continuous frequency):

$\displaystyle{ s(t) = \int_{-\infty}^{\infty} S(f) \cdot e^{ i2\pi f t}\, df }$

Inverse (discrete frequencies):

$\displaystyle{ \underbrace{s_P(t) = \sum_{k=-\infty}^{\infty} S[k] \cdot e^{i2\pi \frac{k}{P} t}}_{\text{Poisson summation formula (Fourier series)}}\, }$

**s(nT) transforms (discrete-time)**

Transform (continuous frequency):

$\displaystyle{ \underbrace{\frac{1}{T} S_\frac{1}{T}(f)\, \triangleq\, \sum_{n=-\infty}^{\infty} s(nT)\cdot e^{-i2\pi f nT}}_{\text{Poisson summation formula (DTFT)}} }$

Transform (discrete frequencies):

$\displaystyle{ \begin{align} \overbrace{\frac{1}{T} S_\frac{1}{T}\left(\frac{k}{NT}\right)}^{S[k]}\, &\triangleq\, \sum_{n=-\infty}^{\infty} s(nT)\cdot e^{-i2\pi \frac{kn}{N}}\\ &\equiv \underbrace{\sum_{n} s_P(nT)\cdot e^{-i2\pi \frac{kn}{N}}}_{\text{DFT}}\, \end{align} }$

Inverse (continuous frequency):

$\displaystyle{ s(nT) = T \int_\frac{1}{T} \frac{1}{T} S_\frac{1}{T}(f)\cdot e^{i2\pi f nT} \,df }$

$\displaystyle{ \sum_{n=-\infty}^{\infty} s(nT)\cdot \delta(t-nT) = \underbrace{\int_{-\infty}^{\infty} \frac{1}{T}\ S_\frac{1}{T}(f)\cdot e^{i2\pi f t}\,df}_{\text{inverse Fourier transform}}\, }$

Inverse (discrete frequencies):

$\displaystyle{ \begin{align} s_P(nT) &= \overbrace{\frac{1}{N} \sum_{k} S[k]\cdot e^{i2\pi \frac{kn}{N}}}^{\text{inverse DFT}}\\ &= \tfrac{1}{P} \sum_{k} S_\frac{1}{T}\left(\frac{k}{P}\right)\cdot e^{i2\pi \frac{kn}{N}} \end{align} }$

## Symmetry properties

When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[11]

$\displaystyle{ \begin{array}{rccccccccc} \text{Time domain} & s & = & s_{_{\text{RE}}} & + & s_{_{\text{RO}}} & + & i s_{_{\text{IE}}} & + & \underbrace{i\ s_{_{\text{IO}}}} \\ &\Bigg\Updownarrow\mathcal{F} & &\Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F}\\ \text{Frequency domain} & S & = & S_\text{RE} & + & \overbrace{\,i\ S_\text{IO}\,} & + & i S_\text{IE} & + & S_\text{RO} \end{array} }$

From this, various relationships are apparent, for example:

• The transform of a real-valued function (sRE + sRO) is the even symmetric function SRE + i SIO. Conversely, an even-symmetric transform implies a real-valued time-domain.
• The transform of an imaginary-valued function (i sIE + i sIO) is the odd symmetric function SRO + i SIE, and the converse is true.
• The transform of an even-symmetric function (sRE + i sIO) is the real-valued function SRE + SRO, and the converse is true.
• The transform of an odd-symmetric function (sRO + i sIE) is the imaginary-valued function i SIE + i SIO, and the converse is true.
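These relationships can be observed numerically in the DFT, the discrete analog of the transform above (the test data here are arbitrary):

```python
import numpy as np

# Symmetry sketch: a real-valued input produces a Hermitian-symmetric
# (conjugate-even) transform, and a real even input produces a purely
# real transform.
rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N)            # real-valued: s_IE = s_IO = 0
X = np.fft.fft(x)
print(np.allclose(X[1:], np.conj(X[1:][::-1])))   # True: X[k] = conj(X[-k])

e = x + np.roll(x[::-1], 1)           # e[n] = x[n] + x[(-n) mod N]: real and even
print(np.allclose(np.fft.fft(e).imag, 0))         # True: real even -> real transform
```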

## History

An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).[12][13][14][15]

The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle § Mathematical formalism).

In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,[16] which has been described as the first formula for the DFT,[17] and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string.[17] Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.[18] Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.[17]

An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:[19] Lagrange transformed the roots x1, x2, x3 into the resolvents:

$\displaystyle{ \begin{align} r_1 &= x_1 + x_2 + x_3\\ r_2 &= x_1 + \zeta x_2 + \zeta^2 x_3\\ r_3 &= x_1 + \zeta^2 x_2 + \zeta x_3 \end{align} }$

where ζ is a cubic root of unity, which is the DFT of order 3.
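The identification with the order-3 DFT can be checked directly; with NumPy's sign convention this requires taking ζ = e^(−2πi/3) (the root values below are illustrative):

```python
import numpy as np

# The resolvents above are an (unnormalized) DFT of order 3 applied to the
# roots (x1, x2, x3), with zeta = exp(-2*pi*i/3) in NumPy's sign convention.
roots = np.array([1.0, 2.0, 3.0])      # illustrative values for x1, x2, x3
zeta = np.exp(-2j*np.pi/3)

r1 = roots[0] + roots[1] + roots[2]
r2 = roots[0] + zeta*roots[1] + zeta**2*roots[2]
r3 = roots[0] + zeta**2*roots[1] + zeta*roots[2]

print(np.allclose([r1, r2, r3], np.fft.fft(roots)))   # True
```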

A number of authors, notably Jean le Rond d'Alembert and Carl Friedrich Gauss, used trigonometric series to study the heat equation,[20] but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.

Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.[17]

The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.

The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbits of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.[18][16]

## Time–frequency transforms

In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.

As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
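The short-time Fourier transform, the simplest of these, can be sketched by transforming windowed segments of a signal; the chirp-like test signal, window length, and Hann window below are all assumed choices:

```python
import numpy as np

# Short-time Fourier transform sketch: trade frequency resolution for some
# time resolution by transforming short windowed segments of the signal.
fs = 1000
t = np.arange(fs) / fs
# A signal whose frequency changes halfway through: 50 Hz, then 200 Hz.
x = np.where(t < 0.5, np.sin(2*np.pi*50*t), np.sin(2*np.pi*200*t))

win = 128                                   # window length in samples
frames = x[:len(x)//win*win].reshape(-1, win)
stft = np.fft.rfft(frames * np.hanning(win), axis=1)
freqs = np.fft.rfftfreq(win, d=1/fs)

dominant = freqs[np.argmax(np.abs(stft), axis=1)]
print(dominant)    # low frequency in the early frames, high in the later ones
```

A plain Fourier transform of x would show both frequencies but not *when* each occurs; the frame-by-frame dominant frequencies recover that timing, at the cost of frequency resolution set by the window length.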

## Fourier transforms on arbitrary locally compact abelian topological groups

The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also Pontryagin duality for the generalized underpinnings of the Fourier transform.

More specifically, Fourier analysis can be done on cosets,[21] even discrete cosets.

## Notes

1. $\displaystyle{ \int_{P} \left(\sum_{m=-\infty}^{\infty} s(t-mP)\right) \cdot e^{-i2\pi \frac{k}{P} t} \,dt = \underbrace{\int_{-\infty}^{\infty} s(t) \cdot e^{-i2\pi \frac{k}{P} t} \,dt}_{\triangleq\, S\left(\frac{k}{P}\right)} }$
2. We may also note that:
$\displaystyle{ \begin{align} \sum_{n=-\infty}^{+\infty} T\cdot s(nT) \delta(t-nT) &= \sum_{n=-\infty}^{+\infty} T\cdot s(t) \delta(t-nT) \\ &= s(t)\cdot T \sum_{n=-\infty}^{+\infty} \delta(t-nT). \end{align} }$
Consequently, a common practice is to model "sampling" as a multiplication by the Dirac comb function, which of course is only "possible" in a purely mathematical sense.

3. Note that this definition intentionally differs from the DTFT section by a factor of T. This facilitates the "$\displaystyle{ s(nT) }$ transforms" table. Alternatively, $\displaystyle{ s[n] }$ can be defined as $\displaystyle{ T\cdot s(nT), }$ in which case $\displaystyle{ S[k] = S_\frac{1}{T}\left(\frac{k}{P}\right). }$
4. $\displaystyle{ \sum_{n=0}^{N-1} \left(\sum_{m=-\infty}^{\infty} s([n-mN]T)\right) \cdot e^{-i2\pi \frac{k}{N} n} = \underbrace{\sum_{n=-\infty}^{\infty} s(nT) \cdot e^{-i2\pi \frac{k}{N} n}}_{\triangleq\, \frac{1}{T} S_\frac{1}{T}\left(\frac{k}{NT}\right)} }$

## References

1.
2. D. Scott Birney; David Oesper; Guillermo Gonzalez (2006). Observational Astronomy. Cambridge University Press. ISBN 0-521-85370-2.
3. Press (2007). Numerical Recipes (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8.
4. Rudin, Walter (1990). Fourier Analysis on Groups. Wiley-Interscience. ISBN 978-0-471-52364-2.
5. Evans, L. (1998). Partial Differential Equations. American Mathematical Society. ISBN 978-3-540-76124-2.
6. Knuth, Donald E. (1997). The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley Professional. Section 4.3.3.C: Discrete Fourier transforms, p. 305. ISBN 978-0-201-89684-8.
7. Conte, S. D.; de Boor, Carl (1980). Elementary Numerical Analysis (3rd ed.). New York: McGraw Hill, Inc. ISBN 978-0-07-066228-5.
8. Saferstein, Richard (2013). Criminalistics: An Introduction to Forensic Science.
9. Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ. ISBN 9780139141010.
10. Proakis, John G.; Manolakis, Dimitri G. (1996). Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.). New Jersey: Prentice-Hall International. p. 291. ISBN 9780133942897.
11. Prestini, Elena (2004). The Evolution of Applied Harmonic Analysis: Models of the Real World. Birkhäuser. p. 62. ISBN 978-0-8176-4125-2.
12. Rota, Gian-Carlo; Palombi, Fabrizio (1997). Indiscrete Thoughts. Birkhäuser. p. 11. ISBN 978-0-8176-3866-5.
13.
14. "Analyzing shell structure from Babylonian and modern times". International Journal of Modern Physics E 13 (1): 247. 2004. doi:10.1142/S0218301304002028. Bibcode2004IJMPE..13..247B.
15.
16. Briggs, William L.; Henson, Van Emden (1995). The DFT: An Owner's Manual for the Discrete Fourier Transform. SIAM. pp. 2–4. ISBN 978-0-89871-342-8.
17. Heideman, M.T.; Johnson, D. H.; Burrus, C. S. (1984). "Gauss and the history of the fast Fourier transform". IEEE ASSP Magazine 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257.
18. Knapp, Anthony W. (2006). Basic Algebra. Springer. p. 501. ISBN 978-0-8176-3248-9.
19. Narasimhan, T.N. (February 1999). "Fourier's heat conduction equation: History, influence, and connections". Reviews of Geophysics 37 (1): 151–172. doi:10.1029/1998RG900006. ISSN 1944-9208. OCLC 5156426043. Bibcode: 1999RvGeo..37..151N.
20. Forrest, Brian (1998). "Fourier Analysis on Coset Spaces". Rocky Mountain Journal of Mathematics 28. doi:10.1216/rmjm/1181071828.