Stationary phase approximation


In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential. This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.[1] It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, alternating between constructive and destructive interference so that their net contribution largely cancels; only near points where the phase is stationary does a net contribution survive.

Formula

Letting [math]\displaystyle{ \Sigma }[/math] denote the set of critical points of the function [math]\displaystyle{ f }[/math] (i.e. points where [math]\displaystyle{ \nabla f =0 }[/math]), under the assumption that [math]\displaystyle{ g }[/math] is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. [math]\displaystyle{ \det(\mathrm{Hess}(f(x_0)))\neq 0 }[/math] for [math]\displaystyle{ x_0 \in \Sigma }[/math]) we have the following asymptotic formula, as [math]\displaystyle{ k\to \infty }[/math]:

[math]\displaystyle{ \int_{\mathbb{R}^n}g(x)e^{ikf(x)} dx=\sum_{x_0\in \Sigma} e^{ik f(x_0)}|\det({\mathrm{Hess}}(f(x_0)))|^{-1/2}e^{\frac{i\pi}{4} \mathrm{sgn}(\mathrm{Hess}(f(x_0)))}(2\pi/k)^{n/2}g(x_0)+o(k^{-n/2}) }[/math]

Here [math]\displaystyle{ \mathrm{Hess}(f) }[/math] denotes the Hessian of [math]\displaystyle{ f }[/math], and [math]\displaystyle{ \mathrm{sgn}(\mathrm{Hess}(f)) }[/math] denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.

For [math]\displaystyle{ n=1 }[/math], this reduces to:

[math]\displaystyle{ \int_\mathbb{R}g(x)e^{ikf(x)}dx=\sum_{x_0\in \Sigma} g(x_0)e^{ik f(x_0)+\mathrm{sign}(f''(x_0))i\pi/4}\left(\frac{2\pi}{k |f''(x_0)|}\right)^{1/2}+o(k^{-1/2}) }[/math]

In this case the assumptions on [math]\displaystyle{ f }[/math] reduce to all the critical points being non-degenerate.

This is just the Wick-rotated version of the formula for the method of steepest descent.
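The one-dimensional formula can be checked numerically. The sketch below (assuming NumPy is available; the choices [math]\displaystyle{ g(x)=e^{-x^2} }[/math], [math]\displaystyle{ f(x)=x^2 }[/math] and the value of [math]\displaystyle{ k }[/math] are illustrative, not from the text) compares a direct Riemann sum of the oscillatory integral with the stationary-phase prediction; [math]\displaystyle{ f }[/math] has a single nondegenerate critical point at [math]\displaystyle{ x_0=0 }[/math] with [math]\displaystyle{ f(x_0)=0 }[/math] and [math]\displaystyle{ f''(x_0)=2 }[/math].

```python
import numpy as np

# Illustrative choices: g(x) = exp(-x^2) (rapid decay), f(x) = x^2 with one
# nondegenerate critical point at x0 = 0, where f(x0) = 0 and f''(x0) = 2.
k = 200.0
x = np.linspace(-8.0, 8.0, 4_000_001)
dx = x[1] - x[0]

# Direct numerical evaluation of the oscillatory integral (the integrand is
# negligible at the endpoints, so a plain Riemann sum suffices).
numeric = np.sum(np.exp(-x**2 + 1j * k * x**2)) * dx

# Stationary phase prediction:
#   g(x0) * exp(i k f(x0) + i*pi/4 * sign(f''(x0))) * sqrt(2*pi / (k |f''(x0)|))
approx = 1.0 * np.exp(1j * np.pi / 4) * np.sqrt(2 * np.pi / (k * 2.0))

rel_err = abs(numeric - approx) / abs(numeric)
print(rel_err)  # small, shrinking like 1/k
```

Raising [math]\displaystyle{ k }[/math] shrinks the relative error roughly in proportion to [math]\displaystyle{ 1/k }[/math], consistent with the [math]\displaystyle{ o(k^{-1/2}) }[/math] remainder.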

An example

Consider a function

[math]\displaystyle{ f(x,t) = \frac{1}{2\pi} \int_{\mathbb R} F(\omega) e^{i [k(\omega) x - \omega t]} \, d\omega }[/math].

The phase term in this function, [math]\displaystyle{ \phi = k(\omega) x - \omega t }[/math], is stationary when

[math]\displaystyle{ \frac{d}{d\omega}\mathopen{}\left(k(\omega) x - \omega t\right)\mathclose{} = 0 }[/math]

or equivalently,

[math]\displaystyle{ \frac{d k(\omega)}{d\omega}\Big|_{\omega = \omega_0} = \frac{t}{x} }[/math].

Solutions to this equation yield dominant frequencies [math]\displaystyle{ \omega_0 }[/math] for some [math]\displaystyle{ x }[/math] and [math]\displaystyle{ t }[/math]. If we expand [math]\displaystyle{ \phi }[/math] as a Taylor series about [math]\displaystyle{ \omega_0 }[/math] and neglect terms of order higher than [math]\displaystyle{ (\omega-\omega_0)^2 }[/math], we have

[math]\displaystyle{ \phi = \left[k(\omega_0) x - \omega_0 t\right] + \frac{1}{2} x k''(\omega_0) (\omega - \omega_0)^2 + \cdots }[/math]

where [math]\displaystyle{ k'' }[/math] denotes the second derivative of [math]\displaystyle{ k }[/math]. When [math]\displaystyle{ x }[/math] is relatively large, even a small difference [math]\displaystyle{ (\omega-\omega_0) }[/math] will generate rapid oscillations within the integral, leading to cancellation. Therefore only frequencies near [math]\displaystyle{ \omega_0 }[/math] contribute, and the limits of integration can be extended beyond the interval on which the Taylor expansion is valid. Using the formula

[math]\displaystyle{ \int_{\mathbb R} e^{\frac{1}{2}ic x^2} d x=\sqrt{\frac{2i\pi}{c}}=\sqrt{\frac{2\pi}{|c|}}e^{\pm i\frac{\pi}{4}} }[/math],

and approximating [math]\displaystyle{ F(\omega) }[/math] by its value at the stationary point, we obtain

[math]\displaystyle{ f(x, t) \approx \frac{1}{2\pi} e^{i \left[k(\omega_0) x - \omega_0 t\right]} \left|F(\omega_0)\right| \int_{\mathbb R} e^{\frac{1}{2} i x k''(\omega_0) (\omega - \omega_0)^2} \, d\omega }[/math].

This integrates to

[math]\displaystyle{ f(x, t) \approx \frac{\left|F(\omega_0)\right|}{2\pi} \sqrt{\frac{2\pi}{x \left|k''(\omega_0)\right|}} \cos\left[k(\omega_0) x - \omega_0 t \pm \frac{\pi}{4}\right] }[/math].
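This approximation can be tested numerically on a concrete dispersion relation. The sketch below (assuming NumPy; the dispersion relation [math]\displaystyle{ k(\omega)=\omega^2 }[/math] and the Gaussian spectrum are hypothetical choices, not from the text) locates the dominant frequency from [math]\displaystyle{ dk/d\omega = t/x }[/math] and compares the full integral with its stationary-phase value:

```python
import numpy as np

# Hypothetical dispersion relation k(w) = w^2, so dk/dw = 2w and k''(w) = 2.
x, t = 2000.0, 2000.0
w0 = t / (2 * x)  # stationary point: dk/dw = t/x  =>  w0 = t/(2x) = 0.5

# Hypothetical narrow Gaussian spectrum centered on w0.
sigma = 0.2
F = lambda w: np.exp(-(w - w0)**2 / (2 * sigma**2))

# Direct evaluation of f(x,t) = (1/2pi) * integral of F(w) e^{i(k(w)x - wt)} dw.
w = np.linspace(w0 - 1.0, w0 + 1.0, 2_000_001)
dw = w[1] - w[0]
numeric = np.sum(F(w) * np.exp(1j * (w**2 * x - w * t))) * dw / (2 * np.pi)

# Stationary phase: since x * k''(w0) = 2x > 0 the phase factor is +pi/4, and
#   f ~ (1/2pi) F(w0) e^{i(k(w0)x - w0 t + pi/4)} sqrt(2 pi / (x k''(w0))).
phase0 = w0**2 * x - w0 * t
approx = (F(w0) * np.exp(1j * (phase0 + np.pi / 4))
          * np.sqrt(2 * np.pi / (x * 2.0)) / (2 * np.pi))

rel_err = abs(numeric - approx) / abs(approx)
print(rel_err)  # agreement improves as x grows
```

For this particular [math]\displaystyle{ k(\omega) }[/math] the phase is exactly quadratic in [math]\displaystyle{ \omega }[/math], so the residual error comes only from the variation of [math]\displaystyle{ F }[/math] near [math]\displaystyle{ \omega_0 }[/math].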

Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of [math]\displaystyle{ I(k)=\int g(x)e^{ikf(x)}\,dx }[/math] depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity; see for example the Riemann–Lebesgue lemma.
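This localisation can be seen numerically. In the sketch below (assuming NumPy; the choices [math]\displaystyle{ f(x)=x }[/math], which has no critical point, and a Gaussian amplitude [math]\displaystyle{ g }[/math] are hypothetical), the integral becomes negligible as [math]\displaystyle{ k }[/math] grows:

```python
import numpy as np

# f(x) = x has no critical point (f' = 1 everywhere), and g(x) = exp(-x^2)
# is a smooth rapidly decaying amplitude, so I(k) -> 0 as k -> infinity.
x = np.linspace(-8.0, 8.0, 1_000_001)
dx = x[1] - x[0]

def I(k):
    # Riemann-sum evaluation of the integral of g(x) e^{i k f(x)} dx.
    return np.sum(np.exp(-x**2) * np.exp(1j * k * x)) * dx

small_k, large_k = abs(I(2.0)), abs(I(40.0))
print(small_k, large_k)  # the k = 40 value is essentially zero
```

With a Gaussian amplitude the decay is in fact faster than any power of [math]\displaystyle{ 1/k }[/math], which is stronger than the Riemann–Lebesgue lemma alone guarantees.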

The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by

[math]\displaystyle{ (x_1^2 + x_2^2 + \cdots + x_j^2) - (x_{j + 1}^2 + x_{j + 2}^2 + \cdots + x_n^2) }[/math].

The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of xi. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take

[math]\displaystyle{ g(x) = \prod_i h(x_i) }[/math],

then Fubini's theorem reduces I(k) to a product of integrals over the real line like

[math]\displaystyle{ J(k) = \int h(x) e^{i k f(x)} \, dx }[/math]

with [math]\displaystyle{ f(x) = \pm x^2 }[/math]. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).

One-dimensional case

The essential statement is this one:

[math]\displaystyle{ \int_{-1}^1 e^{i k x^2} \, dx = \sqrt{\frac{\pi}{k}} e^{i \pi / 4} + \mathcal O \mathopen{}\left(\frac{1}{k}\right)\mathclose{} }[/math].

In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range [math]\displaystyle{ (-\infty, \infty) }[/math] (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, [math]\displaystyle{ [1,\infty) }[/math].[2]
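The model estimate is easy to check numerically. In this sketch (assuming NumPy; the value of [math]\displaystyle{ k }[/math] is an arbitrary illustration), a trapezoidal sum of the left-hand side is compared with the leading term [math]\displaystyle{ \sqrt{\pi/k}\,e^{i\pi/4} }[/math]; the discrepancy is of order [math]\displaystyle{ 1/k }[/math], as claimed:

```python
import numpy as np

k = 1000.0
x = np.linspace(-1.0, 1.0, 2_000_001)
dx = x[1] - x[0]
y = np.exp(1j * k * x**2)

# Trapezoidal sum of the model integral over [-1, 1].
numeric = (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx

# Leading term of the asymptotic expansion.
leading = np.sqrt(np.pi / k) * np.exp(1j * np.pi / 4)

err = abs(numeric - leading)
print(err, 1.0 / k)  # err is comparable to 1/k
```

The O(1/k) discrepancy is exactly the contribution of the truncated tails, which integration by parts bounds by a multiple of [math]\displaystyle{ 1/k }[/math].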

This is the model for all one-dimensional integrals [math]\displaystyle{ I(k) }[/math] with [math]\displaystyle{ f }[/math] having a single non-degenerate critical point at which [math]\displaystyle{ f }[/math] has second derivative [math]\displaystyle{ \gt 0 }[/math]. In fact the model case has second derivative 2 at 0. In order to scale using [math]\displaystyle{ k }[/math], observe that replacing [math]\displaystyle{ k }[/math] by [math]\displaystyle{ ck }[/math] where [math]\displaystyle{ c }[/math] is constant is the same as scaling [math]\displaystyle{ x }[/math] by [math]\displaystyle{ \sqrt{c} }[/math]. It follows that for general values of [math]\displaystyle{ f''(0)\gt 0 }[/math], the factor [math]\displaystyle{ \sqrt{\pi/k} }[/math] becomes

[math]\displaystyle{ \sqrt{\frac{2 \pi}{k f''(0)}} }[/math].

For [math]\displaystyle{ f''(0)\lt 0 }[/math] one uses the complex conjugate formula, as mentioned before.

Lower-order terms

As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved [math]\displaystyle{ f }[/math].

Notes

  1. Courant, Richard; Hilbert, David (1953), Methods of mathematical physics, 1 (2nd revised ed.), New York: Interscience Publishers, p. 474, OCLC 505700 
  2. See for example Jean Dieudonné, Infinitesimal Calculus, p. 119, or Jean Dieudonné, Calcul Infinitésimal, p. 135.

References

  • Bleistein, N. and Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover, New York.
  • Victor Guillemin and Shlomo Sternberg (1990), Geometric Asymptotics, (see Chapter 1).
  • Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6 .
  • Aki, Keiiti; Richards, Paul G. (2002), Quantitative Seismology (2nd ed.), pp. 255–256, University Science Books, ISBN 0-935702-96-2.
  • Wong, R. (2001), Asymptotic Approximations of Integrals, Classics in Applied Mathematics, Vol. 34. Corrected reprint of the 1989 original. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. xviii+543 pages, ISBN 0-89871-497-4.
  • Dieudonné, J. (1980), Calcul Infinitésimal, Hermann, Paris
