Series acceleration

In mathematics, series acceleration is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used to obtain a variety of identities involving special functions; for example, the Euler transform applied to the hypergeometric series yields some of the classic, well-known hypergeometric series identities.

Definition

Given a sequence

[math]\displaystyle{ S=\{ s_n \}_{n\in\N} }[/math]

having a limit

[math]\displaystyle{ \lim_{n\to\infty} s_n = \ell, }[/math]

an accelerated series is a second sequence

[math]\displaystyle{ S'=\{ s'_n \}_{n\in\N} }[/math]

which converges faster to [math]\displaystyle{ \ell }[/math] than the original sequence, in the sense that

[math]\displaystyle{ \lim_{n\to\infty} \frac{s'_n-\ell}{s_n-\ell} = 0. }[/math]
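
For example, if [math]\displaystyle{ s_n = \ell + \tfrac{1}{n} }[/math] and [math]\displaystyle{ s'_n = \ell + \tfrac{1}{n^2} }[/math], then [math]\displaystyle{ (s'_n-\ell)/(s_n-\ell) = \tfrac{1}{n} \to 0 }[/math], so the sequence [math]\displaystyle{ S' }[/math] converges to [math]\displaystyle{ \ell }[/math] faster than [math]\displaystyle{ S }[/math] does.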

If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit [math]\displaystyle{ \ell }[/math].

The mapping from the original series to the transformed one may be linear (as defined in the article on sequence transformations) or non-linear; in general, the non-linear sequence transformations tend to be more powerful.

Overview

Two classical techniques for series acceleration are Euler's transformation of series[1] and Kummer's transformation of series.[2] A variety of faster-converging and special-purpose tools were developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon algorithm, given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method, or WZ method.

For alternating series, several powerful techniques, offering convergence rates from [math]\displaystyle{ 5.828^{-n} }[/math] all the way to [math]\displaystyle{ 17.93^{-n} }[/math] for a summation of [math]\displaystyle{ n }[/math] terms, are described by Cohen et al.[3]
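
For illustration, the Python sketch below follows "Algorithm 1" of the cited paper, the simplest of these schemes, which builds its weights from a Chebyshev-polynomial recurrence; it is applied here to the alternating series for [math]\displaystyle{ \ln 2 }[/math], chosen purely as a test case. With [math]\displaystyle{ n }[/math] terms its error shrinks roughly like [math]\displaystyle{ 5.828^{-n} }[/math].

  from math import sqrt, log

  def accelerate_alternating(a, n):
      """Estimate sum_{k>=0} (-1)^k a(k) from n terms, following Algorithm 1
      of Cohen, Rodriguez Villegas and Zagier, "Convergence Acceleration of
      Alternating Series" (2000)."""
      d = (3.0 + sqrt(8.0)) ** n
      d = (d + 1.0 / d) / 2.0
      b, c, s = -1.0, -d, 0.0
      for k in range(n):
          c = b - c                  # weight c_k of the k-th term
          s += c * a(k)
          b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1.0))
      return s / d

  # Test case: ln 2 = 1 - 1/2 + 1/3 - 1/4 + ...
  a = lambda k: 1.0 / (k + 1)
  for n in (5, 10, 15):
      print(n, abs(accelerate_alternating(a, n) - log(2.0)))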

Euler's transform

A basic example of a linear sequence transformation offering improved convergence is Euler's transform. It is intended to be applied to an alternating series and is given by

[math]\displaystyle{ \sum_{n=0}^\infty (-1)^n a_n = \sum_{n=0}^\infty (-1)^n \frac{(\Delta^n a)_0}{2^{n+1}} }[/math]

where [math]\displaystyle{ \Delta }[/math] is the forward difference operator, for which one has the formula

[math]\displaystyle{ (\Delta^n a)_0 = \sum_{k=0}^n (-1)^k {n \choose k} a_{n-k}. }[/math]

If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
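
As a numerical check, the Python sketch below evaluates both sides of the identity above term by term for the alternating series [math]\displaystyle{ \ln 2 = \sum_{n=0}^\infty (-1)^n/(n+1) }[/math], chosen here purely as an example: with 15 terms the transformed partial sums reach errors of order [math]\displaystyle{ 10^{-6} }[/math], while the direct partial sums are still off by a few percent.

  import numpy as np
  from math import comb, log

  N = 15  # number of terms used on each side

  # Terms of the alternating series  ln 2 = sum_{n>=0} (-1)^n / (n+1)
  a = [1.0 / (n + 1) for n in range(N)]

  def forward_diff_at_0(a, n):
      """(Delta^n a)_0 = sum_{k=0}^{n} (-1)^k C(n, k) a_{n-k}."""
      return sum((-1) ** k * comb(n, k) * a[n - k] for k in range(n + 1))

  # Left-hand side: direct partial sums.  Right-hand side: Euler transform.
  direct = np.cumsum([(-1) ** n * a[n] for n in range(N)])
  euler = np.cumsum([(-1) ** n * forward_diff_at_0(a, n) / 2.0 ** (n + 1)
                     for n in range(N)])

  target = log(2.0)
  for n in range(N):
      print(f"{n:2d}  direct error {abs(direct[n] - target):.1e}"
            f"  Euler error {abs(euler[n] - target):.1e}")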

A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.[4]

Conformal mappings

A series

[math]\displaystyle{ S = \sum_{n=0}^{\infty} a_n }[/math]

can be written as f(1), where the function f is defined as

[math]\displaystyle{ f(z) = \sum_{n=0}^{\infty} a_n z^n. }[/math]

The function f(z) can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point z = 1 is close to or on the boundary of the disk of convergence, the series for S will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to z = 1 ends up deeper in the new disk of convergence.

The conformal transform [math]\displaystyle{ z = \Phi(w) }[/math] needs to be chosen such that [math]\displaystyle{ \Phi(0) = 0 }[/math], and one usually chooses a function that has a finite derivative at w = 0. One can assume that [math]\displaystyle{ \Phi(1) = 1 }[/math] without loss of generality, as one can always rescale w to redefine [math]\displaystyle{ \Phi }[/math]. We then consider the function

[math]\displaystyle{ g(w) = f(\Phi(w)). }[/math]

Since [math]\displaystyle{ \Phi(1) = 1 }[/math], we have f(1) = g(1). We can obtain the series expansion of g(w) by substituting [math]\displaystyle{ z = \Phi(w) }[/math] into the series expansion of f(z); because [math]\displaystyle{ \Phi(0)=0 }[/math], the first n terms of the series expansion of f(z) determine the first n terms of the series expansion of g(w), provided [math]\displaystyle{ \Phi'(0) \neq 0 }[/math]. Setting w = 1 in that series expansion thus yields a series which, if it converges, converges to the same value as the original series.
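
For illustration (the function and the mapping below are choices made for this example, not prescribed by the method), take [math]\displaystyle{ f(z) = (1+z)^{-1/2} }[/math], whose series at z = 1 converges slowly because of the branch point at z = -1. The mapping [math]\displaystyle{ \Phi(w) = 4cw/(1-cw)^2 }[/math] with [math]\displaystyle{ c = 3-2\sqrt{2} }[/math] satisfies [math]\displaystyle{ \Phi(0)=0 }[/math] and [math]\displaystyle{ \Phi(1)=1 }[/math] and pulls the branch point back to [math]\displaystyle{ w = -1/c \approx -5.83 }[/math], so the point w = 1 lies well inside the new disk of convergence and the re-expanded series converges geometrically. The Python sketch below builds the first N coefficients of g(w) by truncated power-series composition.

  import numpy as np
  from math import comb, sqrt

  N = 12  # number of series coefficients kept

  # Taylor coefficients of f(z) = (1 + z)^(-1/2):  a_n = (-1)^n C(2n, n) / 4^n
  a = np.array([(-1) ** n * comb(2 * n, n) / 4.0 ** n for n in range(N)])
  direct = np.cumsum(a)  # partial sums of the original series at z = 1

  # Conformal map Phi(w) = 4 c w / (1 - c w)^2, c = 3 - 2 sqrt(2):
  # Phi(0) = 0, Phi(1) = 1, branch point z = -1 pulled back to w = -1/c.
  c = 3.0 - 2.0 * sqrt(2.0)
  phi = np.array([0.0] + [4.0 * k * c ** k for k in range(1, N)])  # Taylor coeffs of Phi

  def mul_trunc(p, q):
      """Product of two truncated Taylor series, keeping N coefficients."""
      return np.convolve(p, q)[:N]

  # g(w) = f(Phi(w)); since Phi(0) = 0, only a_0 .. a_(N-1) enter the
  # first N Taylor coefficients of g.
  g = np.zeros(N)
  phi_pow = np.zeros(N)
  phi_pow[0] = 1.0  # Phi(w)^0 = 1
  for n in range(N):
      g += a[n] * phi_pow
      phi_pow = mul_trunc(phi_pow, phi)  # Phi(w)^(n+1)
  mapped = np.cumsum(g)  # partial sums of g at w = 1

  target = 1.0 / sqrt(2.0)
  for n in range(N):
      print(f"{n:2d}  direct error {abs(direct[n] - target):.1e}"
            f"  mapped error {abs(mapped[n] - target):.1e}")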

Non-linear sequence transformations

Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.

In particular, nonlinear sequence transformations often provide powerful numerical methods for the summation of divergent series or asymptotic series, such as those arising in perturbation theory, and may be used as highly effective extrapolation methods.

Aitken method

Main page: Aitken's delta-squared process

A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,

[math]\displaystyle{ \mathbb{A} : S \to S'=\mathbb{A}(S) = \{ s'_n \}_{n\in\N} }[/math]

defined by

[math]\displaystyle{ s'_n = s_{n+2} - \frac{(s_{n+2}-s_{n+1})^2}{s_{n+2}-2s_{n+1}+s_n}. }[/math]

This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
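A minimal Python sketch of this formula is given below; it is applied, purely as an illustration, to the partial sums of the Leibniz series [math]\displaystyle{ \pi/4 = 1 - \tfrac{1}{3} + \tfrac{1}{5} - \cdots }[/math], and the transformation is then applied a second time to its own output. Each pass reduces the error by a few orders of magnitude.

  import numpy as np

  def aitken(s):
      """One pass of the Aitken delta-squared transformation:
      s'_n = s_{n+2} - (s_{n+2} - s_{n+1})^2 / (s_{n+2} - 2 s_{n+1} + s_n)."""
      s = np.asarray(s, dtype=float)
      return s[2:] - (s[2:] - s[1:-1]) ** 2 / (s[2:] - 2.0 * s[1:-1] + s[:-2])

  # Partial sums of the Leibniz series  pi/4 = 1 - 1/3 + 1/5 - ...
  N = 20
  s = np.cumsum([(-1) ** n / (2.0 * n + 1.0) for n in range(N)])

  target = np.pi / 4.0
  print("direct error      :", abs(s[-1] - target))
  print("one Aitken pass   :", abs(aitken(s)[-1] - target))
  print("two Aitken passes :", abs(aitken(aitken(s))[-1] - target))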

References

  1. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983). "Chapter 3, eqn 3.6.27". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington, D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. LCCN 65-12253. ISBN 978-0-486-61272-0. http://www.math.sfu.ca/~cbm/aands/page_16.htm.
  2. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983). "Chapter 3, eqn 3.6.26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington, D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. LCCN 65-12253. ISBN 978-0-486-61272-0. http://www.math.sfu.ca/~cbm/aands/page_16.htm.
  3. Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, "Convergence Acceleration of Alternating Series", Experimental Mathematics, 9:1 (2000) page 3.
  4. William H. Press, et al., Numerical Recipes in C, (1987) Cambridge University Press, ISBN 0-521-43108-5 (See section 5.1).
  • C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991.
  • G. A. Baker Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996.
  • Weisstein, Eric W. "Convergence Improvement". http://mathworld.wolfram.com/ConvergenceImprovement.html.
  • Homeier, H. H. H. (2000). "Scalar Levin-type sequence transformations". Journal of Computational and Applied Mathematics 122 (1–2): 81–147. doi:10.1016/S0377-0427(00)00359-9. Bibcode: 2000JCoAM.122...81H. arXiv:math/0005209.
  • Brezinski, Claude and Redivo-Zaglia, Michela: "The genesis and early developments of Aitken's process, Shanks transformation, the [math]\displaystyle{ \epsilon }[/math]-algorithm, and related fixed point methods", Numerical Algorithms, Vol. 80, No. 1 (2019), pp. 11–133.
  • Delahaye, J. P.: "Sequence Transformations", Springer-Verlag, Berlin, ISBN 978-3540152835 (1988).
  • Sidi, Avram: "Vector Extrapolation Methods with Applications", SIAM, ISBN 978-1-61197-495-9 (2017).
  • Brezinski, Claude, Redivo-Zaglia, Michela and Saad, Yousef: "Shanks Sequence Transformations and Anderson Acceleration", SIAM Review, Vol. 60, No. 3 (2018), pp. 646–669. doi:10.1137/17M1120725.
  • Brezinski, Claude: "Reminiscences of Peter Wynn", Numerical Algorithms, Vol. 80 (2019), pp. 5–10.
  • Brezinski, Claude and Redivo-Zaglia, Michela: "Extrapolation and Rational Approximation", Springer, ISBN 978-3-030-58417-7 (2020).
