Clenshaw algorithm

In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials.[1][2] The method was published by Charles William Clenshaw in 1955. It generalizes Horner's method for evaluating a linear combination of monomials and, beyond Chebyshev polynomials, applies to any class of functions that can be defined by a three-term recurrence relation.[3]

Clenshaw algorithm

In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions [math]\displaystyle{ \phi_k(x) }[/math]:

[math]\displaystyle{ S(x) = \sum_{k=0}^n a_k \phi_k(x) }[/math]

where [math]\displaystyle{ \phi_k,\; k=0, 1, \ldots }[/math] is a sequence of functions that satisfy the linear recurrence relation

[math]\displaystyle{ \phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x), }[/math]

where the coefficients [math]\displaystyle{ \alpha_k(x) }[/math] and [math]\displaystyle{ \beta_k(x) }[/math] are known in advance.

The algorithm is most useful when [math]\displaystyle{ \phi_k(x) }[/math] are functions that are complicated to compute directly, but [math]\displaystyle{ \alpha_k(x) }[/math] and [math]\displaystyle{ \beta_k(x) }[/math] are particularly simple. In the most common applications, [math]\displaystyle{ \alpha(x) }[/math] does not depend on [math]\displaystyle{ k }[/math], and [math]\displaystyle{ \beta }[/math] is a constant that depends on neither [math]\displaystyle{ x }[/math] nor [math]\displaystyle{ k }[/math].

To perform the summation for a given series of coefficients [math]\displaystyle{ a_0, \ldots, a_n }[/math], compute the values [math]\displaystyle{ b_k(x) }[/math] by the "reverse" recurrence formula:

[math]\displaystyle{ \begin{align} b_{n+1}(x) &= b_{n+2}(x) = 0, \\ b_k(x) &= a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x). \end{align} }[/math]

Note that this computation makes no direct reference to the functions [math]\displaystyle{ \phi_k(x) }[/math]. After computing [math]\displaystyle{ b_2(x) }[/math] and [math]\displaystyle{ b_1(x) }[/math], the desired sum can be expressed in terms of them and the simplest functions [math]\displaystyle{ \phi_0(x) }[/math] and [math]\displaystyle{ \phi_1(x) }[/math]:

[math]\displaystyle{ S(x) = \phi_0(x)\,a_0 + \phi_1(x)\,b_1(x) + \beta_1(x)\,\phi_0(x)\,b_2(x). }[/math]
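As an illustration, here is a minimal Python sketch of this recurrence (a generic implementation, not taken from the cited sources; the callables `alpha` and `beta` and the values `phi0` and `phi1` are problem-specific inputs supplied by the caller):

```python
def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_{k=0}^n a[k] * phi_k(x) by Clenshaw's algorithm.

    alpha(k, x) and beta(k, x) are the coefficients in the recurrence
    phi_{k+1}(x) = alpha(k, x)*phi_k(x) + beta(k, x)*phi_{k-1}(x);
    phi0 and phi1 are the precomputed values phi_0(x) and phi_1(x).
    """
    n = len(a) - 1
    b1 = b2 = 0.0              # b_{n+1}(x) and b_{n+2}(x)
    for k in range(n, 0, -1):  # k = n, n-1, ..., 1
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    return phi0 * a[0] + phi1 * b1 + beta(1, x) * phi0 * b2
```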

See Fox and Parker[4] for more information and stability analyses.

Examples

Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form

[math]\displaystyle{ S(x) = \sum_{k=0}^n a_k x^k }[/math].

The functions are simply

[math]\displaystyle{ \begin{align} \phi_0(x) &= 1, \\ \phi_k(x) &= x^k = x\phi_{k-1}(x) \end{align} }[/math]

and are produced by the recurrence coefficients [math]\displaystyle{ \alpha(x) = x }[/math] and [math]\displaystyle{ \beta = 0 }[/math].

In this case, the recurrence formula to compute the sum is

[math]\displaystyle{ b_k(x) = a_k + x b_{k+1}(x) }[/math]

and, in this case, the sum is simply

[math]\displaystyle{ S(x) = a_0 + x b_1(x) = b_0(x) }[/math],

which is exactly the usual Horner's method.
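In Python this special case reduces to the familiar Horner loop; a minimal sketch:

```python
def horner(a, x):
    """Evaluate sum_{k=0}^n a[k] * x**k by Horner's method."""
    b = 0.0
    for coeff in reversed(a):  # b_k = a_k + x * b_{k+1}
        b = coeff + x * b
    return b                   # b_0(x) = S(x)
```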

Special case for Chebyshev series

Consider a truncated Chebyshev series

[math]\displaystyle{ p_n(x) = a_0 + a_1T_1(x) + a_2T_2(x) + \cdots + a_nT_n(x). }[/math]

The coefficients in the recursion relation for the Chebyshev polynomials are

[math]\displaystyle{ \alpha(x) = 2x, \quad \beta = -1, }[/math]

with the initial conditions

[math]\displaystyle{ T_0(x) = 1, \quad T_1(x) = x. }[/math]

Thus, the recurrence is

[math]\displaystyle{ b_k(x) = a_k + 2xb_{k+1}(x) - b_{k+2}(x) }[/math]

and the final sum is

[math]\displaystyle{ p_n(x) = a_0 + xb_1(x) - b_2(x). }[/math]

One way to evaluate this is to continue the recurrence one more step, and compute

[math]\displaystyle{ b_0(x) = a_0 + 2xb_1(x) - b_2(x), }[/math]

(note the doubled a0 coefficient) followed by

[math]\displaystyle{ p_n(x) = \frac{1}{2}\left[a_0+b_0(x) - b_2(x)\right]. }[/math]
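A minimal Python sketch of the Chebyshev case (the function name is illustrative); it uses the first form of the final step, [math]\displaystyle{ p_n(x) = a_0 + xb_1(x) - b_2(x) }[/math], though the [math]\displaystyle{ b_0 }[/math] variant gives the same result:

```python
def chebyshev_clenshaw(a, x):
    """Evaluate the truncated Chebyshev series sum_{k=0}^n a[k] * T_k(x)."""
    b1 = b2 = 0.0                       # b_{n+1}(x) and b_{n+2}(x)
    for k in range(len(a) - 1, 0, -1):  # k = n, n-1, ..., 1
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2           # p_n(x)
```

For example, `chebyshev_clenshaw([1.0, 0.0, 1.0], 0.5)` evaluates [math]\displaystyle{ 1 + T_2(0.5) = 1 + (2\cdot0.25 - 1) = 0.5 }[/math].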

Meridian arc length on the ellipsoid

Clenshaw summation is used extensively in geodetic applications.[2] A simple example is summing the trigonometric series that gives the meridian arc distance on the surface of an ellipsoid. Such series have the form

[math]\displaystyle{ m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta. }[/math]

Leaving off the initial [math]\displaystyle{ C_0\,\theta }[/math] term, the remainder is a summation of the appropriate form. There is no leading term because [math]\displaystyle{ \phi_0(\theta) = \sin 0\theta = \sin 0 = 0 }[/math].

The recurrence relation for [math]\displaystyle{ \sin k\theta }[/math] is

[math]\displaystyle{ \sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta }[/math],

making the coefficients in the recursion relation

[math]\displaystyle{ \alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1, }[/math]

and the evaluation of the series is given by

[math]\displaystyle{ \begin{align} b_{n+1}(\theta) &= b_{n+2}(\theta) = 0, \\ b_k(\theta) &= C_k + 2\cos \theta \,b_{k+1}(\theta) - b_{k+2}(\theta),\quad\mathrm{for\ } n\ge k \ge 1. \end{align} }[/math]

The final step is made particularly simple because [math]\displaystyle{ \phi_0(\theta) = \sin 0 = 0 }[/math], so the end of the recurrence is simply [math]\displaystyle{ b_1(\theta)\sin(\theta) }[/math]; the [math]\displaystyle{ C_0\,\theta }[/math] term is added separately:

[math]\displaystyle{ m(\theta) = C_0\,\theta + b_1(\theta)\sin \theta. }[/math]

Note that the algorithm requires only the evaluation of two trigonometric quantities [math]\displaystyle{ \cos \theta }[/math] and [math]\displaystyle{ \sin \theta }[/math].
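A minimal Python sketch of this evaluation (the function name and the coefficient array `c`, holding [math]\displaystyle{ C_0, \ldots, C_n }[/math], are illustrative):

```python
import math

def meridian_arc(c, theta):
    """Evaluate C_0*theta + sum_{k=1}^n C_k*sin(k*theta) by Clenshaw summation."""
    two_cos = 2.0 * math.cos(theta)     # one of only two trigonometric evaluations
    b1 = b2 = 0.0                       # b_{n+1} and b_{n+2}
    for k in range(len(c) - 1, 0, -1):  # k = n, n-1, ..., 1
        b1, b2 = c[k] + two_cos * b1 - b2, b1
    return c[0] * theta + b1 * math.sin(theta)
```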

Difference in meridian arc lengths

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write

[math]\displaystyle{ m(\theta_1)-m(\theta_2) = C_0(\theta_1-\theta_2) + \sum_{k=1}^n 2 C_k \sin\bigl({\textstyle\frac12}k(\theta_1-\theta_2)\bigr) \cos\bigl({\textstyle\frac12}k(\theta_1+\theta_2)\bigr). }[/math]

Clenshaw summation can be applied in this case[5] provided we simultaneously compute [math]\displaystyle{ m(\theta_1)+m(\theta_2) }[/math] and perform a matrix summation,

[math]\displaystyle{ \mathsf M(\theta_1,\theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2)) / 2\\ (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2), }[/math]

where

[math]\displaystyle{ \begin{align} \delta &= {\textstyle\frac12}(\theta_1-\theta_2), \\ \mu &= {\textstyle\frac12}(\theta_1+\theta_2), \\ \mathsf F_k(\theta_1,\theta_2) &= \begin{bmatrix} \cos k \delta \sin k \mu\\ \displaystyle\frac{\sin k \delta}\delta \cos k \mu \end{bmatrix}. \end{align} }[/math]

The first element of [math]\displaystyle{ \mathsf M(\theta_1,\theta_2) }[/math] is the average value of [math]\displaystyle{ m }[/math] and the second element is the average slope. [math]\displaystyle{ \mathsf F_k(\theta_1,\theta_2) }[/math] satisfies the recurrence relation

[math]\displaystyle{ \mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2) \mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2), }[/math]

where

[math]\displaystyle{ \mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix} \cos \delta \cos \mu & -\delta\sin \delta \sin \mu \\ - \displaystyle\frac{\sin \delta}\delta \sin \mu & \cos \delta \cos \mu \end{bmatrix} }[/math]

takes the place of [math]\displaystyle{ \alpha }[/math] in the recurrence relation, and [math]\displaystyle{ \beta=-1 }[/math]. The standard Clenshaw algorithm can now be applied to yield

[math]\displaystyle{ \begin{align} \mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\ \mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad\mathrm{for\ } n\ge k \ge 1,\\ \mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2), \end{align} }[/math]

where [math]\displaystyle{ \mathsf B_k }[/math] are 2×2 matrices. Finally we have

[math]\displaystyle{ \frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2). }[/math]

This technique can be used in the limit [math]\displaystyle{ \theta_2 = \theta_1 = \mu }[/math] and [math]\displaystyle{ \delta = 0\, }[/math] to simultaneously compute [math]\displaystyle{ m(\mu) }[/math] and the derivative [math]\displaystyle{ dm(\mu)/d\mu }[/math], provided that, in evaluating [math]\displaystyle{ \mathsf F_1 }[/math] and [math]\displaystyle{ \mathsf A }[/math], we take [math]\displaystyle{ \lim_{\delta\rightarrow0}(\sin k \delta)/\delta = k }[/math].
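A minimal Python sketch of this matrix form (the function name is illustrative; `c` holds [math]\displaystyle{ C_0, \ldots, C_n }[/math], and the [math]\displaystyle{ \delta = 0 }[/math] limit is handled as described above):

```python
import math

def meridian_difference(c, theta1, theta2):
    """Return [(m(t1)+m(t2))/2, (m(t1)-m(t2))/(t1-t2)] by matrix Clenshaw summation."""
    d = 0.5 * (theta1 - theta2)                  # delta
    mu = 0.5 * (theta1 + theta2)
    sinc = math.sin(d) / d if d != 0.0 else 1.0  # sin(delta)/delta -> 1 as delta -> 0
    cc = math.cos(d) * math.cos(mu)
    # A takes the place of alpha in the recurrence; beta = -1
    A = [[2.0 * cc, -2.0 * d * math.sin(d) * math.sin(mu)],
         [-2.0 * sinc * math.sin(mu), 2.0 * cc]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]                # B_{n+1}
    B2 = [[0.0, 0.0], [0.0, 0.0]]                # B_{n+2}
    for k in range(len(c) - 1, 0, -1):           # B_k = C_k I + A B_{k+1} - B_{k+2}
        Bk = [[c[k] * (i == j) + A[i][0] * B1[0][j] + A[i][1] * B1[1][j] - B2[i][j]
               for j in range(2)] for i in range(2)]
        B1, B2 = Bk, B1
    F1 = [math.cos(d) * math.sin(mu), sinc * math.cos(mu)]
    return [c[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1],  # average value
            c[0] + B1[1][0] * F1[0] + B1[1][1] * F1[1]]       # average slope
```

Calling this with `theta1 == theta2` returns [math]\displaystyle{ m(\mu) }[/math] and [math]\displaystyle{ dm(\mu)/d\mu }[/math] directly, per the limit noted above.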

References

  1. Clenshaw, C. W. (July 1955). "A note on the summation of Chebyshev series". Mathematical Tables and Other Aids to Computation 9 (51): 118. doi:10.1090/S0025-5718-1955-0071856-0. ISSN 0025-5718. https://www.ams.org/journals/mcom/1955-09-051/S0025-5718-1955-0071856-0/.  Note that this paper is written in terms of the Shifted Chebyshev polynomials of the first kind [math]\displaystyle{ T^*_n(x) = T_n(2x-1) }[/math].
  2. Tscherning, C. C.; Poder, K. (1982), "Some Geodetic applications of Clenshaw Summation", Bollettino di Geodesia e Scienze Affini 41 (4): 349–375, http://cct.gfy.ku.dk/publ_cct/cct80.pdf, retrieved 2012-08-02 
  3. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 5.4.2. Clenshaw's Recurrence Formula", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, http://apps.nrbook.com/empanel/index.html?pg=222 
  4. Fox, Leslie; Parker, Ian B. (1968), Chebyshev Polynomials in Numerical Analysis, Oxford University Press, ISBN 0-19-859614-6 
  5. Karney, C. F. F. (2014), Clenshaw evaluation of differenced sums, http://geographiclib.sourceforge.net/C++/1.38/rhumb.html#dividedclenshaw