Clenshaw algorithm

In numerical analysis, the Clenshaw algorithm,[1] also called Clenshaw summation,[2] is a recursive method to evaluate a linear combination of Chebyshev polynomials. It is a generalization of Horner's method for evaluating a linear combination of monomials.

It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.[3]

Clenshaw algorithm

In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions ${\displaystyle \phi _{k}(x)}$:

${\displaystyle S(x)=\sum _{k=0}^{n}a_{k}\phi _{k}(x)}$

where ${\displaystyle \phi _{k},\;k=0,1,\ldots }$ is a sequence of functions that satisfy the linear recurrence relation

${\displaystyle \phi _{k+1}(x)=\alpha _{k}(x)\,\phi _{k}(x)+\beta _{k}(x)\,\phi _{k-1}(x),}$

where the coefficients ${\displaystyle \alpha _{k}(x)}$ and ${\displaystyle \beta _{k}(x)}$ are known in advance.

The algorithm is most useful when the functions ${\displaystyle \phi _{k}(x)}$ are difficult to compute directly, but ${\displaystyle \alpha _{k}(x)}$ and ${\displaystyle \beta _{k}(x)}$ are particularly simple. In the most common applications, ${\displaystyle \alpha (x)}$ does not depend on ${\displaystyle k}$, and ${\displaystyle \beta }$ is a constant that depends on neither ${\displaystyle x}$ nor ${\displaystyle k}$.

To perform the summation for a given series of coefficients ${\displaystyle a_{0},\ldots ,a_{n}}$, compute the values ${\displaystyle b_{k}(x)}$ by the "reverse" recurrence formula:

${\displaystyle {\begin{aligned}b_{n+1}(x)&=b_{n+2}(x)=0,\\b_{k}(x)&=a_{k}+\alpha _{k}(x)\,b_{k+1}(x)+\beta _{k+1}(x)\,b_{k+2}(x).\end{aligned}}}$

Note that this computation makes no direct reference to the functions ${\displaystyle \phi _{k}(x)}$. After computing ${\displaystyle b_{2}(x)}$ and ${\displaystyle b_{1}(x)}$, the desired sum can be expressed in terms of them and the simplest functions ${\displaystyle \phi _{0}(x)}$ and ${\displaystyle \phi _{1}(x)}$:

${\displaystyle S(x)=\phi _{0}(x)\,a_{0}+\phi _{1}(x)\,b_{1}(x)+\beta _{1}(x)\,\phi _{0}(x)\,b_{2}(x).}$
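The recurrence above translates directly into a short program. The following Python sketch (the function name and signature are illustrative, not from any particular library) takes the coefficients ${\displaystyle a_{k}}$, callables for ${\displaystyle \alpha _{k}(x)}$ and ${\displaystyle \beta _{k}(x)}$, and the values ${\displaystyle \phi _{0}(x)}$ and ${\displaystyle \phi _{1}(x)}$:

```python
# Generic Clenshaw summation: evaluates S(x) = sum_{k=0}^{n} a[k] * phi_k(x)
# for any family phi_k obeying phi_{k+1}(x) = alpha(k, x) phi_k(x) + beta(k, x) phi_{k-1}(x).

def clenshaw(a, alpha, beta, phi0, phi1, x):
    """a: coefficients a_0..a_n; alpha(k, x), beta(k, x): recurrence
    coefficients; phi0, phi1: the values of phi_0(x) and phi_1(x)."""
    n = len(a) - 1
    b1 = b2 = 0.0                      # b_{n+1}(x) = b_{n+2}(x) = 0
    for k in range(n, 0, -1):          # k = n, n-1, ..., 1
        # b_k = a_k + alpha_k * b_{k+1} + beta_{k+1} * b_{k+2}
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    # S(x) = phi_0(x) a_0 + phi_1(x) b_1(x) + beta_1(x) phi_0(x) b_2(x)
    return phi0 * a[0] + phi1 * b1 + beta(1, x) * phi0 * b2
```

For instance, `clenshaw([1.0, 2.0, 3.0], lambda k, x: x, lambda k, x: 0.0, 1.0, x, x)` evaluates the ordinary polynomial ${\displaystyle 1+2x+3x^{2}}$.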

Examples

Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form

${\displaystyle S(x)=\sum _{k=0}^{n}a_{k}x^{k}}$.

The functions are simply

${\displaystyle {\begin{aligned}\phi _{0}(x)&=1,\\\phi _{k}(x)&=x^{k}=x\phi _{k-1}(x)\end{aligned}}}$

and are produced by the recurrence coefficients ${\displaystyle \alpha (x)=x}$ and ${\displaystyle \beta =0}$.

In this case, the recurrence formula to compute the sum is

${\displaystyle b_{k}(x)=a_{k}+xb_{k+1}(x)}$

and, in this case, the sum is simply

${\displaystyle S(x)=a_{0}+xb_{1}(x)=b_{0}(x)}$,

which is exactly the usual Horner's method.
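In this special case the general routine collapses to a single loop accumulating ${\displaystyle b_{k}=a_{k}+xb_{k+1}}$. A minimal Python sketch (the function name is illustrative):

```python
def horner(a, x):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x**n by Horner's method,
    i.e. the Clenshaw recurrence b_k = a_k + x * b_{k+1} with beta = 0."""
    b = 0.0                      # b_{n+1}
    for coeff in reversed(a):    # k = n, n-1, ..., 0
        b = coeff + x * b        # b_k = a_k + x * b_{k+1}
    return b                     # b_0(x) = S(x)
```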

Special case for Chebyshev series

Consider a truncated Chebyshev series

${\displaystyle p_{n}(x)=a_{0}+a_{1}T_{1}(x)+a_{2}T_{2}(x)+\cdots +a_{n}T_{n}(x).}$

The coefficients in the recursion relation for the Chebyshev polynomials are

${\displaystyle \alpha (x)=2x,\quad \beta =-1,}$

with the initial conditions

${\displaystyle T_{0}(x)=1,\quad T_{1}(x)=x.}$

Thus, the recurrence is

${\displaystyle b_{k}(x)=a_{k}+2xb_{k+1}(x)-b_{k+2}(x)}$

and the final sum is

${\displaystyle p_{n}(x)=a_{0}+xb_{1}(x)-b_{2}(x).}$

One way to evaluate this is to continue the recurrence one more step, and compute

${\displaystyle b_{0}(x)=2a_{0}+2xb_{1}(x)-b_{2}(x),}$

(note the doubled a0 coefficient) followed by

${\displaystyle p_{n}(x)={\frac {1}{2}}\left[b_{0}(x)-b_{2}(x)\right].}$
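The Chebyshev recurrence is simple enough to code directly. The following Python sketch (the function name is illustrative) uses the first form of the final step, ${\displaystyle p_{n}(x)=a_{0}+xb_{1}(x)-b_{2}(x)}$:

```python
def chebyshev_clenshaw(a, x):
    """Evaluate a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x) by Clenshaw's
    recurrence b_k = a_k + 2*x*b_{k+1} - b_{k+2}."""
    b1 = b2 = 0.0                          # b_{n+1} = b_{n+2} = 0
    for k in range(len(a) - 1, 0, -1):     # k = n, n-1, ..., 1
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2              # p_n(x) = a_0 + x b_1 - b_2
```

On ${\displaystyle [-1,1]}$ the result can be checked against the trigonometric definition ${\displaystyle T_{k}(x)=\cos(k\arccos x)}$.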

Meridian arc length on the ellipsoid

Clenshaw summation is extensively used in geodetic applications.[2] A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form

${\displaystyle m(\theta )=C_{0}\,\theta +C_{1}\sin \theta +C_{2}\sin 2\theta +\cdots +C_{n}\sin n\theta .}$

Leaving off the initial ${\displaystyle C_{0}\,\theta }$ term, the remainder is a summation of the appropriate form with ${\displaystyle \phi _{k}(\theta )=\sin k\theta }$; there is no leading term because ${\displaystyle \phi _{0}(\theta )=\sin 0\theta =\sin 0=0}$. The sine functions satisfy the recurrence relation

${\displaystyle \sin(k+1)\theta =2\cos \theta \sin k\theta -\sin(k-1)\theta }$,

making the coefficients in the recursion relation

${\displaystyle \alpha _{k}(\theta )=2\cos \theta ,\quad \beta _{k}=-1,}$

and the evaluation of the series is given by

${\displaystyle {\begin{aligned}b_{n+1}(\theta )&=b_{n+2}(\theta )=0,\\b_{k}(\theta )&=C_{k}+2\cos \theta \,b_{k+1}(\theta )-b_{k+2}(\theta ),\quad \mathrm {for\ } n\geq k\geq 1.\end{aligned}}}$

The final step is made particularly simple because ${\displaystyle \phi _{0}(\theta )=\sin 0=0}$, so the end of the recurrence is simply ${\displaystyle b_{1}(\theta )\sin(\theta )}$; the ${\displaystyle C_{0}\,\theta }$ term is added separately:

${\displaystyle m(\theta )=C_{0}\,\theta +b_{1}(\theta )\sin \theta .}$

Note that the algorithm requires only the evaluation of two trigonometric quantities ${\displaystyle \cos \theta }$ and ${\displaystyle \sin \theta }$.
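A Python sketch of this evaluation (the function name is illustrative); the trigonometric library is indeed called only twice, regardless of ${\displaystyle n}$:

```python
import math

def meridian_arc(C, theta):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^{n} C[k]*sin(k*theta)
    by Clenshaw summation; only cos(theta) and sin(theta) are computed."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    b1 = b2 = 0.0                          # b_{n+1} = b_{n+2} = 0
    for k in range(len(C) - 1, 0, -1):     # k = n, n-1, ..., 1
        b1, b2 = C[k] + 2.0 * cos_t * b1 - b2, b1
    return C[0] * theta + b1 * sin_t       # add the C_0*theta term separately
```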

Difference in meridian arc lengths

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write

${\displaystyle m(\theta _{1})-m(\theta _{2})=\sum _{k=1}^{n}2C_{k}\sin {\bigl (}{\textstyle {\frac {1}{2}}}k(\theta _{1}-\theta _{2}){\bigr )}\cos {\bigl (}{\textstyle {\frac {1}{2}}}k(\theta _{1}+\theta _{2}){\bigr )}.}$

Clenshaw summation can be applied in this case[5] provided we simultaneously compute ${\displaystyle m(\theta _{1})+m(\theta _{2})}$ and perform a matrix summation,

${\displaystyle {\mathsf {M}}(\theta _{1},\theta _{2})={\begin{bmatrix}(m(\theta _{1})+m(\theta _{2}))/2\\(m(\theta _{1})-m(\theta _{2}))/(\theta _{1}-\theta _{2})\end{bmatrix}}=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+\sum _{k=1}^{n}C_{k}{\mathsf {F}}_{k}(\theta _{1},\theta _{2}),}$

where

${\displaystyle {\begin{aligned}\delta &={\textstyle {\frac {1}{2}}}(\theta _{1}-\theta _{2}),\\\mu &={\textstyle {\frac {1}{2}}}(\theta _{1}+\theta _{2}),\\{\mathsf {F}}_{k}(\theta _{1},\theta _{2})&={\begin{bmatrix}\cos k\delta \sin k\mu \\\displaystyle {\frac {\sin k\delta }{\delta }}\cos k\mu \end{bmatrix}}.\end{aligned}}}$

The first element of ${\displaystyle {\mathsf {M}}(\theta _{1},\theta _{2})}$ is the average value of ${\displaystyle m}$ and the second element is the average slope. ${\displaystyle {\mathsf {F}}_{k}(\theta _{1},\theta _{2})}$ satisfies the recurrence relation

${\displaystyle {\mathsf {F}}_{k+1}(\theta _{1},\theta _{2})={\mathsf {A}}(\theta _{1},\theta _{2}){\mathsf {F}}_{k}(\theta _{1},\theta _{2})-{\mathsf {F}}_{k-1}(\theta _{1},\theta _{2}),}$

where

${\displaystyle {\mathsf {A}}(\theta _{1},\theta _{2})=2{\begin{bmatrix}\cos \delta \cos \mu &-\delta \sin \delta \sin \mu \\-\displaystyle {\frac {\sin \delta }{\delta }}\sin \mu &\cos \delta \cos \mu \end{bmatrix}}}$

takes the place of ${\displaystyle \alpha }$ in the recurrence relation, and ${\displaystyle \beta =-1}$. The standard Clenshaw algorithm can now be applied to yield

${\displaystyle {\begin{aligned}{\mathsf {B}}_{n+1}&={\mathsf {B}}_{n+2}={\mathsf {0}},\\{\mathsf {B}}_{k}&=C_{k}{\mathsf {I}}+{\mathsf {A}}{\mathsf {B}}_{k+1}-{\mathsf {B}}_{k+2},\qquad \mathrm {for\ } n\geq k\geq 1,\\{\mathsf {M}}(\theta _{1},\theta _{2})&=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+{\mathsf {B}}_{1}{\mathsf {F}}_{1}(\theta _{1},\theta _{2}),\end{aligned}}}$

where ${\displaystyle {\mathsf {B}}_{k}}$ are 2×2 matrices. Finally we have

${\displaystyle {\frac {m(\theta _{1})-m(\theta _{2})}{\theta _{1}-\theta _{2}}}={\mathsf {M}}_{2}(\theta _{1},\theta _{2}).}$
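The matrix recurrence can be sketched in plain Python as follows (the function name is illustrative, and ${\displaystyle \theta _{1}\neq \theta _{2}}$ is assumed so that the division by ${\displaystyle \delta }$ is well defined):

```python
import math

def meridian_mean_and_slope(C, theta1, theta2):
    """Return [(m(theta1)+m(theta2))/2, (m(theta1)-m(theta2))/(theta1-theta2)]
    for m(theta) = C[0]*theta + sum_{k>=1} C[k]*sin(k*theta), using the 2x2
    matrix form of Clenshaw summation.  Assumes theta1 != theta2."""
    d  = 0.5 * (theta1 - theta2)          # delta
    mu = 0.5 * (theta1 + theta2)          # mean latitude argument
    sd, cd = math.sin(d), math.cos(d)
    sm, cm = math.sin(mu), math.cos(mu)
    # A plays the role of alpha in the recurrence; beta = -1.
    A = [[ 2.0 * cd * cm,       -2.0 * d * sd * sm],
         [-2.0 * (sd / d) * sm,  2.0 * cd * cm]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]         # B_{n+1}
    B2 = [[0.0, 0.0], [0.0, 0.0]]         # B_{n+2}
    for k in range(len(C) - 1, 0, -1):    # k = n, n-1, ..., 1
        # B_k = C_k I + A B_{k+1} - B_{k+2}
        Bk = [[(C[k] if i == j else 0.0)
               + sum(A[i][l] * B1[l][j] for l in range(2)) - B2[i][j]
               for j in range(2)] for i in range(2)]
        B1, B2 = Bk, B1
    F1 = [cd * sm, (sd / d) * cm]         # F_1(theta1, theta2)
    v = [sum(B1[i][j] * F1[j] for j in range(2)) for i in range(2)]
    # M = C_0 [mu, 1]^T + B_1 F_1
    return [C[0] * mu + v[0], C[0] + v[1]]
```

The returned second element is the desired accurate slope ${\displaystyle (m(\theta _{1})-m(\theta _{2}))/(\theta _{1}-\theta _{2})}$.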

References

1. Clenshaw, C. W. (1955), "A note on the summation of Chebyshev series", Mathematical Tables and Other Aids to Computation, 9 (51): 118–120. Note that this paper is written in terms of the shifted Chebyshev polynomials of the first kind ${\displaystyle T_{n}^{*}(x)=T_{n}(2x-1)}$.
2. Tscherning, C. C.; Poder, K. (1982), "Some Geodetic Applications of Clenshaw Summation", Bollettino di Geodesia e Scienze Affini, 41: 349–375.
3. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press.
4. {{#invoke:citation/CS1|citation |CitationClass=citation }}
5. {{#invoke:citation/CS1|citation |CitationClass=citation }}