Lagrange polynomial

From HandWiki
This image shows, for four points ((−9, 5), (−4, 2), (−1, −2), (7, 9)), the (cubic) interpolation polynomial L(x) (dashed, black), which is the sum of the scaled basis polynomials y₀ℓ₀(x), y₁ℓ₁(x), y₂ℓ₂(x) and y₃ℓ₃(x). The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where x corresponds to the other three control points.

In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data.

Given a data set of coordinate pairs [math]\displaystyle{ (x_j, y_j) }[/math] with [math]\displaystyle{ 0 \leq j \leq k, }[/math] the [math]\displaystyle{ x_j }[/math] are called nodes and the [math]\displaystyle{ y_j }[/math] are called values. The Lagrange polynomial [math]\displaystyle{ L(x) }[/math] has degree [math]\displaystyle{ \leq k }[/math] and assumes each value at the corresponding node, [math]\displaystyle{ L(x_j) = y_j. }[/math]

Although named after Joseph-Louis Lagrange, who published it in 1795,[1] the method was first discovered in 1779 by Edward Waring.[2] It is also an easy consequence of a formula published in 1783 by Leonhard Euler.[3]

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory.

For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.

Definition

Given a set of [math]\displaystyle{ k + 1 }[/math] nodes [math]\displaystyle{ \{x_0, x_1, \ldots, x_k\} }[/math], which must all be distinct, [math]\displaystyle{ x_j \neq x_m }[/math] for indices [math]\displaystyle{ j \neq m }[/math], the Lagrange basis for polynomials of degree [math]\displaystyle{ \leq k }[/math] for those nodes is the set of polynomials [math]\displaystyle{ \{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\} }[/math] each of degree [math]\displaystyle{ k }[/math] which take values [math]\displaystyle{ \ell_j(x_m) = 0 }[/math] if [math]\displaystyle{ m \neq j }[/math] and [math]\displaystyle{ \ell_j(x_j) = 1 }[/math]. Using the Kronecker delta this can be written [math]\displaystyle{ \ell_j(x_m) = \delta_{jm}. }[/math] Each basis polynomial can be explicitly described by the product:

[math]\displaystyle{ \begin{aligned} \ell_j(x) &= \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j - 1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)} \\[10mu] &= \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m}. \end{aligned} }[/math]

Notice that the numerator [math]\displaystyle{ \prod_{m \neq j}(x - x_m) }[/math] has [math]\displaystyle{ k }[/math] roots at the nodes [math]\displaystyle{ \{x_m\}_{m \neq j} }[/math] while the denominator [math]\displaystyle{ \prod_{m \neq j}(x_j - x_m) }[/math] scales the resulting polynomial so that [math]\displaystyle{ \ell_j(x_j) = 1. }[/math]

The Lagrange interpolating polynomial for those nodes through the corresponding values [math]\displaystyle{ \{y_0, y_1, \ldots, y_k\} }[/math] is the linear combination:

[math]\displaystyle{ L(x) = \sum_{j=0}^{k} y_j \ell_j(x). }[/math]

Each basis polynomial has degree [math]\displaystyle{ k }[/math], so the sum [math]\displaystyle{ L(x) }[/math] has degree [math]\displaystyle{ \leq k }[/math], and it interpolates the data because [math]\displaystyle{ L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{mj} = y_m. }[/math]

The interpolating polynomial is unique. Proof: assume the polynomial [math]\displaystyle{ M(x) }[/math] of degree [math]\displaystyle{ \leq k }[/math] interpolates the data. Then the difference [math]\displaystyle{ M(x) - L(x) }[/math] is zero at [math]\displaystyle{ k + 1 }[/math] distinct nodes [math]\displaystyle{ \{x_0, x_1, \ldots, x_k\}. }[/math] But the only polynomial of degree [math]\displaystyle{ \leq k }[/math] with more than [math]\displaystyle{ k }[/math] roots is the constant zero function, so [math]\displaystyle{ M(x) - L(x) = 0, }[/math] or [math]\displaystyle{ M(x) = L(x). }[/math]
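For illustration, the definition above translates directly into code. The following Python sketch (function names are illustrative, not from any particular library) evaluates the basis polynomials by their product formula and sums the scaled basis polynomials:

```python
def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    result = 1.0
    for m, x_m in enumerate(nodes):
        if m != j:
            result *= (x - x_m) / (nodes[j] - x_m)
    return result

def lagrange_interpolate(nodes, values, x):
    """Evaluate the interpolating polynomial L(x) = sum_j y_j * l_j(x)."""
    return sum(y_j * lagrange_basis(nodes, j, x)
               for j, y_j in enumerate(values))

# Interpolate through (1, 1), (2, 4), (3, 9); L coincides with x^2.
nodes = [1.0, 2.0, 3.0]
values = [1.0, 4.0, 9.0]
print(lagrange_interpolate(nodes, values, 2.5))  # 6.25
```

Note the O(k²) cost per evaluation point: each of the k + 1 basis polynomials is itself a product of k factors.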

Barycentric form

Each Lagrange basis polynomial [math]\displaystyle{ \ell_j(x) }[/math] can be rewritten as the product of three parts, a function [math]\displaystyle{ \ell(x) = \prod_m (x - x_m) }[/math] common to every basis polynomial, a node-specific constant [math]\displaystyle{ w_j = \prod_{m\neq j}(x_j - x_m)^{-1} }[/math] (called the barycentric weight), and a part representing the displacement from [math]\displaystyle{ x_j }[/math] to [math]\displaystyle{ x }[/math]:[4]

[math]\displaystyle{ \ell_j(x) = \ell(x) \dfrac{w_j}{x - x_j} }[/math]

By factoring [math]\displaystyle{ \ell(x) }[/math] out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form:

[math]\displaystyle{ L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j. }[/math]

If the weights [math]\displaystyle{ w_j }[/math] have been pre-computed, this requires only [math]\displaystyle{ \mathcal O(k) }[/math] operations compared to [math]\displaystyle{ \mathcal O(k^2) }[/math] for evaluating each Lagrange basis polynomial [math]\displaystyle{ \ell_j(x) }[/math] individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node [math]\displaystyle{ x_{k+1} }[/math] by dividing each of the [math]\displaystyle{ w_j }[/math], [math]\displaystyle{ j=0 \dots k }[/math] by [math]\displaystyle{ (x_j - x_{k+1}) }[/math] and constructing the new [math]\displaystyle{ w_{k+1} }[/math] as above.
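The weight precomputation, the O(k) evaluation of the first form, and the incremental node update can be sketched as follows (Python; function names are illustrative):

```python
def barycentric_weights(nodes):
    """Precompute w_j = prod_{m != j} (x_j - x_m)^(-1)."""
    weights = []
    for j, x_j in enumerate(nodes):
        w = 1.0
        for m, x_m in enumerate(nodes):
            if m != j:
                w /= (x_j - x_m)
        weights.append(w)
    return weights

def eval_first_form(nodes, weights, values, x):
    """First barycentric form: L(x) = l(x) * sum_j (w_j / (x - x_j)) y_j."""
    ell = 1.0
    for x_m in nodes:
        ell *= (x - x_m)
    return ell * sum(w / (x - x_j) * y
                     for w, x_j, y in zip(weights, nodes, values))

def add_node(nodes, weights, x_new):
    """O(k) update: divide each old weight by (x_j - x_new), build w_new."""
    weights = [w / (x_j - x_new) for w, x_j in zip(weights, nodes)]
    w_new = 1.0
    for x_j in nodes:
        w_new /= (x_new - x_j)
    return nodes + [x_new], weights + [w_new]
```

Once `barycentric_weights` has run (O(k²) work, done once), each call to `eval_first_form` costs only O(k).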

For any [math]\displaystyle{ x, }[/math] [math]\displaystyle{ \sum_{j=0}^k \ell_j(x) = 1 }[/math] because the constant function [math]\displaystyle{ g(x) = 1 }[/math] is the unique polynomial of degree [math]\displaystyle{ \leq k }[/math] interpolating the data [math]\displaystyle{ \{(x_0, 1), (x_1, 1), \ldots, (x_k, 1) \}. }[/math] We can thus further simplify the barycentric formula by dividing [math]\displaystyle{ L(x) = L(x) / g(x)\colon }[/math]

[math]\displaystyle{ \begin{aligned} L(x) &= \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j} \\[10mu] &= \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \sum_{j=0}^k \frac{w_j}{x-x_j}. \end{aligned} }[/math]

This is called the second form or true form of the barycentric interpolation formula.

This second form has advantages in computation cost and accuracy. First, it avoids evaluation of [math]\displaystyle{ \ell(x) }[/math]. Second, the work to compute each term [math]\displaystyle{ w_j/(x-x_j) }[/math] in the denominator has already been done in computing [math]\displaystyle{ \bigl(w_j/(x-x_j)\bigr)y_j }[/math], so computing the sum in the denominator costs only [math]\displaystyle{ k }[/math] addition operations. Third, for evaluation points [math]\displaystyle{ x }[/math] close to one of the nodes [math]\displaystyle{ x_j }[/math], catastrophic cancellation would ordinarily be a problem for the value [math]\displaystyle{ (x-x_j) }[/math]; however, this quantity appears in both numerator and denominator, and the two errors cancel, leaving good relative accuracy in the final result.

Using this formula to evaluate [math]\displaystyle{ L(x) }[/math] at one of the nodes [math]\displaystyle{ x_j }[/math] will result in the indeterminate [math]\displaystyle{ \infty y_j/\infty }[/math]; computer implementations must replace such results by [math]\displaystyle{ L(x_j) = y_j. }[/math]
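A sketch of the second form in Python (illustrative names), including the special case required at the nodes:

```python
def eval_second_form(nodes, weights, values, x):
    """Second (true) barycentric form; returns y_j exactly when x is a node."""
    num = 0.0
    den = 0.0
    for x_j, w_j, y_j in zip(nodes, weights, values):
        if x == x_j:              # avoid the indeterminate inf * y_j / inf
            return y_j
        t = w_j / (x - x_j)
        num += t * y_j            # numerator terms reuse the same t ...
        den += t                  # ... so the denominator costs only additions
    return num / den

# Nodes 1, 2, 3 with precomputed weights 1/2, -1, 1/2.
print(eval_second_form([1.0, 2.0, 3.0], [0.5, -1.0, 0.5],
                       [1.0, 4.0, 9.0], 2.5))  # 6.25
```

The early return implements the replacement [math]\displaystyle{ L(x_j) = y_j }[/math] described above.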

Each Lagrange basis polynomial can also be written in barycentric form:

[math]\displaystyle{ \ell_j(x) = \frac{w_j}{x-x_j} \Bigg/ \sum_{m=0}^k \frac{w_m}{x-x_m}. }[/math]

A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial [math]\displaystyle{ L(x) = \sum_{j=0}^k x^j m_j }[/math], we must invert the Vandermonde matrix [math]\displaystyle{ (x_i)^j }[/math] to solve [math]\displaystyle{ L(x_i) = y_i }[/math] for the coefficients [math]\displaystyle{ m_j }[/math] of [math]\displaystyle{ L(x) }[/math]. By choosing a better basis, the Lagrange basis [math]\displaystyle{ L(x) = \sum_{j=0}^k \ell_j(x) y_j }[/math], we instead get the identity matrix [math]\displaystyle{ \delta_{ij} }[/math], which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we check for remainders of polynomials when divided by linear factors.

Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

Example

We wish to interpolate [math]\displaystyle{ f(x) = x^2 }[/math] over the domain [math]\displaystyle{ 1 \leq x \leq 3 }[/math] at the three nodes [math]\displaystyle{ \{1,\, 2,\, 3\} }[/math]:

[math]\displaystyle{ \begin{align} x_0 & = 1, & & & y_0 = f(x_0) & = 1, \\[3mu] x_1 & = 2, & & & y_1 = f(x_1) & = 4, \\[3mu] x_2 & = 3, & & & y_2 = f(x_2) & =9. \end{align} }[/math]

The node polynomial [math]\displaystyle{ \ell }[/math] is

[math]\displaystyle{ \ell(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6. }[/math]

The barycentric weights are

[math]\displaystyle{ \begin{align} w_0 &= (1-2)^{-1}(1-3)^{-1} = \tfrac12, \\[3mu] w_1 &= (2-1)^{-1}(2-3)^{-1} = -1, \\[3mu] w_2 &= (3-1)^{-1}(3-2)^{-1} = \tfrac12. \end{align} }[/math]

The Lagrange basis polynomials are

[math]\displaystyle{ \begin{align} \ell_0(x) &= \frac{x - 2}{1 - 2}\cdot\frac{x - 3}{1 - 3} = \tfrac12x^2 - \tfrac52x + 3, \\[5mu] \ell_1(x) &= \frac{x - 1}{2 - 1}\cdot\frac{x - 3}{2 - 3} = -x^2 + 4x - 3, \\[5mu] \ell_2(x) &= \frac{x - 1}{3 - 1}\cdot\frac{x - 2}{3 - 2} = \tfrac12x^2 - \tfrac32x + 1. \end{align} }[/math]

The Lagrange interpolating polynomial is:

[math]\displaystyle{ \begin{align} L(x) &= 1\cdot\frac{x - 2}{1 - 2}\cdot\frac{x - 3}{1 - 3} + 4\cdot\frac{x - 1}{2 - 1}\cdot\frac{x - 3}{2 - 3} + 9\cdot\frac{x - 1}{3 - 1}\cdot\frac{x - 2}{3 - 2} \\[6mu] &= x^2. \end{align} }[/math]

In (second) barycentric form,

[math]\displaystyle{ L(x) = \frac {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}y_j} {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}} = \frac {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-4}{x - 2} + \frac{\tfrac92}{x - 3}} {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-1}{x - 2} + \frac{\tfrac12}{x - 3}}. }[/math]
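The worked example can be checked numerically. The following sketch hard-codes the weights computed above and confirms that the second barycentric form reproduces [math]\displaystyle{ x^2 }[/math] away from the nodes:

```python
# Second barycentric form for nodes 1, 2, 3 with weights 1/2, -1, 1/2.
def L(x):
    num = 0.5 / (x - 1) * 1 + (-1) / (x - 2) * 4 + 0.5 / (x - 3) * 9
    den = 0.5 / (x - 1) + (-1) / (x - 2) + 0.5 / (x - 3)
    return num / den

for x in [1.5, 2.25, 2.9]:
    assert abs(L(x) - x**2) < 1e-12   # agrees with f(x) = x^2
```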

Notes

Example of interpolation divergence for a set of Lagrange polynomials.

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node [math]\displaystyle{ x_k }[/math] changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see above) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.[5]

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.

Remainder in Lagrange interpolation formula

When interpolating a given function f by a polynomial of degree k at the nodes [math]\displaystyle{ x_0,\ldots,x_k }[/math] we get the remainder [math]\displaystyle{ R(x) = f(x) - L(x), }[/math] which can be expressed as[6]

[math]\displaystyle{ R(x) = f[x_0,\ldots,x_k,x] \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \quad \quad x_0 \lt \xi \lt x_k, }[/math]

where [math]\displaystyle{ f[x_0,\ldots,x_k,x] }[/math] is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in complex domain as

[math]\displaystyle{ R(x) = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)(t-x_0) \cdots (t-x_k)} dt = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)\ell(t)} dt. }[/math]

The remainder can be bounded as

[math]\displaystyle{ |R(x)| \leq \frac{(x_k-x_0)^{k+1}}{(k+1)!}\max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|. }[/math]
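This bound can be illustrated numerically. The sketch below (Python, illustrative names) interpolates [math]\displaystyle{ \sin x }[/math] at three nodes on [math]\displaystyle{ [0, \pi] }[/math], where [math]\displaystyle{ |f'''| \leq 1 }[/math] everywhere, and checks that the error never exceeds the bound:

```python
import math

# k + 1 = 3 nodes, so the bound uses f''' and 3! in the denominator.
nodes = [0.0, math.pi / 2, math.pi]
values = [math.sin(x) for x in nodes]
k = len(nodes) - 1

def L(x):
    """Plain Lagrange evaluation at x."""
    total = 0.0
    for j, x_j in enumerate(nodes):
        term = values[j]
        for m, x_m in enumerate(nodes):
            if m != j:
                term *= (x - x_m) / (x_j - x_m)
        total += term
    return total

# (x_k - x_0)^(k+1) / (k+1)!  times  max|f'''| = 1
bound = (nodes[-1] - nodes[0]) ** (k + 1) / math.factorial(k + 1)
for x in [0.3, 1.0, 2.5]:
    assert abs(math.sin(x) - L(x)) <= bound
```

The bound here is quite loose ([math]\displaystyle{ \pi^3/6 \approx 5.17 }[/math]); the actual error is far smaller.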

Derivation[7]

Clearly, [math]\displaystyle{ R(x) }[/math] is zero at nodes. To find [math]\displaystyle{ R(x) }[/math] at a point [math]\displaystyle{ x_p }[/math], define a new function [math]\displaystyle{ F(x)=R(x)-\tilde{R}(x)=f(x)-L(x)-\tilde{R}(x) }[/math] and choose [math]\displaystyle{ \tilde{R}(x)=C\cdot\prod_{i=0}^k(x-x_i) }[/math] where [math]\displaystyle{ C }[/math] is the constant we are required to determine for a given [math]\displaystyle{ x_p }[/math]. We choose [math]\displaystyle{ C }[/math] so that [math]\displaystyle{ F(x) }[/math] has [math]\displaystyle{ k+2 }[/math] zeroes (at all nodes and [math]\displaystyle{ x_p }[/math]) between [math]\displaystyle{ x_0 }[/math] and [math]\displaystyle{ x_k }[/math] (including endpoints). Assuming that [math]\displaystyle{ f(x) }[/math] is [math]\displaystyle{ k+1 }[/math]-times differentiable, since [math]\displaystyle{ L(x) }[/math] and [math]\displaystyle{ \tilde{R}(x) }[/math] are polynomials, and therefore, are infinitely differentiable, [math]\displaystyle{ F(x) }[/math] will be [math]\displaystyle{ k+1 }[/math]-times differentiable. By Rolle's theorem, [math]\displaystyle{ F^{(1)}(x) }[/math] has [math]\displaystyle{ k+1 }[/math] zeroes, [math]\displaystyle{ F^{(2)}(x) }[/math] has [math]\displaystyle{ k }[/math] zeroes... [math]\displaystyle{ F^{(k+1)} }[/math] has 1 zero, say [math]\displaystyle{ \xi,\, x_0\lt \xi\lt x_k }[/math]. Explicitly writing [math]\displaystyle{ F^{(k+1)}(\xi) }[/math]:

[math]\displaystyle{ F^{(k+1)}(\xi)=f^{(k+1)}(\xi)-L^{(k+1)}(\xi)-\tilde{R}^{(k+1)}(\xi) }[/math]
[math]\displaystyle{ L^{(k+1)}=0,\tilde{R}^{(k+1)}=C\cdot(k+1)! }[/math] (Because the highest power of [math]\displaystyle{ x }[/math] in [math]\displaystyle{ \tilde{R}(x) }[/math] is [math]\displaystyle{ k+1 }[/math])
[math]\displaystyle{ 0=f^{(k+1)}(\xi)-C\cdot(k+1)! }[/math]

The equation can be rearranged as

[math]\displaystyle{ C=\frac{f^{(k+1)}(\xi)}{(k+1)!} }[/math]

Since [math]\displaystyle{ F(x_p) = 0, }[/math] we have [math]\displaystyle{ R(x_p)=\tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\prod_{i=0}^k(x_p-x_i). }[/math]

Derivatives

The dth derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials,

[math]\displaystyle{ L^{(d)}(x) := \sum_{j=0}^{k} y_j \ell_j^{(d)}(x). }[/math]

Recall (see § Definition above) that each Lagrange basis polynomial is

[math]\displaystyle{ \begin{aligned} \ell_j(x) &= \prod_{\begin{smallmatrix}m = 0\\ m\neq j\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m}. \end{aligned} }[/math]

The first derivative can be found using the product rule:

[math]\displaystyle{ \begin{align} \ell_j'(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\not=j\end{smallmatrix}}^k \Biggl[ \frac{1}{x_j-x_i}\prod_{\begin{smallmatrix}m=0 \\ m\not = (i , j)\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m} \Biggr] \\[5mu] &= \ell_j(x)\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{x-x_i}. \end{align} }[/math]

The second derivative is

[math]\displaystyle{ \begin{align} \ell_j''(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\ne j\end{smallmatrix}}^{k} \frac{1}{x_j-x_i} \Biggl[ \sum_{\begin{smallmatrix}m=0 \\ m\ne(i,j)\end{smallmatrix}}^{k} \Biggl( \frac{1}{x_j-x_m}\prod_{\begin{smallmatrix}n=0 \\ n\ne(i,j,m)\end{smallmatrix}}^{k} \frac{x-x_n}{x_j-x_n} \Biggr) \Biggr] \\[10mu] &= \ell_j(x) \sum_{0 \leq i \lt m \leq k} \frac{2}{(x-x_i)(x - x_m)} \\[10mu] &= \ell_j(x)\Biggl[\Biggl(\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{x-x_i}\Biggr)^2-\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{(x-x_i)^2}\Biggr]. \end{align} }[/math]

The third derivative is

[math]\displaystyle{ \begin{align} \ell_j'''(x) &= \ell_j(x) \sum_{0 \leq i \lt m \lt n \leq k} \frac{3!}{(x-x_i)(x - x_m)(x - x_n)} \end{align} }[/math]

and likewise for higher derivatives.

Note that all of these formulas for derivatives are invalid at or near a node. A method of evaluating all orders of derivatives of a Lagrange polynomial efficiently at all points of the domain, including the nodes, is converting the Lagrange polynomial to power basis form and then evaluating the derivatives.
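Away from the nodes, the first-derivative formula can be applied directly. The following sketch (Python, illustrative names) differentiates the interpolant of [math]\displaystyle{ x^2 }[/math] at the nodes 1, 2, 3, for which [math]\displaystyle{ L'(x) = 2x }[/math]:

```python
def basis(nodes, j, x):
    """l_j(x) as a product over the other nodes."""
    p = 1.0
    for m, x_m in enumerate(nodes):
        if m != j:
            p *= (x - x_m) / (nodes[j] - x_m)
    return p

def basis_deriv(nodes, j, x):
    """l_j'(x) = l_j(x) * sum_{i != j} 1/(x - x_i); valid only away from nodes."""
    return basis(nodes, j, x) * sum(1.0 / (x - x_i)
                                    for i, x_i in enumerate(nodes) if i != j)

nodes = [1.0, 2.0, 3.0]
values = [1.0, 4.0, 9.0]          # samples of x^2
x = 2.5
dL = sum(y * basis_deriv(nodes, j, x) for j, y in enumerate(values))
print(dL)  # 2 * 2.5 = 5.0, up to rounding
```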

Finite fields

The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
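A minimal sketch of Lagrange interpolation over a prime field GF(p), in the style of Shamir's scheme (Python; the prime, the polynomial, and the share points are illustrative choices, not part of any standard):

```python
P = 2**13 - 1  # a small prime; real deployments use a much larger field

def interpolate_at(points, x, p=P):
    """Evaluate the Lagrange interpolant at x over GF(p).

    points: list of (x_j, y_j) pairs with distinct x_j mod p.
    Division becomes multiplication by the modular inverse.
    """
    total = 0
    for j, (x_j, y_j) in enumerate(points):
        num, den = 1, 1
        for m, (x_m, _) in enumerate(points):
            if m != j:
                num = num * (x - x_m) % p
                den = den * (x_j - x_m) % p
        total = (total + y_j * num * pow(den, -1, p)) % p
    return total

# Shamir-style: the secret 1234 is f(0) of f(x) = 1234 + 166 x + 94 x^2;
# any 3 shares of the degree-2 polynomial recover it.
shares = [(x, (1234 + 166 * x + 94 * x * x) % P) for x in (2, 5, 7)]
print(interpolate_at(shares, 0))  # 1234
```

The three-argument `pow(den, -1, p)` (Python 3.8+) computes the modular inverse needed in place of ordinary division.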

References

  1. Lagrange, Joseph-Louis (1795). "Leçon Cinquième. Sur l'usage des courbes dans la solution des problèmes" (in fr). Leçons Elémentaires sur les Mathématiques. Paris.  Republished in Lagrange, Joseph-Louis (1877). Serret, Joseph-Alfred. ed. Oeuvres de Lagrange. 7. Gauthier-Villars. pp. 271–287.  Translated as Lagrange, Joseph-Louis (1901). "Lecture V. On the Employment of Curves in the Solution of Problems". Lectures on Elementary Mathematics (2nd ed.). Open Court. pp. 127–149. https://archive.org/details/lecturesonelemen00lagriala/page/127. 
  2. "Problems concerning interpolations". Philosophical Transactions of the Royal Society 69: 59–67. 1779. doi:10.1098/rstl.1779.0008. https://archive.org/details/philosophicaltra6917roya/page/59. 
  3. Meijering, Erik (2002). "A chronology of interpolation: from ancient astronomy to modern signal and image processing". Proceedings of the IEEE 90 (3): 319–342. doi:10.1109/5.993400. http://bigwww.epfl.ch/publications/meijering0201.pdf. 
  4. Berrut, Jean-Paul (2004). "Barycentric Lagrange Interpolation". SIAM Review 46 (3): 501–517. doi:10.1137/S0036144502417715. Bibcode2004SIAMR..46..501B. https://people.maths.ox.ac.uk/trefethen/barycentric.pdf. 
  5. Quarteroni, Alfio; Saleri, Fausto (2003). Scientific Computing with MATLAB. Texts in computational science and engineering. 2. Springer. p. 66. ISBN 978-3-540-44363-6. https://books.google.com/books?id=fE1W5jsU4zoC&pg=PA66. .
  6. Abramowitz, Milton; Stegun, Irene Ann, eds (1983). "Chapter 25, eqn 25.2.3". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 878. LCCN 65-12253. ISBN 978-0-486-61272-0. http://www.math.sfu.ca/~cbm/aands/page_878.htm. 
  7. "Interpolation". https://sam.nitk.ac.in/sites/default/Numerical_Methods/Interpolation/interpolation.pdf. 
