Equioscillation theorem

In mathematics, the equioscillation theorem concerns the approximation of continuous functions using polynomials when the merit function is the maximum absolute difference (uniform norm). Its discovery is attributed to Chebyshev.[1]

Statement

Let $f$ be a continuous function from $[a,b]$ to $\mathbb{R}$. Among all the polynomials of degree $\le n$, the polynomial $g$ minimizes the uniform norm of the difference $\|f - g\|_\infty$ if and only if there are $n+2$ points $a \le x_0 < x_1 < \cdots < x_{n+1} \le b$ such that $f(x_i) - g(x_i) = \sigma (-1)^i \|f - g\|_\infty$, where $\sigma$ is either $-1$ or $+1$.[1][2]

That is, the difference $f - g$ attains its maximum absolute value at these $n+2$ points and alternates in sign from one point to the next, so $g$ overshoots and undershoots $f$ by the same amount.
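As a standard example: take $f(x) = x^2$ on $[-1,1]$ and approximate it by polynomials of degree at most $n = 1$. The best approximation is the constant $g(x) = \tfrac{1}{2}$, and the error equioscillates at the $n + 2 = 3$ points $-1$, $0$, $1$:

\[
  f(x) - g(x) = x^2 - \tfrac{1}{2}, \qquad \|f - g\|_\infty = \tfrac{1}{2},
\]
\[
  f(-1) - g(-1) = +\tfrac{1}{2}, \qquad f(0) - g(0) = -\tfrac{1}{2}, \qquad f(1) - g(1) = +\tfrac{1}{2}.
\]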

Proof

Let us define the equioscillation condition as the condition in the theorem statement, that is, the condition that there exist $n+2$ ordered points $a \le x_0 < x_1 < \cdots < x_{n+1} \le b$ such that the difference $f(x_i) - g(x_i)$ alternates in sign and is equal in magnitude to the uniform norm $\|f - g\|_\infty$.

We need to prove that this condition is sufficient for $g$ to be the best uniform approximation to $f$, and that it is necessary for a polynomial to be the best uniform approximation.

Sufficiency

Assume, for contradiction, that there exists a polynomial $p(x)$ of degree at most $n$ that provides a uniformly better approximation to $f$, which means that $\|f - p\|_\infty < \|f - g\|_\infty$. Then the polynomial

$h(x) = g(x) - p(x) = \big(g(x) - f(x)\big) - \big(p(x) - f(x)\big)$

is also of degree at most $n$. However, at each of the $n+2$ points $x_0, x_1, \ldots, x_{n+1}$ we have $|p(x_i) - f(x_i)| < |g(x_i) - f(x_i)|$, because $|p(x_i) - f(x_i)| \le \|f - p\|_\infty$, $\|f - p\|_\infty < \|f - g\|_\infty$ (since $p$ is a better approximation than $g$), and $|g(x_i) - f(x_i)| = \|f - g\|_\infty$ by the equioscillation condition.

Therefore, $h(x_i) = \big(g(x_i) - f(x_i)\big) - \big(p(x_i) - f(x_i)\big)$ has the same sign as $g(x_i) - f(x_i)$, because the second term has a smaller magnitude than the first. Thus $h(x_i)$ alternates in sign at these $n+2$ points, and so $h$ has at least $n+1$ roots (one between each pair of consecutive points, by the intermediate value theorem). However, $h$ is a nonzero polynomial of degree at most $n$ (nonzero because $\|f - p\|_\infty < \|f - g\|_\infty$ forces $p \ne g$), so it can have at most $n$ roots. This is a contradiction.
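This conclusion can be sanity-checked numerically on the $x^2$ example from the statement section. The brief script below (an illustrative sketch using NumPy, with all names chosen ad hoc) verifies that no randomly sampled degree-1 polynomial beats the equioscillating approximation $g(x) = \tfrac{1}{2}$:

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)   # dense grid on [-1, 1]
f = xs**2
best = np.max(np.abs(f - 0.5))      # uniform error of g(x) = 1/2, namely 0.5

# Randomly sampled degree-1 competitors p(x) = a + b*x; by the sufficiency
# direction of the theorem, none of them can have a smaller uniform error.
rng = np.random.default_rng(0)
for _ in range(10_000):
    a, b = rng.uniform(-2.0, 2.0, size=2)
    assert np.max(np.abs(f - (a + b * xs))) >= best - 1e-12
print("no sampled degree-1 polynomial beats g(x) = 1/2; minimax error =", best)
```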

Necessity

Given a polynomial $g$, let us define $M = \|f - g\|_\infty$. We will call a point $x$ an upper point if $f(x) - g(x) = M$ and a lower point if $f(x) - g(x) = -M$.

Define an alternating set (given the polynomial $g$ and the function $f$) to be a set of ordered points $x_0 < x_1 < \cdots < x_k$ in $[a,b]$ such that for every point $x_i$ in the alternating set, $f(x_i) - g(x_i) = \sigma (-1)^i M$, where $\sigma$ equals $-1$ or $+1$ as before.

Define a sectioned alternating set to be an alternating set $x_0, \ldots, x_k$ together with nonempty closed intervals $I_0, \ldots, I_k$, called sections, such that

1. the sections partition $[a,b]$ (meaning that the union of the sections is the whole interval, and the intersection of any two sections is either empty or a single common endpoint);
2. for every $i$, the $i$th alternating point $x_i$ lies in the $i$th section $I_i$;
3. if $x_i$ is an upper point, then $I_i$ contains no lower points, and likewise, if $x_i$ is a lower point, then $I_i$ contains no upper points.
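For instance, in the example $f(x) = x^2$ on $[-1,1]$ with $g(x) = \tfrac{1}{2}$ used above, the alternating set $\{-1, 0, 1\}$ together with the sections $I_0 = [-1, -\tfrac{1}{2}]$, $I_1 = [-\tfrac{1}{2}, \tfrac{1}{2}]$ and $I_2 = [\tfrac{1}{2}, 1]$ is a sectioned alternating set: the sections partition $[-1,1]$, the only lower point is $x = 0$, which lies in $I_1$, and the only upper points are $x = \pm 1$, which lie in $I_0$ and $I_2$.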

Given an approximating polynomial $g$ that does not satisfy the equioscillation condition, one shows that $g$ is not a best approximation. If the maximal error is attained with only one sign, adding a small constant to $g$ already improves it. Otherwise $g$ has a two-point alternating set, which can be expanded to a sectioned alternating set; since the equioscillation condition fails, this set has at most $n+1$ points. Perturbing $g$ by a small multiple of a polynomial that vanishes at each internal section boundary then yields a strictly better approximation, and because there are at most $n$ such boundaries, the perturbation is still a polynomial of degree at most $n$. (With $n+2$ or more points in the sectioned alternating set, this improvement could no longer be guaranteed to be a polynomial of degree at most $n$.)
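A small numerical illustration of this improvement step, continuing the $x^2$ example (the grid resolution and the helper name sup_err are ad hoc choices for this sketch): for the non-optimal constant $g(x) = 0.6$ the maximal error $0.6$ is attained only at the lower point $x = 0$, so shifting $g$ toward that point reduces the uniform error, until the upper and lower errors balance at the equioscillating choice $g(x) = \tfrac{1}{2}$.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)   # dense grid on [-1, 1]
f = xs**2

def sup_err(c):
    """Uniform error of the constant approximation g(x) = c to f(x) = x^2."""
    return np.max(np.abs(f - c))

# g = 0.6 has a single extreme (lower) point at x = 0, so it can be improved
# by shifting toward it; g = 0.5 equioscillates at -1, 0, 1 and cannot.
for c in (0.6, 0.55, 0.5):
    print(f"g(x) = {c}: uniform error = {sup_err(c):.3f}")
# prints 0.600, then 0.550, then 0.500
```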

Variants

The equioscillation theorem is also valid when polynomials are replaced by rational functions: among all rational functions whose numerator has degree $\le n$ and whose denominator has degree $\le m$, the rational function $g = p/q$, with $p$ and $q$ relatively prime polynomials of degree $n-\nu$ and $m-\mu$, minimizes the uniform norm of the difference $\|f - g\|_\infty$ if and only if there are $m + n + 2 - \min\{\mu, \nu\}$ points $a \le x_0 < x_1 < \cdots < x_{m+n+1-\min\{\mu,\nu\}} \le b$ such that $f(x_i) - g(x_i) = \sigma (-1)^i \|f - g\|_\infty$, where $\sigma$ is either $-1$ or $+1$.[1]

Algorithms

Several minimax approximation algorithms are available, the most common being the Remez algorithm.
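As a rough illustration of how such an algorithm proceeds, here is a minimal sketch of a Remez-style exchange iteration in Python with NumPy. It is a simplified sketch rather than a reference implementation: the function name, grid size, tolerance, and the end-segment-dropping heuristic in the exchange step are ad hoc choices, and it assumes that $f$ is smooth and accepts NumPy arrays.

```python
import numpy as np


def remez(f, n, a=-1.0, b=1.0, grid_size=5000, max_iter=50, tol=1e-13):
    """Sketch of a Remez-style exchange iteration for the best uniform
    approximation of f on [a, b] by a polynomial of degree <= n.
    Returns (coefficients in increasing degree order, levelled error E)."""
    grid = np.linspace(a, b, grid_size)
    fgrid = f(grid)

    # Initial reference: the n + 2 Chebyshev extreme points mapped to [a, b].
    k = np.arange(n + 2)
    ref = 0.5 * (a + b) - 0.5 * (b - a) * np.cos(np.pi * k / (n + 1))

    for _ in range(max_iter):
        # Solve p(x_i) + (-1)^i E = f(x_i), i = 0, ..., n + 1, for the n + 1
        # polynomial coefficients and the levelled error E.
        A = np.hstack([np.vander(ref, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(ref))
        coeffs, E = sol[:-1], sol[-1]

        # Error on the dense grid; it changes sign between consecutive
        # reference points, giving sign-constant segments to exchange into.
        err = fgrid - np.polyval(coeffs[::-1], grid)
        changes = np.nonzero(np.sign(err[1:]) * np.sign(err[:-1]) < 0)[0] + 1
        segments = np.split(np.arange(grid_size), changes)

        # New reference: the point of largest |err| in each segment. If there
        # are too many segments, drop the weaker end ones (this keeps the
        # signs alternating); too few signals a degenerate case, so stop.
        peaks = [seg[np.argmax(np.abs(err[seg]))] for seg in segments]
        while len(peaks) > n + 2:
            if abs(err[peaks[0]]) < abs(err[peaks[-1]]):
                peaks.pop(0)
            else:
                peaks.pop()
        if len(peaks) < n + 2:
            break

        # Stop once the largest grid error is levelled down to |E|.
        if np.max(np.abs(err)) - abs(E) < tol:
            break
        ref = grid[peaks]

    return coeffs, E


# Example: degree-5 minimax approximation of exp on [-1, 1]; the levelled
# error |E| approximates the minimax error (roughly 4.5e-5 in this case).
coeffs, E = remez(np.exp, 5)
print("levelled error:", abs(E))
```

The exchange step above is the simplest multi-point variant; practical implementations locate the error extrema more carefully and add numerical safeguards.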

References