# Gibbs phenomenon


In mathematics, the **Gibbs phenomenon** is the oscillatory behavior of the Fourier series of a piecewise continuously differentiable periodic function around a jump discontinuity. The [math]\displaystyle{ N }[/math]^{th} partial Fourier series of the function (formed by summing the [math]\displaystyle{ N }[/math] lowest constituent sinusoids of the Fourier series of the function) produces large peaks around the jump which overshoot and undershoot the function values. As more sinusoids are used, this approximation error approaches a limit of about 9% of the size of the jump, even though the infinite Fourier series does converge pointwise at every point where the function is continuous.^{[1]}

The Gibbs phenomenon was observed by experimental physicists and was believed to be due to imperfections in the measuring apparatus,^{[2]} but it is in fact a mathematical result. It is one cause of ringing artifacts in signal processing.

## Description

The Gibbs phenomenon is a behavior of the Fourier series of a function with a jump discontinuity and can be described as follows:

As more Fourier series components are taken, the first overshoot of the oscillation around the jump approaches ~9% of the (full) jump. The oscillation does not disappear; instead, it moves closer to the jump point, so that the integral of the oscillation approaches zero (i.e., the oscillation carries zero energy in the limit).

At the jump point itself, the Fourier series converges to the average of the function's left and right limits.

### Square wave example

The three pictures on the right demonstrate the Gibbs phenomenon for a square wave (with peak-to-peak amplitude of [math]\displaystyle{ c }[/math] from [math]\displaystyle{ -c/2 }[/math] to [math]\displaystyle{ c/2 }[/math] and the periodicity [math]\displaystyle{ L }[/math]) whose [math]\displaystyle{ N }[/math]^{th} partial Fourier series is
[math]\displaystyle{ \frac{2c}{\pi}\left ( \sin(\omega x) + \frac{1}{3} \sin(3\omega x) + \cdots + \frac{1}{N-1} \sin((N-1)\omega x) \right ) }[/math]

where [math]\displaystyle{ \omega = 2\pi/L }[/math]. More precisely, this square wave is the function [math]\displaystyle{ f(x) }[/math] which equals [math]\displaystyle{ \tfrac{c}{2} }[/math] between [math]\displaystyle{ 2n(L/2) }[/math] and [math]\displaystyle{ (2n+1)(L/2) }[/math] and [math]\displaystyle{ -\tfrac{c}{2} }[/math] between [math]\displaystyle{ (2n+1)(L/2) }[/math] and [math]\displaystyle{ (2n+2)(L/2) }[/math] for every integer [math]\displaystyle{ n }[/math]; thus, this square wave has a jump discontinuity of peak-to-peak height [math]\displaystyle{ c }[/math] at every integer multiple of [math]\displaystyle{ L/2 }[/math].

As more sinusoidal terms are added (i.e., increasing [math]\displaystyle{ N }[/math]), the error of the partial Fourier series converges to a fixed height. But because the width of the error continues to narrow, the area of the error – and hence the energy of the error – converges to 0.^{[3]} The square wave analysis reveals that the error exceeds the height (from zero) [math]\displaystyle{ \tfrac{c}{2} }[/math] of the square wave by
[math]\displaystyle{ \frac{c}{\pi} \int_0^\pi \frac{\sin(t)}{t}\ dt - \frac{c}{2} = c \cdot (0.089489872236\dots). }[/math](OEIS: A243268)

or about 9% of the full jump [math]\displaystyle{ c }[/math]. More generally, at any discontinuity of a piecewise continuously differentiable function with a jump of [math]\displaystyle{ c }[/math], the [math]\displaystyle{ N }[/math]^{th} partial Fourier series of the function will (for a very large [math]\displaystyle{ N }[/math] value) overshoot this jump by an error approaching [math]\displaystyle{ c \cdot (0.089489872236\dots) }[/math] at one end and undershoot it by the same amount at the other end; thus the "full jump" in the partial Fourier series will be about 18% larger than the full jump in the original function. At the discontinuity, the partial Fourier series will converge to the midpoint of the jump (regardless of the actual value of the original function at the discontinuity) as a consequence of Dirichlet's theorem.^{[4]} The quantity
[math]\displaystyle{ \int_0^\pi \frac{\sin t}{t}\ dt = (1.851937051982\dots) = \frac{\pi}{2} + \pi \cdot (0.089489872236\dots) }[/math](OEIS: A036792)
is sometimes known as the *Wilbraham–Gibbs constant*.^{[5]}
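As a numerical sanity check, both constants can be reproduced with a simple midpoint-rule quadrature of the sine integral. The sketch below is in Python; the function name `si` and the step count are arbitrary choices:

```python
import math

def si(upper, steps=200_000):
    # Midpoint-rule quadrature of Si(upper), the integral of sin(t)/t from 0 to upper.
    h = upper / steps
    return sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(steps)) * h

wilbraham_gibbs = si(math.pi)                          # approx. 1.851937051982...
overshoot_fraction = wilbraham_gibbs / math.pi - 0.5   # approx. 0.089489872236...
print(wilbraham_gibbs, overshoot_fraction)
```

The second line restates the identity above: the overshoot fraction is Si(π)/π − 1/2.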

### History

The Gibbs phenomenon was first noticed and analyzed by Henry Wilbraham in an 1848 paper.^{[6]} The paper attracted little attention until 1914 when it was mentioned in Heinrich Burkhardt's review of mathematical analysis in Klein's encyclopedia.^{[7]} In 1898, Albert A. Michelson developed a device that could compute and re-synthesize the Fourier series.^{[8]} A widespread myth says that when the Fourier coefficients for a square wave were input to the machine, the graph would oscillate at the discontinuities, and that because it was a physical device subject to manufacturing flaws, Michelson was convinced that the overshoot was caused by errors in the machine. In fact the graphs produced by the machine were not good enough to exhibit the Gibbs phenomenon clearly, and Michelson may not have noticed it, as he made no mention of the effect in his paper about his machine (Michelson & Stratton 1898) or in his later letters to *Nature*.^{[9]}

Inspired by correspondence in *Nature* between Michelson and A. E. H. Love about the convergence of the Fourier series of the square wave function, J. Willard Gibbs published a note in 1898 pointing out the important distinction between the limit of the graphs of the partial sums of the Fourier series of a sawtooth wave and the graph of the limit of those partial sums. In his first letter Gibbs failed to notice the Gibbs phenomenon, and the limit that he described for the graphs of the partial sums was inaccurate. In 1899 he published a correction in which he described the overshoot at the point of discontinuity (*Nature*, April 27, 1899, p. 606). In 1906, Maxime Bôcher gave a detailed mathematical analysis of that overshoot, coining the term "Gibbs phenomenon"^{[10]} and bringing the term into widespread use.^{[9]}

After the existence of Henry Wilbraham's paper became widely known, in 1925 Horatio Scott Carslaw remarked, "We may still call this property of Fourier's series (and certain other series) Gibbs's phenomenon; but we must no longer claim that the property was first discovered by Gibbs."^{[11]}

### Explanation

Informally, the Gibbs phenomenon reflects the difficulty inherent in approximating a discontinuous function by a *finite* series of continuous sinusoidal waves. It is important to put emphasis on the word *finite*, because even though every partial sum of the Fourier series overshoots around each discontinuity it is approximating, the limit of summing an infinite number of sinusoidal waves does not. The overshoot peaks move closer and closer to the discontinuity as more terms are summed, so convergence is possible.

There is no contradiction between the overshoot error converging to a non-zero height and the infinite sum having no overshoot, because the overshoot peaks move toward the discontinuity. The Gibbs phenomenon thus exhibits pointwise convergence, but not uniform convergence. For a piecewise continuously differentiable (class *C*^{1}) function, the Fourier series converges to the function at *every point* except at jump discontinuities. At a jump discontinuity, the infinite sum converges to the jump's midpoint (i.e. the average of the values of the function on either side of the jump), as a consequence of Dirichlet's theorem.^{[4]}

The Gibbs phenomenon is closely related to the principle that the smoothness of a function controls the decay rate of its Fourier coefficients. Fourier coefficients of smoother functions decay more rapidly (resulting in faster convergence), whereas Fourier coefficients of discontinuous functions decay slowly (resulting in slower convergence). For example, the discontinuous square wave has Fourier coefficients [math]\displaystyle{ (\tfrac{1}{1},{\scriptstyle\text{0}},\tfrac{1}{3},{\scriptstyle\text{0}},\tfrac{1}{5},{\scriptstyle\text{0}},\tfrac{1}{7},{\scriptstyle\text{0}},\tfrac{1}{9},{\scriptstyle\text{0}},\dots) }[/math] that decay only at the rate of [math]\displaystyle{ \tfrac{1}{n} }[/math], while the continuous triangle wave has Fourier coefficients [math]\displaystyle{ (\tfrac{1}{1^2},{\scriptstyle\text{0}},\tfrac{-1}{3^2},{\scriptstyle\text{0}},\tfrac{1}{5^2},{\scriptstyle\text{0}},\tfrac{-1}{7^2},{\scriptstyle\text{0}},\tfrac{1}{9^2},{\scriptstyle\text{0}},\dots) }[/math] that decay at a much faster rate of [math]\displaystyle{ \tfrac{1}{n^2} }[/math].

This only provides a partial explanation of the Gibbs phenomenon, since Fourier series with absolutely convergent Fourier coefficients would be uniformly convergent by the Weierstrass M-test and would thus be unable to exhibit the above oscillatory behavior. By the same token, it is impossible for a discontinuous function to have absolutely convergent Fourier coefficients, since the function would thus be the uniform limit of continuous functions and therefore be continuous, a contradiction. See Convergence of Fourier series § Absolute convergence.
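The decay rates quoted above can be checked numerically. The sketch below is an illustration, not canonical code: the helper `coeff`, the step count, and the particular phase of each wave are arbitrary choices. For a unit-period square wave the scaled sine coefficients n·b_n stay near 2/π for odd n, while for the continuous triangle wave the scaled cosine coefficients n²·a_n stay near a constant:

```python
import math

def coeff(f, n, kind, steps=20_000):
    # Midpoint-rule estimate of the n-th Fourier coefficient of f on period 1:
    # a_n = 2*integral of f(x)cos(2*pi*n*x), or b_n = 2*integral of f(x)sin(2*pi*n*x).
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        basis = math.sin(2 * math.pi * n * x) if kind == "sin" else math.cos(2 * math.pi * n * x)
        total += f(x) * basis
    return 2 * total * h

def square(x):
    return 0.5 if x < 0.5 else -0.5   # jump discontinuities at x = 1/2 and the period boundary

def triangle(x):
    return 1 - 4 * abs(x - 0.5)       # continuous, but with corners

for n in (1, 3, 5, 7):
    bn = coeff(square, n, "sin")
    an = coeff(triangle, n, "cos")
    print(n, n * bn, n * n * an)      # n*bn stays near 2/pi; n*n*an stays flat
```

The flat scaled columns reflect the 1/n and 1/n² decay rates of the two series.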

### Solutions

Since the Gibbs phenomenon comes from overshooting and undershooting, it may be eliminated by using kernels that are never negative, such as the Fejér kernel.^{[12]}^{[13]}

In practice, the difficulties associated with the Gibbs phenomenon can be ameliorated by using a smoother method of Fourier series summation, such as Fejér summation or Riesz summation, or by using sigma-approximation. Using a continuous wavelet transform, the wavelet Gibbs phenomenon never exceeds the Fourier Gibbs phenomenon.^{[14]} Also, using the discrete wavelet transform with Haar basis functions, the Gibbs phenomenon does not occur at all in the case of continuous data at jump discontinuities,^{[15]} and is minimal in the discrete case at large change points. In wavelet analysis, this is commonly referred to as the Longo phenomenon. In the polynomial interpolation setting, the Gibbs phenomenon can be mitigated using the S-Gibbs algorithm.^{[16]}
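The effect of a non-negative kernel can be seen numerically. In the sketch below (the order N = 199 and the grid resolution are arbitrary choices), the ordinary partial sum of a square wave's sine series overshoots the plateau value 1/2 near the jump, while the Fejér (Cesàro) mean of the same partial sums, whose kernel is non-negative, stays within the range of the function:

```python
import math

def partial_sum(x, N):
    # N-th partial Fourier sum of a square wave with unit jump at x = 0 (period 2*pi):
    # only odd harmonics appear; f = +1/2 on (0, pi) and -1/2 on (-pi, 0).
    return (2 / math.pi) * sum(math.sin(k * x) / k for k in range(1, N + 1, 2))

def fejer_mean(x, N):
    # Cesaro (Fejer) mean of the partial sums: each harmonic k is damped
    # by the triangular weight (1 - k/(N+1)) of the non-negative Fejer kernel.
    return (2 / math.pi) * sum((1 - k / (N + 1)) * math.sin(k * x) / k
                               for k in range(1, N + 1, 2))

N = 199
xs = [i * math.pi / 4000 for i in range(1, 2001)]   # fine grid on (0, pi/2]
peak_partial = max(partial_sum(x, N) for x in xs)
peak_fejer = max(fejer_mean(x, N) for x in xs)
print(peak_partial, peak_fejer)   # the partial sum exceeds 0.5; the Fejer mean does not
```

The triangular weights are exactly what averaging the partial sums produces, so this is Fejér summation in the sense described above.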

## Formal mathematical description of the Gibbs phenomenon

Let [math]\displaystyle{ f: {\mathbb R} \to {\mathbb R} }[/math] be a piecewise continuously differentiable function which is periodic with some period [math]\displaystyle{ L \gt 0 }[/math]. Suppose that at some point [math]\displaystyle{ x_0 }[/math], the left limit [math]\displaystyle{ f(x_0^-) }[/math] and right limit [math]\displaystyle{ f(x_0^+) }[/math] of the function [math]\displaystyle{ f }[/math] differ by a non-zero jump of [math]\displaystyle{ c }[/math]: [math]\displaystyle{ f(x_0^+) - f(x_0^-) = c \neq 0. }[/math]

For each integer [math]\displaystyle{ N \geq 1 }[/math], let [math]\displaystyle{ S_N f(x) }[/math] be the [math]\displaystyle{ N }[/math]^{th} partial Fourier series ([math]\displaystyle{ S_N }[/math] can be treated as a mathematical operator on functions):
[math]\displaystyle{ S_N f(x) := \sum_{-N \leq n \leq N} \widehat f(n) e^{\frac{i2\pi n x}{L}}
= \frac{1}{2} a_0 + \sum_{n=1}^N \left( a_n \cos\left(\frac{2\pi nx}{L}\right) + b_n \sin\left(\frac{2\pi nx}{L}\right) \right), }[/math]

where the Fourier coefficients [math]\displaystyle{ \widehat f(n), a_n, b_n }[/math] for integers [math]\displaystyle{ n }[/math] are given by the usual formulae [math]\displaystyle{ \widehat f(n) := \frac{1}{L} \int_0^L f(x) e^{-\frac{i2\pi nx}{L}}\, dx }[/math] [math]\displaystyle{ a_0 := \frac{1}{L} \int_0^L f(x)\ dx }[/math] [math]\displaystyle{ a_n := \frac{2}{L} \int_0^L f(x) \cos\left(\frac{2\pi nx}{L}\right)\, dx }[/math] [math]\displaystyle{ b_n := \frac{2}{L} \int_0^L f(x) \sin\left(\frac{2\pi nx}{L}\right)\, dx. }[/math]

Then we have [math]\displaystyle{ \lim_{N \to \infty} S_N f\left(x_0 + \frac{L}{2N}\right) = f(x_0^+) + c\cdot (0.089489872236\dots) }[/math] and [math]\displaystyle{ \lim_{N \to \infty} S_N f\left(x_0 - \frac{L}{2N}\right) = f(x_0^-) - c\cdot (0.089489872236\dots) }[/math] but [math]\displaystyle{ \lim_{N \to \infty} S_N f(x_0) = \frac{f(x_0^-) + f(x_0^+)}{2}. }[/math]

More generally, if [math]\displaystyle{ x_N }[/math] is any sequence of real numbers which converges to [math]\displaystyle{ x_0 }[/math] as [math]\displaystyle{ N \to \infty }[/math], and if the jump [math]\displaystyle{ c }[/math] is positive then [math]\displaystyle{ \limsup_{N \to \infty} S_N f(x_N) \leq f(x_0^+) + c\cdot (0.089489872236\dots) }[/math] and [math]\displaystyle{ \liminf_{N \to \infty} S_N f(x_N) \geq f(x_0^-) - c\cdot (0.089489872236\dots). }[/math]

If instead the jump of [math]\displaystyle{ c }[/math] is negative, one needs to interchange limit superior ([math]\displaystyle{ \limsup }[/math]) with limit inferior ([math]\displaystyle{ \liminf }[/math]), and also interchange the [math]\displaystyle{ \leq }[/math] and [math]\displaystyle{ \ge }[/math] signs, in the above two inequalities.
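These limits can be verified numerically for a concrete case. The sketch below uses the square wave of the earlier example with c = 1, L = 2π and a jump at x₀ = 0, whose N-th partial sum (N even) contains the odd harmonics up to N − 1:

```python
import math

def S_N(x, N):
    # N-th partial Fourier sum (N even) of the square wave with c = 1, L = 2*pi,
    # jump at x0 = 0: f = +1/2 on (0, pi), -1/2 on (-pi, 0).
    return (2 / math.pi) * sum(math.sin(k * x) / k for k in range(1, N, 2))

gibbs = 0.089489872236
for N in (100, 1000, 10000):
    right = S_N(math.pi / N, N)    # x0 + L/(2N), with L = 2*pi, is the point pi/N
    left = S_N(-math.pi / N, N)    # x0 - L/(2N)
    mid = S_N(0.0, N)              # the jump point itself
    print(N, right, left, mid)
```

As N grows, `right` approaches f(x₀⁺) + c·0.0894… = 0.5894…, `left` approaches f(x₀⁻) − c·0.0894… = −0.5894…, and `mid` is the midpoint 0 of the jump.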

### Proof of the Gibbs phenomenon in a general case

Stated again, let [math]\displaystyle{ f: {\mathbb R} \to {\mathbb R} }[/math] be a piecewise continuously differentiable function which is periodic with some period [math]\displaystyle{ L \gt 0 }[/math], and this function has multiple jump discontinuity points denoted [math]\displaystyle{ x_i }[/math] where [math]\displaystyle{ i = 0, 1, 2, }[/math] and so on. At each discontinuity, the amount of the vertical full jump is [math]\displaystyle{ c_i }[/math].

Then, [math]\displaystyle{ f }[/math] can be expressed as the sum of a continuous function [math]\displaystyle{ f_c }[/math] and a multi-step function [math]\displaystyle{ f_s }[/math] which is the sum of step functions such as^{[17]}

[math]\displaystyle{ f = f_c + f_s, }[/math][math]\displaystyle{ f_s = f_{s_1} + f_{s_2} + f_{s_3} + \cdots, }[/math][math]\displaystyle{ f_{s_i}(x) = \begin{cases} 0 & \text{if } x \leq x_i, \\ c_i & \text{if } x \gt x_i. \end{cases} }[/math]

The [math]\displaystyle{ N }[/math]^{th} partial Fourier series [math]\displaystyle{ S_N f(x) }[/math] of [math]\displaystyle{ f = f_c + f_s = f_c + \left ( f_{s_1} + f_{s_2} + f_{s_3} + \ldots \right ) }[/math] converges well at all points [math]\displaystyle{ x }[/math] except those near the discontinuities [math]\displaystyle{ x_i }[/math]. Around each discontinuity [math]\displaystyle{ x_i }[/math], only [math]\displaystyle{ f_{s_i} }[/math] exhibits a Gibbs phenomenon of its own (a maximum oscillatory convergence error of ~9% of the jump [math]\displaystyle{ c_i }[/math], as shown in the square wave analysis), because the other summands are continuous ([math]\displaystyle{ f_c }[/math]) or flat zero ([math]\displaystyle{ f_{s_j} }[/math] with [math]\displaystyle{ j \neq i }[/math]) around that point. This shows how the Gibbs phenomenon occurs at every discontinuity.

## Signal processing explanation

From a signal processing point of view, the Gibbs phenomenon is the step response of a low-pass filter, and the oscillations are called ringing or ringing artifacts. Truncating the Fourier transform of a signal on the real line, or the Fourier series of a periodic signal (equivalently, a signal on the circle), corresponds to filtering out the higher frequencies with an ideal (brick-wall) low-pass filter. This can be represented as convolution of the original signal with the impulse response of the filter (also known as the kernel), which is the sinc function. Thus, the Gibbs phenomenon can be seen as the result of convolving a Heaviside step function (if periodicity is not required) or a square wave (if periodic) with a sinc function: the oscillations in the sinc function cause the ripples in the output.

In the case of convolving with a Heaviside step function, the resulting function is exactly the integral of the sinc function, the sine integral; for a square wave the description is not as simply stated. For the step function, the magnitude of the undershoot is thus exactly the integral of the left tail until the first negative zero: for the normalized sinc of unit sampling period, this is [math]\displaystyle{ \int_{-\infty}^{-1} \frac{\sin(\pi x)}{\pi x}\,dx. }[/math] The overshoot is accordingly of the same magnitude: the integral of the right tail or (equivalently) the difference between the integral from negative infinity to the first positive zero minus 1 (the non-overshooting value).

The overshoot and undershoot can be understood thus: kernels are generally normalized to have integral 1, so they result in a mapping of constant functions to constant functions – otherwise they have gain. The value of a convolution at a point is a linear combination of the input signal, with coefficients (weights) the values of the kernel.

If a kernel is non-negative, such as for a Gaussian kernel, then the value of the filtered signal will be a convex combination of the input values (the coefficients (the kernel) integrate to 1, and are non-negative), and will thus fall between the minimum and maximum of the input signal – it will not undershoot or overshoot. If, on the other hand, the kernel assumes negative values, such as the sinc function, then the value of the filtered signal will instead be an affine combination of the input values and may fall outside of the minimum and maximum of the input signal, resulting in undershoot and overshoot, as in the Gibbs phenomenon.

Taking a longer expansion – cutting at a higher frequency – corresponds in the frequency domain to widening the brick-wall, which in the time domain corresponds to narrowing the sinc function and increasing its height by the same factor, leaving the integrals between corresponding points unchanged. This is a general feature of the Fourier transform: widening in one domain corresponds to narrowing and increasing height in the other. This results in the oscillations in sinc being narrower and taller, and (in the filtered function after convolution) yields oscillations that are narrower (and thus with smaller *area*) but which do *not* have reduced *magnitude*: cutting off at any finite frequency results in a sinc function, however narrow, with the same tail integrals. This explains the persistence of the overshoot and undershoot.

Thus, the features of the Gibbs phenomenon are interpreted as follows:

- the undershoot is due to the impulse response having a negative tail integral, which is possible because the function takes negative values;
- the overshoot offsets this, by symmetry (the overall integral does not change under filtering);
- the persistence of the oscillations is because increasing the cutoff narrows the impulse response but does not reduce its integral – the oscillations thus move towards the discontinuity, but do not decrease in magnitude.
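The cutoff-independence of the overshoot can be made concrete for the step-function case. As noted above, the step filtered by an ideal low-pass filter is the (scaled) sine integral; the sketch below (the cutoff values and quadrature step count are arbitrary) evaluates the filtered step at its first peak, x = 1/(2f_c), for two very different cutoffs:

```python
import math

def si(z, steps=100_000):
    # Sine integral Si(z) = integral from 0 to z of sin(t)/t dt, by midpoint rule.
    h = z / steps
    return sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(steps)) * h

def filtered_step(x, fc):
    # Unit step convolved with the ideal low-pass impulse response 2*fc*sinc(2*fc*t)
    # equals the scaled sine integral 1/2 + Si(2*pi*fc*x)/pi.
    return 0.5 + si(2 * math.pi * fc * x) / math.pi

for fc in (1.0, 10.0):
    peak = filtered_step(1 / (2 * fc), fc)   # first overshoot peak at x = 1/(2*fc)
    print(fc, peak)                          # same height for both cutoffs
```

Both cutoffs give the same peak height, 1/2 + Si(π)/π ≈ 1.0895: widening the brick-wall narrows the ripples but does not reduce them.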

## Square wave analysis

We examine the [math]\displaystyle{ N }[/math]^{th} partial Fourier series [math]\displaystyle{ S_N f(x) }[/math] of a square wave [math]\displaystyle{ f(x) }[/math] with the periodicity [math]\displaystyle{ L }[/math] and a discontinuity of a vertical "full" jump [math]\displaystyle{ c }[/math] from [math]\displaystyle{ y = y_0 }[/math] at [math]\displaystyle{ x = x_0 }[/math]. Because the case of odd [math]\displaystyle{ N }[/math] is very similar, let us just deal with the case when [math]\displaystyle{ N }[/math] is even:

[math]\displaystyle{ S_N f(x) = \left(y_0 + \frac{c}{2} \right) + \frac{2c}\pi \left ( \sin(\omega (x - x_0) ) + \frac{1}{3} \sin(3\omega (x - x_0)) + \cdots + \frac{1}{N-1} \sin((N-1)\omega (x - x_0)) \right ) }[/math]

with [math]\displaystyle{ \omega = \frac{2\pi}{L} }[/math]. (Here [math]\displaystyle{ N = 2N' }[/math], where [math]\displaystyle{ N' }[/math] is the number of non-zero sinusoidal components, so some references use [math]\displaystyle{ N' }[/math] instead of [math]\displaystyle{ N }[/math].) Substituting [math]\displaystyle{ x = x_0 }[/math] (the point of discontinuity), we obtain [math]\displaystyle{ S_N f(x_0) = \left(y_0 + \frac{c}{2}\right) = \frac{f(x_0^-) + f(x_0^+)}{2} = \frac{y_0 + (y_0 + c)}{2} }[/math] as claimed above. (Every sine term vanishes at [math]\displaystyle{ x = x_0 }[/math]; the only surviving term is the constant, which is the average value of the Fourier series.)

Next, we find the first maximum of the oscillation around the discontinuity [math]\displaystyle{ x = x_0 }[/math] by checking the first and second derivatives of [math]\displaystyle{ S_N f(x) }[/math]. The first condition for a maximum is that the first derivative equals zero:

[math]\displaystyle{ \frac{d}{dx} S_N f(x) = \frac{2c\omega}{\pi} \left ( \cos(\omega (x - x_0)) + \cos(3\omega (x - x_0)) + \cdots + \cos((N-1)\omega (x - x_0)) \right ) = \frac{c\omega}{\pi} \frac{\sin(N\omega (x - x_0))}{\sin(\omega (x - x_0))} = 0 }[/math]

where the second equality follows from one of Lagrange's trigonometric identities. Solving this condition gives [math]\displaystyle{ x - x_0 = k\pi / (N\omega) = kL / (2N) }[/math] for integers [math]\displaystyle{ k }[/math], excluding multiples of [math]\displaystyle{ N }[/math] (which make the denominator vanish as well); thus [math]\displaystyle{ k = 1, 2, \ldots, N - 1, N + 1, \ldots }[/math] and their negatives are allowed.

The second derivative of [math]\displaystyle{ S_N f(x) }[/math] at [math]\displaystyle{ x - x_0 = kL / (2N) }[/math] is

[math]\displaystyle{ \frac{d^2}{dx^2} S_N f(x) = \frac{c\omega^2}{\pi} \left ( \frac{N\cos(N\omega (x - x_0))\sin(\omega (x - x_0)) - \sin(N\omega (x - x_0))\cos(\omega (x - x_0))}{\sin^2(\omega (x - x_0))} \right ), }[/math][math]\displaystyle{ \left. \frac{d^2}{dx^2} S_N f(x) \right \vert _{x_0 + kL / (2N)} = \begin{cases} \frac{4\pi c}{L^2} \frac{N }{\sin(k\pi/N)}, & \text{if }k \text{ is even,} \\[4pt] \frac{4\pi c}{L^2} \frac{-N }{\sin(k\pi/N)}, & \text{if }k \text{ is odd.} \end{cases} }[/math]

Thus, the first maximum occurs at [math]\displaystyle{ x = x_0 + L / (2N) }[/math] ([math]\displaystyle{ k = 1 }[/math]) and [math]\displaystyle{ S_N f(x) }[/math] at this [math]\displaystyle{ x }[/math] value is [math]\displaystyle{ S_N f\left(x_0 + \frac{L}{2N} \right) = \left(y_0 + \frac{c}{2}\right) + \frac{2c}{\pi} \left ( \sin\left(\frac{\pi}{N}\right) + \frac{1}{3} \sin\left(\frac{3\pi}{N}\right) + \cdots + \frac{1}{N-1} \sin\left( \frac{(N-1)\pi}{N} \right) \right ) }[/math]

If we introduce the normalized sinc function [math]\displaystyle{ \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x} }[/math] for [math]\displaystyle{ x \neq 0 }[/math], we can rewrite this as [math]\displaystyle{ S_N f\left(x_0 + \frac{L}{2N} \right) = (y_0 + \frac{c}{2}) + c \left[ \frac{2}{N} \operatorname{sinc}\left(\frac{1}{N}\right) + \frac{2}{N} \operatorname{sinc}\left(\frac{3}{N}\right)+ \cdots + \frac{2}{N} \operatorname{sinc}\left( \frac{(N-1)}{N} \right) \right]. }[/math]

For a sufficiently large [math]\displaystyle{ N }[/math], the expression in the square brackets is a Riemann sum approximation to the integral [math]\displaystyle{ \int_0^1 \operatorname{sinc}(x)\ dx }[/math] (more precisely, it is a midpoint rule approximation with spacing [math]\displaystyle{ \tfrac{2}{N} }[/math]). Since the sinc function is continuous, this approximation converges to the integral as [math]\displaystyle{ N \to \infty }[/math]. Thus, we have

[math]\displaystyle{ \begin{align} \lim_{N \to \infty} S_N f\left(x_0 + \frac{L}{2N}\right) & = (y_0 + \frac{c}{2}) + c \int_0^1 \operatorname{sinc}(x)\, dx \\[8pt] & = (y_0 + \frac{c}{2}) + \frac{c}{\pi} \int_{x=0}^1 \frac{\sin(\pi x)}{\pi x}\, d(\pi x) \\[8pt] & = (y_0 + \frac{c}{2}) + \frac{c}{\pi} \int_0^\pi \frac{\sin(t)}{t}\ dt \quad = \quad (y_0 + c) + c \cdot (0.089489872236\dots), \end{align} }[/math]

which was claimed in the previous section. A similar computation shows

[math]\displaystyle{ \lim_{N \to \infty} S_N f\left(x_0 -\frac{L}{2N}\right) = \left(y_0 + \frac{c}{2}\right) - c \int_0^1 \operatorname{sinc}(x)\, dx = y_0 - c \cdot (0.089489872236\dots). }[/math]
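The midpoint-rule convergence used in this derivation is easy to observe numerically (a Python sketch; the chosen values of N are arbitrary):

```python
import math

def sinc(x):
    # Normalized sinc, with the removable singularity at 0 filled in.
    return math.sin(math.pi * x) / (math.pi * x) if x else 1.0

def bracket(N):
    # The bracketed sum from the derivation: (2/N) * sum of sinc(k/N) over odd k < N
    # (N even), i.e. a midpoint rule with spacing 2/N for the integral of sinc on [0, 1].
    return sum((2 / N) * sinc(k / N) for k in range(1, N, 2))

for N in (10, 100, 1000):
    print(N, bracket(N))   # approaches Si(pi)/pi = 0.58949...
```

The printed values approach ∫₀¹ sinc(x) dx = Si(π)/π ≈ 0.5894898722, i.e. ½ plus the overshoot fraction 0.0894898722….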

## Consequences

The Gibbs phenomenon is undesirable because it causes artifacts, namely clipping from the overshoot and undershoot, and ringing artifacts from the oscillations. In the case of low-pass filtering, these can be reduced or eliminated by using different low-pass filters.

In MRI, the Gibbs phenomenon causes artifacts in the presence of adjacent regions of markedly differing signal intensity. This is most commonly encountered in spinal MRIs where the Gibbs phenomenon may simulate the appearance of syringomyelia.

The Gibbs phenomenon manifests as a cross pattern artifact in the discrete Fourier transform of an image,^{[18]} where most images (e.g. micrographs or photographs) have a sharp discontinuity between boundaries at the top / bottom and left / right of an image. When periodic boundary conditions are imposed in the Fourier transform, this jump discontinuity is represented by continuum of frequencies along the axes in reciprocal space (i.e. a cross pattern of intensity in the Fourier transform).

Although this article has mainly focused on the difficulty of constructing discontinuities in the time domain with only a partial Fourier series, it is also important to consider that, because the inverse Fourier transform is extremely similar to the Fourier transform, there is an equivalent difficulty in constructing discontinuities in the frequency domain using only a partial Fourier series. Thus, for instance, because idealized brick-wall and rectangular filters have discontinuities in the frequency domain, their exact representation in the time domain necessarily requires an infinitely long sinc filter impulse response: a finite impulse response results in Gibbs rippling in the frequency response near the cut-off frequencies, though this rippling can be reduced by windowing finite impulse response filters (at the expense of wider transition bands).^{[19]}

## See also

- Mach bands
- Pinsky phenomenon
- Runge's phenomenon (a similar phenomenon in polynomial approximations)
- σ-approximation which adjusts a Fourier summation to eliminate the Gibbs phenomenon which would otherwise occur at discontinuities
- Sine integral

## Notes

1. H. S. Carslaw (1930). "Chapter IX". *Introduction to the theory of Fourier's series and integrals* (3rd ed.). New York: Dover Publications.
2. Vretblad 2000, Section 4.7.
3. "6.7: Gibbs Phenomena". LibreTexts Engineering. 2020-05-24. https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Signals_and_Systems_(Baraniuk_et_al.)/06%3A_Continuous_Time_Fourier_Series_(CTFS)/6.07%3A_Gibbs_Phenomena
4. M. Pinsky (2002). *Introduction to Fourier Analysis and Wavelets*. United States of America: Brooks/Cole. p. 27. https://archive.org/details/introductiontofo00pins_232
5. Steven R. Finch (2003). *Mathematical Constants*. Cambridge University Press. Section 4.1, Gibbs–Wilbraham constant, p. 249.
6. Wilbraham, Henry (1848). "On a certain periodic function". *The Cambridge and Dublin Mathematical Journal* **3**: 198–201.
7. *Encyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen* **II T. 1 H 1**. Wiesbaden: Vieweg+Teubner Verlag. 1914. p. 1049. http://gdz.sub.uni-goettingen.de/pdfcache/PPN360506208/PPN360506208___LOG_0158.pdf
8. Hammack, Bill; Kranz, Steve; Carpenter, Bruce (2014). *Albert Michelson's Harmonic Analyzer: A Visual Tour of a Nineteenth Century Machine that Performs Fourier Analysis*. Articulate Noise Books. ISBN 9780983966173. http://www.engineerguy.com/fourier/
9. Hewitt, Edwin; Hewitt, Robert E. (1979). "The Gibbs-Wilbraham phenomenon: An episode in Fourier analysis". *Archive for History of Exact Sciences* **21** (2): 129–160. doi:10.1007/BF00330404.
10. Bôcher, Maxime (April 1906). "Introduction to the theory of Fourier's series". *Annals of Mathematics*, second series, **7** (3): 81–152. The Gibbs phenomenon is discussed on pages 123–132; Gibbs's role is mentioned on page 129.
11. Carslaw, H. S. (1 October 1925). "A historical note on Gibbs' phenomenon in Fourier's series and integrals". *Bulletin of the American Mathematical Society* **31** (8): 420–424. doi:10.1090/s0002-9904-1925-04081-1.
12. Gottlieb, David; Shu, Chi-Wang (January 1997). "On the Gibbs Phenomenon and Its Resolution". *SIAM Review* **39** (4): 644–668. doi:10.1137/S0036144596301390.
13. Gottlieb, Sigal; Jung, Jae-Hun; Kim, Saeja (March 2011). "A Review of David Gottlieb's Work on the Resolution of the Gibbs Phenomenon". *Communications in Computational Physics* **9** (3): 497–519. doi:10.4208/cicp.301109.170510s.
14. Rasmussen, Henrik O. (1993). "The Wavelet Gibbs Phenomenon". In *Wavelets, Fractals and Fourier Transforms* (eds. M. Farge et al.). Oxford: Clarendon Press.
15. Kelly, Susan E. (1995). "Gibbs Phenomenon for Wavelets". *Applied and Computational Harmonic Analysis* (3). http://www.uwlax.edu/faculty/kelly/Publications/GibbsJan.pdf
16. De Marchi, Stefano; Marchetti, Francesco; Perracchione, Emma; Poggiali, Davide (2020). "Polynomial interpolation via mapped bases without resampling". *J. Comput. Appl. Math.* **364**: 112347. doi:10.1016/j.cam.2019.112347.
17. Fay, Temple H.; Kloppers, P. Hendrik (2001). "The Gibbs' phenomenon". *International Journal of Mathematical Education in Science and Technology* **32** (1): 73–89. doi:10.1080/00207390117151.
18. Hovden, R.; Jiang, Y.; Xin, H. L.; Kourkoutis, L. F. (2015). "Periodic Artifact Reduction in Fourier Transforms of Full Field Atomic Resolution Images". *Microscopy and Microanalysis* **21** (2): 436–441. doi:10.1017/S1431927614014639. PMID 25597865.
19. "Gibbs phenomenon". RecordingBlogs. https://www.recordingblogs.com/wiki/gibbs-phenomenon

## References

- Gibbs, J. Willard (1898). "Fourier's Series". *Nature* **59** (1522): 200. doi:10.1038/059200b0.
- Gibbs, J. Willard (1899). "Fourier's Series". *Nature* **59** (1539): 606. doi:10.1038/059606a0.
- Michelson, A. A.; Stratton, S. W. (1898). "A new harmonic analyser". *Philosophical Magazine* **5** (45): 85–91.
- Zygmund, Antoni (1959). *Trigonometric Series* (2nd ed.). Cambridge University Press.
- Wilbraham, Henry (1848). "On a certain periodic function". *The Cambridge and Dublin Mathematical Journal* **3**: 198–201. https://books.google.com/books?id=JrQ4AAAAMAAJ&pg=PA198
- Nahin, Paul J. (2006). *Dr. Euler's Fabulous Formula*. Princeton University Press. Ch. 4, Sect. 4.
- Vretblad, Anders (2000). *Fourier Analysis and its Applications*. Graduate Texts in Mathematics **223**. New York: Springer Publishing. p. 93. ISBN 978-0-387-00836-3.

## External links

- Hazewinkel, Michiel, ed. (2001). "Gibbs phenomenon". *Encyclopedia of Mathematics*. Springer Science+Business Media / Kluwer Academic Publishers. ISBN 978-1-55608-010-4. https://www.encyclopediaofmath.org/index.php?title=p/g044410
- Weisstein, Eric W. "Gibbs Phenomenon". From MathWorld, a Wolfram Web Resource.
- Prandoni, Paolo. "Gibbs Phenomenon".
- Radaelli-Sanchez, Ricardo; Baraniuk, Richard. "Gibbs Phenomenon". The Connexions Project. (Creative Commons Attribution License)
- Carslaw, Horatio S. *Introduction to the theory of Fourier's series and integrals* at archive.org.
- A Python implementation of the S-Gibbs algorithm mitigating the Gibbs phenomenon: https://github.com/pog87/FakeNodes

Original source: https://en.wikipedia.org/wiki/Gibbs phenomenon.