Gauss–Legendre method


In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s.[1]

All Gauss–Legendre methods are A-stable.[2]
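A-stability can be checked concretely for the one-stage method (the implicit midpoint rule), whose stability function is the (1,1) Padé approximant of the exponential, R(z) = (1 + z/2)/(1 − z/2). The following Python sketch (illustrative only; the function name is ours) samples the closed left half-plane and confirms |R(z)| ≤ 1:

```python
# A-stability check for the implicit midpoint rule (Gauss-Legendre, s = 1).
# Its stability function is the (1,1) Pade approximant of exp(z):
#   R(z) = (1 + z/2) / (1 - z/2),
# and A-stability means |R(z)| <= 1 whenever Re(z) <= 0.

def stability_function(z):
    """Stability function of the implicit midpoint rule."""
    return (1 + z / 2) / (1 - z / 2)

# Sample points in the closed left half-plane, including the imaginary axis.
points = [complex(re, im)
          for re in (-10.0, -1.0, -0.1, 0.0)
          for im in (-5.0, -0.5, 0.0, 0.5, 5.0)]

assert all(abs(stability_function(z)) <= 1.0 + 1e-12 for z in points)
print("A-stability holds on all sampled points")
```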

The Gauss–Legendre method of order two is the implicit midpoint rule. Its Butcher tableau is:

[math]\displaystyle{ \begin{array}{c|c} \tfrac12 & \tfrac12 \\ \hline & 1 \end{array} }[/math]
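Because the stage equation k₁ = f(x + (h/2)k₁) is implicit, even the midpoint rule requires an iterative solve. A minimal Python sketch (fixed-point iteration, which converges for small enough h; Newton's method is the usual choice for stiff problems):

```python
# One step of the implicit midpoint rule for x' = f(x):
#   k = f(x + h/2 * k),  x_next = x + h * k.
# The stage equation is solved by fixed-point iteration here.
import math

def implicit_midpoint_step(f, x, h, tol=1e-14, max_iter=100):
    k = f(x)  # explicit Euler slope as the initial guess
    for _ in range(max_iter):
        k_new = f(x + 0.5 * h * k)
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    return x + h * k

# Example: x' = -x with x(0) = 1, integrated to t = 1.
x, h = 1.0, 0.01
for _ in range(100):
    x = implicit_midpoint_step(lambda v: -v, x, h)
print(x, "vs exact", math.exp(-1.0))
```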

The Gauss–Legendre method of order four has Butcher tableau:

[math]\displaystyle{ \begin{array}{c|cc}
\tfrac12 - \tfrac16 \sqrt3 & \tfrac14 & \tfrac14 - \tfrac16 \sqrt3 \\
\tfrac12 + \tfrac16 \sqrt3 & \tfrac14 + \tfrac16 \sqrt3 & \tfrac14 \\
\hline
& \tfrac12 & \tfrac12
\end{array} }[/math]
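The order-four tableau can be exercised directly. The Python sketch below (illustrative; the fixed-point stage solver assumes a non-stiff problem) applies the method to x' = −x and checks that halving the step size shrinks the error by roughly 2⁴ = 16:

```python
# Order check for the two-stage Gauss-Legendre method on x' = -x.
# Stages are solved by simple fixed-point iteration, adequate for
# this non-stiff test problem.
import math

SQ3 = math.sqrt(3.0)
A = ((0.25, 0.25 - SQ3 / 6), (0.25 + SQ3 / 6, 0.25))  # Butcher matrix
B = (0.5, 0.5)                                        # weights

def gl4_step(f, x, h, iterations=50):
    k1 = k2 = f(x)
    for _ in range(iterations):
        k1, k2 = (f(x + h * (A[0][0] * k1 + A[0][1] * k2)),
                  f(x + h * (A[1][0] * k1 + A[1][1] * k2)))
    return x + h * (B[0] * k1 + B[1] * k2)

def integrate(h, steps):
    x = 1.0
    for _ in range(steps):
        x = gl4_step(lambda v: -v, x, h)
    return x

e1 = abs(integrate(0.1, 10) - math.exp(-1.0))
e2 = abs(integrate(0.05, 20) - math.exp(-1.0))
print("observed order:", math.log2(e1 / e2))  # close to 4
```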

The Gauss–Legendre method of order six has Butcher tableau:

[math]\displaystyle{ \begin{array}{c|ccc}
\tfrac12 - \tfrac1{10} \sqrt{15} & \tfrac5{36} & \tfrac29 - \tfrac1{15} \sqrt{15} & \tfrac5{36} - \tfrac1{30} \sqrt{15} \\
\tfrac12 & \tfrac5{36} + \tfrac1{24} \sqrt{15} & \tfrac29 & \tfrac5{36} - \tfrac1{24} \sqrt{15} \\
\tfrac12 + \tfrac1{10} \sqrt{15} & \tfrac5{36} + \tfrac1{30} \sqrt{15} & \tfrac29 + \tfrac1{15} \sqrt{15} & \tfrac5{36} \\
\hline
& \tfrac5{18} & \tfrac49 & \tfrac5{18}
\end{array} }[/math]
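The weights and abscissas of this tableau form the three-point Gauss–Legendre quadrature rule on [0, 1], which is exact for polynomials of degree at most 5, i.e. sum_i b_i c_i^(k-1) = 1/k for k = 1, …, 6. A quick Python verification:

```python
# The weights b_i and abscissas c_i of the order-six tableau satisfy
# the quadrature conditions sum_i b_i * c_i**(k-1) == 1/k for k = 1..6,
# as expected of 3-point Gauss-Legendre quadrature on [0, 1].
import math

s15 = math.sqrt(15.0)
b = (5 / 18, 4 / 9, 5 / 18)
c = (0.5 - s15 / 10, 0.5, 0.5 + s15 / 10)

for k in range(1, 7):
    q = sum(bi * ci ** (k - 1) for bi, ci in zip(b, c))
    assert math.isclose(q, 1 / k), (k, q)
print("quadrature conditions hold for k = 1..6")
```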

The computational cost of higher-order Gauss–Legendre methods is usually excessive, and thus, they are rarely used.[3]

Intuition

Gauss–Legendre Runge–Kutta (GLRK) methods solve an ordinary differential equation [math]\displaystyle{ \dot{x} = f(x) }[/math] with [math]\displaystyle{ x(0) = x_0 }[/math]. The distinguishing feature of GLRK is the estimation of [math]\displaystyle{ x(h) - x_0 = \int_0^h dt \, f( x(t) ) }[/math] with Gaussian quadrature:

[math]\displaystyle{ x(h) = x(0) + \frac{h}{2} \sum_{i=1}^\ell w_i k_i + O(h^{2\ell+1}), }[/math]

where [math]\displaystyle{ k_i = f( x( h c_i) ) }[/math] are the sampled velocities, [math]\displaystyle{ w_i }[/math] are the quadrature weights, [math]\displaystyle{ c_i = \tfrac{1}{2} (1+r_i) }[/math] are the abscissas, and [math]\displaystyle{ r_i }[/math] are the roots [math]\displaystyle{ P_\ell(r_i) = 0 }[/math] of the Legendre polynomial of degree [math]\displaystyle{ \ell }[/math]. A further approximation is needed, because [math]\displaystyle{ k_i }[/math] cannot be evaluated directly: it depends on the unknown values [math]\displaystyle{ x(h c_i) }[/math]. To maintain a local truncation error of order [math]\displaystyle{ O(h^{2\ell+1}) }[/math], the [math]\displaystyle{ k_i }[/math] only need to be accurate to order [math]\displaystyle{ O(h^{2\ell}) }[/math]. The implicit Runge–Kutta definition [math]\displaystyle{ k_i = f{\left(x_0 + h \sum_j a_{ij} k_j \right)} }[/math] accomplishes this. It is an implicit constraint that must be solved by a root-finding algorithm such as Newton's method. The values of the Runge–Kutta coefficients [math]\displaystyle{ a_{ij} }[/math] can be determined from a Taylor series expansion in [math]\displaystyle{ h }[/math].
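For example, with two stages the roots of the Legendre polynomial P_2(r) = (3r² − 1)/2 are ±1/√3, and mapping them to the unit interval via (1 + r)/2 reproduces the abscissas 1/2 ∓ √3/6 of the order-four tableau. A short Python check:

```python
# For l = 2 stages: the roots of the Legendre polynomial
# P_2(r) = (3 * r**2 - 1) / 2 are r = -/+ 1/sqrt(3); mapping them to
# [0, 1] via (1 + r)/2 gives the order-four abscissas 1/2 -/+ sqrt(3)/6.
import math

roots = (-1 / math.sqrt(3.0), 1 / math.sqrt(3.0))
c = [(1 + r) / 2 for r in roots]

# The candidate roots really are zeros of P_2 (up to roundoff).
assert all(math.isclose((3 * r ** 2 - 1) / 2, 0.0, abs_tol=1e-9) for r in roots)
assert math.isclose(c[0], 0.5 - math.sqrt(3.0) / 6)
assert math.isclose(c[1], 0.5 + math.sqrt(3.0) / 6)
print("c =", c)
```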

Practical example

The Gauss–Legendre methods are implicit, so in general they cannot be applied exactly. Instead one makes an educated guess of [math]\displaystyle{ k_i }[/math], and then uses Newton's method to converge arbitrarily close to the true solution. Below is a MATLAB script that implements the Gauss–Legendre method of order four and uses it to integrate the Lorenz system.

% starting point
x = [ 10.5440; 4.1124; 35.8233];

dt = 0.01;
N = 10000;
x_series = [x];
for i = 1:N
  x = gauss_step(x, @lorenz_dynamics, dt, 1e-7, 1, 100);
  x_series = [x_series x];
end

plot3( x_series(1,:), x_series(2,:), x_series(3,:) );
set(gca,'xtick',[],'ytick',[],'ztick',[]);
title('Lorenz Attractor');
return;

function [td, j] = lorenz_dynamics(state)
  % return a time derivative and a Jacobian of that time derivative
  x = state(1);
  y = state(2);
  z = state(3);

  sigma = 10;
  beta  = 8/3;
  rho   = 28;

  td = [sigma*(y-x); x*(rho-z)-y; x*y-beta*z];

  j = [-sigma, sigma, 0;
        rho-z, -1, -x;
        y, x, -beta];
end

function x_next = gauss_step( x, dynamics, dt, threshold, damping, max_iterations )
  [d,~] = size(x);
  sq3 = sqrt(3);
  if damping > 1 || damping <= 0
    error('damping should be between 0 and 1.')
  end

  % Use explicit Euler steps as initial guesses
  [k,~] = dynamics(x);
  x1_guess = x + (1/2-sq3/6)*dt*k;
  x2_guess = x + (1/2+sq3/6)*dt*k;
  [k1,~] = dynamics(x1_guess);
  [k2,~] = dynamics(x2_guess);

  a11 = 1/4;
  a12 = 1/4 - sq3/6;
  a21 = 1/4 + sq3/6;
  a22 = 1/4;

  % Residual of the implicit stage equations. (Named 'residual' rather
  % than 'error' so the built-in error() function is not shadowed.)
  residual = @(k1, k2) [k1 - dynamics(x+(a11*k1+a12*k2)*dt); k2 - dynamics(x+(a21*k1+a22*k2)*dt)];
  er = residual(k1, k2);
  iteration = 1;
  while (norm(er) > threshold && iteration < max_iterations)
    fprintf('Newton iteration %d: error is %f.\n', iteration, norm(er) );
    iteration = iteration + 1;

    [~, j1] = dynamics(x+(a11*k1+a12*k2)*dt);
    [~, j2] = dynamics(x+(a21*k1+a22*k2)*dt);

    % Jacobian of the residual with respect to [k1; k2]
    j = [eye(d) - dt*a11*j1, -dt*a12*j1;
         -dt*a21*j2, eye(d) - dt*a22*j2];

    k_next = [k1;k2] - damping * linsolve(j, er);
    k1 = k_next(1:d);
    k2 = k_next(d+(1:d));

    er = residual(k1, k2);
  end
  if norm(er) > threshold
    error('Newton did not converge by %d iterations.', max_iterations);
  end
  x_next = x + dt / 2 * (k1 + k2);
end

This algorithm is surprisingly cheap: the error in [math]\displaystyle{ k_i }[/math] can fall below [math]\displaystyle{ 10^{-12} }[/math] in as few as two Newton iterations. The only extra work compared to explicit Runge–Kutta methods is the computation of the Jacobian.

(Figure: an integrated orbit near the Lorenz attractor.)

Time-symmetric variants

At the cost of adding an additional implicit relation, these methods can be adapted to have time-reversal symmetry. In these methods, the averaged position [math]\displaystyle{ (x_f+x_i)/2 }[/math] is used in computing [math]\displaystyle{ k_i }[/math], instead of just the initial position [math]\displaystyle{ x_i }[/math] as in standard Runge–Kutta methods. The method of order 2 is just the implicit midpoint rule.

[math]\displaystyle{ k_1 = f\left(\frac{x_f+x_i}{2}\right) }[/math]
[math]\displaystyle{ x_f = x_i + h k_1 }[/math]
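A defining property of these schemes is exact time-reversal symmetry: stepping forward by h and then backward by −h (with the implicit relations solved to tolerance) recovers the initial state. A Python sketch of the order-two method on an arbitrary smooth test dynamics (fixed-point iteration; names are illustrative):

```python
# Time-reversal symmetry of the order-two method (implicit midpoint):
# stepping x0 -> x1 with step h and then x1 -> x_back with step -h
# recovers x0 up to the tolerance of the implicit solve.
import math

def midpoint_step(f, x_i, h, iterations=200):
    x_f = x_i + h * f(x_i)  # explicit Euler initial guess
    for _ in range(iterations):
        x_f = x_i + h * f((x_f + x_i) / 2)
    return x_f

f = lambda x: math.sin(x) - x ** 3  # arbitrary smooth test dynamics
x0, h = 0.7, 0.05
x1 = midpoint_step(f, x0, h)
x_back = midpoint_step(f, x1, -h)
print("reversal defect:", abs(x_back - x0))
```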

The method of order 4 with 2 stages is as follows.

[math]\displaystyle{ k_1 = f\left( \frac{x_f+x_i}{2} - \frac{\sqrt{3}}{6} h k_2\right) }[/math]
[math]\displaystyle{ k_2 = f\left( \frac{x_f+x_i}{2} + \frac{\sqrt{3}}{6} h k_1\right) }[/math]
[math]\displaystyle{ x_f = x_i + \frac{h}{2}(k_1 + k_2) }[/math]
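The same reversibility holds for the two-stage method. In the Python sketch below (fixed-point iteration, adequate for small h; names are illustrative), the mutually implicit relations for k1, k2, and x_f are iterated together:

```python
# Time-reversal symmetry of the two-stage (order-four) method. The
# stages k1, k2 and the endpoint x_f are mutually implicit, so all
# three relations are iterated to a fixed point.
import math

def symmetric_gl4_step(f, x_i, h, iterations=200):
    s = math.sqrt(3.0) / 6
    k1 = k2 = f(x_i)
    x_f = x_i + h * k1  # explicit Euler initial guess
    for _ in range(iterations):
        m = (x_f + x_i) / 2  # averaged position
        k1, k2 = f(m - s * h * k2), f(m + s * h * k1)
        x_f = x_i + h / 2 * (k1 + k2)
    return x_f

f = lambda x: -x
x0, h = 1.0, 0.1
x1 = symmetric_gl4_step(f, x0, h)
x_back = symmetric_gl4_step(f, x1, -h)
print("step error:", abs(x1 - math.exp(-0.1)))
print("reversal defect:", abs(x_back - x0))
```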

The method of order 6 with 3 stages is as follows.

[math]\displaystyle{ k_1 = f\left( \frac{x_f + x_i}{2} - \frac{\sqrt{15}}{15} h k_2 - \frac{\sqrt{15}}{30} h k_3 \right) }[/math]
[math]\displaystyle{ k_2 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{24} h k_1 - \frac{\sqrt{15}}{24} h k_3 \right) }[/math]
[math]\displaystyle{ k_3 = f\left( \frac{x_f + x_i}{2} + \frac{\sqrt{15}}{30} h k_1 + \frac{\sqrt{15}}{15} h k_2 \right) }[/math]
[math]\displaystyle{ x_f = x_i + \frac{h}{18}( 5 k_1 + 8k_2 + 5k_3) }[/math]

Notes

  1. Iserles 1996, p. 47
  2. Iserles 1996, p. 63
  3. Iserles 1996, p. 47

References

  Iserles, Arieh (1996). A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press.