Modified Richardson iteration
Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.
We seek the solution to a set of linear equations, expressed in matrix terms as
- [math]\displaystyle{ A x = b.\, }[/math]
The Richardson iteration is
- [math]\displaystyle{ x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right), }[/math]
where [math]\displaystyle{ \omega }[/math] is a scalar parameter that has to be chosen such that the sequence [math]\displaystyle{ x^{(k)} }[/math] converges.
It is easy to see that the method has the correct fixed points, because if it converges, then [math]\displaystyle{ x^{(k+1)} \approx x^{(k)} }[/math] and [math]\displaystyle{ x^{(k)} }[/math] has to approximate a solution of [math]\displaystyle{ A x = b }[/math].
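The update above maps directly to a few lines of code. The following is a minimal sketch (not from the source) using NumPy; the matrix, right-hand side, and parameter values are illustrative assumptions, with [math]\displaystyle{ \omega }[/math] chosen inside the convergence range discussed below.

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=1000):
    """Modified Richardson iteration: x_{k+1} = x_k + omega * (b - A x_k)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = b - A @ x                    # residual b - A x_k
        if np.linalg.norm(r) < tol:      # stop once the residual is negligible
            break
        x = x + omega * r                # Richardson update
    return x, k

# Illustrative 2x2 symmetric positive definite example (values are assumptions).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = richardson(A, b, omega=0.4)   # 0.4 < 2 / lambda_max(A) here
print(x, iters, np.allclose(A @ x, b))
```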
Convergence
Subtracting the exact solution [math]\displaystyle{ x }[/math] and introducing the notation [math]\displaystyle{ e^{(k)} = x^{(k)}-x }[/math] for the error, we obtain the recurrence for the errors
- [math]\displaystyle{ e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I-\omega A) e^{(k)}. }[/math]
Thus,
- [math]\displaystyle{ \|e^{(k+1)}\| = \|(I-\omega A) e^{(k)}\|\leq \|I-\omega A\| \|e^{(k)}\|, }[/math]
for any vector norm and the corresponding induced matrix norm. Hence, if [math]\displaystyle{ \|I-\omega A\|\lt 1 }[/math] for some induced matrix norm, the method converges.
Suppose that [math]\displaystyle{ A }[/math] is symmetric positive definite and that [math]\displaystyle{ (\lambda_j)_j }[/math] are the eigenvalues of [math]\displaystyle{ A }[/math]. The error converges to [math]\displaystyle{ 0 }[/math] if [math]\displaystyle{ | 1 - \omega \lambda_j |\lt 1 }[/math] for all eigenvalues [math]\displaystyle{ \lambda_j }[/math]. Since all eigenvalues of [math]\displaystyle{ A }[/math] are positive, this is guaranteed if [math]\displaystyle{ \omega }[/math] is chosen such that [math]\displaystyle{ 0 \lt \omega \lt \omega_\text{max}\,, \ \omega_\text{max}:= 2/\lambda_{\text{max}}(A) }[/math]. The optimal choice, minimizing all [math]\displaystyle{ | 1 - \omega \lambda_j | }[/math], is [math]\displaystyle{ \omega_\text{opt} := 2/(\lambda_\text{min}(A)+\lambda_\text{max}(A)) }[/math], which gives the simplest Chebyshev iteration. This optimal choice yields a spectral radius of
- [math]\displaystyle{ \min_{\omega\in (0,\omega_\text{max}) } \rho (I-\omega A) = \rho (I-\omega_\text{opt} A) = 1 - \frac{2}{\kappa(A)+1} \,, }[/math]
where [math]\displaystyle{ \kappa(A) }[/math] is the condition number.
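As a concrete check of these formulas, the short sketch below (an illustration, not part of the original text) computes [math]\displaystyle{ \omega_\text{opt} }[/math] for a small assumed symmetric positive definite matrix and verifies that the resulting spectral radius equals [math]\displaystyle{ 1 - 2/(\kappa(A)+1) }[/math].

```python
import numpy as np

# Assumed small SPD matrix, used only to illustrate the formulas above.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)                  # eigenvalues, ascending
lam_min, lam_max = lam[0], lam[-1]

omega_opt = 2.0 / (lam_min + lam_max)        # optimal Richardson parameter
rho = np.max(np.abs(1.0 - omega_opt * lam))  # spectral radius of I - omega_opt * A
kappa = lam_max / lam_min                    # condition number of an SPD matrix

print(omega_opt, rho, 1.0 - 2.0 / (kappa + 1.0))  # last two values agree
```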
If there are both positive and negative eigenvalues, the method will diverge for any [math]\displaystyle{ \omega }[/math] if the initial error [math]\displaystyle{ e^{(0)} }[/math] has nonzero components in the corresponding eigenvectors.
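A small numerical illustration of this divergence (an assumed example, not from the source): for a diagonal matrix with eigenvalues of both signs, one of the factors [math]\displaystyle{ | 1 - \omega \lambda_j | }[/math] exceeds [math]\displaystyle{ 1 }[/math] for every nonzero [math]\displaystyle{ \omega }[/math], so the error component along that eigenvector grows.

```python
import numpy as np

# Diagonal matrix with one positive and one negative eigenvalue (assumed example).
A = np.diag([1.0, -1.0])
for omega in (0.5, -0.5, 0.1):
    e = np.array([1.0, 1.0])                 # error with components along both eigenvectors
    for _ in range(50):
        e = (np.eye(2) - omega * A) @ e      # error recurrence e_{k+1} = (I - omega A) e_k
    print(omega, np.linalg.norm(e))          # the error norm grows in every case
```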
Equivalence to gradient descent
Consider minimizing the function [math]\displaystyle{ F(x) = \frac{1}{2}\|\tilde{A}x-\tilde{b}\|_2^2 }[/math]. Since this is a convex function, a sufficient condition for optimality is that the gradient is zero ([math]\displaystyle{ \nabla F(x) = 0 }[/math]), which gives rise to the equation
- [math]\displaystyle{ \tilde{A}^T\tilde{A}x = \tilde{A}^T\tilde{b}. }[/math]
Define [math]\displaystyle{ A=\tilde{A}^T\tilde{A} }[/math] and [math]\displaystyle{ b=\tilde{A}^T\tilde{b} }[/math]. Because [math]\displaystyle{ A }[/math] is a Gram matrix, it is positive semi-definite and therefore has no negative eigenvalues.
A step of gradient descent is
- [math]\displaystyle{ x^{(k+1)} = x^{(k)} - t \nabla F(x^{(k)}) = x^{(k)} - t( Ax^{(k)} - b ) }[/math]
which is exactly the Richardson iteration with [math]\displaystyle{ t=\omega }[/math].
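The following sketch (an assumed least-squares example, not from the source) runs the gradient-descent/Richardson update on the normal equations [math]\displaystyle{ \tilde{A}^T\tilde{A}x = \tilde{A}^T\tilde{b} }[/math] and compares the result with a direct least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(0)
At = rng.standard_normal((20, 5))     # tall matrix A-tilde (assumed random example)
bt = rng.standard_normal(20)          # right-hand side b-tilde

A = At.T @ At                         # positive semi-definite normal-equations matrix
b = At.T @ bt
t = 1.0 / np.linalg.eigvalsh(A)[-1]   # step size below 2 / lambda_max(A)

x = np.zeros(5)
for _ in range(500):
    x = x - t * (A @ x - b)           # gradient step == Richardson step with omega = t

print(np.allclose(x, np.linalg.lstsq(At, bt, rcond=None)[0], atol=1e-6))
```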
References
- Richardson, L.F. (1910). "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam". Philosophical Transactions of the Royal Society A 210: 307–357. doi:10.1098/rsta.1911.0009.
- Hazewinkel, Michiel, ed. (2001), "Chebyshev iteration method", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4, https://www.encyclopediaofmath.org/index.php?title=Main_Page
Original source: https://en.wikipedia.org/wiki/Modified Richardson iteration.