Uzawa iteration

In numerical mathematics, the Uzawa iteration is an algorithm for solving saddle point problems. It is named after Hirofumi Uzawa and was originally introduced in the context of concave programming.[1]

Basic idea

We consider a saddle point problem of the form

[math]\displaystyle{ \begin{pmatrix} A & B\\ B^* & 0 \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} b_1\\ b_2 \end{pmatrix}, }[/math]

where [math]\displaystyle{ A }[/math] is a symmetric positive-definite matrix. Multiplying the first row by [math]\displaystyle{ B^* A^{-1} }[/math] and subtracting it from the second row yields the block upper-triangular system

[math]\displaystyle{ \begin{pmatrix} A & B\\ 0 & -S \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} b_1\\ b_2 - B^* A^{-1} b_1 \end{pmatrix}, }[/math]

where [math]\displaystyle{ S := B^* A^{-1} B }[/math] denotes the Schur complement. Since [math]\displaystyle{ S }[/math] is symmetric positive-definite, we can apply standard iterative methods like the gradient descent method or the conjugate gradient method to

[math]\displaystyle{ S x_2 = B^* A^{-1} b_1 - b_2 }[/math]

in order to compute [math]\displaystyle{ x_2 }[/math]. The vector [math]\displaystyle{ x_1 }[/math] can be reconstructed by solving

[math]\displaystyle{ A x_1 = b_1 - B x_2. \, }[/math]

It is possible to update [math]\displaystyle{ x_1 }[/math] alongside [math]\displaystyle{ x_2 }[/math] during the iteration for the Schur complement system and thus obtain an efficient algorithm.
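
For small dense systems, the block elimination above can also be carried out directly. The following Python/NumPy sketch forms the Schur complement explicitly, which the iteration described below deliberately avoids; the function name solve_saddle_point_via_schur is chosen here for illustration only and does not appear in the references.

import numpy as np

def solve_saddle_point_via_schur(A, B, b1, b2):
    """Direct Schur-complement solve of [A B; B* 0][x1; x2] = [b1; b2]
    for small dense systems; A must be symmetric positive definite."""
    Ainv_b1 = np.linalg.solve(A, b1)      # A^{-1} b1
    Ainv_B = np.linalg.solve(A, B)        # A^{-1} B (column by column)
    S = B.conj().T @ Ainv_B               # Schur complement S = B* A^{-1} B
    x2 = np.linalg.solve(S, B.conj().T @ Ainv_b1 - b2)   # S x2 = B* A^{-1} b1 - b2
    x1 = np.linalg.solve(A, b1 - B @ x2)                  # A x1 = b1 - B x2
    return x1, x2

Forming [math]\displaystyle{ S }[/math] explicitly is only practical for small problems; the iteration described next needs only the action of [math]\displaystyle{ A^{-1} }[/math], [math]\displaystyle{ B }[/math] and [math]\displaystyle{ B^* }[/math] on vectors.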

Implementation

We start the conjugate gradient iteration by computing the residual

[math]\displaystyle{ r_2 := B^* A^{-1} b_1 - b_2 - S x_2 = B^* A^{-1} (b_1 - B x_2) - b_2 = B^* x_1 - b_2, }[/math]

of the Schur complement system, where

[math]\displaystyle{ x_1 := A^{-1} (b_1 - B x_2) }[/math]

denotes the upper half of the solution vector matching the initial guess [math]\displaystyle{ x_2 }[/math] for its lower half. We complete the initialization by choosing the first search direction

[math]\displaystyle{ p_2 := r_2.\, }[/math]

In each step, we compute

[math]\displaystyle{ a_2 := S p_2 = B^* A^{-1} B p_2 = B^* p_1 }[/math]

and keep the intermediate result

[math]\displaystyle{ p_1 := A^{-1} B p_2 }[/math]

for later. The scaling factor is given by

[math]\displaystyle{ \alpha := p_2^* r_2 / p_2^* a_2 }[/math]

and leads to the updates

[math]\displaystyle{ x_2 := x_2 + \alpha p_2, \quad r_2 := r_2 - \alpha a_2. }[/math]

Using the intermediate result [math]\displaystyle{ p_1 }[/math] saved earlier, we can also update the upper part of the solution vector

[math]\displaystyle{ x_1 := x_1 - \alpha p_1.\, }[/math]

Now we only have to construct the new search direction by the Gram–Schmidt process, i.e.,

[math]\displaystyle{ \beta := r_2^* a_2 / p_2^* a_2,\quad p_2 := r_2 - \beta p_2. }[/math]

The iteration terminates if the residual [math]\displaystyle{ r_2 }[/math] has become sufficiently small, or if the norm of [math]\displaystyle{ p_2 }[/math] is significantly smaller than that of [math]\displaystyle{ r_2 }[/math], indicating that the Krylov subspace has been almost exhausted.
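
The steps above can be summarized in the following Python/NumPy sketch. It assumes dense matrices and uses numpy.linalg.solve as a stand-in for a reusable factorization of [math]\displaystyle{ A }[/math]; the function name uzawa_cg, the tolerance and the iteration limit are illustrative choices, not part of the original description.

import numpy as np

def uzawa_cg(A, B, b1, b2, x2, tol=1e-10, max_iter=500):
    """Conjugate gradient iteration on the Schur complement system
    S x2 = B* A^{-1} b1 - b2, with S = B* A^{-1} B, updating x1 alongside.
    A must be symmetric positive definite; x2 is the initial guess."""
    def solve_A(v):
        # Stand-in for applying A^{-1}; a real implementation would reuse a factorization.
        return np.linalg.solve(A, v)

    x1 = solve_A(b1 - B @ x2)              # x1 = A^{-1}(b1 - B x2)
    r2 = B.conj().T @ x1 - b2              # residual of the Schur complement system
    p2 = r2.copy()                         # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(r2) < tol:       # residual sufficiently small
            break
        p1 = solve_A(B @ p2)               # intermediate result p1 = A^{-1} B p2
        a2 = B.conj().T @ p1               # a2 = S p2 without forming S
        alpha = np.vdot(p2, r2) / np.vdot(p2, a2)   # scaling factor
        x2 = x2 + alpha * p2               # update lower half of the solution
        r2 = r2 - alpha * a2               # update residual
        x1 = x1 - alpha * p1               # update upper half using the saved p1
        beta = np.vdot(r2, a2) / np.vdot(p2, a2)    # Gram–Schmidt coefficient (new r2)
        p2 = r2 - beta * p2                # new search direction
    return x1, x2

For large sparse [math]\displaystyle{ A }[/math], the dense solve would typically be replaced by a sparse factorization or an inner iterative solver, which leads to the inexact variants discussed below.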

Modifications and extensions

If solving the linear system [math]\displaystyle{ A x=b }[/math] exactly is not feasible, inexact solvers can be applied.[2][3][4]

If the Schur complement system is ill-conditioned, preconditioners can be employed to improve the speed of convergence of the underlying gradient method.[2][5]
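
As a rough illustration of this idea, a single damped and preconditioned gradient step on the Schur complement system might look as follows; the routine name preconditioned_uzawa_step, the preconditioner apply_P (an approximation of the action of [math]\displaystyle{ S^{-1} }[/math]) and the damping parameter omega are assumptions made for this sketch and are not taken from the cited works.

import numpy as np

def preconditioned_uzawa_step(A, B, b1, b2, x2, apply_P, omega=1.0):
    """One damped, preconditioned gradient step on S x2 = B* A^{-1} b1 - b2.
    apply_P is a hypothetical user-supplied approximation of the action of S^{-1}."""
    x1 = np.linalg.solve(A, b1 - B @ x2)   # x1 = A^{-1}(b1 - B x2)
    r2 = B.conj().T @ x1 - b2              # Schur complement residual
    x2 = x2 + omega * apply_P(r2)          # preconditioned, damped update of x2
    return x1, x2, r2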

Inequality constraints can be incorporated, e.g., in order to handle obstacle problems.[5]

References

  1. Uzawa, H. (1958). "Iterative methods for concave programming". In Arrow, K. J.; Hurwicz, L.; Uzawa, H. (eds.). Studies in Linear and Nonlinear Programming. Stanford University Press. https://archive.org/details/studiesinlinearn0000arro.
  2. Elman, H. C.; Golub, G. H. (1994). "Inexact and preconditioned Uzawa algorithms for saddle point problems". SIAM J. Numer. Anal. 31 (6): 1645–1661. doi:10.1137/0731085.
  3. Bramble, J. H.; Pasciak, J. E.; Vassilev, A. T. (1997). "Analysis of the inexact Uzawa algorithm for saddle point problems". SIAM J. Numer. Anal. 34 (3): 1072–1092. doi:10.1137/S0036142994273343.
  4. Zulehner, W. (2002). "Analysis of iterative methods for saddle point problems: A unified approach". Math. Comp. 71 (238): 479–505. doi:10.1090/S0025-5718-01-01324-2.
  5. Gräser, C.; Kornhuber, R. (2007). "On preconditioned Uzawa-type iterations for a saddle point problem with inequality constraints". Domain Decomposition Methods in Science and Engineering XVI. Lect. Notes Comput. Sci. Eng. 55. pp. 91–102. doi:10.1007/978-3-540-34469-8_8. ISBN 978-3-540-34468-1.
