Conjugate gradient squared method
In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form [math]\displaystyle{ A{\bold x} = {\bold b} }[/math], particularly in cases where computing the transpose [math]\displaystyle{ A^T }[/math] is impractical.[1] The CGS method was developed by Peter Sonneveld as an improvement to the biconjugate gradient method.[2][3][4]
Background
A system of linear equations [math]\displaystyle{ A{\bold x} = {\bold b} }[/math] consists of a known matrix [math]\displaystyle{ A }[/math] and a known vector [math]\displaystyle{ {\bold b} }[/math]; solving the system means finding the value of the unknown vector [math]\displaystyle{ {\bold x} }[/math].[3][5] A direct method of solution is to compute the inverse of [math]\displaystyle{ A }[/math] and then calculate [math]\displaystyle{ \bold x = A^{-1}\bold b }[/math], but computing the inverse is computationally expensive. Iterative methods are therefore commonly used. These begin with a guess [math]\displaystyle{ \bold x^{(0)} }[/math] and improve it on each iteration; once the difference between successive guesses is sufficiently small, the method is said to have converged to a solution.[6][7]
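As a minimal illustration of the two approaches, the sketch below solves a small system both directly and iteratively, using NumPy's direct solver and SciPy's cgs routine. The tiny dense matrix is purely illustrative; in practice CGS is applied to large, sparse, nonsymmetric systems:

```python
import numpy as np
from scipy.sparse.linalg import cgs  # SciPy's conjugate gradient squared solver

# A small illustrative system Ax = b.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

# Direct approach: factorize A and solve (preferable to forming A^-1 explicitly).
x_direct = np.linalg.solve(A, b)

# Iterative approach: start from a guess and refine it; info == 0 signals convergence.
x_iter, info = cgs(A, b)
```

Both calls return the same solution here; the iterative route pays off when [math]\displaystyle{ A }[/math] is too large to factorize.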
As with the conjugate gradient method, biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition.[8]
Algorithm
The algorithm is as follows (a Python sketch of the unpreconditioned case is given after the listing):[9]
- Choose an initial guess [math]\displaystyle{ {\bold x}^{(0)} }[/math]
- Compute the residual [math]\displaystyle{ {\bold r}^{(0)} = {\bold b} - A{\bold x}^{(0)} }[/math]
- Choose [math]\displaystyle{ \tilde {\bold r}^{(0)} = {\bold r}^{(0)} }[/math]
- For [math]\displaystyle{ i = 1, 2, 3, \dots }[/math] do:
- [math]\displaystyle{ \rho^{(i-1)} = \tilde {\bold r}^{T}{\bold r}^{(i-1)} }[/math]
- If [math]\displaystyle{ \rho^{(i-1)} = 0 }[/math], the method fails.
- If [math]\displaystyle{ i=1 }[/math]:
- [math]\displaystyle{ {\bold p}^{(1)} = {\bold u}^{(1)} = {\bold r}^{(0)} }[/math]
- Else:
- [math]\displaystyle{ \beta^{(i-1)} = \rho^{(i-1)}/\rho^{(i-2)} }[/math]
- [math]\displaystyle{ {\bold u}^{(i)} = {\bold r}^{(i-1)} + \beta^{(i-1)}{\bold q}^{(i-1)} }[/math]
- [math]\displaystyle{ {\bold p}^{(i)} = {\bold u}^{(i)} + \beta^{(i-1)}({\bold q}^{(i-1)} + \beta^{(i-1)}{\bold p}^{(i-1)}) }[/math]
- Solve [math]\displaystyle{ M\hat {\bold p}={\bold p}^{(i)} }[/math], where [math]\displaystyle{ M }[/math] is a preconditioner.
- [math]\displaystyle{ \hat {\bold v} = A\hat {\bold p} }[/math]
- [math]\displaystyle{ \alpha^{(i)} = \rho^{(i-1)} / \tilde {\bold r}^T \hat {\bold v} }[/math]
- [math]\displaystyle{ {\bold q}^{(i)} = {\bold u}^{(i)} - \alpha^{(i)}\hat {\bold v} }[/math]
- Solve [math]\displaystyle{ M\hat {\bold u} = {\bold u}^{(i)} + {\bold q}^{(i)} }[/math]
- [math]\displaystyle{ {\bold x}^{(i)} = {\bold x}^{(i-1)} + \alpha^{(i)} \hat {\bold u} }[/math]
- [math]\displaystyle{ \hat {\bold q} = A\hat {\bold u} }[/math]
- [math]\displaystyle{ {\bold r}^{(i)} = {\bold r}^{(i-1)} - \alpha^{(i)}\hat {\bold q} }[/math]
- Check for convergence: if the residual [math]\displaystyle{ {\bold r}^{(i)} }[/math] is sufficiently small, end the loop and return [math]\displaystyle{ {\bold x}^{(i)} }[/math]
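The listing above translates almost line for line into code. The following is a minimal Python sketch of the unpreconditioned case (taking [math]\displaystyle{ M = I }[/math], so both preconditioner solves become identity maps); the function name, tolerance, and test matrix are illustrative choices rather than part of any standard API:

```python
import numpy as np

def cgs_solve(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Conjugate gradient squared without preconditioning (M = I)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x        # r(0) = b - A x(0)
    r_tilde = r.copy()   # shadow residual, chosen as r~ = r(0)
    rho_prev = 1.0
    p = np.zeros(n)
    q = np.zeros(n)
    for i in range(1, max_iter + 1):
        rho = r_tilde @ r                 # rho(i-1) = r~^T r(i-1)
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:
            u = r.copy()                  # u(1) = r(0)
            p = u.copy()                  # p(1) = u(1)
        else:
            beta = rho / rho_prev         # beta(i-1) = rho(i-1) / rho(i-2)
            u = r + beta * q
            p = u + beta * (q + beta * p)
        # With M = I, the solves M p_hat = p and M u_hat = u + q are trivial.
        v_hat = A @ p
        alpha = rho / (r_tilde @ v_hat)   # alpha(i) = rho(i-1) / r~^T v_hat
        q = u - alpha * v_hat
        u_hat = u + q
        x = x + alpha * u_hat
        r = r - alpha * (A @ u_hat)       # r(i) = r(i-1) - alpha q_hat
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, i                   # converged
    return x, max_iter                    # no convergence within max_iter

# Example on a well-conditioned nonsymmetric matrix (eigenvalues clustered near 4).
rng = np.random.default_rng(0)
n = 200
A = 4.0 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))
b = rng.normal(size=n)
x, iters = cgs_solve(A, b)
print(iters, np.linalg.norm(A @ x - b))  # small residual after a modest number of iterations
```

Note that, in line with the method's motivation, no product with [math]\displaystyle{ A^T }[/math] appears anywhere in the loop: each iteration requires two multiplications by [math]\displaystyle{ A }[/math] itself.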
See also
- Biconjugate gradient method
- Biconjugate gradient stabilized method
- Generalized minimal residual method
References
- ↑ "Conjugate Gradient Squared Method". Wolfram Mathworld. https://mathworld.wolfram.com/ConjugateGradientSquaredMethod.html.
- ↑ Mathworks. "cgs". https://au.mathworks.com/help/matlab/ref/cgs.html.
- ↑ 3.0 3.1 Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.
- ↑ Peter Sonneveld (1989). "CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems". SIAM Journal on Scientific and Statistical Computing 10 (1): 36–52. doi:10.1137/0910004. ProQuest 921988114. https://www.proquest.com/docview/921988114.
- ↑ "Linear equations", Matrix Analysis and Applied Linear Algebra, Philadelphia, PA: SIAM, pp. 1-40, doi:10.1137/1.9780898719512.ch1, http://www.matrixanalysis.com/Chapter1.pdf, retrieved 2023-12-18
- ↑ "Iterative Methods for Linear Systems". Mathworks. https://au.mathworks.com/help/matlab/math/iterative-methods-for-linear-systems.html.
- ↑ Jean Gallier. "Iterative Methods for Solving Linear Systems". UPenn. https://www.cis.upenn.edu/~cis5150/cis515-12-sl5.pdf.
- ↑ "Conjugate gradient methods". Cornell University. https://optimization.cbe.cornell.edu/index.php?title=Conjugate_gradient_methods.
- ↑ R. Barrett; M. Berry; T. F. Chan; J. Demmel; J. Donato; J. Dongarra; V. Eijkhout; R. Pozo et al. (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition. SIAM. https://netlib.org/linalg/html_templates/Templates.html.