Hiptmair–Xu preconditioner


In mathematics, Hiptmair–Xu (HX) preconditioners[1] are preconditioners for solving [math]\displaystyle{ H(\operatorname{curl}) }[/math] and [math]\displaystyle{ H(\operatorname{div}) }[/math] problems, based on the auxiliary space preconditioning framework.[2] An important ingredient in the derivation of HX preconditioners in two and three dimensions is the so-called regular decomposition, which decomposes a Sobolev space function into a component of higher regularity and a scalar or vector potential. The key to the success of HX preconditioners is the discrete version of this decomposition, also known as the HX decomposition, which decomposes a discrete Sobolev space function into a discrete component of higher regularity, a discrete scalar or vector potential, and a high-frequency component. Thanks to their highly scalable parallel implementations, known as the AMS[3] and ADS[4] preconditioners, HX preconditioners have been used to accelerate a wide variety of solution techniques. The HX preconditioner was identified by the U.S. Department of Energy as one of the top ten breakthroughs in computational science in recent years.[5] Researchers from Sandia, Los Alamos, and Lawrence Livermore National Laboratories use this algorithm for modeling fusion with magnetohydrodynamic equations.[6] The approach is also expected to be instrumental in developing optimal iterative methods in structural mechanics, electrodynamics, and the modeling of complex flows.

HX preconditioner for [math]\displaystyle{ H(\operatorname{curl}) }[/math]

Consider the following [math]\displaystyle{ H(\operatorname{curl}) }[/math] problem: Find [math]\displaystyle{ u \in H_h(\operatorname{curl}) }[/math] such that

[math]\displaystyle{ (\operatorname{curl}~u, \operatorname{curl}~v) + \tau (u, v) = (f, v), \quad \forall v \in H_h(\operatorname{curl}), }[/math] with [math]\displaystyle{ \tau \gt 0 }[/math].

The corresponding matrix form is

[math]\displaystyle{ A_{\operatorname{curl}} u = f. }[/math]

The HX preconditioner for the [math]\displaystyle{ H(\operatorname{curl}) }[/math] problem is defined as

[math]\displaystyle{ B_{\operatorname{curl}} = S_{\operatorname{curl}} + \Pi_h^{\operatorname{curl}} \, A_{vgrad}^{-1} \, (\Pi_h^{\operatorname{curl}})^T + \operatorname{grad} \, A_{\operatorname{grad}}^{-1} \, (\operatorname{grad})^T, }[/math]

where [math]\displaystyle{ S_{\operatorname{curl}} }[/math] is a smoother (e.g., a Jacobi or Gauss–Seidel smoother), [math]\displaystyle{ \Pi_h^{\operatorname{curl}} }[/math] is the canonical interpolation operator for the [math]\displaystyle{ H_h(\operatorname{curl}) }[/math] space, [math]\displaystyle{ A_{vgrad} }[/math] is the matrix representation of the discrete vector Laplacian defined on [math]\displaystyle{ [H_h(\operatorname{grad})]^n }[/math], [math]\displaystyle{ \operatorname{grad} }[/math] is the discrete gradient operator, and [math]\displaystyle{ A_{\operatorname{grad}} }[/math] is the matrix representation of the discrete scalar Laplacian defined on [math]\displaystyle{ H_h(\operatorname{grad}) }[/math]. Based on the auxiliary space preconditioning framework, one can show that

[math]\displaystyle{ \kappa(B_{\operatorname{curl}} A_{\operatorname{curl}}) \leq C, }[/math]

where [math]\displaystyle{ \kappa(A) }[/math] denotes the condition number of matrix [math]\displaystyle{ A }[/math].

In practice, inverting [math]\displaystyle{ A_{vgrad} }[/math] and [math]\displaystyle{ A_{\operatorname{grad}} }[/math] might be expensive, especially for large-scale problems. Therefore, their inverses can be replaced by spectrally equivalent approximations, [math]\displaystyle{ B_{vgrad} }[/math] and [math]\displaystyle{ B_{\operatorname{grad}} }[/math], respectively, and the HX preconditioner for [math]\displaystyle{ H(\operatorname{curl}) }[/math] becomes [math]\displaystyle{ B_{\operatorname{curl}} = S_{\operatorname{curl}} + \Pi_h^{\operatorname{curl}} \, B_{vgrad} \, (\Pi_h^{\operatorname{curl}})^T + \operatorname{grad} \, B_{\operatorname{grad}} \, (\operatorname{grad})^T. }[/math]
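The action of the practical preconditioner above can be sketched with SciPy sparse matrices. This is a minimal sketch under stated assumptions: the matrices and transfer operators are tiny random stand-ins rather than a real finite element discretization, the smoother is one step of damped Jacobi, and the spectrally equivalent solves [math]\displaystyle{ B_{vgrad} }[/math], [math]\displaystyle{ B_{\operatorname{grad}} }[/math] are replaced by exact sparse solves.

```python
# Sketch of applying the practical HX preconditioner for H(curl):
#   B_curl r = S r + Pi B_vgrad Pi^T r + grad B_grad grad^T r.
# All matrix names and sizes below are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hx_curl_apply(r, A_curl, Pi, G, A_vgrad, A_grad, omega=0.5):
    """B_curl r = S r + Pi B_vgrad Pi^T r + G B_grad G^T r."""
    s = omega * r / A_curl.diagonal()                 # smoother S: damped Jacobi
    v = Pi @ spla.spsolve(A_vgrad.tocsc(), Pi.T @ r)  # lifted vector-Laplacian solve
    g = G @ spla.spsolve(A_grad.tocsc(), G.T @ r)     # lifted scalar-Laplacian solve
    return s + v + g

# Tiny stand-in data (NOT a real finite element discretization):
ne, nv = 12, 5                                        # edge / vertex counts
M = sp.random(ne, ne, density=0.3, random_state=0)
A_curl = (M + M.T + 2.0 * ne * sp.eye(ne)).tocsr()    # SPD by diagonal dominance
Pi = sp.random(ne, 3 * nv, density=0.4, random_state=1).tocsr()
G = sp.random(ne, nv, density=0.4, random_state=2).tocsr()
A_vgrad = (2.0 * sp.eye(3 * nv)).tocsr()              # surrogate vector Laplacian
A_grad = (2.0 * sp.eye(nv)).tocsr()                   # surrogate scalar Laplacian

r = np.ones(ne)
z = hx_curl_apply(r, A_curl, Pi, G, A_vgrad, A_grad)
```

Since each of the three terms is symmetric, `hx_curl_apply` defines a symmetric operator; in practice one would wrap it in a `scipy.sparse.linalg.LinearOperator` and pass it as the preconditioner `M` to `scipy.sparse.linalg.cg`.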

HX Preconditioner for [math]\displaystyle{ H(\operatorname{div}) }[/math]

Consider the following [math]\displaystyle{ H(\operatorname{div}) }[/math] problem: Find [math]\displaystyle{ u \in H_h(\operatorname{div}) }[/math] such that

[math]\displaystyle{ (\operatorname{div} \,u, \operatorname{div} \,v) + \tau (u, v) = (f, v), \quad \forall v \in H_h(\operatorname{div}), }[/math] with [math]\displaystyle{ \tau \gt 0 }[/math].

The corresponding matrix form is

[math]\displaystyle{ A_{\operatorname{div}} \,u = f. }[/math]

The HX preconditioner for the [math]\displaystyle{ H(\operatorname{div}) }[/math] problem is defined as

[math]\displaystyle{ B_{\operatorname{div}} = S_{\operatorname{div}} + \Pi_h^{\operatorname{div}} \, A_{vgrad}^{-1} \, (\Pi_h^{\operatorname{div}})^T + \operatorname{curl} \, A_{\operatorname{curl}}^{-1} \, (\operatorname{curl})^T, }[/math]

where [math]\displaystyle{ S_{\operatorname{div}} }[/math] is a smoother (e.g., a Jacobi or Gauss–Seidel smoother), [math]\displaystyle{ \Pi_h^{\operatorname{div}} }[/math] is the canonical interpolation operator for the [math]\displaystyle{ H_h(\operatorname{div}) }[/math] space, [math]\displaystyle{ A_{vgrad} }[/math] is the matrix representation of the discrete vector Laplacian defined on [math]\displaystyle{ [H_h(\operatorname{grad})]^n }[/math], and [math]\displaystyle{ \operatorname{curl} }[/math] is the discrete curl operator.

Based on the auxiliary space preconditioning framework, one can show that

[math]\displaystyle{ \kappa(B_{\operatorname{div}} A_{\operatorname{div}}) \leq C. }[/math]

For [math]\displaystyle{ A_{\operatorname{curl}}^{-1} }[/math] in the definition of [math]\displaystyle{ B_{\operatorname{div}} }[/math], we can replace it with the HX preconditioner for the [math]\displaystyle{ H(\operatorname{curl}) }[/math] problem, i.e., [math]\displaystyle{ B_{\operatorname{curl}} }[/math], since the two are spectrally equivalent. Moreover, since inverting [math]\displaystyle{ A_{vgrad} }[/math] might be expensive, we can replace it with a spectrally equivalent approximation [math]\displaystyle{ B_{vgrad} }[/math]. This leads to the following practical HX preconditioner for the [math]\displaystyle{ H(\operatorname{div}) }[/math] problem,

[math]\displaystyle{ B_{\operatorname{div}} = S_{\operatorname{div}} + \Pi_h^{\operatorname{div}} B_{vgrad} (\Pi_h^{\operatorname{div}})^T + \operatorname{curl} B_{\operatorname{curl}} (\operatorname{curl})^T = S_{\operatorname{div}} + \Pi_h^{\operatorname{div}} B_{vgrad} (\Pi_h^{\operatorname{div}})^T + \operatorname{curl} S_{\operatorname{curl}} (\operatorname{curl})^T + \operatorname{curl} \Pi_h^{\operatorname{curl}} B_{vgrad} (\Pi_h^{\operatorname{curl}})^T (\operatorname{curl})^T. }[/math]
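The nesting in the formula above, with [math]\displaystyle{ B_{\operatorname{curl}} }[/math] appearing inside [math]\displaystyle{ B_{\operatorname{div}} }[/math] behind the discrete curl operator, can likewise be sketched in code. The same stand-in conventions apply: all matrices are illustrative random surrogates, damped Jacobi serves as the smoother, and exact sparse solves stand in for [math]\displaystyle{ B_{vgrad} }[/math] and [math]\displaystyle{ B_{\operatorname{grad}} }[/math].

```python
# Sketch of the practical HX preconditioner for H(div):
#   B_div r = S_div r + Pi_d B_vgrad Pi_d^T r + C B_curl C^T r,
# where C is the discrete curl operator and B_curl is itself the
# practical HX preconditioner for H(curl). All matrices below are
# illustrative stand-ins, not a real finite element discretization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi(r, A, omega=0.5):
    """Elementary smoother: one step of damped Jacobi."""
    return omega * r / A.diagonal()

def hx_curl_apply(r, A_curl, Pi_c, G, A_vgrad, A_grad):
    """B_curl r = S r + Pi_c B_vgrad Pi_c^T r + G B_grad G^T r."""
    return (jacobi(r, A_curl)
            + Pi_c @ spla.spsolve(A_vgrad.tocsc(), Pi_c.T @ r)
            + G @ spla.spsolve(A_grad.tocsc(), G.T @ r))

def hx_div_apply(r, A_div, Pi_d, C, A_curl, Pi_c, G, A_vgrad, A_grad):
    """B_div r, with B_curl nested behind the discrete curl C."""
    s = jacobi(r, A_div)
    v = Pi_d @ spla.spsolve(A_vgrad.tocsc(), Pi_d.T @ r)
    c = C @ hx_curl_apply(C.T @ r, A_curl, Pi_c, G, A_vgrad, A_grad)
    return s + v + c

# Tiny stand-in data: nf faces, ne edges, nv vertices.
nf, ne, nv = 10, 12, 5
D = sp.random(nf, nf, density=0.3, random_state=3)
A_div = (D + D.T + 2.0 * nf * sp.eye(nf)).tocsr()     # SPD by diagonal dominance
E = sp.random(ne, ne, density=0.3, random_state=0)
A_curl = (E + E.T + 2.0 * ne * sp.eye(ne)).tocsr()
Pi_d = sp.random(nf, 3 * nv, density=0.4, random_state=4).tocsr()
Pi_c = sp.random(ne, 3 * nv, density=0.4, random_state=1).tocsr()
C = sp.random(nf, ne, density=0.4, random_state=5).tocsr()  # curl stand-in
G = sp.random(ne, nv, density=0.4, random_state=2).tocsr()
A_vgrad = (2.0 * sp.eye(3 * nv)).tocsr()
A_grad = (2.0 * sp.eye(nv)).tocsr()

r = np.ones(nf)
z = hx_div_apply(r, A_div, Pi_d, C, A_curl, Pi_c, G, A_vgrad, A_grad)
```

Because [math]\displaystyle{ B_{\operatorname{curl}} }[/math] is symmetric, so is the composed term [math]\displaystyle{ \operatorname{curl} B_{\operatorname{curl}} (\operatorname{curl})^T }[/math], and the whole [math]\displaystyle{ B_{\operatorname{div}} }[/math] remains a symmetric preconditioner suitable for the conjugate gradient method.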

Derivation

The derivation of HX preconditioners is based on the discrete regular decompositions for [math]\displaystyle{ H_h(\operatorname{curl}) }[/math] and [math]\displaystyle{ H_h(\operatorname{div}) }[/math]. For completeness, we briefly recall them.

Theorem [Discrete regular decomposition for [math]\displaystyle{ H_h(\operatorname{curl}) }[/math]]

Let [math]\displaystyle{ \Omega }[/math] be a simply connected bounded domain. For any function [math]\displaystyle{ v_h\in H_h(\operatorname{curl} \Omega) }[/math], there exist a vector [math]\displaystyle{ \tilde{v}_h\in H_h(\operatorname{curl} \Omega) }[/math], [math]\displaystyle{ \psi_h\in [H_h (\operatorname{grad} \Omega)]^3 }[/math], and [math]\displaystyle{ p_h\in H_h(\operatorname{grad} \Omega) }[/math] such that [math]\displaystyle{ v_h=\tilde{v}_h+\Pi_h^{\operatorname{curl}}\psi_{h}+ \operatorname{grad} p_h }[/math] and [math]\displaystyle{ \Vert h^{-1} \tilde{v}_h\Vert + \Vert\psi_h\Vert_1 + \Vert p_h\Vert_1 \lesssim \Vert v_{h}\Vert_{H(\operatorname{curl})}. }[/math]

Theorem [Discrete regular decomposition for [math]\displaystyle{ H_{h}(\operatorname{div}) }[/math]] Let [math]\displaystyle{ \Omega }[/math] be a simply connected bounded domain. For any function [math]\displaystyle{ v_{h}\in H_{h}(\operatorname{div} \Omega) }[/math], there exist a vector [math]\displaystyle{ \widetilde{v}_h\in H_h(\operatorname{div} \Omega) }[/math], [math]\displaystyle{ \psi_h\in [H_h(\operatorname{grad} \Omega)]^{3} }[/math], and [math]\displaystyle{ w_h\in H_h(\operatorname{curl} \Omega) }[/math] such that [math]\displaystyle{ v_{h}=\widetilde{v}_h+\Pi_h^{\operatorname{div}}\psi_h+ \operatorname{curl} \, w_h }[/math] and [math]\displaystyle{ \Vert h^{-1}\widetilde{v}_h\Vert + \Vert\psi_h\Vert_1 + \Vert w_h\Vert_1 \lesssim \Vert v_h \Vert_{H(\operatorname{div})}. }[/math]

Based on the above discrete regular decompositions, together with the auxiliary space preconditioning framework, we can derive the HX preconditioners for [math]\displaystyle{ H(\operatorname{curl}) }[/math] and [math]\displaystyle{ H(\operatorname{div}) }[/math] problems as shown before.

References

  1. Hiptmair, Ralf; Xu, Jinchao (2007). "Nodal auxiliary space preconditioning in H(curl) and H(div) spaces". SIAM Journal on Numerical Analysis 45 (6): 2483–2509. doi:10.1137/060660588. https://www.researchgate.net/publication/257297555. Retrieved 2020-07-06.
  2. J. Xu, The auxiliary space method and optimal multigrid preconditioning techniques for unstructured grids. Computing. 1996;56(3):215–235.
  3. T. V. Kolev, P. S. Vassilevski, Parallel auxiliary space AMG for H(curl) problems. Journal of Computational Mathematics. 2009:604–623.
  4. T. V. Kolev, P. S. Vassilevski, Parallel auxiliary space AMG solver for H(div) problems. SIAM Journal on Scientific Computing. 2012;34(6):A3079–A3098.
  5. Report of The Panel on Recent Significant Advancements in Computational Science, https://science.osti.gov/-/media/ascr/pdf/program-documents/docs/Breakthroughs_2008.pdf
  6. E.G. Phillips, J. N. Shadid, E.C. Cyr, S.T. Miller, Enabling Scalable Multifluid Plasma Simulations Through Block Preconditioning. In: van Brummelen H., Corsini A., Perotto S., Rozza G. (eds) Numerical Methods for Flows. Lecture Notes in Computational Science and Engineering, vol 132. Springer, Cham 2020.