SPIKE algorithm
The SPIKE algorithm is a hybrid parallel solver for banded linear systems developed by Eric Polizzi and Ahmed Sameh.[1][2]
Overview
The SPIKE algorithm deals with a linear system AX = F, where A is a banded [math]\displaystyle{ n\times n }[/math] matrix of bandwidth much less than [math]\displaystyle{ n }[/math], and F is an [math]\displaystyle{ n\times s }[/math] matrix containing [math]\displaystyle{ s }[/math] right-hand sides. It is divided into a preprocessing stage and a postprocessing stage.
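As an illustration of the problem setting, the following Python sketch builds such a banded system and solves it with a serial banded solver as a baseline. It is not part of the SPIKE algorithm itself; the sizes n, s and the bandwidths kl, ku are arbitrary illustrative choices.

```python
# A minimal instance of the problem: a banded n-by-n matrix A with small
# bandwidth and an n-by-s right-hand side F, solved with a serial banded
# solver as a baseline.  All sizes here are illustrative.
import numpy as np
from scipy.linalg import solve_banded

rng = np.random.default_rng(0)
n, s = 1000, 4                 # system size and number of right-hand sides
kl = ku = 3                    # lower/upper bandwidth, much smaller than n

A = 10.0 * np.eye(n)           # boosted diagonal keeps A comfortably nonsingular
for d in range(-kl, ku + 1):
    A += np.diag(rng.standard_normal(n - abs(d)), d)
F = rng.standard_normal((n, s))

# LAPACK-style banded storage: ab[ku + i - j, j] = A[i, j] inside the band.
ab = np.zeros((kl + ku + 1, n))
for i in range(n):
    for j in range(max(0, i - kl), min(n, i + ku + 1)):
        ab[ku + i - j, j] = A[i, j]
X_ref = solve_banded((kl, ku), ab, F)    # reference solution via LAPACK ?gbsv
```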
Preprocessing stage
In the preprocessing stage, the linear system AX = F is partitioned into a block tridiagonal form
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{A}_1 & \boldsymbol{B}_1\\ \boldsymbol{C}_2 & \boldsymbol{A}_2 & \boldsymbol{B}_2\\ & \ddots & \ddots & \ddots\\ & & \boldsymbol{C}_{p-1} & \boldsymbol{A}_{p-1} & \boldsymbol{B}_{p-1}\\ & & & \boldsymbol{C}_p & \boldsymbol{A}_p \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1\\ \boldsymbol{X}_2\\ \vdots\\ \boldsymbol{X}_{p-1}\\ \boldsymbol{X}_p \end{bmatrix} = \begin{bmatrix} \boldsymbol{F}_1\\ \boldsymbol{F}_2\\ \vdots\\ \boldsymbol{F}_{p-1}\\ \boldsymbol{F}_p \end{bmatrix}. }[/math]
Assume, for the time being, that the diagonal blocks Aj (j = 1,...,p with p ≥ 2) are nonsingular. Define a block diagonal matrix
- D = diag(A1,...,Ap),
then D is also nonsingular. Left-multiplying both sides of the system by D−1 gives
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I} & \boldsymbol{V}_1\\ \boldsymbol{W}_2 & \boldsymbol{I} & \boldsymbol{V}_2\\ & \ddots & \ddots & \ddots\\ & & \boldsymbol{W}_{p-1} & \boldsymbol{I} & \boldsymbol{V}_{p-1}\\ & & & \boldsymbol{W}_p & \boldsymbol{I} \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1\\ \boldsymbol{X}_2\\ \vdots\\ \boldsymbol{X}_{p-1}\\ \boldsymbol{X}_p \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1\\ \boldsymbol{G}_2\\ \vdots\\ \boldsymbol{G}_{p-1}\\ \boldsymbol{G}_p \end{bmatrix}, }[/math]
which is to be solved in the postprocessing stage. Left-multiplication by D−1 is equivalent to solving [math]\displaystyle{ p }[/math] systems of the form
- Aj[Vj Wj Gj] = [Bj Cj Fj]
(omitting W1 and C1 for [math]\displaystyle{ j=1 }[/math], and Vp and Bp for [math]\displaystyle{ j=p }[/math]), which can be carried out in parallel.
Due to the banded nature of A, only a few leftmost columns of each Vj and a few rightmost columns of each Wj can be nonzero. These columns are called the spikes.
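The following Python sketch illustrates this preprocessing stage under simplifying assumptions (equal partition sizes, a dense per-block solve instead of a reused banded LU factorization); the function and variable names are illustrative only and not taken from any SPIKE implementation.

```python
# Sketch of the preprocessing stage: split A into p diagonal blocks A_j and,
# independently for each block, solve A_j [V_j  W_j  G_j] = [B_j  C_j  F_j].
# Names (p, kl, ku) are illustrative; only the nonzero spike columns are kept.
import numpy as np

def spike_preprocess(A, F, p, kl, ku):
    n = A.shape[0]
    size = n // p                      # assume p divides n, for simplicity
    V, W, G = [None] * p, [None] * p, [None] * p
    for j in range(p):                 # independent blocks: run in parallel
        lo, hi = j * size, (j + 1) * size
        Aj = A[lo:hi, lo:hi]
        G[j] = np.linalg.solve(Aj, F[lo:hi])       # G_j = A_j^{-1} F_j
        if j < p - 1:                  # right spike V_j: at most ku nonzero columns
            Bj = A[lo:hi, hi:hi + ku]  # nonzero part of the coupling block B_j
            V[j] = np.linalg.solve(Aj, Bj)
        if j > 0:                      # left spike W_j: at most kl nonzero columns
            Cj = A[lo:hi, lo - kl:lo]  # nonzero part of the coupling block C_j
            W[j] = np.linalg.solve(Aj, Cj)
    return V, W, G                     # a real code reuses one LU of A_j here

# Example use: V, W, G = spike_preprocess(A, F, p=4, kl=3, ku=3)
```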
Postprocessing stage
Without loss of generality, assume that each spike contains exactly [math]\displaystyle{ m }[/math] columns, where [math]\displaystyle{ m }[/math] is much less than [math]\displaystyle{ n }[/math] (padding the spike with zero columns if necessary). Partition the spikes in all Vj and Wj into
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{V}_j^{(t)}\\ \boldsymbol{V}_j'\\ \boldsymbol{V}_j^{(b)} \end{bmatrix} }[/math] and [math]\displaystyle{ \begin{bmatrix} \boldsymbol{W}_j^{(t)}\\ \boldsymbol{W}_j'\\ \boldsymbol{W}_j^{(b)}\\ \end{bmatrix} }[/math]
where [math]\displaystyle{ \boldsymbol{V}_j^{(t)} }[/math], [math]\displaystyle{ \boldsymbol{V}_j^{(b)} }[/math], [math]\displaystyle{ \boldsymbol{W}_j^{(t)} }[/math] and [math]\displaystyle{ \boldsymbol{W}_j^{(b)} }[/math] are of dimensions [math]\displaystyle{ m\times m }[/math]. Partition similarly all Xj and Gj into
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{X}_j^{(t)}\\ \boldsymbol{X}_j'\\ \boldsymbol{X}_j^{(b)} \end{bmatrix} }[/math] and [math]\displaystyle{ \begin{bmatrix} \boldsymbol{G}_j^{(t)}\\ \boldsymbol{G}_j'\\ \boldsymbol{G}_j^{(b)}\\ \end{bmatrix}. }[/math]
Notice that the system produced by the preprocessing stage can be reduced to a block pentadiagonal system of much smaller size (recall that [math]\displaystyle{ m }[/math] is much less than [math]\displaystyle{ n }[/math])
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\ & & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix}\text{,} }[/math]
which we call the reduced system and denote by S̃X̃ = G̃.
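The following sketch shows how S̃ and G̃ can be assembled from the [math]\displaystyle{ m\times m }[/math] tips of the spikes and the corresponding pieces of G. The list-of-blocks data layout (Vt, Vb, Wt, Wb, Gt, Gb) is an assumption made purely for illustration.

```python
# Sketch: assemble the reduced system S~ X~ = G~ from the m x m spike tips and
# the corresponding pieces of G.  Vt/Vb/Wt/Wb are assumed to be lists of m x m
# arrays and Gt/Gb lists of m x s arrays, indexed by partition; entries for
# non-existent spikes (V of the last partition, W of the first) are never read.
import numpy as np

def assemble_reduced(Vt, Vb, Wt, Wb, Gt, Gb, m, s, p):
    S = np.eye(2 * m * p)
    G = np.zeros((2 * m * p, s))
    for j in range(p):
        t, b = 2 * m * j, 2 * m * j + m           # row offsets of X_j^(t), X_j^(b)
        G[t:t + m] = Gt[j]
        G[b:b + m] = Gb[j]
        if j < p - 1:                             # coupling to partition j+1
            S[t:t + m, b + m:b + 2 * m] = Vt[j]   # V_j^(t) multiplies X_{j+1}^(t)
            S[b:b + m, b + m:b + 2 * m] = Vb[j]   # V_j^(b) multiplies X_{j+1}^(t)
        if j > 0:                                 # coupling to partition j-1
            S[t:t + m, t - m:t] = Wt[j]           # W_j^(t) multiplies X_{j-1}^(b)
            S[b:b + m, t - m:t] = Wb[j]           # W_j^(b) multiplies X_{j-1}^(b)
    return S, G
```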
Once all [math]\displaystyle{ \boldsymbol{X}_j^{(t)} }[/math] and [math]\displaystyle{ \boldsymbol{X}_j^{(b)} }[/math] are found, all [math]\displaystyle{ \boldsymbol{X}_j' }[/math] can be recovered with perfect parallelism via
- [math]\displaystyle{ \begin{cases} \boldsymbol{X}_1'=\boldsymbol{G}_1'-\boldsymbol{V}_1'\boldsymbol{X}_2^{(t)}\text{,}\\ \boldsymbol{X}_j'=\boldsymbol{G}_j'-\boldsymbol{V}_j'\boldsymbol{X}_{j+1}^{(t)}-\boldsymbol{W}_j'\boldsymbol{X}_{j-1}^{(b)}\text{,} & j=2,\ldots,p-1\text{,}\\ \boldsymbol{X}_p'=\boldsymbol{G}_p'-\boldsymbol{W}_p'\boldsymbol{X}_{p-1}^{(b)}\text{.} \end{cases} }[/math]
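A sketch of this retrieval step, assuming the middle parts G′j, V′j, W′j and the reduced-system solution are available as arrays (names are illustrative), might look as follows; every loop iteration is independent.

```python
# Sketch of the retrieval step: given the reduced-system solution (the top and
# bottom parts Xt[j], Xb[j]), recover the middle part of every partition.
# Gmid, Vmid, Wmid stand for G_j', V_j', W_j' from the preprocessing stage.
import numpy as np

def recover_middle(Gmid, Vmid, Wmid, Xt, Xb, p):
    Xmid = [None] * p
    for j in range(p):                       # perfectly parallel loop
        Xmid[j] = Gmid[j].copy()
        if j < p - 1:                        # no V-spike in the last partition
            Xmid[j] -= Vmid[j] @ Xt[j + 1]
        if j > 0:                            # no W-spike in the first partition
            Xmid[j] -= Wmid[j] @ Xb[j - 1]
    return Xmid
```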
SPIKE as a polyalgorithmic banded linear system solver
Despite being logically divided into two stages, computationally, the SPIKE algorithm comprises three stages:
- factorizing the diagonal blocks,
- computing the spikes,
- solving the reduced system.
Each of these stages can be accomplished in several ways, allowing a multitude of variants. Two notable variants are the recursive SPIKE algorithm for non-diagonally-dominant cases and the truncated SPIKE algorithm for diagonally-dominant cases. Depending on the variant, a system can be solved either exactly or approximately. In the latter case, SPIKE is used as a preconditioner for iterative schemes like Krylov subspace methods and iterative refinement.
Recursive SPIKE
Preprocessing stage
The first step of the preprocessing stage is to factorize the diagonal blocks Aj. For numerical stability, one can use LAPACK's XGBTRF routines to LU factorize them with partial pivoting. Alternatively, one can factorize them without partial pivoting but with a "diagonal boosting" strategy; the latter approach also tackles the issue of singular diagonal blocks.
In concrete terms, the diagonal boosting strategy is as follows. Let [math]\displaystyle{ 0_\epsilon }[/math] denote a configurable "machine zero". In each step of LU factorization, we require that the pivot satisfy the condition
- [math]\displaystyle{ |\mathrm{pivot}|\gt 0_\epsilon\lVert\boldsymbol{A}_j\rVert_1\text{.} }[/math]
If the pivot does not satisfy the condition, it is then boosted by
- [math]\displaystyle{ \mathrm{pivot}= \begin{cases} \mathrm{pivot}+\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}\geq 0\text{,}\\ \mathrm{pivot}-\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}\lt 0 \end{cases} }[/math]
where ε is a positive parameter depending on the machine's unit roundoff, and the factorization continues with the boosted pivot. This can be achieved by modified versions of ScaLAPACK's XDBTRF routines. After the diagonal blocks are factorized, the spikes are computed and passed on to the postprocessing stage.
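The following sketch shows the diagonal boosting strategy described above in a plain dense LU factorization without pivoting. It is only an illustration of the idea, not of the banded XDBTRF-style routines actually used, and the value of eps is an arbitrary placeholder.

```python
# Sketch of LU factorization without pivoting but with diagonal boosting,
# written for a dense block for clarity (production codes work on banded
# storage).  eps plays the role of the machine-zero parameter above.
import numpy as np

def lu_with_boosting(Aj, eps=1e-12):
    n = Aj.shape[0]
    norm = np.linalg.norm(Aj, 1)            # ||A_j||_1
    U = Aj.astype(float)
    L = np.eye(n)
    for k in range(n):
        pivot = U[k, k]
        if abs(pivot) <= eps * norm:        # pivot too small: boost it
            pivot = pivot + eps * norm if pivot >= 0 else pivot - eps * norm
            U[k, k] = pivot
        L[k + 1:, k] = U[k + 1:, k] / pivot
        U[k + 1:, :] -= np.outer(L[k + 1:, k], U[k, :])
    return L, U                             # L @ U ≈ A_j (plus the boost perturbation)
```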
Postprocessing stage
The two-partition case
In the two-partition case, i.e., when p = 2, the reduced system S̃X̃ = G̃ has the form
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)} \end{bmatrix}\text{.} }[/math]
An even smaller system can be extracted from the center:
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)} \end{bmatrix}\text{,} }[/math]
which can be solved using the block LU factorization
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} = \begin{bmatrix} \boldsymbol{I}_m\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ & \boldsymbol{I}_m-\boldsymbol{W}_2^{(t)}\boldsymbol{V}_1^{(b)} \end{bmatrix}\text{.} }[/math]
Once [math]\displaystyle{ \boldsymbol{X}_1^{(b)} }[/math] and [math]\displaystyle{ \boldsymbol{X}_2^{(t)} }[/math] are found, [math]\displaystyle{ \boldsymbol{X}_1^{(t)} }[/math] and [math]\displaystyle{ \boldsymbol{X}_2^{(b)} }[/math] can be computed via
- [math]\displaystyle{ \boldsymbol{X}_1^{(t)}=\boldsymbol{G}_1^{(t)}-\boldsymbol{V}_1^{(t)}\boldsymbol{X}_2^{(t)}\text{,} }[/math]
- [math]\displaystyle{ \boldsymbol{X}_2^{(b)}=\boldsymbol{G}_2^{(b)}-\boldsymbol{W}_2^{(b)}\boldsymbol{X}_1^{(b)}\text{.} }[/math]
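A sketch of the whole two-partition postprocessing step, using the block LU factorization above (equivalently, the Schur complement I − W₂(t)V₁(b)), might look as follows; the argument names are illustrative, and all inputs are the m × m spike tips and the corresponding pieces of G from the preprocessing stage.

```python
# Sketch of the two-partition postprocessing: solve the 2m x 2m centre system
# with the block LU factorization, then recover the top of X_1 and the bottom
# of X_2 from the first and last block rows of the reduced system.
import numpy as np

def solve_two_partitions(V1t, V1b, W2t, W2b, G1t, G1b, G2t, G2b):
    # Forward substitution with the unit lower block triangular factor.
    Y1 = G1b
    Y2 = G2t - W2t @ Y1
    # Back substitution with the upper block triangular factor:
    # (I - W2t V1b) X2t = Y2, then X1b = Y1 - V1b X2t.
    m = V1b.shape[0]
    X2t = np.linalg.solve(np.eye(m) - W2t @ V1b, Y2)
    X1b = Y1 - V1b @ X2t
    # Outer blocks follow directly from the first and last block rows.
    X1t = G1t - V1t @ X2t
    X2b = G2b - W2b @ X1b
    return X1t, X1b, X2t, X2b
```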
The multiple-partition case
Assume that p is a power of two, i.e., [math]\displaystyle{ p=2^d }[/math]. Consider a block diagonal matrix
- [math]\displaystyle{ \boldsymbol{\tilde{D}}_1=\operatorname{diag}(\boldsymbol{\tilde{D}}_1^{[1]},\ldots,\boldsymbol{\tilde{D}}_{p/2}^{[1]}) }[/math]
where
- [math]\displaystyle{ \boldsymbol{\tilde{D}}_k^{[1]}= \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{2k-1}^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{2k-1}^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_{2k}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & \boldsymbol{W}_{2k}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} }[/math]
for k = 1,...,p/2. Notice that D̃1 essentially consists of diagonal blocks of order 4m extracted from S̃. Now we factorize S̃ as
- S̃ = D̃1S̃2.
The new matrix S̃2 has the form
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_1^{[2](t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{[2](b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{[2](t)}\\ & \boldsymbol{W}_2^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m} & \boldsymbol{V}_2^{[2](b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p/2-1}^{[2](t)} & \boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_{p/2-1}^{[2](t)}\\ & & & & \boldsymbol{W}_{p/2-1}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p/2-1}^{[2](b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_{p/2}^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_{p/2}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m} \end{bmatrix}\text{.} }[/math]
Its structure is very similar to that of S̃, only differing in the number of spikes and their height (their width stays the same at m). Thus, a similar factorization step can be performed on S̃2 to produce
- S̃2 = D̃2S̃3
and
- S̃ = D̃1D̃2S̃3.
Such factorization steps can be performed recursively. After d − 1 steps, we obtain the factorization
- S̃ = D̃1⋯D̃d−1S̃d,
where S̃d has only two spikes. The reduced system will then be solved via
- [math]\displaystyle{ \boldsymbol{\tilde{X}}=\boldsymbol{\tilde{S}}_d^{-1}\boldsymbol{\tilde{D}}_{d-1}^{-1}\cdots\boldsymbol{\tilde{D}}_1^{-1}\boldsymbol{\tilde{G}}\text{.} }[/math]
The block LU factorization technique in the two-partition case can be used to handle the solving steps involving D̃1, ..., D̃d−1 and S̃d for they essentially solve multiple independent systems of generalized two-partition forms.
Generalization to cases where p is not a power of two is almost trivial.
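The following sketch performs one such reduction step numerically for p = 4 with randomly generated spike tips and verifies that S̃2 = D̃1−1S̃ has the structure claimed above (identity everywhere except two spikes of width m). It is a structural illustration only, not an efficient implementation.

```python
# Sketch of one recursive reduction step for p = 4 (d = 2): factor the reduced
# system as S~ = D~1 S~2 and check that S~2 = D~1^{-1} S~ is again a SPIKE-like
# system with only two spikes of width m.  Spike tips are random here.
import numpy as np

rng = np.random.default_rng(1)
m, p = 3, 4
S = np.eye(2 * m * p)

def put(S, row_blk, col_blk, B):            # write one m x m block of S~
    S[row_blk * m:(row_blk + 1) * m, col_blk * m:(col_blk + 1) * m] = B

# Fill S~ following the block pentadiagonal pattern of the reduced system:
# block row 2j   (X_j^(t)):  W_j^(t) in the X_{j-1}^(b) column, V_j^(t) in the X_{j+1}^(t) column
# block row 2j+1 (X_j^(b)):  W_j^(b) in the X_{j-1}^(b) column, V_j^(b) in the X_{j+1}^(t) column
for j in range(p):
    if j < p - 1:
        put(S, 2 * j,     2 * (j + 1), 0.1 * rng.standard_normal((m, m)))  # V_j^(t)
        put(S, 2 * j + 1, 2 * (j + 1), 0.1 * rng.standard_normal((m, m)))  # V_j^(b)
    if j > 0:
        put(S, 2 * j,     2 * j - 1, 0.1 * rng.standard_normal((m, m)))    # W_j^(t)
        put(S, 2 * j + 1, 2 * j - 1, 0.1 * rng.standard_normal((m, m)))    # W_j^(b)

# D~1: two independent 4m x 4m diagonal blocks of S~.
h = 4 * m
D1 = np.zeros_like(S)
D1[:h, :h], D1[h:, h:] = S[:h, :h], S[h:, h:]

S2 = np.linalg.solve(D1, S)                 # S~2 = D~1^{-1} S~, so S~ = D~1 S~2
T = S2 - np.eye(2 * m * p)
T[:h, h:h + m] = 0                          # right spike of the first half
T[h:, h - m:h] = 0                          # left spike of the second half
assert np.allclose(T, 0)                    # everything else is the identity
```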
Truncated SPIKE
When A is diagonally-dominant, in the reduced system
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\ & & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix}\text{,} }[/math]
the blocks [math]\displaystyle{ \boldsymbol{V}_j^{(t)} }[/math] and [math]\displaystyle{ \boldsymbol{W}_j^{(b)} }[/math] are often negligible. With them omitted, the reduced system becomes block diagonal
- [math]\displaystyle{ \begin{bmatrix} \boldsymbol{I}_m\\ & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m\\ & & & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)}\\ & & & \ddots & \ddots & \ddots\\ & & & & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m\\ & & & & & & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)}\\ & & & & & & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m\\ & & & & & & & & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix} }[/math]
and can be easily solved in parallel [3].
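A sketch of this step, assuming the spike tips and the pieces of G̃ are stored as lists of blocks (an illustrative layout, as in the earlier sketches), might look as follows; each interface system is independent of the others.

```python
# Sketch of the truncated-SPIKE postprocessing: with V_j^(t) and W_j^(b)
# dropped, the reduced system splits into one independent 2m x 2m system per
# partition interface, plus trivial identities for X_1^(t) and X_p^(b).
import numpy as np

def truncated_reduced_solve(Vb, Wt, Gt, Gb, p, m):
    Xt, Xb = [None] * p, [None] * p
    Xt[0] = Gt[0]                      # first block row:  I X_1^(t) = G_1^(t)
    Xb[p - 1] = Gb[p - 1]              # last block row:   I X_p^(b) = G_p^(b)
    for j in range(p - 1):             # independent interfaces -> parallel
        M = np.block([[np.eye(m), Vb[j]],
                      [Wt[j + 1], np.eye(m)]])
        rhs = np.vstack([Gb[j], Gt[j + 1]])
        sol = np.linalg.solve(M, rhs)
        Xb[j], Xt[j + 1] = sol[:m], sol[m:]
    return Xt, Xb
```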
The truncated SPIKE algorithm can be wrapped inside some outer iterative scheme (e.g., BiCGSTAB or iterative refinement) to improve the accuracy of the solution.
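As an illustration, a generic iterative-refinement wrapper around an approximate solve can be sketched as follows; here `approx_solve` is a hypothetical callable standing in for the truncated SPIKE solver, not part of any SPIKE library.

```python
# Sketch of wrapping an approximate solver (a stand-in for truncated SPIKE)
# in iterative refinement; approx_solve(r) returns an approximate solution
# of A x = r.
import numpy as np

def refine(A, b, approx_solve, tol=1e-10, max_iter=20):
    x = approx_solve(b)
    for _ in range(max_iter):
        r = b - A @ x                  # current residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + approx_solve(r)        # correct with another approximate solve
    return x
```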
SPIKE for tridiagonal systems
The first SPIKE partitioning and algorithm was presented in [4] and was designed as a means to improve the stability of a parallel tridiagonal solver based on Givens rotations. A version of the algorithm, termed g-Spike, based on serial Givens rotations applied independently within each block, was designed for NVIDIA GPUs [5]. A SPIKE-based algorithm for the GPU that relies on a special block diagonal pivoting strategy is described in [6].
SPIKE as a preconditioner
The SPIKE algorithm can also function as a preconditioner for iterative methods for solving linear systems. To solve a linear system Ax = b using a SPIKE-preconditioned iterative solver, one extracts center bands from A to form a banded preconditioner M and solves linear systems involving M in each iteration with the SPIKE algorithm.
In order for the preconditioner to be effective, row and/or column permutation is usually necessary to move "heavy" elements of A close to the diagonal so that they are covered by the preconditioner. This can be accomplished by computing the weighted spectral reordering of A.
The SPIKE algorithm can be generalized by not restricting the preconditioner to be strictly banded. In particular, the diagonal block in each partition can be a general matrix and thus handled by a direct general linear system solver rather than a banded solver. This enhances the preconditioner, and hence improves the chance of convergence and reduces the number of iterations.
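The following sketch illustrates the idea with SciPy's BiCGSTAB, using a serial banded solve (scipy.linalg.solve_banded) purely as a stand-in for a SPIKE solve of the banded preconditioner M; the matrix, the kept bandwidths and the sizes are illustrative only.

```python
# Sketch of using a banded preconditioner with a Krylov method.  The central
# band of A (bandwidths kl, ku) is extracted as M, and solves with M are done
# here by a serial banded solver in place of a parallel SPIKE solve.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import solve_banded
from scipy.sparse.linalg import LinearOperator, bicgstab

def banded_preconditioner(A, kl, ku):
    A = sp.csr_matrix(A)
    n = A.shape[0]
    ab = np.zeros((kl + ku + 1, n))            # LAPACK-style banded storage of M
    for d in range(-kl, ku + 1):
        ab[ku - d, max(d, 0):n + min(d, 0)] = A.diagonal(d)
    # The operator applies M^{-1}, which is what SciPy expects as M.
    return LinearOperator((n, n), matvec=lambda r: solve_banded((kl, ku), ab, r))

# Illustrative example: keep only the tridiagonal part of a pentadiagonal A.
rng = np.random.default_rng(2)
n = 500
A = sp.diags([rng.standard_normal(n - abs(k)) for k in (-2, -1, 0, 1, 2)],
             [-2, -1, 0, 1, 2], format="csr") + 10 * sp.eye(n)
b = rng.standard_normal(n)
M = banded_preconditioner(A, kl=1, ku=1)
x, info = bicgstab(A, b, M=M)                  # info == 0 indicates convergence
```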
Implementations
Intel offers an implementation of the SPIKE algorithm under the name Intel Adaptive Spike-Based Solver [7]. Tridiagonal solvers have also been developed for NVIDIA GPUs [8][9] and for Intel Xeon Phi co-processors. The method in [10] is the basis for a tridiagonal solver in the cuSPARSE library.[1] The Givens-rotation-based solver was also implemented for the GPU and the Intel Xeon Phi.[2]
References
- ↑ NVIDIA, Accessed October 28, 2014. CUDA Toolkit Documentation v. 6.5: cuSPARSE, http://docs.nvidia.com/cuda/cusparse.
- ↑ Venetis, Ioannis; Sobczyk, Aleksandros; Kouris, Alexandros; Nakos, Alexandros; Nikoloutsakos, Nikolaos; Gallopoulos, Efstratios (2015-09-03). "A general tridiagonal solver for coprocessors: Adapting g-Spike for the Intel Xeon Phi". https://www.researchgate.net/publication/282286515.
- ^ Polizzi, E.; Sameh, A. H. (2006). "A parallel hybrid banded system solver: the SPIKE algorithm". Parallel Computing 32 (2): 177–194. doi:10.1016/j.parco.2005.07.005.
- ^ Polizzi, E.; Sameh, A. H. (2007). "SPIKE: A parallel environment for solving banded linear systems". Computers & Fluids 36: 113–141. doi:10.1016/j.compfluid.2005.07.005.
- ^ Mikkelsen, C. C. K.; Manguoglu, M. (2008). "Analysis of the Truncated SPIKE Algorithm". SIAM J. Matrix Anal. Appl. 30 (4): 1500–1519. doi:10.1137/080719571.
- ^ Manguoglu, M.; Sameh, A. H.; Schenk, O. (2009). "PSPIKE: A Parallel Hybrid Sparse Linear System Solver". Euro-Par 2009 Parallel Processing. Lecture Notes in Computer Science. 5704. pp. 797–808. doi:10.1007/978-3-642-03869-3_74. ISBN 978-3-642-03868-6. Bibcode: 2009LNCS.5704..797M.
- ^ "Intel Adaptive Spike-Based Solver - Intel Software Network". http://software.intel.com/en-us/articles/intel-adaptive-spike-based-solver/.
- ^ Sameh, A. H.; Kuck, D. J. (1978). "On Stable Parallel Linear System Solvers". Journal of the ACM 25: 81–91. doi:10.1145/322047.322054.
- ^ Venetis, I.E.; Kouris, A.; Sobczyk, A.; Gallopoulos, E.; Sameh, A. H. (2015). "A direct tridiagonal solver based on Givens rotations for GPU architectures". Parallel Computing 25: 101–116. doi:10.1016/j.parco.2015.03.008.
- ^ Chang, L.-W.; Stratton, J.; Kim, H.; Hwu, W.-M. (2012). "A scalable, numerically stable, high-performance tridiagonal solver using GPUs". Proc. Int'l. Conf. High Performance Computing, Networking Storage and Analysis (SC'12) (Los Alamitos, CA, USA: IEEE Computer Soc. Press): 27:1–27:11. ISBN 978-1-4673-0804-5.
Further reading
- Gallopoulos, E.; Philippe, B.; Sameh, A.H. (2015). Parallelism in Matrix Computations. Springer. ISBN 978-94-017-7188-7. https://www.springer.com/in/book/9789401771870.
Original source: https://en.wikipedia.org/wiki/SPIKE_algorithm