Kaczmarz method


The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving systems of linear equations [math]\displaystyle{ A x = b }[/math]. It was first discovered by the Polish mathematician Stefan Kaczmarz,[1] and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART).[2] ART includes the positivity constraint, making it nonlinear.[3]

The Kaczmarz method is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method.[4]

It has many applications ranging from computed tomography (CT) to signal processing. It can also be obtained by applying the method of successive projections onto convex sets (POCS) to the hyperplanes described by the linear system.[5][6]

Algorithm 1: Kaczmarz algorithm

Let [math]\displaystyle{ Ax = b }[/math] be a system of linear equations, let [math]\displaystyle{ m }[/math] be the number of rows of [math]\displaystyle{ A }[/math], [math]\displaystyle{ a_{i} }[/math] be the [math]\displaystyle{ i }[/math]th row of the complex-valued matrix [math]\displaystyle{ A }[/math], and let [math]\displaystyle{ x^{0} }[/math] be an arbitrary complex-valued initial approximation to the solution of [math]\displaystyle{ Ax=b }[/math]. For [math]\displaystyle{ k=0,1,\ldots }[/math] compute:

[math]\displaystyle{ x^{k+1} = x^{k} + \frac{b_{i} - \langle a_{i}, x^{k} \rangle}{\| a_{i} \|^2} \overline{a_{i}} }[/math]    (1)

where [math]\displaystyle{ i = (k \bmod m) + 1 }[/math], so that the rows are swept through cyclically in the order [math]\displaystyle{ i = 1,2, \ldots, m }[/math], and [math]\displaystyle{ \overline{a_i} }[/math] denotes the complex conjugate of [math]\displaystyle{ a_i }[/math].

If the system is consistent, [math]\displaystyle{ x^k }[/math] converges to the minimum-norm solution, provided that the iterations start with the zero vector.

A more general algorithm can be defined using a relaxation parameter [math]\displaystyle{ \lambda^k }[/math]

[math]\displaystyle{ x^{k+1} = x^{k} + \lambda^k \frac{b_{i} - \langle a_{i}, x^{k} \rangle}{\| a_{i} \|^2} \overline{a_{i}} }[/math]

There are versions of the method that converge to a regularized weighted least squares solution when applied to a system of inconsistent equations and, at least as far as initial behavior is concerned, at a lesser cost than other iterative methods, such as the conjugate gradient method.[7]
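
As a concrete illustration of iteration (1) and its relaxed variant, the following NumPy sketch implements the cyclic Kaczmarz sweep for a real-valued system; the function name, sweep count and test problem are illustrative choices, not part of the original sources.

    import numpy as np

    def kaczmarz(A, b, n_sweeps=100, lam=1.0, x0=None):
        """Cyclic Kaczmarz iteration (1) with relaxation parameter lam (real-valued case)."""
        m, n = A.shape
        x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
        row_norms_sq = np.sum(A * A, axis=1)          # ||a_i||^2 for each row
        for _ in range(n_sweeps):
            for i in range(m):                        # sweep through the rows cyclically
                x = x + lam * (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        return x

    # A small consistent test system
    A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
    x_true = np.array([1.0, -2.0])
    b = A @ x_true
    print(np.allclose(kaczmarz(A, b, n_sweeps=500), x_true, atol=1e-6))   # True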

Algorithm 2: Randomized Kaczmarz algorithm

In 2009, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Thomas Strohmer and Roman Vershynin[8] in which the i-th equation is selected randomly with probability proportional to [math]\displaystyle{ \|a_i \|^2. }[/math]

This method can be seen as a particular case of stochastic gradient descent.[9]

Under such circumstances [math]\displaystyle{ x_{k} }[/math] converges exponentially fast to the solution of [math]\displaystyle{ Ax=b, }[/math] and the rate of convergence depends only on the scaled condition number [math]\displaystyle{ \kappa(A) }[/math].
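
For concreteness, a minimal NumPy sketch of this randomized selection rule is given below (real-valued case); the function name, iteration count and test problem are illustrative assumptions.

    import numpy as np

    def randomized_kaczmarz(A, b, n_iters=2000, rng=None):
        """Randomized Kaczmarz: select row i with probability ||a_i||^2 / ||A||_F^2."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        x = np.zeros(n)
        row_norms_sq = np.sum(A * A, axis=1)
        probs = row_norms_sq / row_norms_sq.sum()     # probabilities proportional to ||a_i||^2
        for _ in range(n_iters):
            i = rng.choice(m, p=probs)
            x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))                 # overdetermined consistent system
    x_true = np.ones(10)
    b = A @ x_true
    print(np.linalg.norm(randomized_kaczmarz(A, b, rng=rng) - x_true))    # close to zero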

Theorem. Let [math]\displaystyle{ x }[/math] be the solution of [math]\displaystyle{ Ax=b. }[/math] Then Algorithm 2 converges to [math]\displaystyle{ x }[/math] in expectation, with the average error:
[math]\displaystyle{ \mathbb{E} \|x_k-x \|^2 \leq \left (1-\kappa(A)^{-2} \right )^{k} \cdot \| x_0-x \|^2. }[/math]

Proof

We have

[math]\displaystyle{ \forall z \in \Complex^n: \quad \sum_{j=1}^{m}|\langle z,a_j \rangle|^2 \geq \frac{\| z \|^2}{\| A^{-1} \|^2} }[/math]    (2)

Using

[math]\displaystyle{ \| A \|^2=\sum_{j=1}^{m} \| a_j \|^2 }[/math]

we can write (2) as

[math]\displaystyle{ \forall z \in \Complex^n: \quad \sum_{j=1}^{m} \frac{\| a_j \|^2}{\| A \|^2}\left|\left\langle z,\frac {a_j}{\| a_j \| }\right\rangle \right|^2 \geq \kappa(A)^{-2}{\| z \|^2} }[/math]    (3)

The main point of the proof is to view the left hand side in (3) as an expectation of some random variable. Namely, recall that the solution space of the [math]\displaystyle{ j }[/math]-th equation of [math]\displaystyle{ Ax=b }[/math] is the hyperplane

[math]\displaystyle{ \{y : \langle y,a_j \rangle = b_j\}, }[/math]

whose normal is [math]\displaystyle{ \tfrac{a_j}{\| a_j \|}. }[/math] Define a random vector Z whose values are the normals to all the equations of [math]\displaystyle{ Ax=b }[/math], with probabilities as in our algorithm:

[math]\displaystyle{ Z=\frac {a_j}{\| a_j \| } }[/math] with probability [math]\displaystyle{ \frac{\| a_j \|^2}{\| A \|^2} \qquad\qquad\qquad j=1,\ldots,m }[/math]

Then (3) says that

[math]\displaystyle{ \forall z \in \Complex^n: \quad \mathbb E|\langle z,Z\rangle|^2 \geq\kappa(A)^{-2}{\| z \|^2} }[/math]    (4)

The orthogonal projection [math]\displaystyle{ P }[/math] onto the solution space of a random equation of [math]\displaystyle{ Ax=b }[/math] is given by [math]\displaystyle{ Pz= z-\langle z-x, Z\rangle Z. }[/math]

Now we are ready to analyze our algorithm. We want to show that the error [math]\displaystyle{ {\| x_k-x \|^2} }[/math] reduces at each step, on average (conditioned on the previous steps), by at least the factor [math]\displaystyle{ (1-\kappa(A)^{-2}). }[/math] The next approximation [math]\displaystyle{ x_k }[/math] is computed from [math]\displaystyle{ x_{k-1} }[/math] as [math]\displaystyle{ x_k= P_kx_{k-1}, }[/math] where [math]\displaystyle{ P_1,P_2,\ldots }[/math] are independent realizations of the random projection [math]\displaystyle{ P. }[/math] The vector [math]\displaystyle{ x_{k-1}-x_k }[/math] is in the kernel of [math]\displaystyle{ P_k. }[/math] It is orthogonal to the solution space of the equation onto which [math]\displaystyle{ P_k }[/math] projects, which contains the vector [math]\displaystyle{ x_k-x }[/math] (recall that [math]\displaystyle{ x }[/math] is the solution to all equations). The orthogonality of these two vectors then yields

[math]\displaystyle{ \| x_k-x \|^2=\| x_{k-1}-x \|^2-\| x_{k-1}-x_k \|^2. }[/math]

To complete the proof, we have to bound [math]\displaystyle{ \| x_{k-1}-x_k \|^2 }[/math] from below. By the definition of [math]\displaystyle{ x_k }[/math], we have

[math]\displaystyle{ \| x_{k-1}-x_k \|=|\langle x_{k-1}-x,Z_k\rangle| }[/math]

where [math]\displaystyle{ Z_1,Z_2,\ldots }[/math] are independent realizations of the random vector [math]\displaystyle{ Z. }[/math]

Thus

[math]\displaystyle{ \| x_k-x \|^2 \leq \left(1-\left|\left\langle\frac{x_{k-1}-x}{\| x_{k-1}-x \| }, Z_k\right\rangle\right|^2\right){\| x_{k-1}-x \|^2}. }[/math]

Now we take the expectation of both sides conditional upon the choice of the random vectors [math]\displaystyle{ Z_1,\ldots,Z_{k-1} }[/math] (hence we fix the choice of the random projections [math]\displaystyle{ P_1,\ldots,P_{k-1} }[/math] and thus the random vectors [math]\displaystyle{ x_1,\ldots,x_{k-1} }[/math] and we average over the random vector [math]\displaystyle{ Z_k }[/math]). Then

[math]\displaystyle{ \mathbb E_{Z_1,\ldots,Z_{k-1}}{\| x_k-x \|^2} = \left(1-\mathbb E_{Z_1,\ldots,Z_{k-1}, Z_k}\left|\left\langle\frac{x_{k-1}-x}{\| x_{k-1}-x \| },Z_k\right\rangle\right|^2\right){\| x_{k-1}-x \|^2}. }[/math]

By (4) and the independence,

[math]\displaystyle{ \mathbb E_{Z_1,\ldots,Z_{k-1}}{\| x_k-x \|^2} \leq (1-\kappa(A)^{-2}){\| x_{k-1}-x \|^2}. }[/math]

Taking the full expectation of both sides, we conclude that

[math]\displaystyle{ \mathbb E \| x_k-x \|^2 \leq (1-\kappa(A)^{-2})\mathbb E{\| x_{k-1}-x \|^2}.\blacksquare }[/math]
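
As a rough numerical illustration of the theorem (not part of the original argument), the following sketch compares the empirical mean squared error of the randomized iteration with the bound [math]\displaystyle{ (1-\kappa(A)^{-2})^{k} \| x_0-x \|^2 }[/math]; the test matrix, number of trials and variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true

    row_norms_sq = np.sum(A * A, axis=1)
    fro_sq = row_norms_sq.sum()                              # ||A||_F^2
    rate = 1.0 - np.linalg.eigvalsh(A.T @ A)[0] / fro_sq     # 1 - kappa(A)^{-2}

    n_trials, n_steps = 200, 300
    mean_err = np.zeros(n_steps + 1)
    for _ in range(n_trials):
        x = np.zeros(20)
        mean_err[0] += np.linalg.norm(x - x_true) ** 2
        for k in range(1, n_steps + 1):
            i = rng.choice(100, p=row_norms_sq / fro_sq)
            x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
            mean_err[k] += np.linalg.norm(x - x_true) ** 2
    mean_err /= n_trials

    bound = mean_err[0] * rate ** np.arange(n_steps + 1)
    print(mean_err[-1], bound[-1])   # observed mean squared error vs. the theoretical bound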

The superiority of this selection was illustrated with the reconstruction of a bandlimited function from its nonuniformly spaced sampling values. However, it has been pointed out[10] that the reported success by Strohmer and Vershynin depends on the specific choices that were made there in translating the underlying problem, whose geometrical nature is to find a common point of a set of hyperplanes, into a system of algebraic equations. There will always be legitimate algebraic representations of the underlying problem for which the selection method in [8] will perform in an inferior manner.[8][10][11]

The Kaczmarz iteration (1) has a purely geometric interpretation: the algorithm successively projects the current iterate onto the hyperplane defined by the next equation. Hence, any scaling of the equations is irrelevant; it can also be seen from (1) that any (nonzero) scaling of the equations cancels out. Thus, in the randomized Kaczmarz (RK) method, one can use [math]\displaystyle{ \| a_i \| }[/math] or any other weights that may be relevant. Specifically, in the above-mentioned reconstruction example, the equations were chosen with probability proportional to the average distance of each sample point from its two nearest neighbors, a concept introduced by Feichtinger and Gröchenig. For additional progress on this topic, see [12][13] and the references therein.

Algorithm 3: Gower-Richtarik algorithm

In 2015, Robert M. Gower and Peter Richtarik[14] developed a versatile randomized iterative method for solving a consistent system of linear equations [math]\displaystyle{ Ax = b }[/math] which includes the randomized Kaczmarz algorithm as a special case. Other special cases include randomized coordinate descent, randomized Gaussian descent and the randomized Newton method. Block versions and versions with importance sampling of all these methods also arise as special cases. The method is shown to enjoy exponential error decay in expectation (also known as linear convergence), under very mild conditions on the way randomness enters the algorithm. The Gower-Richtarik method is the first algorithm uncovering a "sibling" relationship between these methods, some of which had been proposed independently before, while others were new.

Insights about Randomized Kaczmarz

Interesting new insights about the randomized Kaczmarz method that can be gained from the analysis of the method include:

  • The general rate of the Gower-Richtarik algorithm precisely recovers the rate of the randomized Kaczmarz method in the special case when the former reduces to the latter.
  • The choice of probabilities for which the randomized Kaczmarz algorithm was originally formulated and analyzed (probabilities proportional to the squares of the row norms) is not optimal. Optimal probabilities are the solution of a certain semidefinite program. The theoretical complexity of randomized Kaczmarz with the optimal probabilities can be arbitrarily better than the complexity for the standard probabilities. However, the amount by which it is better depends on the matrix [math]\displaystyle{ A }[/math]. There are problems for which the standard probabilities are optimal.
  • When applied to a system with matrix [math]\displaystyle{ A }[/math] which is positive definite, the randomized Kaczmarz method is equivalent to the Stochastic Gradient Descent (SGD) method (with a very special stepsize) for minimizing the strongly convex quadratic function [math]\displaystyle{ f(x) = \tfrac{1}{2}x^T A x - b^T x. }[/math] Note that since [math]\displaystyle{ f }[/math] is convex, the minimizers of [math]\displaystyle{ f }[/math] must satisfy [math]\displaystyle{ \nabla f(x) = 0 }[/math], which is equivalent to [math]\displaystyle{ Ax = b. }[/math] The "special stepsize" is the stepsize which, along the one-dimensional line spanned by the stochastic gradient, leads to the point that minimizes the Euclidean distance from the unknown(!) minimizer of [math]\displaystyle{ f }[/math], namely, from [math]\displaystyle{ x^* = A^{-1}b. }[/math] This insight is gained from a dual view of the iterative process (below described as "Optimization Viewpoint: Constrain and Approximate").

Six Equivalent Formulations

The Gower-Richtarik method enjoys six seemingly different but equivalent formulations, shedding additional light on how to interpret it (and, as a consequence, how to interpret its many variants, including randomized Kaczmarz):

  • 1. Sketching viewpoint: Sketch & Project
  • 2. Optimization viewpoint: Constrain and Approximate
  • 3. Geometric viewpoint: Random Intersect
  • 4. Algebraic viewpoint 1: Random Linear Solve
  • 5. Algebraic viewpoint 2: Random Update
  • 6. Analytic viewpoint: Random Fixed Point

We now describe some of these viewpoints. The method depends on two parameters:

  • a positive definite matrix [math]\displaystyle{ B }[/math] giving rise to a weighted Euclidean inner product [math]\displaystyle{ \langle x,y \rangle _B := x^T B y }[/math] and the induced norm
[math]\displaystyle{ \|x\|_B = \left (\langle x,x \rangle _B \right )^{\frac{1}{2}}, }[/math]
  • and a random matrix [math]\displaystyle{ S }[/math] with as many rows as [math]\displaystyle{ A }[/math] (and a possibly random number of columns).

1. Sketch and Project

Given the previous iterate [math]\displaystyle{ x^k, }[/math] the new point [math]\displaystyle{ x^{k+1} }[/math] is computed by drawing a random matrix [math]\displaystyle{ S }[/math] (in an i.i.d. fashion from some fixed distribution), and setting

[math]\displaystyle{ x^{k+1} = \underset x \operatorname{arg\ min} \| x - x^k \|_B \text{ subject to } S^T A x = S^T b. }[/math]

That is, [math]\displaystyle{ x^{k+1} }[/math] is obtained as the projection of [math]\displaystyle{ x^k }[/math] onto the randomly sketched system [math]\displaystyle{ S^T Ax = S^T b }[/math]. The idea behind this method is to pick [math]\displaystyle{ S }[/math] in such a way that a projection onto the sketched system is substantially simpler than the solution of the original system [math]\displaystyle{ Ax=b }[/math]. The randomized Kaczmarz method is obtained by picking [math]\displaystyle{ B }[/math] to be the identity matrix, and [math]\displaystyle{ S }[/math] to be the [math]\displaystyle{ i^{th} }[/math] unit coordinate vector with probability [math]\displaystyle{ p_i = \|a_i\|^2_2/\|A\|_F^2. }[/math] Different choices of [math]\displaystyle{ B }[/math] and [math]\displaystyle{ S }[/math] lead to different variants of the method.
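
A minimal NumPy sketch of one sketch-and-project step (via its closed-form solution) is shown below, together with a check that the choices [math]\displaystyle{ B = I }[/math] and [math]\displaystyle{ S = e_i }[/math] reproduce the randomized Kaczmarz update; the function and variable names are illustrative.

    import numpy as np

    def sketch_and_project_step(x, A, b, S, B):
        """Project x onto {x : S^T A x = S^T b} in the norm induced by B."""
        W = np.linalg.solve(B, A.T @ S)                       # B^{-1} A^T S
        M = S.T @ A @ W                                       # S^T A B^{-1} A^T S
        return x - W @ np.linalg.pinv(M) @ (S.T @ (A @ x - b))

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 4))
    x_true = rng.standard_normal(4)
    b = A @ x_true
    x = np.zeros(4)

    i = 3                                                     # some row index
    S = np.zeros((6, 1)); S[i, 0] = 1.0                       # S = i-th unit coordinate vector
    x_sp = sketch_and_project_step(x, A, b, S, np.eye(4))
    x_rk = x + (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]       # randomized Kaczmarz update
    print(np.allclose(x_sp, x_rk))                            # True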

2. Constrain and Approximate

A seemingly different but entirely equivalent formulation of the method (obtained via Lagrangian duality) is

[math]\displaystyle{ x^{k+1} = \underset x \operatorname{arg\ min} \left \|x - x^* \right \|_B \text{ subject to } x = x^k + B^{-1}A^T S y, }[/math]

where [math]\displaystyle{ y }[/math] is also allowed to vary, and where [math]\displaystyle{ x^* }[/math] is any solution of the system [math]\displaystyle{ Ax=b. }[/math] Hence, [math]\displaystyle{ x^{k+1} }[/math] is obtained by first constraining the update to the linear subspace spanned by the columns of the random matrix [math]\displaystyle{ B^{-1}A^T S }[/math], i.e., to

[math]\displaystyle{ \left \{ h  : h = B^{-1} A^T S y, \quad y \text{ can vary } \right \}, }[/math]

and then choosing the point [math]\displaystyle{ x }[/math] from this subspace which best approximates [math]\displaystyle{ x^* }[/math]. This formulation may look surprising, as it seems impossible to perform the approximation step since [math]\displaystyle{ x^* }[/math] is not known (after all, this is what we are trying to compute!). However, it is still possible to do this, simply because [math]\displaystyle{ x^{k+1} }[/math] computed this way is the same as [math]\displaystyle{ x^{k+1} }[/math] computed via the sketch-and-project formulation, in which [math]\displaystyle{ x^* }[/math] does not appear.
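
This equivalence can be checked numerically. The sketch below (with arbitrary illustrative choices of [math]\displaystyle{ B }[/math] and [math]\displaystyle{ S }[/math]) computes the constrain-and-approximate point using a known solution [math]\displaystyle{ x^* }[/math] and compares it with the sketch-and-project update, which never uses [math]\displaystyle{ x^* }[/math].

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((10, 6))
    x_star = rng.standard_normal(6)                  # a solution of Ax = b
    b = A @ x_star

    Q = rng.standard_normal((6, 6))
    B = Q @ Q.T + 6 * np.eye(6)                      # an arbitrary positive definite B
    S = rng.standard_normal((10, 3))                 # an arbitrary sketch matrix
    xk = rng.standard_normal(6)

    W = np.linalg.solve(B, A.T @ S)                  # B^{-1} A^T S spans the allowed updates
    M = S.T @ A @ W                                  # equals W^T B W

    # Sketch and project (x_star is not used)
    x_sp = xk - W @ np.linalg.pinv(M) @ (S.T @ (A @ xk - b))

    # Constrain and approximate: best B-norm approximation of x_star within xk + range(W)
    y = np.linalg.lstsq(W.T @ B @ W, W.T @ B @ (x_star - xk), rcond=None)[0]
    x_ca = xk + W @ y
    print(np.allclose(x_sp, x_ca))                   # True: the two formulations coincide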

5. Random Update

The update can also be written explicitly as

[math]\displaystyle{ x^{k+1} = x^k - B^{-1}A^T S \left (S^T A B^{-1}A^T S \right )^{\dagger} S^T \left (Ax^k - b \right ), }[/math]

where by [math]\displaystyle{ M^\dagger }[/math] we denote the Moore-Penrose pseudo-inverse of matrix [math]\displaystyle{ M }[/math]. Hence, the method can be written in the form [math]\displaystyle{ x^{k+1}=x^k + h^k }[/math], where [math]\displaystyle{ h^k }[/math] is a random update vector.

Letting [math]\displaystyle{ M = S^T A B^{-1}A^T S, }[/math] it can be shown that the system [math]\displaystyle{ M y = S^T (Ax^k - b) }[/math] always has a solution [math]\displaystyle{ y^k }[/math], and that for all such solutions the vector [math]\displaystyle{ x^{k} - B^{-1} A^T S y^k }[/math] is the same. Hence, it does not matter which of these solutions is chosen, and the method can be also written as [math]\displaystyle{ x^{k+1} = x^k - B^{-1}A^T S y^k }[/math]. The pseudo-inverse leads to just one particular solution. The role of the pseudo-inverse is twofold:

  • It allows the method to be written in the explicit "random update" form as above,
  • It makes the analysis simple through the final, sixth, formulation.
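
As an illustration of the random update form, the sketch below applies the pseudo-inverse formula with [math]\displaystyle{ B = I }[/math] and [math]\displaystyle{ S }[/math] chosen as a block of coordinate vectors (a block Kaczmarz step); after the update the sketched equations [math]\displaystyle{ S^T A x = S^T b }[/math] hold exactly, up to rounding. The variable names are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.standard_normal((20, 8))
    x_true = rng.standard_normal(8)
    b = A @ x_true

    x = np.zeros(8)
    rows = rng.choice(20, size=4, replace=False)      # a random block of equations
    S = np.zeros((20, 4))
    S[rows, np.arange(4)] = 1.0                       # columns of S are unit coordinate vectors

    M = S.T @ A @ A.T @ S                             # S^T A B^{-1} A^T S with B = I
    x_new = x - A.T @ S @ np.linalg.pinv(M) @ (S.T @ (A @ x - b))
    print(np.allclose(S.T @ A @ x_new, S.T @ b))      # True: the sketched system is satisfied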

6. Random Fixed Point

If we subtract [math]\displaystyle{ x^* }[/math] from both sides of the random update formula, denote

[math]\displaystyle{ Z := A^T S \left (S^T A B^{-1} A^T S \right )^\dagger S^T A, }[/math]

and use the fact that [math]\displaystyle{ Ax^* = b, }[/math] we arrive at the last formulation:

[math]\displaystyle{ x^{k+1} - x^* = \left (I - B^{-1}Z \right ) \left (x^k - x^* \right ), }[/math]

where [math]\displaystyle{ I }[/math] is the identity matrix. The iteration matrix, [math]\displaystyle{ I- B^{-1}Z, }[/math] is random, whence the name of this formulation.

Convergence

By taking conditional expectations in the 6th formulation (conditional on [math]\displaystyle{ x^k }[/math]), we obtain

[math]\displaystyle{ \mathbb{E} \left. \left [x^{k+1}-x^* \right | x^k \right ] = \left (I - B^{-1}\mathbb{E}[Z] \right ) \left [x^k - x^* \right ]. }[/math]

By taking expectation again, and using the tower property of expectations, we obtain

[math]\displaystyle{ \mathbb{E} \left [x^{k+1}-x^* \right ] = (I - B^{-1}\mathbb{E}[Z]) \mathbb{E}\left [x^k - x^* \right ]. }[/math]

Gower and Richtarik[14] show that

[math]\displaystyle{ \rho: = \left \|I-B^{-\frac{1}{2}}\mathbb{E}[Z]B^{-\frac{1}{2}} \right \|_B = \lambda_{\max} \left (I - B^{-1}\mathbb{E}[Z] \right ), }[/math]

where the matrix norm is defined by

[math]\displaystyle{ \|M\|_B := \max_{x\neq 0} \frac{\|Mx\|_B}{\|x\|_B}. }[/math]

Moreover, without any assumptions on [math]\displaystyle{ S }[/math] one has [math]\displaystyle{ 0\leq \rho \leq 1. }[/math] By taking norms and unrolling the recurrence, we obtain

Theorem [Gower & Richtarik 2015]

[math]\displaystyle{ \left \| \mathbb{E} \left [x^{k}-x^* \right ] \right \|_B \leq \rho^k \| x^0 - x^* \|_B. }[/math]

Remark. A sufficient condition for the expected residuals to converge to 0 is [math]\displaystyle{ \rho \lt  1. }[/math] This can be achieved if [math]\displaystyle{ A }[/math] has full column rank and under very mild conditions on [math]\displaystyle{ S. }[/math] Convergence of the method can be established also without the full column rank assumption in a different way.[15]

It is also possible to show a stronger result:

Theorem [Gower & Richtarik 2015]

The expected squared norms (rather than norms of expectations) converge at the same rate:

[math]\displaystyle{ \mathbb{E} \left \| \left [x^{k}-x^* \right ] \right \|^2_B \leq \rho^k \left \|x^0 - x^* \right \|^2_B. }[/math]

Remark. This second type of convergence is stronger due to the following identity[14] which holds for any random vector [math]\displaystyle{ x }[/math] and any fixed vector [math]\displaystyle{ x^* }[/math]:

[math]\displaystyle{ \left\|\mathbb{E} \left [x - x^* \right ] \right \|^2 = \mathbb{E}\left [ \left \|x-x^* \right \|^2 \right ] - \mathbb{E} \left [\|x-\mathbb{E}[x]\|^2 \right ]. }[/math]

Convergence of Randomized Kaczmarz

We have seen that the randomized Kaczmarz method appears as a special case of the Gower-Richtarik method for [math]\displaystyle{ B=I }[/math] and [math]\displaystyle{ S }[/math] being the [math]\displaystyle{ i^{th} }[/math] unit coordinate vector with probability [math]\displaystyle{ p_i = \|a_i\|_2^2/\|A\|_F^2, }[/math] where [math]\displaystyle{ a_i }[/math] is the [math]\displaystyle{ i^{th} }[/math] row of [math]\displaystyle{ A. }[/math] It can be checked by direct calculation that

[math]\displaystyle{ \rho = \|I-B^{-1}\mathbb{E}[Z]\|_B = 1 - \frac{\lambda_{\min}(A^T A)}{\|A\|_F^2}. }[/math]
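
This identity can be checked numerically: the sketch below forms [math]\displaystyle{ \mathbb{E}[Z] }[/math] for the randomized Kaczmarz sketch directly from its definition and compares [math]\displaystyle{ \lambda_{\max}(I - \mathbb{E}[Z]) }[/math] with [math]\displaystyle{ 1 - \lambda_{\min}(A^T A)/\|A\|_F^2 }[/math]; the test matrix and variable names are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((30, 5))

    row_norms_sq = np.sum(A * A, axis=1)
    fro_sq = row_norms_sq.sum()                                  # ||A||_F^2

    # For S = e_i: Z_i = a_i a_i^T / ||a_i||^2, chosen with probability p_i = ||a_i||^2 / ||A||_F^2
    EZ = sum((row_norms_sq[i] / fro_sq) * np.outer(A[i], A[i]) / row_norms_sq[i]
             for i in range(A.shape[0]))

    rho_direct = np.linalg.eigvalsh(np.eye(5) - EZ)[-1]          # lambda_max(I - E[Z])
    rho_formula = 1.0 - np.linalg.eigvalsh(A.T @ A)[0] / fro_sq  # 1 - lambda_min(A^T A)/||A||_F^2
    print(np.isclose(rho_direct, rho_formula))                   # True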

Further Special Cases

Algorithm 4: PLSS-Kaczmarz

Since the convergence of the (randomized) Kaczmarz method depends on a rate of convergence, the method may make slow progress on some practical problems.[10] To ensure finite termination of the method, Johannes Brust and Michael Saunders[16] have developed a process that generalizes the (randomized) Kaczmarz iteration and terminates in at most [math]\displaystyle{ m }[/math] iterations to a solution for the consistent system [math]\displaystyle{ Ax = b }[/math]. The process is based on dimensionality reduction, or projections onto lower dimensional spaces, which is how it derives its name PLSS (Projected Linear Systems Solver). An iteration of PLSS-Kaczmarz can be regarded as the generalization

[math]\displaystyle{ x^{k+1} = x^k + A^T_{:,1:k}(A_{1:k,:}A^T_{:,1:k})^{\dagger}(b_{1:k} - A_{1:k,:}x^k) }[/math]

where [math]\displaystyle{ A_{1:k,:} }[/math] is the selection of rows 1 to [math]\displaystyle{ k }[/math] and all columns of [math]\displaystyle{ A }[/math]. A randomized version of the method uses [math]\displaystyle{ k }[/math] non-repeated row indices at each iteration: [math]\displaystyle{ \{i_1,\ldots,i_{k-1},i_k\} }[/math], where each [math]\displaystyle{ i_j }[/math] is in [math]\displaystyle{ 1,2,...,m }[/math]. The iteration converges to a solution when [math]\displaystyle{ k =m }[/math]. In particular, since [math]\displaystyle{ A_{1:m,:} = A }[/math] it holds that

[math]\displaystyle{ Ax^{m+1} = Ax^m + AA^T(AA^T)^{\dagger}(b-Ax^m) = b }[/math]

and therefore [math]\displaystyle{ x^{m+1} }[/math] is a solution to the linear system. The computation of iterates in PLSS-Kaczmarz can be simplified and organized effectively. The resulting algorithm only requires matrix-vector products and has a direct form

algorithm PLSS-Kaczmarz is
    input: matrix A, right-hand side b
    output: solution x such that Ax=b

    x := 0, P := 0                          // x and P start as zero n-vectors; P collects orthonormal update directions
    for k in 1,2,...,m do
        a := A(ik,:)'                       // Select an index ik in 1,...,m without resampling
        d := P' * a
        c1 := norm(a)
        c2 := norm(d)
        c3 := (b(ik) - x'*a)/((c1-c2)*(c1+c2))   // Step length: (b(ik) - a'x) / (||a||^2 - ||P'a||^2)
        p := c3*(a - P*(P'*a))              // Move along the component of a orthogonal to previous updates
        P := [ P, p/norm(p) ]               // Append a normalized update
        x := x + p

    return x
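
To complement the pseudocode, the following NumPy sketch implements the same iteration (an interpretation under the stated assumptions, not the authors' reference implementation), using non-repeated random row indices; on a consistent square system it reaches the solution after at most m steps, up to rounding.

    import numpy as np

    def plss_kaczmarz(A, b, rng=None):
        """PLSS-Kaczmarz sketch: finite termination after at most m iterations on a consistent system."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        x = np.zeros(n)
        P = np.zeros((n, 0))                        # orthonormal basis of previous update directions
        for i in rng.permutation(m):                # non-repeated row indices
            a = A[i]
            q = a - P @ (P.T @ a)                   # component of a orthogonal to range(P)
            qn2 = q @ q                             # equals ||a||^2 - ||P^T a||^2
            if qn2 < 1e-12:                         # redundant row of a consistent system; skip
                continue
            x = x + ((b[i] - x @ a) / qn2) * q      # enforce the i-th equation, keeping earlier ones
            P = np.column_stack([P, q / np.sqrt(qn2)])
        return x

    rng = np.random.default_rng(4)
    A = rng.standard_normal((8, 8))
    x_true = rng.standard_normal(8)
    b = A @ x_true
    print(np.allclose(plss_kaczmarz(A, b, rng=rng), x_true))    # True: exact after m steps (up to rounding)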



References

  • Kaczmarz, Stefan (1937), "Angenäherte Auflösung von Systemen linearer Gleichungen", Bulletin International de l'Académie Polonaise des Sciences et des Lettres. Classe des Sciences Mathématiques et Naturelles. Série A, Sciences Mathématiques 35: pp. 355–357, http://jasonstockmann.com/Jason_Stockmann/Welcome_files/kaczmarz_english_translation_1937.pdf 
  • Chong, Edwin K. P.; Zak, Stanislaw H. (2008), An Introduction to Optimization (3rd ed.), John Wiley & Sons, pp. 226–230 
  • Gordon, Richard; Bender, Robert; Herman, Gabor (1970), "Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography", Journal of Theoretical Biology 29 (3): 471–481, doi:10.1016/0022-5193(70)90109-8, PMID 5492997, Bibcode: 1970JThBi..29..471G 
  • Gordon, Richard (2011), Stop breast cancer now! Imagining imaging pathways towards search, destroy, cure and watchful waiting of premetastasis breast cancer. In: Breast Cancer - A Lobar Disease, editor: Tibor Tot, Springer, pp. 167–203 
  • Herman, Gabor (2009), Fundamentals of computerized tomography: Image reconstruction from projection (2nd ed.), Springer, ISBN 9781846287237, https://books.google.com/books?id=BhtGTkEjkOQC&q=Kaczmarz 
  • Censor, Yair; Zenios, S.A. (1997), Parallel optimization: theory, algorithms, and applications, New York: Oxford University Press 
  • Aster, Richard; Borchers, Brian; Thurber, Clifford (2004), Parameter Estimation and Inverse Problems, Elsevier 
  • Strohmer, Thomas; Vershynin, Roman (2009), "A randomized Kaczmarz algorithm for linear systems with exponential convergence", Journal of Fourier Analysis and Applications 15 (2): 262–278, doi:10.1007/s00041-008-9030-4, http://www.eecs.berkeley.edu/~brecht/cs294docs/week1/09.Strohmer.pdf 
  • Needell, Deanna; Srebro, Nati; Ward, Rachel (2015), "Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm", Mathematical Programming 155 (1–2): 549–573, doi:10.1007/s10107-015-0864-7 
  • Censor, Yair; Herman, Gabor; Jiang, M. (2009), "A note on the behavior of the randomized Kaczmarz algorithm of Strohmer and Vershynin", Journal of Fourier Analysis and Applications 15 (4): 431–436, doi:10.1007/s00041-009-9077-x, PMID 20495623 
  • Strohmer, Thomas; Vershynin, Roman (2009b), "Comments on the randomized Kaczmarz method", Journal of Fourier Analysis and Applications 15 (4): 437–440, doi:10.1007/s00041-009-9082-0 
  • Bass, Richard F.; Gröchenig, Karlheinz (2013), "Relevant sampling of band-limited functions", Illinois Journal of Mathematics 57 (1): 43–58, doi:10.1215/ijm/1403534485 
  • Gordon, Dan (2017), "A derandomization approach to recovering bandlimited signals across a wide range of random sampling rates", Numerical Algorithms 77 (4): 1141–1157, doi:10.1007/s11075-017-0356-3 
  • Vinh Nguyen, Quang; Lumban Gaol, Ford (2011), Proceedings of the 2011 2nd International Congress on Computer Applications and Computational Science, 2, Springer, pp. 465–469 
  • Gower, Robert; Richtarik, Peter (2015a), "Randomized iterative methods for linear systems", SIAM Journal on Matrix Analysis and Applications 36 (4): 1660–1690, doi:10.1137/15M1025487 
  • Gower, Robert; Richtarik, Peter (2015b), "Stochastic dual ascent for solving linear systems", arXiv:1512.06890 [math.NA]
  • Brust, Johannes J; Saunders, Michael A (2023), "PLSS: A Projected Linear Systems Solver", SIAM Journal on Scientific Computing 45 (2): A1012–A1037, doi:10.1137/22M1509783, Bibcode: 2023SJSC...45A1012B 


External links

  • [1] A randomized Kaczmarz algorithm with exponential convergence
  • [2] Comments on the randomized Kaczmarz method