Freivalds' algorithm
Freivalds' algorithm (named after Rūsiņš Mārtiņš Freivalds) is a randomized algorithm used to verify matrix multiplication. Given three n × n matrices [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math], the problem is to verify whether [math]\displaystyle{ A \times B = C }[/math]. A naïve algorithm would compute the product [math]\displaystyle{ A \times B }[/math] explicitly and compare term by term whether this product equals [math]\displaystyle{ C }[/math]. However, the best known matrix multiplication algorithm runs in [math]\displaystyle{ O(n^{2.3729}) }[/math] time.[1] Freivalds' algorithm utilizes randomization in order to reduce this time bound to [math]\displaystyle{ O(n^2) }[/math] with high probability. In [math]\displaystyle{ O(kn^2) }[/math] time the algorithm can verify a matrix product with probability of failure less than [math]\displaystyle{ 2^{-k} }[/math].
The algorithm
Input
Three n × n matrices [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math].
Output
Yes, if [math]\displaystyle{ A \times B = C }[/math]; No, otherwise.
Procedure
- Generate an n × 1 random 0/1 vector [math]\displaystyle{ \vec{r} }[/math].
- Compute [math]\displaystyle{ \vec{P} = A \times (B \vec{r}) - C\vec{r} }[/math].
- Output "Yes" if [math]\displaystyle{ \vec{P} = (0,0,\ldots,0)^T }[/math]; "No," otherwise.
Error
If [math]\displaystyle{ A \times B = C }[/math], then the algorithm always returns "Yes". If [math]\displaystyle{ A \times B \neq C }[/math], then the probability that the algorithm returns "Yes" is less than or equal to one half. This is called one-sided error.
By iterating the algorithm k times and returning "Yes" only if all iterations yield "Yes", a runtime of [math]\displaystyle{ O(kn^2) }[/math] and error probability of [math]\displaystyle{ \leq 1/2^k }[/math] is achieved.
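Continuing the sketch above, the k-fold repetition could look as follows (again an assumed interface, not canonical code):

```python
def freivalds_verify(A, B, C, k=20, seed=None):
    """Return True ("Yes") iff all k independent trials pass.
    If A @ B != C, each trial wrongly passes with probability <= 1/2,
    so the overall probability of a wrong "Yes" is <= 2**-k."""
    rng = np.random.default_rng(seed)
    return all(freivalds_trial(A, B, C, rng) for _ in range(k))
```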
Example
Suppose one wished to determine whether:
- [math]\displaystyle{ AB = \begin{bmatrix} 2 & 3 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix} \stackrel{?}{=} \begin{bmatrix} 6 & 5 \\ 8 & 7 \end{bmatrix} = C. }[/math]
A random two-element vector with entries equal to 0 or 1 is selected – say [math]\displaystyle{ \vec{r} = \begin{bmatrix}1 \\ 1\end{bmatrix} }[/math] – and used to compute:
- [math]\displaystyle{ \begin{align} A \times (B \vec{r}) - C\vec{r} & = \begin{bmatrix} 2 & 3 \\ 3 & 4 \end{bmatrix} \left( \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix} \begin{bmatrix}1 \\ 1\end{bmatrix} \right) - \begin{bmatrix} 6 & 5 \\ 8 & 7 \end{bmatrix} \begin{bmatrix}1 \\ 1\end{bmatrix} \\ & = \begin{bmatrix} 2 & 3 \\ 3 & 4 \end{bmatrix} \begin{bmatrix}1 \\ 3\end{bmatrix} - \begin{bmatrix}11 \\ 15\end{bmatrix} \\ & = \begin{bmatrix}11 \\ 15\end{bmatrix} - \begin{bmatrix}11 \\ 15\end{bmatrix} \\ & = \begin{bmatrix}0 \\ 0\end{bmatrix}. \end{align} }[/math]
This yields the zero vector, suggesting the possibility that AB = C. However, if in a second trial the vector [math]\displaystyle{ \vec{r} = \begin{bmatrix}1 \\ 0\end{bmatrix} }[/math] is selected, the result becomes:
- [math]\displaystyle{ A \times (B \vec{r}) - C\vec{r} = \begin{bmatrix} 2 & 3 \\ 3 & 4 \end{bmatrix} \left( \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix} \right) - \begin{bmatrix} 6 & 5 \\ 8 & 7 \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}-1 \\ -1\end{bmatrix}. }[/math]
The result is nonzero, proving that in fact AB ≠ C.
There are four two-element 0/1 vectors, and half of them give the zero vector in this case ([math]\displaystyle{ \vec{r} = \begin{bmatrix}0 \\ 0\end{bmatrix} }[/math] and [math]\displaystyle{ \vec{r} = \begin{bmatrix}1 \\ 1\end{bmatrix} }[/math]), so the chance of randomly selecting these in two trials (and falsely concluding that AB = C) is [math]\displaystyle{ 1/2^2 }[/math] or 1/4. In the general case, the proportion of r yielding the zero vector may be less than 1/2, and a larger number of trials (such as 20) would be used, rendering the probability of error very small.
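For this 2 × 2 instance the claim can be confirmed by brute force, enumerating all four 0/1 vectors (a small self-contained check; the variable names are ad hoc):

```python
import itertools
import numpy as np

A = np.array([[2, 3], [3, 4]])
B = np.array([[1, 0], [1, 2]])
C = np.array([[6, 5], [8, 7]])

for bits in itertools.product([0, 1], repeat=2):
    r = np.array(bits)
    p = A @ (B @ r) - C @ r
    print(r, "->", p)
# r = [0 0] and r = [1 1] give the zero vector; [0 1] and [1 0] do not,
# so here a single trial wrongly answers "Yes" with probability exactly 1/2.
```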
Error analysis
Let p equal the probability of error. We claim that if A × B = C, then p = 0, and if A × B ≠ C, then p ≤ 1/2.
Case A × B = C
- [math]\displaystyle{ \begin{align} \vec{P} &= A \times (B \vec{r}) - C \vec{r}\\ &= (A \times B)\vec{r} - C\vec{r}\\ &= (A \times B - C)\vec{r}\\ &= \vec{0} \end{align} }[/math]
This holds regardless of the value of [math]\displaystyle{ \vec{r} }[/math], since it uses only the fact that [math]\displaystyle{ A \times B - C = 0 }[/math]. Hence the probability for error in this case is:
- [math]\displaystyle{ \Pr[\vec{P} \neq 0] = 0 }[/math]
Case A × B ≠ C
Let [math]\displaystyle{ D }[/math] be the matrix such that
- [math]\displaystyle{ \vec{P} = D \times \vec{r} = (p_1, p_2, \dots, p_n)^T }[/math]
where
- [math]\displaystyle{ D = A \times B - C = (d_{ij}) }[/math].
Since [math]\displaystyle{ A \times B \neq C }[/math], we have that some element of [math]\displaystyle{ D }[/math] is nonzero. Suppose that the element [math]\displaystyle{ d_{ij} \neq 0 }[/math]. By the definition of matrix multiplication, we have:
- [math]\displaystyle{ p_i = \sum_{k = 1}^n d_{ik}r_k = d_{i1}r_1 + \cdots + d_{ij}r_j + \cdots + d_{in}r_n = d_{ij}r_j + y, }[/math]
where [math]\displaystyle{ y = \sum_{k \neq j} d_{ik}r_k }[/math] collects the remaining terms and is independent of [math]\displaystyle{ r_j }[/math]. Using the law of total probability, we can partition over [math]\displaystyle{ y }[/math]:
- [math]\displaystyle{ \Pr[p_i = 0] = \Pr[p_i = 0 | y = 0]\cdot \Pr[y = 0]\, +\, \Pr[p_i = 0 | y \neq 0] \cdot \Pr[y \neq 0] \qquad (1) }[/math]
We use that:
- [math]\displaystyle{ \Pr[p_i = 0 | y = 0] = \Pr[r_j = 0] = \frac{1}{2} }[/math]
- [math]\displaystyle{ \Pr[p_i = 0 | y \neq 0] = \Pr[r_j = 1 \land d_{ij}=-y] \leq \Pr[r_j = 1] = \frac{1}{2} }[/math]
Plugging these into equation (1), we get:
- [math]\displaystyle{ \begin{align} \Pr[p_i = 0] &\leq \frac{1}{2}\cdot \Pr[y = 0] + \frac{1}{2}\cdot \Pr[y \neq 0]\\ &= \frac{1}{2}\cdot \Pr[y = 0] + \frac{1}{2}\cdot (1 - \Pr[y = 0])\\ &= \frac{1}{2} \end{align} }[/math]
Therefore,
- [math]\displaystyle{ \Pr[\vec{P} = 0] = \Pr[p_1 = 0 \land \dots \land p_i = 0 \land \dots \land p_n = 0] \leq \Pr[p_i = 0] \leq \frac{1}{2}. }[/math]
This completes the proof.
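The bound can also be checked empirically. The sketch below (reusing freivalds_trial from the earlier sketch; the matrix size and trial count are arbitrary) plants a single wrong entry in C and estimates the probability of a false "Yes":

```python
rng = np.random.default_rng(0)
n, trials = 8, 100_000
A = rng.integers(-5, 6, size=(n, n))
B = rng.integers(-5, 6, size=(n, n))
C = A @ B
C[0, 0] += 1          # now A @ B != C, differing in exactly one entry

false_yes = sum(freivalds_trial(A, B, C, rng) for _ in range(trials))
print(false_yes / trials)   # close to 0.5: the worst case of the <= 1/2 bound
```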
Ramifications
Simple algorithmic analysis shows that the running time of this algorithm is [math]\displaystyle{ O(n^2) }[/math] (in big O notation). This beats the classical deterministic algorithm's runtime of [math]\displaystyle{ O(n^3) }[/math] (or [math]\displaystyle{ O(n^{2.3729}) }[/math] if using fast matrix multiplication). The error analysis also shows that if the algorithm is run [math]\displaystyle{ k }[/math] times, an error bound of less than [math]\displaystyle{ 1/2^k }[/math] can be achieved, an exponentially small quantity. The algorithm is also fast in practice due to the wide availability of fast implementations for matrix-vector products. Randomization thus speeds up an otherwise slow deterministic verification at the cost of a small, controllable error probability.
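For a rough illustration of the speedup, one can time verification against naive recomputation, reusing the helpers sketched earlier. The micro-benchmark below uses small integer entries so that the exact zero test is valid; absolute timings are of course machine-dependent:

```python
import time

n = 1000
rng = np.random.default_rng(1)
A = rng.integers(0, 10, size=(n, n))
B = rng.integers(0, 10, size=(n, n))
C = A @ B   # a correct product, so both methods should answer "Yes"

t0 = time.perf_counter()
naive_ok = np.array_equal(A @ B, C)              # recompute: O(n^3)
t1 = time.perf_counter()
freivalds_ok = freivalds_verify(A, B, C, k=20)   # 20 trials: O(k * n^2)
t2 = time.perf_counter()
print(naive_ok, freivalds_ok,
      f"naive {t1 - t0:.3f}s vs Freivalds {t2 - t1:.3f}s")
```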
Freivalds' algorithm frequently arises in introductions to probabilistic algorithms because of its simplicity and because it illustrates how, for some problems, probabilistic algorithms can outperform the best known deterministic ones in practice.
References
- ↑ Virginia Vassilevska Williams. "Breaking the Coppersmith-Winograd Barrier". CiteSeerX 10.1.1.228.9947.
- Raghavan, Prabhakar (1997). "Randomized algorithms". ACM Computing Surveys 28: 33–37. doi:10.1145/234313.234327. http://portal.acm.org/citation.cfm?id=234327. Retrieved 2008-12-16.
- Freivalds, R. (1977), “Probabilistic Machines Can Use Less Running Time”, IFIP Congress 1977, pp. 839–842.
- Mitzenmacher, Michael; Upfal, Eli (2005), Probability and computing: Randomized algorithms and probabilistic analysis, Cambridge University Press, pp. 8–12, ISBN 0521835402, https://books.google.com/books?id=0bAYl6d7hvkC&pg=PA8
Original source: https://en.wikipedia.org/wiki/Freivalds' algorithm.