Coppersmith–Winograd algorithm


In linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known matrix multiplication algorithm from 1990 until 2010. It can multiply two [math]\displaystyle{ n \times n }[/math] matrices in [math]\displaystyle{ \mathcal{O}(n^{2.375477}) }[/math] time[1] (see Big O notation).

This is an improvement over the naïve [math]\displaystyle{ \mathcal{O}(n^3) }[/math] time algorithm and the [math]\displaystyle{ \mathcal{O}(n^{2.807355}) }[/math] time Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice, because the large constant factors in their running times make them impractical.[2] It is possible to improve the exponent further; however, the exponent must be at least 2 (because there are [math]\displaystyle{ n \times n = n^2 }[/math] values in the result which must be computed).
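To make the comparison concrete, the sketch below implements Strassen's divide-and-conquer scheme (not Coppersmith–Winograd, whose construction is far more involved): each recursion replaces the eight block products of the naive method with seven, giving the exponent [math]\displaystyle{ \log_2 7 \approx 2.807 }[/math]. This is an illustrative sketch only; it assumes the inputs are square with size a power of two, and the base-case threshold of 64 is an arbitrary choice.

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices via Strassen's 7-product recursion.

    Sketch for illustration: assumes A and B are n x n with n a
    power of two. Seven recursive products instead of the naive
    eight give running time O(n^(log2 7)) ~ O(n^2.807).
    """
    n = A.shape[0]
    if n <= 64:  # base case: naive product is faster for small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four quadrants of the result
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Even this simple scheme illustrates why the fast algorithms are impractical at realistic sizes: the extra block additions and memory traffic dominate unless the matrices are very large.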

In 2010, Andrew Stothers gave an improvement to the algorithm, [math]\displaystyle{ \mathcal{O}(n^{2.374}). }[/math][3][4] In 2011, Virginia Vassilevska Williams combined a mathematical short-cut from Stothers' paper with her own insights and automated optimization on computers, improving the bound to [math]\displaystyle{ \mathcal{O}(n^{2.3728642}). }[/math][5] In 2014, François Le Gall simplified the methods of Williams and obtained an improved bound of [math]\displaystyle{ \mathcal{O}(n^{2.3728639}). }[/math][6]

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (making it a galactic algorithm).[7]

Henry Cohn, Robert Kleinberg, Balázs Szegedy and Chris Umans have re-derived the Coppersmith–Winograd algorithm using a group-theoretic construction. They also showed that either of two different conjectures would imply that the optimal exponent of matrix multiplication is 2, as has long been suspected. However, they were not able to formulate a specific solution leading to a better running time than Coppersmith–Winograd.[8] Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the slice rank method.[9]

References

  1. Coppersmith, Don; Winograd, Shmuel (1990), "Matrix multiplication via arithmetic progressions", Journal of Symbolic Computation 9 (3): 251–280, doi:10.1016/S0747-7171(08)80013-2, http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf 
  2. Le Gall, F. (2012), "Faster algorithms for rectangular matrix multiplication", Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2012), pp. 514–523, doi:10.1109/FOCS.2012.80 .
  3. Stothers, Andrew (2010), On the Complexity of Matrix Multiplication, University of Edinburgh, https://www.era.lib.ed.ac.uk/handle/1842/4734 .
  4. Davie, A.M.; Stothers, A.J. (April 2013), "Improved bound for complexity of matrix multiplication", Proceedings of the Royal Society of Edinburgh 143A (2): 351–370, doi:10.1017/S0308210511001648, https://www.maths.ed.ac.uk/~adavie/a11164.pdf 
  5. Williams, Virginia Vassilevska (2011), Breaking the Coppersmith-Winograd barrier, http://theory.stanford.edu/~virgi/matrixmult-f.pdf 
  6. Le Gall, François (2014), "Powers of tensors and fast matrix multiplication", Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), Bibcode: 2014arXiv1401.7714L 
  7. Robinson, Sara (November 2005), "Toward an Optimal Algorithm for Matrix Multiplication", SIAM News 38 (9), https://archive.siam.org/pdf/news/174.pdf, "Even if someone manages to prove one of the conjectures—thereby demonstrating that ω = 2—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent." 
  8. Cohn, H.; Kleinberg, R.; Szegedy, B.; Umans, C. (2005). "Group-theoretic Algorithms for Matrix Multiplication". 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05). pp. 379. doi:10.1109/SFCS.2005.39. ISBN 0-7695-2468-0. https://authors.library.caltech.edu/23966/. 
  9. Blasiak, J.; Cohn, H.; Church, T.; Grochow, J.; Naslund, E.; Sawin, W.; Umans, C. (2017). "On cap sets and the group-theoretic approach to matrix multiplication". Discrete Analysis. doi:10.19086/da.1245. http://discreteanalysisjournal.com/article/1245-on-cap-sets-and-the-group-theoretic-approach-to-matrix-multiplication. 

Further reading

  • Bürgisser, P.; Clausen, M.; Shokrollahi, M. A. (1997). Algebraic Complexity Theory. Grundlehren der mathematischen Wissenschaften. 315. Springer Verlag. ISBN 3-540-60582-7.