Stanford DASH

Stanford DASH was a cache coherent multiprocessor developed in the late 1980s by a group led by Anoop Gupta, John L. Hennessy, Mark Horowitz, and Monica S. Lam at Stanford University.[1] It was built by adding a pair of Stanford-designed directory boards to each of up to 16 SGI IRIS 4D Power Series machines and then cabling the systems together in a mesh topology using a Stanford-modified version of the Torus Routing Chip.[2] The directory boards implemented a directory-based cache coherence protocol,[3] allowing Stanford DASH to support distributed shared memory for up to 64 processors. Stanford DASH was also notable for both supporting and helping to formalize weak memory consistency models, including release consistency.[4] Because Stanford DASH was the first operational machine to include scalable cache coherence,[5] it influenced subsequent computer science research as well as the commercially available SGI Origin 2000. Stanford DASH is included in the 25th anniversary retrospective of selected papers from the International Symposium on Computer Architecture,[6] appears in several computer science books,[7][8][9][10][11] has been simulated by the University of Edinburgh,[12] and is used as a case study in contemporary computer science classes.[13][14]
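The central idea of a directory-based protocol is that each memory line has a home node that records which processors hold a copy, so coherence traffic can be sent point-to-point instead of broadcast. The C sketch below is an illustrative model only, not the actual DASH hardware protocol: the three-state directory, the 64-bit presence vector, and all function names are simplifying assumptions chosen to show how a home node might service read and write misses by forwarding from the current owner or invalidating sharers.

```c
/* Minimal sketch of a directory-based cache coherence protocol.
 * Illustrative only; DASH's real protocol has more states and messages. */
#include <stdint.h>
#include <stdio.h>

#define NUM_NODES 64  /* matches DASH's 64-processor scale */

typedef enum { UNCACHED, SHARED, DIRTY } DirState;

typedef struct {
    DirState state;
    uint64_t sharers;   /* one presence bit per node */
    int      owner;     /* valid only when state == DIRTY */
} DirEntry;

/* Read miss from `node`: if the line is dirty, fetch it from the owner
 * and downgrade to SHARED; then record the new sharer and supply the line. */
void read_miss(DirEntry *e, int node) {
    if (e->state == DIRTY) {
        printf("fetch line from owner %d, write back to home\n", e->owner);
        e->sharers = (1ULL << e->owner);
        e->state = SHARED;
    }
    if (e->state == UNCACHED) e->state = SHARED;
    e->sharers |= (1ULL << node);
    printf("supply line to node %d\n", node);
}

/* Write miss from `node`: invalidate every other cached copy, then
 * hand exclusive ownership to the writer. */
void write_miss(DirEntry *e, int node) {
    for (int i = 0; i < NUM_NODES; i++)
        if (((e->sharers >> i) & 1) && i != node)
            printf("invalidate copy at node %d\n", i);
    e->sharers = (1ULL << node);
    e->owner = node;
    e->state = DIRTY;
    printf("grant exclusive ownership to node %d\n", node);
}

int main(void) {
    DirEntry line = { UNCACHED, 0, -1 };
    read_miss(&line, 3);   /* node 3 reads: line becomes SHARED */
    read_miss(&line, 7);   /* node 7 also reads */
    write_miss(&line, 7);  /* node 7 writes: node 3 is invalidated */
    return 0;
}
```

Because the directory already knows exactly which nodes cache a line, invalidations touch only those nodes, which is what lets a protocol of this kind scale past bus-based snooping.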
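Release consistency, which DASH helped formalize, constrains the ordering of ordinary memory accesses only around explicit acquire and release synchronization operations. The C11-atomics sketch below is a loose analogy rather than DASH's mechanism (DASH enforced the ordering in hardware); the flag and data names are hypothetical, with memory_order_acquire/memory_order_release standing in for the acquire and release events.

```c
/* Sketch of acquire/release ordering in the style of release consistency. */
#include <stdatomic.h>
#include <stdbool.h>

int shared_data = 0;
atomic_bool ready = false;

void producer(void) {
    shared_data = 42;                      /* ordinary write, freely reorderable */
    atomic_store_explicit(&ready, true,
        memory_order_release);             /* release: prior writes become visible */
}

void consumer(void) {
    while (!atomic_load_explicit(&ready,
        memory_order_acquire))             /* acquire: later reads see released writes */
        ;
    int v = shared_data;                   /* guaranteed to observe 42 */
    (void)v;
}

int main(void) {
    /* In a real machine producer() and consumer() run on different
     * processors; calling them sequentially here just exercises the code. */
    producer();
    consumer();
    return 0;
}
```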

References

  1. Lenoski, Daniel; Laudon, James; Gharachorloo, Kourosh; Weber, Wolf-Dietrich; Gupta, Anoop; Hennessy, John; Horowitz, Mark; Lam, Monica S. (1992). "The Stanford Dash Multiprocessor". Computer 25 (3): 63–79. doi:10.1109/2.121510. http://dl.acm.org/citation.cfm?id=130562. 
  2. Dally, William J.; Seitz, Charles L. (1986). "The torus routing chip". Distributed Computing 1 (4): 187–196. doi:10.1007/BF01660031. 
  3. Lenoski, Daniel; Laudon, James; Gharachorloo, Kourosh; Gupta, Anoop; Hennessy, John (1990). "The directory-based cache coherence protocol for the DASH multiprocessor". ACM. pp. 148–159. doi:10.1145/325164.325132. 
  4. Gharachorloo, Kourosh; Lenoski, Daniel; Laudon, James; Gibbons, Phillip; Gupta, Anoop; Hennessy, John (1990). "Memory consistency and event ordering in scalable shared-memory multiprocessors". pp. 15–26. doi:10.1145/325096.325102. 
  5. Hennessy, John; Patterson, David (2003). Computer Architecture: A Quantitative Approach (Third ed.). Morgan Kaufmann. pp. 655. ISBN 978-1-558-60596-1. https://archive.org/details/computerarchitec0003henn. 
  6. Lenoski, Daniel; Laudon, James; Joe, Truman; Nakahira, David; Stevens, Luis; Gupta, Anoop; Hennessy, John (1998). "The DASH prototype: Implementation and Performance". in Sohi, Gurindar. 25 Years of the International Symposium on Computer Architecture (Selected Papers). ACM. pp. 418–429. http://dl.acm.org/citation.cfm?id=285930. 
  7. Suzuki, Norihisa (1992). Shared Memory Multiprocessing. The MIT Press. pp. 391–406. ISBN 978-0-262-19322-1. 
  8. Loshin, David (1994). High Performance Computing Demystified. Academic Press. pp. 80, 91. ISBN 978-0-124-55825-0. https://archive.org/details/highperformancec00losh/page/80. 
  9. Parhami, Behrooz (1999). Introduction to Parallel Processing: Algorithms and Architectures. Springer. pp. 450–451. ISBN 978-0-306-45970-2. 
  10. Hill, Mark; Jouppi, Norman; Sohi, Gurindar (2000). Readings in Computer Architecture. Morgan Kaufmann. pp. 583–599. ISBN 978-1-55860-539-8. http://dl.acm.org/citation.cfm?id=333067. 
  11. Dandamudi, Sivarama (2003). Hierarchical Scheduling in Parallel and Cluster Systems. Series in Computer Science. Springer US. pp. 21–22. doi:10.1007/978-1-4615-0133-6. ISBN 978-1-4613-4938-9. https://archive.org/details/hierarchicalsche00dand. 
  12. Institute for Computing Systems Architecture, School of Informatics, University of Edinburgh. "Stanford DASH Architecture: Cluster Simulation Model". Retrieved 3 November 2015.
  13. Carl Olson and Mattan Erez, The University of Texas at Austin (2007). "The Stanford Dash Multiprocessor". Retrieved 3 November 2015.
  14. Meng Zhang, Duke University (2010). "The Stanford Dash Multiprocessor". Retrieved 3 November 2015.