List of Folding@home cores

The distributed-computing project Folding@home uses scientific computer programs, referred to as "cores" or "fahcores", to perform calculations.[1][2] Folding@home's cores are based on modified and optimized versions of molecular simulation programs, including TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol, and Desmond.[1][3][4] Each variant is given an arbitrary identifier (Core xx). While the same core can be used by various versions of the client, separating the core from the client enables the scientific methods to be updated automatically as needed without a client update.[1]

Active cores

The cores listed below are currently used by the project.[1]

GROMACS

  • Core a7
  • Core a8
    • Available for Windows, Linux, macOS, and ARM; uses Gromacs 2020.5.[6]

GPU

Cores for the graphics processing unit (GPU) use the graphics chip of modern video cards to perform molecular dynamics. The GPU Gromacs core is not a true port of Gromacs; rather, key elements of Gromacs were taken and adapted for GPU capabilities.[7]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open-source library for molecular simulation. Although based on the GPU2 code, the GPU3 cores add stability and new capabilities.[8] An illustrative sketch of OpenMM's Python interface is shown after the list below.

  • Core 22
    • v0.0.18: Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.4.2.[9]
    • v0.0.20: Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.7.0, which provides performance improvements and many new science features.[10]
  • Core 23
    • v8.0.3: Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 8.0.0, which provides performance improvements, particularly to CUDA, and many new science features.[11]

Inactive cores

The cores listed below are not currently used by the project; they have either been retired as obsolete or are not yet ready for general release.[1]

TINKER

TINKER is a complete and general software package for molecular mechanics and molecular dynamics, with some special features for biopolymers.[12]

  • Tinker core (Core 65)
    • An unoptimized uniprocessor core, this was officially retired because the AMBER and Gromacs cores perform the same tasks much faster. It was available for Windows, Linux, and macOS.[13]

GROMACS

  • GroGPU (Core 10)
    • Available for ATI 1xxx-series GPUs running under Windows.[14][15] Although mostly Gromacs-based, parts of the core were rewritten.[14] This core was retired as of June 6, 2008, following the move to the second generation of the GPU clients.[14]
  • Gro-SMP (Core a1)
    • The original symmetric multiprocessing (SMP) core, which used MPI for inter-process communication.[17][18]
  • GroCVS (Core a2)
    • Available only for x86 Macs and x86/64 Linux, this core is very similar to Core a1, as it uses much of the same code base, including the use of MPI. However, it utilizes more recent Gromacs code and supports more features, such as extra-large work units.[19][20] Officially retired following the move to the threads-based SMP2 core.
  • Gro-PS3
    • Also known as the SCEARD core, this variant was for the PlayStation 3 game system,[21][22] which supported a Folding@home client until it was retired in November 2012. Like the GPU cores, this core performed implicit solvation calculations, but it was also capable of running explicit solvent calculations like the CPU cores, taking the middle ground between the inflexible high-speed GPU cores and the flexible low-speed CPU cores.[23] It used the Cell processor's SPE cores for optimization, but did not support SIMD.
  • Gromacs (Core 78)
    • This is the original Gromacs core,[16] and was available for uniprocessor clients only, supporting Windows, Linux, and macOS.[24]
  • Gromacs 33 (Core a0)
    • Available for Windows, Linux, and macOS uniprocessor clients only, this core uses the Gromacs 3.3 codebase, which allows a broader range of simulations to be run.[16][25]
  • Gromacs SREM (Core 80)
    • This core uses the Serial Replica Exchange Method, also known as REMD (Replica Exchange Molecular Dynamics) or GroST (Gromacs Serial replica exchange with Temperatures), in its simulations (the standard temperature-exchange acceptance rule is sketched after this list). It is available for Windows and Linux uniprocessor clients only.[16][26][27]
  • GroSimT (Core 81)
    • This core performs simulated tempering, the basic idea of which is to enhance sampling by periodically raising and lowering the temperature. This may allow Folding@home to sample the transitions between folded and unfolded conformations of proteins more efficiently.[16] Available for Windows and Linux uniprocessor clients only.[28]
  • DGromacs (Core 79)
    • Available for uniprocessor clients, this core uses SSE2 processor optimization where supported and is capable of running on Windows, Linux, and macOS.[16][29]
  • DGromacsB (Core 7b)
    • Distinct from Core 79 in that it has several scientific additions.[16] Initially released only for the Linux platform in August 2007, it was intended to eventually become available for all platforms.[30]
  • DGromacsC (Core 7c)
    • Very similar to Core 79, this core was initially released in April 2008 for Windows, Linux, and macOS uniprocessor clients.[31]
  • GB Gromacs (Core 7a)
    • Available for uniprocessor clients only, on Windows, Linux, and macOS.[1][16][32]
  • GB Gromacs (Core a4)
    • Available for Windows, Linux,[33] and macOS,[34] this core was originally released in early October 2010,[35] and was later updated to use Gromacs 4.5.3.[33]
  • SMP2 (Core a3)
    • The next generation of the SMP cores, this core uses threads instead of MPI for inter-process communication, and is available for Windows, Linux, and macOS.[36][37]
  • SMP2 bigadv (Core a5)
    • Similar to a3, but this core is specifically designed to run larger-than-normal simulations.[38][39]
  • SMP2 bigadv (Core a6)
    • A newer version of the a5 core.
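
For reference, both the SREM core (Core 80) and the simulated tempering core (Core 81) above rely on temperature-exchange moves. In the replica exchange formulation of Sugita and Okamoto cited above,[27] a proposed swap between replicas i and j, held at inverse temperatures β_i and β_j with instantaneous potential energies E_i and E_j, is accepted with the Metropolis probability (the standard textbook rule, not code taken from the core itself):

  P_{\mathrm{acc}}(i \leftrightarrow j) = \min\left\{1,\ \exp\!\bigl[(\beta_i - \beta_j)(E_i - E_j)\bigr]\right\}, \qquad \beta = \frac{1}{k_B T}

Accepting swaps by this rule preserves the canonical distribution at every temperature while allowing conformations trapped in local minima to escape at high temperature and cool back down; simulated tempering applies an analogous criterion to temperature changes of a single replica, with pre-computed per-temperature weights in place of a partner replica.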

CPMD

Short for Car–Parrinello Molecular Dynamics, this core performs ab initio quantum mechanical molecular dynamics. Unlike classical molecular dynamics calculations, which use a force field approach, CPMD includes the motion of the electrons when computing energies and forces.[40][41] Quantum chemical calculations can yield a very reliable potential energy surface and naturally incorporate many-body interactions.[41]
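
The scheme can be summarized by the extended Lagrangian introduced by Car and Parrinello,[40] shown here in its standard textbook form (quoted for orientation, not taken from the core itself):

  \mathcal{L}_{\mathrm{CP}} = \sum_i \tfrac{1}{2}\mu\,\langle \dot{\psi}_i | \dot{\psi}_i \rangle + \tfrac{1}{2}\sum_I M_I \dot{\mathbf{R}}_I^2 - E\bigl[\{\psi_i\},\{\mathbf{R}_I\}\bigr] + \sum_{i,j} \Lambda_{ij}\bigl(\langle \psi_i | \psi_j \rangle - \delta_{ij}\bigr)

Here μ is a fictitious electron mass, M_I and R_I are the nuclear masses and positions, E is the density-functional energy of the orbitals ψ_i, and the Λ_ij are Lagrange multipliers enforcing orbital orthonormality. Because the electronic degrees of freedom are propagated explicitly, forces come from a quantum mechanical energy surface rather than a fixed force field, which is what distinguishes the QMD core below from the classical cores.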

  • QMD (Core 96)
    • This is a double-precision[41] variant for Windows and Linux uniprocessor clients.[42] It is currently "on hold" because the main QMD developer, Young Min Rhee, graduated in 2006.[41] The core can use a substantial amount of memory, and was only available to machines that chose to "opt in".[41] SSE2 optimization on Intel CPUs is supported.[41] Due to licensing issues involving Intel libraries and SSE2, QMD work units were not assigned to AMD CPUs.[41][43]

SHARPEN

  • SHARPEN Core[44][45]
    • In early 2010 Vijay Pande said, "We've put SHARPEN on hold for now. No ETA to give, sorry. Pushing it further depends a lot on the scientific needs at the time."[46] This core uses a different format from standard Folding@home cores, in that each work packet sent to clients contains more than one "Work Unit" (in the usual sense of the term).

Desmond

The software for this core was developed at D. E. Shaw Research. Desmond performs high-speed molecular dynamics simulations of biological systems on conventional computer clusters.[47][48][49][50] The code uses novel parallel algorithms[51] and numerical techniques[52] to achieve high performance on platforms containing a large number of processors,[53] but may also be executed on a single computer. Desmond and its source code are available without cost for non-commercial use by universities and other not-for-profit research institutions.
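
As a concrete illustration of the parallelization strategy described in the midpoint-method paper cited above,[51] each pairwise interaction can be assigned to the processor that owns the spatial cell containing the midpoint of the two atoms. The sketch below is a hypothetical toy version of that assignment rule, not Desmond source code, and it ignores minimum-image corrections for periodic boundaries.

  # Toy sketch of the midpoint rule for domain decomposition: a pair of atoms is
  # handled by whichever cell of the decomposition contains their geometric midpoint.
  # Hypothetical helper, not Desmond code; periodic minimum-image details are omitted.
  def owning_cell(pos_a, pos_b, box_size, cells_per_side):
      midpoint = [(a + b) / 2.0 for a, b in zip(pos_a, pos_b)]
      return tuple(int((m % box_size) / box_size * cells_per_side) for m in midpoint)

  # Example: two atoms in a 10 nm cubic box divided into 4 x 4 x 4 cells.
  print(owning_cell((1.0, 2.0, 3.0), (1.5, 2.5, 3.5), 10.0, 4))   # -> (0, 0, 1)

Because the midpoint cell is computed identically by every processor, each interaction is evaluated exactly once without extra negotiation, which is part of how Desmond limits communication between processors.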

  • Desmond Core
    • Possibly available for Windows x86 and Linux x86/64,[54] this core is currently in development.[8]

AMBER

Short for Assisted Model Building with Energy Refinement, AMBER is a family of force fields for molecular dynamics, as well as the name of the software package that simulates these force fields.[55] AMBER was originally developed by Peter Kollman at the University of California, San Francisco, and is currently maintained by professors at various universities.[56] The double-precision AMBER core is not currently optimized with SSE or SSE2,[57][58] but it is significantly faster than the Tinker core and adds some functionality that cannot be provided by the Gromacs cores.[58]
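
The AMBER force fields share a widely published functional form; the expression below is the standard one from the AMBER literature, quoted here for orientation rather than taken from the Folding@home core itself:

  V(\mathbf{r}) = \sum_{\text{bonds}} k_b (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} \frac{V_n}{2}\bigl[1 + \cos(n\phi - \gamma)\bigr] + \sum_{i<j} \left( \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right)

The first three sums cover bonded terms (bond stretching, angle bending, and dihedral torsions); the last covers non-bonded Lennard-Jones and Coulomb interactions between atom pairs. The PMD core evaluates this kind of energy function in double precision, as noted above.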

  • PMD (Core 82)
    • Available for Windows and Linux uniprocessor clients only.[57]

ProtoMol

ProtoMol is an object-oriented, component-based framework for molecular dynamics (MD) simulations. It offers high flexibility, easy extensibility and maintainability, and high performance, including parallelization.[59] In 2009, the Pande Group was working on a complementary new technique called Normal Mode Langevin Dynamics, which had the potential to greatly speed up simulations while maintaining the same accuracy.[8][60]

  • ProtoMol Core (Core b4)
    • Available to Linux x86/64 and x86 Windows.[61]

GPU

GPU2

These are the second-generation GPU cores. Unlike the retired GPU1 cores, these variants are for ATI CAL-enabled 2xxx/3xxx-series or later GPUs and NVIDIA CUDA-enabled 8xxx-series or later GPUs.[62]

  • GPU2 (Core 11)
    • Available for x86 Windows clients only.[62] Supported until approximately September 1, 2011, when AMD/ATI dropped support for the Brook programming language this core used and moved to OpenCL. This forced Folding@home to rewrite its ATI GPU core code in OpenCL, the result of which is Core 16.[63]
  • GPU2 (Core 12)
    • Available for x86 Windows clients only.[62]
  • GPU2 (Core 13)
    • Available for x86 Windows clients only.[62]
  • GPU2 (Core 14)
    • Available for x86 Windows clients only,[62] this core was officially released on March 2, 2009.[64]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open-source library for molecular simulation. Although based on the GPU2 code, the GPU3 cores add stability and new capabilities.[8]

  • GPU3 (Core 15)
    • Available for x86 Windows only.[65]
  • GPU3 (Core 16)
    • Available for x86 Windows only.[65] Released alongside the new v7 client, this is a rewrite of Core 11 in OpenCL.[63]
  • GPU3 (Core 17)
    • Available for Windows and Linux on AMD and NVIDIA GPUs using OpenCL. It offers much better performance thanks to OpenMM 5.1.[66]
  • GPU3 (Core 18)
    • Available for Windows on AMD and NVIDIA GPUs using OpenCL. This core was developed to address some critical scientific issues in Core 17[67] and uses the latest technology from OpenMM[68] 6.0.1. There are currently issues regarding the stability and performance of this core on some AMD and NVIDIA Maxwell GPUs, which is why assignment of work units running on this core has been temporarily stopped for some GPUs.[69]
  • GPU3 (Core 21)
    • Available for Windows and Linux on AMD and NVIDIA GPUs using OpenCL. It uses OpenMM 6.2 and fixes the Core 18 AMD/NVIDIA performance issues.[70]

References

  1. "Folding@home Project Summary". https://apps.foldingathome.org/psummary. Retrieved 2019-09-15. 
  2. Zagen30 (2011). "Re: Lucid Virtu and Foldig At Home". http://foldingforum.org/viewtopic.php?f=50&t=19487#p194468. Retrieved 2011-08-30. 
  3. Vijay Pande (2005-10-16). "Folding@home with QMD core FAQ" (FAQ). Stanford University. http://folding.stanford.edu/QMD.html. Retrieved 2006-12-03.  The site indicates that Folding@home uses a modification of CPMD allowing it to run on the supercluster environment.
  4. Vijay Pande (2009-06-17). "Folding@home: How does FAH code development and sysadmin get done?". http://folding.typepad.com/news/2009/06/how-does-fah-code-development-and-sysadmin-get-done.html. Retrieved 2009-06-25. 
  5. "CPU FAH core with AVX support? Mentioned a while back?". 2016-11-07. https://foldingforum.org/viewtopic.php?f=72&t=28119&start=15. Retrieved 2017-02-18. 
  6. "New Client with ARM Support". 24 November 2020. https://foldingathome.org/2020/11/24/new-client-with-arm-support/. 
  7. Vijay Pande (2011). "ATI FAQ: Are these WUs compatible with other fahcores?" (FAQ). Archived from the original on 2012-10-28. https://web.archive.org/web/20121028125028/http://folding.stanford.edu/English/FAQ-ATI. Retrieved 2011-08-23. 
  8. Vijay Pande (2009). "Update on new FAH cores and clients". http://folding.typepad.com/news/2009/09/update-on-new-fah-cores-and-clients.html. Retrieved 2011-08-23. 
  9. "GPU CORE22 0.0.2 coming to ADVANCED". https://foldingforum.org/viewtopic.php?f=24&t=32070. Retrieved 2020-02-14. 
  10. "core22 0.0.20 limited testing with project 17110". https://foldingforum.org/viewtopic.php?f=24&t=37700. Retrieved 2021-01-14. 
  11. "New OpenMM core Core23 available for public use". https://foldingforum.org/viewtopic.php?p=361954#p361954. 
  12. "TINKER Home Page". http://dasher.wustl.edu/tinker/. Retrieved 2012-08-24. 
  13. "Tinker Core". 2011. http://fahwiki.net/index.php/Cores#Tinker_core. Retrieved 2012-08-24. 
  14. "Folding@home on ATI's GPUs: a major step forward". 2011. Archived from the original on 2012-10-28. https://web.archive.org/web/20121028125028/http://folding.stanford.edu/English/FAQ-ATI. Retrieved 2011-08-28. 
  15. "GPU core". 2011. http://fahwiki.net/index.php/Cores#GPU_core. Retrieved 2011-08-28. 
  16. "Gromacs FAQ" (FAQ). 2007. Archived from the original on 2012-07-17. https://web.archive.org/web/20120717063443/http://folding.stanford.edu/English/FAQ-gromacs. Retrieved 2011-09-03. 
  17. "SMP FAQ" (FAQ). 2011. Archived from the original on 2012-09-22. https://web.archive.org/web/20120922013529/http://folding.stanford.edu/English/FAQ-SMP. Retrieved 2011-08-22. 
  18. "Gromacs SMP core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_SMP_core. Retrieved 2011-08-28. 
  19. "Gromacs CVS SMP core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_CVS_SMP_core. Retrieved 2011-08-28. 
  20. "New release: extra-large work units". 2011. http://foldingforum.org/viewtopic.php?f=24&t=10697. Retrieved 2011-08-28. 
  21. "PS3 Screenshot". 2007. http://www.stanford.edu/group/pandegroup/folding/pics/shot-00007.jpg. Retrieved 2011-08-24. 
  22. "PS3 Client". 2008. http://fahwiki.net/index.php/PS3_client. Retrieved 2011-08-28. 
  23. "PS3 FAQ". 2009. http://folding.stanford.edu/English/FAQ-PS3. Retrieved 2011-08-28. 
  24. "Gromacs Core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_core. Retrieved 2011-08-21. 
  25. "Gromacs 33 Core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_33_core. Retrieved 2011-08-21. 
  26. "Gromacs SREM Core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_SREM_core. Retrieved 2011-08-24. 
  27. Sugita, Yuji; Okamoto, Yuko (1999). "Replica-exchange molecular dynamics method for protein folding". Chemical Physics Letters 314 (1–2): 141–151. doi:10.1016/S0009-2614(99)01123-9. Bibcode1999CPL...314..141S. 
  28. "Gromacs Simulated Tempering core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_Simulated_Tempering_core. Retrieved 2011-08-24. 
  29. "Double Gromacs Core". 2011. http://fahwiki.net/index.php/Cores#Double_Gromacs_core. Retrieved 2011-08-22. 
  30. "Double Gromacs B Core". 2011. http://fahwiki.net/index.php/Cores#Double_Gromacs_B_core. Retrieved 2011-08-22. 
  31. "Double Gromacs C Core". 2011. http://fahwiki.net/index.php/Cores#Double_Gromacs_C_core. Retrieved 2011-08-22. 
  32. "GB Gromacs". 2011. http://fahwiki.net/index.php/Cores#GB_Gromacs_core. Retrieved 2011-08-22. 
  33. "Folding Forum • View topic - Public Release of New A4 Cores". http://foldingforum.org/viewtopic.php?f=24&t=17528. 
  34. "Folding Forum • View topic - Project 7600 Adv -> Full FAH". http://foldingforum.org/viewtopic.php?f=24&t=18887#p189345. 
  35. "Project 10412 now on advanced". 2010. http://foldingforum.org/viewtopic.php?p=160829#p160829. Retrieved 2011-09-03. 
  36. "Gromacs CVS SMP2 Core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_SMP2_core. Retrieved 2011-08-22. 
  37. kasson (2011-10-11). "Re: Project:6099 run:3 clone:4 gen:0 - Core needs updating". http://foldingforum.org/viewtopic.php?f=19&t=19802#p197041. Retrieved 2011-10-11. 
  38. "Gromacs CVS SMP2 bigadv Core". 2011. http://fahwiki.net/index.php/Cores#Gromacs_SMP2_bigadv_core. Retrieved 2011-08-22. 
  39. "Introduction of a new SMP core, changes to bigadv". 2011. http://folding.typepad.com/news/2011/03/introduction-of-a-new-smp-core-changes-to-bigadv.html. Retrieved 2011-08-24. 
  40. R. Car; M. Parrinello (1985). "Unified Approach for Molecular Dynamics and Density-Functional Theory". Phys. Rev. Lett. 55 (22): 2471–2474. doi:10.1103/PhysRevLett.55.2471. PMID 10032153. Bibcode1985PhRvL..55.2471C. 
  41. "QMD FAQ" (FAQ). 2007. http://folding.stanford.edu/English/FAQ-QMD. Retrieved 2011-08-28. 
  42. "QMD Core". 2011. http://fahwiki.net/index.php/Cores#QMD_core. Retrieved 2011-08-24. 
  43. "FAH & QMD & AMD64 & SSE2" (FAQ). http://fahwiki.net/index.php/FAH_&_QMD_&_AMD64_&_SSE2. 
  44. "SHARPEN". Archived from the original on December 2, 2008. https://web.archive.org/web/20081202015556/http://p450.caltech.edu/sharpen/sharpenprojects.html. 
  45. "SHARPEN: Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network (deadlink)" (About). Archived from the original on December 1, 2008. https://web.archive.org/web/20081201052516/http://p450.caltech.edu/sharpen/sharpenabout.html. 
  46. "Re: SHARPEN". 2010. http://foldingforum.org/viewtopic.php?f=16&t=13467&p=131478#p131463. Retrieved 2011-08-29. 
  47. Kevin J. Bowers; Edmond Chow; Huafeng Xu; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry et al. (2006). "Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters". ACM/IEEE SC 2006 Conference (SC'06). ACM. pp. 43. doi:10.1109/SC.2006.54. ISBN 0-7695-2700-0. http://sc06.supercomputing.org/schedule/pdf/pap259.pdf. 
  48. Morten Ø. Jensen; David W. Borhani; Kresten Lindorff-Larsen; Paul Maragakis; Vishwanath Jogini; Michael P. Eastwood; Ron O. Dror; David E. Shaw (2010). "Principles of Conduction and Hydrophobic Gating in K+ Channels". Proceedings of the National Academy of Sciences of the United States of America (PNAS) 107 (13): 5833–5838. doi:10.1073/pnas.0911691107. PMID 20231479. Bibcode2010PNAS..107.5833J. 
  49. Ron O. Dror; Daniel H. Arlow; David W. Borhani; Morten Ø. Jensen; Stefano Piana; David E. Shaw (2009). "Identification of Two Distinct Inactive Conformations of the ß2-Adrenergic Receptor Reconciles Structural and Biochemical Observations". Proceedings of the National Academy of Sciences of the United States of America (PNAS) 106 (12): 4689–4694. doi:10.1073/pnas.0811065106. PMID 19258456. Bibcode2009PNAS..106.4689D. 
  50. Yibing Shan; Markus A. Seeliger; Michael P. Eastwood; Filipp Frank; Huafeng Xu; Morten Ø. Jensen; Ron O. Dror; John Kuriyan et al. (2009). "A Conserved Protonation-Dependent Switch Controls Drug Binding in the Abl Kinase". Proceedings of the National Academy of Sciences of the United States of America (PNAS) 106 (1): 139–144. doi:10.1073/pnas.0811223106. PMID 19109437. Bibcode2009PNAS..106..139S. 
  51. Kevin J. Bowers; Ron O. Dror; David E. Shaw (2006). "The Midpoint Method for Parallelization of Particle Simulations". Journal of Chemical Physics (J. Chem. Phys.) 124 (18): 184109:1–11. doi:10.1063/1.2191489. PMID 16709099. Bibcode2006JChPh.124r4109B. http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JCPSA6000124000018184109000001&idtype=cvips&gifs=yes. 
  52. Ross A. Lippert; Kevin J. Bowers; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry; David E. Shaw (2007). "A Common, Avoidable Source of Error in Molecular Dynamics Integrators". Journal of Chemical Physics (J. Chem. Phys.) 126 (4): 046101:1–2. doi:10.1063/1.2431176. PMID 17286520. Bibcode2007JChPh.126d6101L. http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JCPSA6000126000004046101000001&idtype=cvips&gifs=yes. 
  53. Edmond Chow; Charles A. Rendleman; Kevin J. Bowers; Ron O. Dror; Douglas H. Hughes; Justin Gullingsrud; Federico D. Sacerdoti; David E. Shaw (2008). Desmond Performance on a Cluster of Multicore Processors. D. E. Shaw Research Technical Report DESRES/TR--2008-01, July 2008. http://deshawresearch.com/publications.html. 
  54. "Desmond core". http://fahwiki.net/index.php/Cores#Desmond_core. Retrieved 2011-08-24. 
  55. "Amber". 2011. http://ambermd.org. Retrieved 2011-08-23. 
  56. "Amber Developers". 2011. http://ambermd.org/#acknowledgments. Retrieved 2011-08-23. 
  57. "AMBER Core". 2011. http://fahwiki.net/index.php/Cores#AMBER_core. Retrieved 2011-08-23. 
  58. "Folding@Home with AMBER FAQ" (FAQ). 2004. http://www.stanford.edu/group/pandegroup/folding/AMBER.html. Retrieved 2011-08-23. 
  59. "ProtoMol". http://protomol.sourceforge.net/. Retrieved 2011-08-24. 
  60. "Folding@home - About" (FAQ). 2010-07-26. http://folding.stanford.edu/English/About. 
  61. "ProtoMol core". 2011. http://fahwiki.net/index.php/Cores#ProtoMol_core. Retrieved 2011-08-24. 
  62. "GPU2 Core". 2011. http://fahwiki.net/index.php/Cores#GPU2_core. Retrieved 2011-08-23. 
  63. "FAH Support for ATI GPUs". 2011. http://folding.typepad.com/news/2011/03/fah-support-for-ati-gpus.html. Retrieved 2011-08-31. 
  64. ihaque (Pande Group member) (2009). "Folding Forum: Announcing project 5900 and Core_14 on advmethods". http://foldingforum.org/viewtopic.php?f=52&t=8734&start=0. Retrieved 2011-08-23. 
  65. "GPU3 Core". 2011. http://fahwiki.net/index.php/Cores#GPU3_core. Retrieved 2011-08-23. 
  66. "GPU Core 17". 2014. http://folding.typepad.com/news/2013/06/welcome-to-fahcore-17.html. Retrieved 2014-07-12. 
  67. "Core 18 and Maxwell". http://foldingforum.org/viewtopic.php?f=16&t=26980&start=16. Retrieved 19 February 2015. 
  68. "Core18 Projects 10470-10473 to FAH". http://foldingforum.org/viewtopic.php?f=24&t=26732. Retrieved 19 February 2015. 
  69. "New Core18 (login required)". http://foldingforum.org/viewtopic.php?f=66&t=26528. Retrieved 19 February 2015. 
  70. "Core 21 v0.0.11 moving to FAH with p9704, p9712". https://foldingforum.org/viewtopic.php?f=24&t=28010. Retrieved 2019-09-18. 
