Software:HPX

HPX
Developer(s): The STE||AR Group, LSU Center for Computation and Technology (http://stellar.cct.lsu.edu)
Initial release: 2008
Stable release: 1.9.0 / May 3, 2023
Repository: github.com/STEllAR-GROUP/hpx
Written in: C++
Operating systems: Microsoft Windows, Linux, Mac OS X
Type: Partitioned global address space, parallel programming, runtime system
License: Boost Software License[1]
Website: stellar-group.github.io/hpx/docs/sphinx/latest/html/index.html

HPX, short for High Performance ParalleX, is a runtime system for high-performance computing. It is currently under active development by the STE||AR group[2] at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces on increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.[3][4][5]
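This execution model can be sketched with a small example. The code below is a minimal illustration, not taken from the HPX documentation, and assumes a recent HPX release in which the hpx/hpx_main.hpp and hpx/future.hpp headers and the hpx::async/hpx::future API are available; it spawns each recursive call as a lightweight HPX task and synchronizes through a future rather than a global barrier.

  // Minimal sketch of HPX-style fine-grained task parallelism (header names
  // and API assumed from a recent HPX release).
  #include <hpx/hpx_main.hpp>   // runs main() inside the HPX runtime
  #include <hpx/future.hpp>     // hpx::async, hpx::future
  #include <cstdint>
  #include <iostream>

  // Naive recursive Fibonacci: each call may spawn a lightweight HPX task,
  // so independent branches of the recursion can run concurrently.
  std::uint64_t fibonacci(std::uint64_t n)
  {
      if (n < 2)
          return n;

      // Spawn the left branch asynchronously, compute the right branch here.
      hpx::future<std::uint64_t> lhs = hpx::async(fibonacci, n - 1);
      std::uint64_t rhs = fibonacci(n - 2);

      // get() suspends only this HPX task, not the underlying OS thread.
      return lhs.get() + rhs;
  }

  int main()
  {
      std::cout << "fib(20) = " << fibonacci(20) << "\n";
      return 0;
  }

Because waiting on a future suspends only the calling task, a very large number of such lightweight tasks can be in flight at once, which is the fine-grained parallelism the model targets.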

HPX is developed in idiomatic C++ and released as open source under the Boost Software License, which allows usage in commercial applications.
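The idiomatic-C++ focus also shows in HPX's implementations of the C++ standard parallel algorithms. The following is a minimal sketch, assuming the hpx/algorithm.hpp and hpx/execution.hpp public headers of a recent release; hpx::for_each with the hpx::execution::par policy mirrors std::for_each with std::execution::par but runs the iterations as HPX tasks.

  // Sketch: using HPX's counterpart of a C++ standard parallel algorithm
  // (headers and namespaces assumed from a recent release).
  #include <hpx/hpx_main.hpp>
  #include <hpx/algorithm.hpp>   // hpx::for_each
  #include <hpx/execution.hpp>   // hpx::execution::par
  #include <iostream>
  #include <vector>

  int main()
  {
      std::vector<double> v(1'000'000, 1.0);

      // Parallel for_each: iterations are distributed over HPX worker threads.
      hpx::for_each(hpx::execution::par, v.begin(), v.end(),
          [](double& x) { x *= 2.0; });

      std::cout << "v[0] = " << v[0] << "\n";   // prints 2
      return 0;
  }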

Applications

Though designed as a general-purpose environment for high-performance computing, HPX has primarily been used in

  - astrophysics simulations, including N-body problems, neutron star evolutions, and the simulation of merging stars with Octo-Tiger[6][7][8][9][10]
  - LibGeoDecomp, a library for petascale stencil-code and particle simulations[11][12][13]
  - peridynamics simulations[14]
  - Phylanx, a distributed array processing toolkit[15][16][17]

References

  1. "License", Boost Software License – Version 1.0 (boost.org), http://www.boost.org/LICENSE_1_0.txt, retrieved 2012-07-30 
  2. "About the STE||AR Group". https://stellar.cct.lsu.edu/about/about-us/. 
  3. Kaiser, Hartmut; Brodowicz, Maciek; Sterling, Thomas (2009). "ParalleX an Advanced Parallel Execution Model for Scaling-Impaired Applications". 2009 International Conference on Parallel Processing Workshops. pp. 394–401. doi:10.1109/icppw.2009.14. ISBN 978-1-4244-4923-1. 
  4. Wagle, Bibek; Kellar, Samuel; Serio, Adrian; Kaiser, Hartmut (2018). "Methodology for Adaptive Active Message Coalescing in Task Based Runtime Systems". 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). pp. 1133–1140. doi:10.1109/IPDPSW.2018.00173. ISBN 978-1-5386-5555-9. 
  5. Wagle, Bibek; Monil, Mohammad Alaul Haque; Huck, Kevin; Malony, Allen D.; Serio, Adrian; Kaiser, Hartmut (2019). "Runtime Adaptive Task Inlining on Asynchronous Multitasking Runtime Systems". Proceedings of the 48th International Conference on Parallel Processing. pp. 1–10. doi:10.1145/3337821.3337915. ISBN 9781450362955. 
  6. C. Dekate, M. Anderson, M. Brodowicz, H. Kaiser, B. Adelstein-Lelbach and T. Sterling (2012). "Improving the Scalability of Parallel N-body Applications with an Event-driven Constraint-based Execution Model". International Journal of High Performance Computing Applications 26 (3): 319–332. doi:10.1177/1094342012440585. 
  7. M. Anderson, T. Sterling, H. Kaiser and D. Neilsen (2011). "Neutron Star Evolutions using Tabulated Equations of State with a New Execution Model". American Physical Society April 2012 Meeting. http://stellar.cct.lsu.edu/pubs/aps2012.pdf. 
  8. D. Pfander, G. Daiß, D. Marcello, H. Kaiser, D. Pflüger (2018). "Accelerating Octo-Tiger: Stellar Mergers on Intel Knights Landing with HPX". DHPCC++ Conference 2018 Hosted by IWOCL. doi:10.1145/3204919.3204938. 
  9. Marcello, Dominic; Daiß, Gregor; Parsa Amini; Kaiser, Hartmut; Diehl, Patrick; Wash, Bryce Adelstein Lelbach Aka; Heller, Thomas; Shibersag et al. (2019-04-17), STEllAR-GROUP/octotiger Repository on GitHub, The STE||AR Group, doi:10.5281/zenodo.5093174, https://github.com/STEllAR-GROUP/octotiger, retrieved 2019-04-17 
  10. Heller, Thomas; Lelbach, Bryce Adelstein; Huck, Kevin A; Biddiscombe, John; Grubel, Patricia; Koniges, Alice E; Kretz, Matthias; Marcello, Dominic et al. (2019-02-14). "Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars" (in en). The International Journal of High Performance Computing Applications 33 (4): 699–715. doi:10.1177/1094342018819744. ISSN 1094-3420. 
  11. "LibGeoDecomp – Petascale Computer Simulations". http://www.libgeodecomp.org/. 
  12. A library for C++/Fortran computer simulations (e.g. stencil codes, mesh-free, unstructured grids, n-body & particle methods). Scales from smartphones to petascale supercomputers (e.g. Titan, T.., The STE||AR Group, 2019-04-06, https://github.com/STEllAR-GROUP/libgeodecomp, retrieved 2019-04-17 
  13. A. Schäfer, D. Fey (2008). "LibGeoDecomp: A Grid-Enabled Library for Geometric Decomposition Codes". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. 5205. pp. 285–294. doi:10.1007/978-3-540-87475-1_39. ISBN 978-3-540-87474-4. 
  14. Diehl, Patrick; Jha, Prashant K.; Kaiser, Hartmut; Lipton, Robert; Levesque, Martin (2020). "An asynchronous and task-based implementation of peridynamics utilizing HPX—the C++ standard library for parallelism and concurrency". SN Applied Sciences 2 (12). doi:10.1007/s42452-020-03784-x. 
  15. "Phylanx – A Distributed Array Toolkit" (in en-US). http://phylanx.stellar-group.org/. 
  16. An Asynchronous Distributed C++ Array Processing Toolkit: STEllAR-GROUP/phylanx, The STE||AR Group, 2019-04-16, https://github.com/STEllAR-GROUP/phylanx, retrieved 2019-04-17 
  17. Tohid, R.; Wagle, Bibek; Shirzad, Shahrzad; Diehl, Patrick; Serio, Adrian; Kheirkhahan, Alireza; Amini, Parsa; Williams, Katy et al. (2018). "Asynchronous Execution of Python Code on Task-Based Runtime Systems". 2018 IEEE/ACM 4th International Workshop on Extreme Scale Programming Models and Middleware (ESPM2). pp. 37–45. doi:10.1109/ESPM2.2018.00009. ISBN 978-1-72810-178-1. 
