The Computer Language Benchmarks Game


The Computer Language Benchmarks Game (formerly called The Great Computer Language Shootout) is a free software project for comparing how a given subset of simple algorithms can be implemented in various popular programming languages.

The project consists of:

  • A set of very simple algorithmic problems
  • Implementations of those problems in various programming languages
  • A set of unit tests to verify that submitted implementations satisfy the problem statement
  • A framework for running and timing the implementations
  • A website to facilitate the interactive comparison of the results

Supported languages

Due to resource constraints, only a small subset of common programming languages is supported, at the discretion of the game's operator.[1]

Metrics

The following aspects of each given implementation are measured:[2]

  • overall user runtime
  • peak memory allocation
  • gzipped size of the solution's source code
  • sum of total CPU time over all threads
  • individual CPU utilization

It is common to see multiple solutions in the same programming language for the same problem. This highlights the trade-offs possible within the constraints of a given language: one solution may favor higher abstraction, another memory efficiency, raw speed, or better parallelization.
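
The project publishes its own measurement scripts; purely for illustration, the following Python sketch shows one way most of the listed metrics could be approximated on Linux for a single program run (per-core CPU utilization is omitted, and the program name, its arguments, and the source file are hypothetical, not part of the project):

    # Minimal sketch (not the project's actual harness) of approximating the
    # listed metrics for one candidate program on Linux.
    import gzip, resource, subprocess, time

    def measure(cmd, source_path):
        start = time.monotonic()
        subprocess.run(cmd, check=True)             # run the benchmark program
        elapsed = time.monotonic() - start          # overall elapsed (wall-clock) time

        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        cpu_time = usage.ru_utime + usage.ru_stime  # CPU time summed over all threads
        peak_rss_kb = usage.ru_maxrss               # peak resident set size (KiB on Linux)

        with open(source_path, "rb") as f:
            gz_size = len(gzip.compress(f.read()))  # gzipped source size in bytes

        return {"elapsed_s": elapsed, "cpu_s": cpu_time,
                "peak_rss_kb": peak_rss_kb, "gz_source_bytes": gz_size}

    # hypothetical program and source file names
    print(measure(["./nbody", "50000000"], "nbody.c"))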

Benchmark programs

From the start, the project deliberately restricted itself to very simple toy problems, each providing a different kind of programming challenge.[3] This gives users of the Benchmarks Game the opportunity to scrutinize the various implementations;[4] a simplified example in the spirit of these problems is sketched below.
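
For illustration only, this is a heavily simplified n-body-style kernel in Python; it does not follow the game's official n-body problem statement and the initial data is made up:

    # Illustrative only: a simplified n-body-style kernel in the spirit of the
    # game's toy problems (not the official specification).
    import math

    def advance(bodies, dt):
        """One Euler step of mutual gravitational attraction (G = 1)."""
        for i, (pi, vi, _mi) in enumerate(bodies):
            for j, (pj, _vj, mj) in enumerate(bodies):
                if i == j:
                    continue
                d = [pj[k] - pi[k] for k in range(3)]       # vector from body i to body j
                dist = math.sqrt(sum(c * c for c in d))
                for k in range(3):
                    vi[k] += dt * mj * d[k] / dist**3        # acceleration contribution of j
        for p, v, _m in bodies:
            for k in range(3):
                p[k] += dt * v[k]                            # move each body

    # two bodies: (position, velocity, mass) -- values chosen arbitrarily
    bodies = [([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0),
              ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 1e-3)]
    for _ in range(1000):
        advance(bodies, 0.001)
    print(bodies[1][0])  # final position of the small body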

History

The project was known as The Great Computer Language Shootout until 2007.[5]

A port for Windows was maintained separately between 2002 and 2003.[6]

The sources have been archived on GitLab.[7]

There are also older forks on GitHub.[8]

The project is continuously evolving. The list of supported programming languages is updated approximately once per year, following market trends. Users can also submit improved solutions to any of the problems or suggest refinements to the testing methodology.[9]

Caveats

The developers themselves emphasize that researchers should exercise caution when drawing conclusions from such microbenchmarks:

[...] the JavaScript benchmarks are fleetingly small, and behave in ways that are significantly different than the real applications. We have documented numerous differences in behavior, and we conclude from these measured differences that results based on the benchmarks may mislead JavaScript engine implementers. Furthermore, we observe interesting behaviors in real JavaScript applications that the benchmarks fail to exhibit, suggesting that previously unexplored optimization strategies may be productive in practice.

Impact

The benchmark results have uncovered various compiler issues. Sometimes a given compiler failed to process unusual but otherwise grammatically valid constructs; at other times, runtime performance fell below expectations, prompting compiler developers to improve their optimizations.

Various research articles have been based on the benchmarks, their results and their methodology.[10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22]

References

  1. "The Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/. Retrieved 29 May 2018. 
  2. "How programs are measured – The Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/how-programs-are-measured.html. Retrieved 29 May 2018. 
  3. "Why toy programs? – The Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/why-measure-toy-benchmark-programs.html. Retrieved 29 May 2018. 
  4. "n-body description (64-bit Ubuntu quad core) – Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/description/nbody.html#nbody. Retrieved 29 May 2018. 
  5. "Trust, and verify – Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/sometimes-people-just-make-up-stuff.html#history. Retrieved 29 May 2018. 
  6. "The Great Win32 Computer Language Shootout". http://dada.perl.it/shootout/. Retrieved 13 December 2017. 
  7. "archive-alioth-benchmarksgame". https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame. Retrieved 29 May 2018. 
  8. Thiel, Sebastian (24 October 2017). "benchmarksgame-cvs-mirror: A git mirror of the benchmarksgame cvs repository". GitHub. https://github.com/Byron/benchmarksgame-cvs-mirror. Retrieved 13 December 2017. 
  9. "Contribute your own program – Computer Language Benchmarks Game". https://benchmarksgame-team.pages.debian.net/benchmarksgame/play.html. Retrieved 29 May 2018. 
  10. Kevin Williams; Jason McCandless; David Gregg (2009). Dynamic Interpretation for Dynamic Scripting Languages. https://www.scss.tcd.ie/publications/tech-reports/reports.09/TCD-CS-2009-37.pdf. Retrieved 25 March 2017. 
  11. Tobias Wrigstad; Francesco Zappa Nardelli; Sylvain Lebresne; Johan Östlund; Jan Vitek (January 17–23, 2009). "Integrating Typed and Untyped Code in a Scripting Language". POPL’10. Madrid, Spain. https://www.di.ens.fr/~zappa/projects/liketypes/paper.pdf. Retrieved 25 March 2017. 
  12. Lerche, Carl (April 17–18, 2009). "Write Fast Ruby: It's All About the Science". Golden Gate Ruby Conference. San Francisco, California. http://2009.gogaruco.com/downloads/Wrap2009.pdf. Retrieved 25 March 2017. 
  13. J. Shirako; D. M. Peixotto; V. Sarkar; W. N. Scherer III (2009). "Phaser Accumulators: a New Reduction Construct for Dynamic Parallelism". IEEE International Symposium on Parallel & Distributed Processing. http://www.cs.rice.edu/~vs3/PDF/ipdps09-accumulators-final-submission.pdf. Retrieved 25 March 2017. 
  14. Rajesh Karmani and Amin Shali and Gul Agha (2009). "Actor frameworks for the JVM platform: A Comparative Analysis". In Proceedings of the 7th International Conference on the Principles and Practice of Programming in Java. http://osl.cs.illinois.edu/docs/pppj09/paper.pdf. Retrieved 26 March 2017. 
  15. Stefan Brunthaler (2010). "Inline Caching Meets Quickening". European Conference on Object-Oriented Programming (ECOOP). Object-Oriented Programming. pp. 429–451. doi:10.1007/978-3-642-14107-2_21. 
  16. Prodromos Gerakios; Nikolaos Papaspyrou; Konstantinos Sagonas (January 23, 2010). "Race-free and Memory-safe Multithreading: Design and Implementation in Cyclone". Proceedings of the 5th ACM SIGPLAN workshop on Types in language design and implementation. Madrid, Spain. pp. 15–26. http://www.softlab.ntua.gr/research/techrep/CSD-SW-TR-8-09.pdf. Retrieved 25 March 2017. 
  17. Slava Pestov; Daniel Ehrenberg; Joe Groff (October 18, 2010). "Factor: A Dynamic Stack-based Programming Language". DLS 2010. Reno/Tahoe, Nevada, USA. http://factorcode.org/littledan/dls.pdf. Retrieved 25 March 2017. 
  18. Andrei Homescu; Alex Suhan (October 24, 2011). "HappyJIT: A Tracing JIT Compiler for PHP". DLS’11. Portland, Oregon, USA. https://www.ics.uci.edu/~ahomescu/happyjit_paper.pdf. Retrieved 25 March 2017. 
  19. Vincent St-Amour; Sam Tobin-Hochstadt; Matthias Felleisen (October 19–26, 2012). "Optimization Coaching – Optimizers Learn to Communicate with Programmers". OOPSLA’12. Tucson, Arizona, USA. http://www.ccs.neu.edu/racket/pubs/oopsla12-stf.pdf. Retrieved 25 March 2017. 
  20. Wing Hang Li; David R. White; Jeremy Singer (September 11–13, 2013). "JVM-Hosted Languages: They Talk the Talk, but do they Walk the Walk?". Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools. Stuttgart, Germany. pp. 101–112. http://www.dcs.gla.ac.uk/~wingli/jvm_language_study/jvmlanguages.pdf. Retrieved 25 March 2017. 
  21. Aibek Sarimbekov; Andrej Podzimek; Lubomir Bulej; Yudi Zheng; Nathan Ricci; Walter Binder (October 28, 2013). "Characteristics of Dynamic JVM Languages". VMIL ’13. Indianapolis, Indiana, USA. http://d3s.mff.cuni.cz/publications/download/Sarimbekov-vmil13.pdf. Retrieved 25 March 2017. 
  22. Bradford L. Chamberlain; Ben Albrecht; Lydia Duncan; Ben Harshbarger (2017). "Entering the Fray: Chapel's Computer Language Benchmark Game Entry". http://chapel.cray.com/CHIUW/2017/chamberlain-abstract.pdf. Retrieved 25 March 2017. 
