Comparison of deep learning software

The following table compares notable software frameworks, libraries and computer programs for deep learning.

Deep-learning software by name

Software Creator Initial release Software license[lower-alpha 1] Open source Platform Written in Interface OpenMP support OpenCL support CUDA support Automatic differentiation[1] Has pretrained models Recurrent nets Convolutional nets RBM/DBNs Parallel execution (multi node) Actively developed
BigDL Jason Dai (Intel) 2016 Apache 2.0 Yes Apache Spark Scala Scala, Python No Yes Yes Yes
Caffe Berkeley Vision and Learning Center 2013 BSD Yes Linux, macOS, Windows[2] C++ Python, MATLAB, C++ Yes Under development[3] Yes Yes Yes[4] Yes Yes No ? No[5]
Chainer Preferred Networks 2015 BSD Yes Linux, macOS Python Python No No Yes Yes Yes Yes Yes No Yes No[6]
Deeplearning4j Skymind engineering team; Deeplearning4j community; originally Adam Gibson 2014 Apache 2.0 Yes Linux, macOS, Windows, Android (Cross-platform) C++, Java Java, Scala, Clojure, Python (Keras), Kotlin Yes No[7] Yes[8][9] Computational Graph Yes[10] Yes Yes Yes Yes[11] Yes
Dlib Davis King 2002 Boost Software License Yes Cross-platform C++ C++, Python Yes No Yes Yes Yes No Yes Yes Yes
Flux Mike Innes 2017 MIT license Yes Linux, macOS, Windows (Cross-platform) Julia Julia Yes Yes Yes[12] Yes Yes No Yes Yes
Intel Data Analytics Acceleration Library Intel 2015 Apache License 2.0 Yes Linux, macOS, Windows on Intel CPU[13] C++, Python, Java C++, Python, Java[13] Yes No No Yes No Yes Yes
Intel Math Kernel Library 2017 [14] and later Intel 2017 Proprietary No Linux, macOS, Windows on Intel CPU[15] C[16] Yes[17] No No Yes No Yes[18] Yes[18] No
Keras François Chollet 2015 MIT license Yes Linux, macOS, Windows Python Python, R Only if using Theano as backend Can use Theano, Tensorflow or PlaidML as backends Yes Yes Yes[19] Yes Yes No[20] Yes[21] Yes
MATLAB + Deep Learning Toolbox (formerly Neural Network Toolbox) MathWorks 1992 Proprietary No Linux, macOS, Windows C, C++, Java, MATLAB MATLAB No No Train with Parallel Computing Toolbox and generate CUDA code with GPU Coder[22] Yes[23] Yes[24][25] Yes[24] Yes[24] Yes With Parallel Computing Toolbox[26] Yes
Microsoft Cognitive Toolkit (CNTK) Microsoft Research 2016 MIT license[27] Yes Windows, Linux[28] (macOS via Docker on roadmap) C++ Python (Keras), C++, Command line,[29] BrainScript[30] (.NET on roadmap[31]) Yes[32] No Yes Yes Yes[33] Yes[34] Yes[34] No[35] Yes[36] No[37]
Apache MXNet Apache Software Foundation 2015 Apache 2.0 Yes Linux, macOS, Windows,[38][39] AWS, Android,[40] iOS, JavaScript[41] Small C++ core library C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl, Clojure Yes On roadmap[42] Yes Yes[43] Yes[44] Yes Yes Yes Yes[45] Yes
Neural Designer Artelnics 2014 Proprietary No Linux, macOS, Windows C++ Graphical user interface Yes No Yes Analytical differentiation No No No No Yes Yes
OpenNN Artelnics 2003 GNU LGPL Yes Cross-platform C++ C++ Yes No Yes ? ? No No No ?
PlaidML Vertex.AI, Intel 2017 Apache 2.0 Yes Linux, macOS, Windows Python, C++, OpenCL Python, C++ ? Some OpenCL ICDs are not recognized No Yes Yes Yes Yes Yes Yes
PyTorch Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan (Facebook) 2016 BSD Yes Linux, macOS, Windows, Android[46] Python, C, C++, CUDA Python, C++, Julia Yes Via separately maintained package[47][48][49] Yes Yes Yes Yes Yes Yes[50] Yes Yes
Apache SINGA Apache Software Foundation 2015 Apache 2.0 Yes Linux, macOS, Windows C++ Python, C++, Java No Supported in V1.0 Yes ? Yes Yes Yes Yes Yes
TensorFlow Google Brain 2015 Apache 2.0 Yes Linux, macOS, Windows,[51][52] Android C++, Python, CUDA Python (Keras), C/C++, Java, Go, JavaScript, R,[53] Julia, Swift No On roadmap[54] but already with SYCL[55] support Yes Yes[56] Yes[57] Yes Yes Yes Yes Yes
Theano Université de Montréal 2007 BSD Yes Cross-platform Python Python (Keras) Yes Under development[58] Yes Yes[59][60] Through Lasagne's model zoo[61] Yes Yes Yes Yes[62] No
Torch Ronan Collobert, Koray Kavukcuoglu, Clement Farabet 2002 BSD Yes Linux, macOS, Windows,[63] Android,[64] iOS C, Lua Lua, LuaJIT,[65] C, utility library for C++/OpenCL[66] Yes Third party implementations[67][68] Yes[69][70] Through Twitter's Autograd[71] Yes[72] Yes Yes Yes Yes[63] No
Wolfram Mathematica 10[73] and later Wolfram Research 2014 Proprietary No Windows, macOS, Linux, Cloud computing C++, Wolfram Language, CUDA Wolfram Language Yes No Yes Yes Yes[74] Yes Yes Yes Yes[75] Yes
  1. Licenses here are a summary, and are not taken to be complete statements of the licenses. Some libraries may use other libraries internally under different licenses.
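One of the features compared above is automatic differentiation. As a minimal illustrative sketch (not part of the comparison itself; the tensor values and the function f(x) = sum(x^2) are arbitrary choices), this is what reverse-mode automatic differentiation looks like in PyTorch, one of the listed frameworks:

    import torch

    # Input tensor that records operations so gradients can be computed.
    x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

    # A small scalar function of x: f(x) = sum of x squared.
    y = (x ** 2).sum()

    # Reverse-mode automatic differentiation: fills x.grad with df/dx = 2 * x.
    y.backward()

    print(x.grad)  # tensor([2., 4., 6.])

Frameworks with other entries in that column (for example "Analytical differentiation" or "Computational Graph") provide gradients through their own mechanisms.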

Comparison of compatibility of machine learning models

Format name | Design goal | Compatible with other formats | Self-contained DNN Model | Pre-processing and Post-processing | Run-time configuration for tuning & calibration | DNN model interconnect | Common platform
TensorFlow, Keras, Caffe, Torch | Algorithm training | No | No / Separate files in most formats | No | No | No | Yes
ONNX | Algorithm training | Yes | No / Separate files in most formats | No | No | No | Yes
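ONNX appears above as the format that is compatible with the other listed formats. As a hedged sketch of what that interoperability looks like in practice (the toy model, its layer sizes, and the file name toy_model.onnx are arbitrary illustrations, not drawn from the table), the following exports a small PyTorch model to an ONNX file that ONNX-compatible runtimes and converters can then load:

    import torch
    import torch.nn as nn

    # Arbitrary toy model used only to illustrate the export step.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    # Dummy input of the expected shape; the exporter traces the model with it.
    dummy_input = torch.randn(1, 4)

    # Write an ONNX graph that ONNX-compatible tools can load.
    torch.onnx.export(model, dummy_input, "toy_model.onnx")

This kind of export is what the "Compatible with other formats" column summarizes.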

References

  1. Atilim Gunes Baydin; Barak A. Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind (20 February 2015). "Automatic differentiation in machine learning: a survey". arXiv:1502.05767 [cs.LG].
  2. "Microsoft/caffe". GitHub. 30 October 2021. https://github.com/Microsoft/caffe. 
  3. "Caffe: a fast open framework for deep learning.". July 19, 2019. https://github.com/BVLC/caffe. 
  4. "Caffe | Model Zoo". http://caffe.berkeleyvision.org/model_zoo.html. 
  5. "BVLC/caffe: Caffe: a fast open framework for deep learning". Berkeley Vision and Learning Center. 2019-09-25. https://github.com/BVLC/caffe. Retrieved 2019-09-25.
  6. "Preferred Networks Migrates its Deep Learning Research Platform to PyTorch". 2019-12-05. https://preferred.jp/en/news/pr20191205/. Retrieved 2019-12-27.
  7. "Support for Open CL · Issue #27 · deeplearning4j/nd4j". GitHub. https://github.com/deeplearning4j/nd4j/issues/27. 
  8. "N-Dimensional Scientific Computing for Java". http://nd4j.org/gpu_native_backends.html. 
  9. "Comparing Top Deep Learning Frameworks". Deeplearning4j. https://deeplearning4j.org/compare-dl4j-tensorflow-pytorch. 
  10. "Deeplearning4j Models". http://deeplearning4j.org/model-zoo. 
  11. Deeplearning4j. "Deeplearning4j on Spark". Deeplearning4j. http://deeplearning4j.org/spark. 
  12. "Metalhead". FluxML. 29 October 2021. http://github.com/FluxML/Metalhead.jl. 
  13. "Intel® Data Analytics Acceleration Library (Intel® DAAL)". November 20, 2018. https://software.intel.com/en-us/intel-daal.
  14. "Intel® Math Kernel Library Release Notes and New Features". https://www.intel.com/content/www/us/en/developer/articles/release-notes/intel-math-kernel-library-release-notes-and-new-features.html. 
  15. "Intel® Math Kernel Library (Intel® MKL)". September 11, 2018. https://software.intel.com/en-us/mkl. 
  16. "Deep Neural Network Functions". May 24, 2019. https://software.intel.com/en-us/mkl-developer-reference-c-deep-neural-network-functions. 
  17. "Using Intel® MKL with Threaded Applications". June 1, 2017. https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-using-intel-mkl-with-threaded-applications. 
  18. "Intel® Xeon Phi™ Delivers Competitive Performance For Deep Learning—And Getting Better Fast". March 21, 2019. https://software.intel.com/en-us/articles/intel-xeon-phi-delivers-competitive-performance-for-deep-learning-and-getting-better-fast.
  19. "Applications - Keras Documentation". https://keras.io/applications/. 
  20. "Is there RBM in Keras? · Issue #461 · keras-team/keras". https://github.com/keras-team/keras/issues/461. 
  21. "Does Keras support using multiple GPUs? · Issue #2436 · keras-team/keras". https://github.com/keras-team/keras/issues/2436. 
  22. "GPU Coder - MATLAB & Simulink". https://www.mathworks.com/products/gpu-coder.html. Retrieved 13 November 2017. 
  23. "Automatic Differentiation Background - MATLAB & Simulink". September 3, 2019. https://www.mathworks.com/help/deeplearning/ug/deep-learning-with-automatic-differentiation-in-matlab.html. 
  24. "Neural Network Toolbox - MATLAB". https://www.mathworks.com/products/neural-network.html. Retrieved 13 November 2017.
  25. "Deep Learning Models - MATLAB & Simulink". https://www.mathworks.com/solutions/deep-learning/models.html. Retrieved 13 November 2017. 
  26. "Parallel Computing Toolbox - MATLAB". https://www.mathworks.com/products/parallel-computing.html. Retrieved 13 November 2017. 
  27. "CNTK/LICENSE.md at master · Microsoft/CNTK · GitHub". GitHub. https://github.com/Microsoft/CNTK/blob/master/LICENSE.md. 
  28. "Setup CNTK on your machine". GitHub. https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-your-machine. 
  29. "CNTK usage overview". GitHub. https://github.com/Microsoft/CNTK/wiki/CNTK-usage-overview. 
  30. "BrainScript Network Builder". GitHub. https://github.com/Microsoft/CNTK/wiki/BrainScript-Network-Builder. 
  31. ".NET Support · Issue #960 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/960. 
  32. "How to train a model using multiple machines? · Issue #59 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/59#issuecomment-178104505. 
  33. "Prebuilt models for image classification · Issue #140 · microsoft/CNTK". https://github.com/microsoft/CNTK/issues/140. 
  34. "CNTK - Computational Network Toolkit". Microsoft Corporation. http://www.cntk.ai/.
  35. Microsoft/CNTK issue #534. GitHub. https://github.com/Microsoft/CNTK/issues/534.
  36. "Multiple GPUs and machines". Microsoft Corporation. https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines. 
  37. "Disclaimer". CNTK TEAM. 6 November 2021. https://github.com/Microsoft/CNTK#disclaimer. 
  38. "Releases · dmlc/mxnet". Github. https://github.com/dmlc/mxnet/releases. 
  39. "Installation Guide — mxnet documentation". Readthdocs. https://mxnet.readthedocs.io/en/latest/how_to/build.html#building-on-windows. 
  40. "MXNet Smart Device". ReadTheDocs. https://mxnet.readthedocs.io/en/latest/how_to/smart_device.html. 
  41. "MXNet.js". Github. 28 October 2021. https://github.com/dmlc/mxnet.js. 
  42. "Support for other Device Types, OpenCL AMD GPU · Issue #621 · dmlc/mxnet". GitHub. https://github.com/dmlc/mxnet/issues/621. 
  43. "— Redirecting to mxnet.io". https://mxnet.readthedocs.io/en/latest/. 
  44. "Model Gallery". GitHub. https://github.com/dmlc/mxnet-model-gallery. 
  45. "Run MXNet on Multiple CPU/GPUs with Data Parallel". GitHub. https://mxnet.readthedocs.io/en/latest/how_to/multi_devices.html. 
  46. "PyTorch". Dec 17, 2021. https://pytorch.org/mobile/android/. 
  47. "OpenCL build of pytorch: (in-progress, not useable) - hughperkins/pytorch-coriander". July 14, 2019. https://github.com/hughperkins/pytorch-coriander. 
  48. "DLPrimitives/OpenCL out of tree backend for pytorch - artyom-beilis/pytorch_dlprim". Jan 21, 2022. https://github.com/artyom-beilis/pytorch_dlprim. 
  49. "OpenCL Support · Issue #488 · pytorch/pytorch". https://github.com/pytorch/pytorch/issues/488. 
  50. "Restricted Boltzmann Machines (RBMs) in PyTorch". https://github.com/GabrielBianconi/pytorch-rbm/blob/master/rbm.py. 
  51. "Install TensorFlow with pip". https://www.tensorflow.org/install/pip. 
  52. "TensorFlow 0.12 adds support for Windows". https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html. 
  53. Allaire, JJ; Eddelbuettel, Dirk; Golding, Nick; Tang, Yuan; RStudio; Google Inc. (2017-05-26). tensorflow: R Interface to TensorFlow. https://cran.r-project.org/web/packages/tensorflow/index.html. Retrieved 2017-06-14.
  54. "tensorflow/roadmap.md at master · tensorflow/tensorflow · GitHub". GitHub. January 23, 2017. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/about/roadmap.md. 
  55. "OpenCL support · Issue #22 · tensorflow/tensorflow". GitHub. https://github.com/tensorflow/tensorflow/issues/22. 
  56. "TensorFlow". https://www.tensorflow.org/. 
  57. "Models and examples built with TensorFlow.". July 19, 2019. https://github.com/tensorflow/models. 
  58. "Using the GPU — Theano 0.8.2 documentation". http://deeplearning.net/software/theano/tutorial/using_gpu.html. 
  59. "gradient – Symbolic Differentiation — Theano 1.0.0 documentation". http://deeplearning.net/software/theano/library/gradient.html. 
  60. "Automatic vs. Symbolic differentiation". https://groups.google.com/d/msg/theano-users/mln5g2IuBSU/gespG36Lf_QJ. 
  61. "Recipes/modelzoo at master · Lasagne/Recipes · GitHub". GitHub. https://github.com/Lasagne/Recipes/tree/master/modelzoo. 
  62. "Using multiple GPUs — Theano 1.0.0 documentation". http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html. 
  63. "torch/torch7". July 18, 2019. https://github.com/torch/torch7.
  64. "GitHub - soumith/torch-android: Torch-7 for Android". GitHub. 13 October 2021. https://github.com/soumith/torch-android. 
  65. "Torch7: A Matlab-like Environment for Machine Learning". http://ronan.collobert.com/pub/matos/2011_torch7_nipsw.pdf. 
  66. "GitHub - jonathantompson/jtorch: An OpenCL Torch Utility Library". GitHub. 18 November 2020. https://github.com/jonathantompson/jtorch. 
  67. "Cheatsheet". GitHub. https://github.com/torch/torch7/wiki/Cheatsheet#opencl. 
  68. "cltorch". GitHub. https://github.com/hughperkins/distro-cl. 
  69. "Torch CUDA backend". GitHub. https://github.com/torch/cutorch. 
  70. "Torch CUDA backend for nn". GitHub. https://github.com/torch/cunn. 
  71. "Autograd automatically differentiates native Torch code: twitter/torch-autograd". July 9, 2019. https://github.com/twitter/torch-autograd. 
  72. "ModelZoo". GitHub. https://github.com/torch/torch7/wiki/ModelZoo. 
  73. "Launching Mathematica 10". https://blog.wolfram.com/2014/07/09/launching-mathematica-10-with-700-new-functions-and-a-crazy-amount-of-rd. 
  74. "Wolfram Neural Net Repository of Neural Network Models". http://resources.wolframcloud.com/NeuralNetRepository. 
  75. "Parallel Computing—Wolfram Language Documentation". https://reference.wolfram.com/language/guide/ParallelComputing.html.en.