Software:Comparison of deep-learning software
The following table compares notable software frameworks, libraries and computer programs for deep learning.
Deep-learning software by name
Software | Creator | Initial release | Software license[lower-alpha 1] | Open source | Platform | Written in | Interface | OpenMP support | OpenCL support | CUDA support | Automatic differentiation[1] | Has pretrained models | Recurrent nets | Convolutional nets | RBM/DBNs | Parallel execution (multi node) | Actively developed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BigDL | Jason Dai (Intel) | 2016 | Apache 2.0 | Yes | Apache Spark | Scala | Scala, Python | No | Yes | Yes | Yes | ||||||
Caffe | Berkeley Vision and Learning Center | 2013 | BSD | Yes | Linux, macOS, Windows[2] | C++ | Python, MATLAB, C++ | Yes | Under development[3] | Yes | Yes | Yes[4] | Yes | Yes | No | ? | No[5] |
Chainer | Preferred Networks | 2015 | BSD | Yes | Linux, macOS | Python | Python | No | No | Yes | Yes | Yes | Yes | Yes | No | Yes | No[6] |
Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson | 2014 | Apache 2.0 | Yes | Linux, macOS, Windows, Android (Cross-platform) | C++, Java | Java, Scala, Clojure, Python (Keras), Kotlin | Yes | No[7] | Yes[8][9] | Computational Graph | Yes[10] | Yes | Yes | Yes | Yes[11] | Yes |
Dlib | Davis King | 2002 | Boost Software License | Yes | Cross-platform | C++ | C++, Python | Yes | No | Yes | Yes | Yes | No | Yes | Yes | Yes | |
Flux | Mike Innes | 2017 | MIT license | Yes | Linux, macOS, Windows (Cross-platform) | Julia | Julia | Yes | Yes | Yes[12] | Yes | Yes | No | Yes | Yes | ||
Intel Data Analytics Acceleration Library | Intel | 2015 | Apache License 2.0 | Yes | Linux, macOS, Windows on Intel CPU[13] | C++, Python, Java | C++, Python, Java[13] | Yes | No | No | Yes | No | Yes | Yes | |||
Intel Math Kernel Library | Intel | | Proprietary | No | Linux, macOS, Windows on Intel CPU[14] | C[15] | | Yes[16] | No | No | Yes | No | Yes[17] | Yes[17] | No | | |
Keras | François Chollet | 2015 | MIT license | Yes | Linux, macOS, Windows | Python | Python, R | Only if using Theano as backend | Can use Theano, TensorFlow or PlaidML as backends | Yes | Yes | Yes[18] | Yes | Yes | No[19] | Yes[20] | Yes |
MATLAB + Deep Learning Toolbox | MathWorks | | Proprietary | No | Linux, macOS, Windows | C, C++, Java, MATLAB | MATLAB | No | No | Train with Parallel Computing Toolbox and generate CUDA code with GPU Coder[21] | Yes[22] | Yes[23][24] | Yes[23] | Yes[23] | Yes | With Parallel Computing Toolbox[25] | Yes |
Microsoft Cognitive Toolkit (CNTK) | Microsoft Research | 2016 | MIT license[26] | Yes | Windows, Linux[27] (macOS via Docker on roadmap) | C++ | Python (Keras), C++, Command line,[28] BrainScript[29] (.NET on roadmap[30]) | Yes[31] | No | Yes | Yes | Yes[32] | Yes[33] | Yes[33] | No[34] | Yes[35] | No[36] |
Apache MXNet | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[37][38] AWS, Android,[39] iOS, JavaScript[40] | Small C++ core library | C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl, Clojure | Yes | On roadmap[41] | Yes | Yes[42] | Yes[43] | Yes | Yes | Yes | Yes[44] | Yes |
Neural Designer | Artelnics | 2014 | Proprietary | No | Linux, macOS, Windows | C++ | Graphical user interface | Yes | No | Yes | Analytical differentiation | No | No | No | No | Yes | Yes |
OpenNN | Artelnics | 2003 | GNU LGPL | Yes | Cross-platform | C++ | C++ | Yes | No | Yes | ? | ? | No | No | No | ? | |
PlaidML | Vertex.AI, Intel | 2017 | Apache 2.0 | Yes | Linux, macOS, Windows | Python, C++, OpenCL | Python, C++ | ? | Some OpenCL ICDs are not recognized | No | Yes | Yes | Yes | Yes | Yes | Yes | |
PyTorch | Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan (Facebook) | 2016 | BSD | Yes | Linux, macOS, Windows | Python, C, C++, CUDA | Python, C++, Julia | Yes | Via separately maintained package[45][46] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | |
Seq2SeqSharp | Zhongkai Fu | 2018 | BSD | Yes | Linux, macOS, Windows | C#, C, C++, CUDA | C# | Yes | No | Yes | Yes | Yes | Yes | No | No | Yes | Yes |
Apache SINGA | Apache Software Foundation | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows | C++ | Python, C++, Java | No | Supported in V1.0 | Yes | ? | Yes | Yes | Yes | Yes | Yes | |
TensorFlow | Google Brain | 2015 | Apache 2.0 | Yes | Linux, macOS, Windows,[47][48] Android | C++, Python, CUDA | Python (Keras), C/C++, Java, Go, JavaScript, R,[49] Julia, Swift | No | On roadmap[50] but already with SYCL[51] support | Yes | Yes[52] | Yes[53] | Yes | Yes | Yes | Yes | Yes |
Theano | Université de Montréal | 2007 | BSD | Yes | Cross-platform | Python | Python (Keras) | Yes | Under development[54] | Yes | Yes[55][56] | Through Lasagne's model zoo[57] | Yes | Yes | Yes | Yes[58] | No |
Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet | 2002 | BSD | Yes | Linux, macOS, Windows,[59] Android,[60] iOS | C, Lua | Lua, LuaJIT,[61] C, utility library for C++/OpenCL[62] | Yes | Third party implementations[63][64] | Yes[65][66] | Through Twitter's Autograd[67] | Yes[68] | Yes | Yes | Yes | Yes[59] | No |
Wolfram Mathematica | Wolfram Research | 1988 | Proprietary | No | Windows, macOS, Linux, Cloud computing | C++, Wolfram Language, CUDA | Wolfram Language | Yes | No | Yes | Yes | Yes[69] | Yes | Yes | Yes | Yes[70] | Yes |
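Several of the columns above (CUDA support, automatic differentiation, pretrained models, convolutional nets) describe capabilities that are easiest to see in code. The following sketch is a minimal illustration only, assuming a working installation of PyTorch with the torchvision package; the choice of framework, model and input shape is arbitrary and not an endorsement of any entry in the table.

```python
# Minimal sketch assuming PyTorch and torchvision are installed; model and shapes are illustrative.
import torch
import torchvision.models as models

# "Has pretrained models": load a pretrained convolutional network from torchvision's model zoo.
model = models.resnet18(pretrained=True)

# "CUDA support": move the model to a GPU when one is available, otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# "Automatic differentiation": gradients of a scalar loss with respect to the weights via autograd.
x = torch.randn(1, 3, 224, 224, device=device)  # dummy image batch
loss = model(x).sum()
loss.backward()                                  # populates .grad on every trainable parameter

print(model.fc.weight.grad.shape)                # gradient tensor of the final layer's weights
```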
Comparison of compatibility of machine learning models
Format name | Design goal | Compatible with other formats | Self-contained DNN Model | Pre-processing and Post-processing | Run-time configuration for tuning & calibration | DNN model interconnect | Common platform |
---|---|---|---|---|---|---|---|
TensorFlow, Keras, Caffe, Torch | Algorithm training | No | No / Separate files in most formats | No | No | No | Yes |
ONNX | Algorithm training | Yes | No / Separate files in most formats | No | No | No | Yes |
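The interchange role that ONNX plays in the table above can be illustrated with a short export-and-check sketch. This is an assumption-laden example rather than part of the table's sources: it presumes PyTorch (whose bundled torch.onnx module can export models) and the onnx Python package are installed, and the tiny network and file name are placeholders.

```python
# Minimal sketch assuming PyTorch and the onnx package; the network and file name are placeholders.
import torch
import torch.nn as nn
import onnx

# A small stand-in network; any trained torch.nn.Module could be exported the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the framework-neutral ONNX format, tracing the graph with an example input.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx")

# The resulting file can then be loaded and validated independently of the exporting framework.
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)  # raises an exception if the graph is not valid ONNX
```

Tools and runtimes that understand ONNX can then import model.onnx without access to the original training code, which is the "DNN model interconnect" idea the table summarizes.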
See also
- Comparison of numerical-analysis software
- Comparison of statistical packages
- List of datasets for machine-learning research
- List of numerical-analysis software
References
- ↑ Atilim Gunes Baydin; Barak A. Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind (20 February 2015). "Automatic differentiation in machine learning: a survey". arXiv:1502.05767 [cs.LG].
- ↑ "Microsoft/caffe". GitHub. https://github.com/Microsoft/caffe.
- ↑ "Caffe: a fast open framework for deep learning.". July 19, 2019. https://github.com/BVLC/caffe.
- ↑ "Caffe | Model Zoo". http://caffe.berkeleyvision.org/model_zoo.html.
- ↑ "Caffe: a fast open framework for deep learning". Berkeley Vision and Learning Center. 2019-09-25. https://github.com/BVLC/caffe. Retrieved 2019-09-25.
- ↑ "Preferred Networks Migrates its Deep Learning Research Platform to PyTorch". Preferred Networks. 2019-12-05. https://preferred.jp/en/news/pr20191205/. Retrieved 2019-12-27.
- ↑ "Support for Open CL · Issue #27 · deeplearning4j/nd4j". GitHub. https://github.com/deeplearning4j/nd4j/issues/27.
- ↑ "N-Dimensional Scientific Computing for Java". http://nd4j.org/gpu_native_backends.html.
- ↑ "Comparing Top Deep Learning Frameworks". Deeplearning4j. https://deeplearning4j.org/compare-dl4j-tensorflow-pytorch.
- ↑ "Deeplearning4j Models". http://deeplearning4j.org/model-zoo.
- ↑ Deeplearning4j. "Deeplearning4j on Spark". Deeplearning4j. http://deeplearning4j.org/spark.
- ↑ "Metalhead". FluxML. http://github.com/FluxML/Metalhead.jl.
- ↑ 13.0 13.1 "Intel® Data Analytics Acceleration Library (Intel® DAAL)". November 20, 2018. https://software.intel.com/en-us/intel-daal.
- ↑ "Intel® Math Kernel Library (Intel® MKL)". September 11, 2018. https://software.intel.com/en-us/mkl.
- ↑ "Deep Neural Network Functions". May 24, 2019. https://software.intel.com/en-us/mkl-developer-reference-c-deep-neural-network-functions.
- ↑ "Using Intel® MKL with Threaded Applications". June 1, 2017. https://software.intel.com/en-us/articles/intel-math-kernel-library-intel-mkl-using-intel-mkl-with-threaded-applications.
- ↑ 17.0 17.1 "Intel® Xeon Phi™ Delivers Competitive Performance For Deep Learning—And Getting Better Fast". March 21, 2019. https://software.intel.com/en-us/articles/intel-xeon-phi-delivers-competitive-performance-for-deep-learning-and-getting-better-fast.
- ↑ "Applications - Keras Documentation". https://keras.io/applications/.
- ↑ "Is there RBM in Keras? · Issue #461 · keras-team/keras". https://github.com/keras-team/keras/issues/461.
- ↑ "Does Keras support using multiple GPUs? · Issue #2436 · keras-team/keras". https://github.com/keras-team/keras/issues/2436.
- ↑ "GPU Coder - MATLAB & Simulink". https://www.mathworks.com/products/gpu-coder.html. Retrieved 13 November 2017.
- ↑ "Automatic Differentiation Background - MATLAB & Simulink". September 3, 2019. https://www.mathworks.com/help/deeplearning/ug/deep-learning-with-automatic-differentiation-in-matlab.html.
- ↑ 23.0 23.1 23.2 "Neural Network Toolbox - MATLAB". https://www.mathworks.com/products/neural-network.html. Retrieved 13 November 2017.
- ↑ "Deep Learning Models - MATLAB & Simulink". https://www.mathworks.com/solutions/deep-learning/models.html. Retrieved 13 November 2017.
- ↑ "Parallel Computing Toolbox - MATLAB". https://www.mathworks.com/products/parallel-computing.html. Retrieved 13 November 2017.
- ↑ "CNTK/LICENSE.md at master · Microsoft/CNTK · GitHub". GitHub. https://github.com/Microsoft/CNTK/blob/master/LICENSE.md.
- ↑ "Setup CNTK on your machine". GitHub. https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-your-machine.
- ↑ "CNTK usage overview". GitHub. https://github.com/Microsoft/CNTK/wiki/CNTK-usage-overview.
- ↑ "BrainScript Network Builder". GitHub. https://github.com/Microsoft/CNTK/wiki/BrainScript-Network-Builder.
- ↑ ".NET Support · Issue #960 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/960.
- ↑ "How to train a model using multiple machines? · Issue #59 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/59#issuecomment-178104505.
- ↑ "Prebuilt models for image classification · Issue #140 · microsoft/CNTK". https://github.com/microsoft/CNTK/issues/140.
- ↑ 33.0 33.1 "CNTK - Computational Network Toolkit". Microsoft Corporation. http://www.cntk.ai/.
- ↑ "Issue #534 · Microsoft/CNTK". GitHub. https://github.com/Microsoft/CNTK/issues/534.
- ↑ "Multiple GPUs and machines". Microsoft Corporation. https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines.
- ↑ "Disclaimer". CNTK TEAM. https://github.com/Microsoft/CNTK#disclaimer.
- ↑ "Releases · dmlc/mxnet". Github. https://github.com/dmlc/mxnet/releases.
- ↑ "Installation Guide — mxnet documentation". Readthdocs. https://mxnet.readthedocs.io/en/latest/how_to/build.html#building-on-windows.
- ↑ "MXNet Smart Device". ReadTheDocs. https://mxnet.readthedocs.io/en/latest/how_to/smart_device.html.
- ↑ "MXNet.js". Github. https://github.com/dmlc/mxnet.js.
- ↑ "Support for other Device Types, OpenCL AMD GPU · Issue #621 · dmlc/mxnet". GitHub. https://github.com/dmlc/mxnet/issues/621.
- ↑ "— Redirecting to mxnet.io". https://mxnet.readthedocs.io/en/latest/.
- ↑ "Model Gallery". GitHub. https://github.com/dmlc/mxnet-model-gallery.
- ↑ "Run MXNet on Multiple CPU/GPUs with Data Parallel". GitHub. https://mxnet.readthedocs.io/en/latest/how_to/multi_devices.html.
- ↑ "OpenCL build of pytorch: (in-progress, not useable) - hughperkins/pytorch-coriander". July 14, 2019. https://github.com/hughperkins/pytorch-coriander.
- ↑ "OpenCL Support · Issue #488 · pytorch/pytorch". https://github.com/pytorch/pytorch/issues/488.
- ↑ "Install TensorFlow with pip". https://www.tensorflow.org/install/pip.
- ↑ "TensorFlow 0.12 adds support for Windows". https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html.
- ↑ Allaire, JJ (R interface); RStudio; Eddelbuettel, Dirk; Golding, Nick; Tang, Yuan; Google Inc (examples and tutorials) (2017-05-26). "tensorflow: R Interface to TensorFlow". https://cran.r-project.org/web/packages/tensorflow/index.html. Retrieved 2017-06-14.
- ↑ "tensorflow/roadmap.md at master · tensorflow/tensorflow · GitHub". GitHub. January 23, 2017. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/about/roadmap.md.
- ↑ "OpenCL support · Issue #22 · tensorflow/tensorflow". GitHub. https://github.com/tensorflow/tensorflow/issues/22.
- ↑ "TensorFlow". https://www.tensorflow.org/.
- ↑ "Models and examples built with TensorFlow.". July 19, 2019. https://github.com/tensorflow/models.
- ↑ "Using the GPU — Theano 0.8.2 documentation". http://deeplearning.net/software/theano/tutorial/using_gpu.html.
- ↑ "gradient – Symbolic Differentiation — Theano 1.0.0 documentation". http://deeplearning.net/software/theano/library/gradient.html.
- ↑ "Automatic vs. Symbolic differentiation". https://groups.google.com/d/msg/theano-users/mln5g2IuBSU/gespG36Lf_QJ.
- ↑ "Recipes/modelzoo at master · Lasagne/Recipes · GitHub". GitHub. https://github.com/Lasagne/Recipes/tree/master/modelzoo.
- ↑ "Using multiple GPUs — Theano 1.0.0 documentation". http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html.
- ↑ 59.0 59.1 "torch/torch7". July 18, 2019. https://github.com/torch/torch7.
- ↑ "GitHub - soumith/torch-android: Torch-7 for Android". GitHub. 13 October 2021. https://github.com/soumith/torch-android.
- ↑ "Torch7: A Matlab-like Environment for Machine Learning". http://ronan.collobert.com/pub/matos/2011_torch7_nipsw.pdf.
- ↑ "GitHub - jonathantompson/jtorch: An OpenCL Torch Utility Library". GitHub. 18 November 2020. https://github.com/jonathantompson/jtorch.
- ↑ "Cheatsheet". GitHub. https://github.com/torch/torch7/wiki/Cheatsheet#opencl.
- ↑ "cltorch". GitHub. https://github.com/hughperkins/distro-cl.
- ↑ "Torch CUDA backend". GitHub. https://github.com/torch/cutorch.
- ↑ "Torch CUDA backend for nn". GitHub. https://github.com/torch/cunn.
- ↑ "Autograd automatically differentiates native Torch code: twitter/torch-autograd". July 9, 2019. https://github.com/twitter/torch-autograd.
- ↑ "ModelZoo". GitHub. https://github.com/torch/torch7/wiki/ModelZoo.
- ↑ "Wolfram Neural Net Repository of Neural Network Models". http://resources.wolframcloud.com/NeuralNetRepository.
- ↑ "Parallel Computing—Wolfram Language Documentation". https://reference.wolfram.com/language/guide/ParallelComputing.html.en.