Long short-term memory
Long short-term memory (LSTM)[1] is an artificial neural network used in the fields of artificial intelligence and deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. Such a recurrent neural network (RNN) can process not only single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition,[2] speech recognition,[3][4] machine translation,[5][6] robot control,[7][8] video games,[9][10] and healthcare.[11] LSTM has become the most cited neural network of the 20th century.[12]
The name of LSTM refers to the analogy that a standard RNN has both "long-term memory" and "short-term memory". The connection weights and biases in the network change once per episode of training, analogous to how physiological changes in synaptic strengths store long-term memories; the activation patterns in the network change once per time-step, analogous to how the moment-to-moment change in electric firing patterns in the brain stores short-term memories.[13] The LSTM architecture aims to provide a short-term memory for RNNs that can last thousands of timesteps, hence "long short-term memory".[1]
A common LSTM unit is composed of a cell, an input gate, an output gate[14] and a forget gate.[15] The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem[16] that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.[citation needed]
Idea
In theory, classic (or "vanilla") RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with vanilla RNNs is computational (or practical) in nature: when training a vanilla RNN using back-propagation, the long-term gradients which are back-propagated can "vanish" (that is, they can tend to zero) or "explode" (that is, they can tend to infinity),[16] because of the computations involved in the process, which use finite-precision numbers. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow unchanged. However, LSTM networks can still suffer from the exploding gradient problem.[17]
Variants
In the equations below, the lowercase variables represent vectors. Matrices [math]\displaystyle{ W_q }[/math] and [math]\displaystyle{ U_q }[/math] contain, respectively, the weights of the input and recurrent connections, where the subscript [math]\displaystyle{ _q }[/math] can be the input gate [math]\displaystyle{ i }[/math], the output gate [math]\displaystyle{ o }[/math], the forget gate [math]\displaystyle{ f }[/math] or the memory cell [math]\displaystyle{ c }[/math], depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, [math]\displaystyle{ c_t \in \mathbb{R}^{h} }[/math] is not just one unit of one LSTM cell, but contains the units of [math]\displaystyle{ h }[/math] LSTM cells.
LSTM with a forget gate
The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][15]
- [math]\displaystyle{ \begin{align} f_t &= \sigma_g(W_{f} x_t + U_{f} h_{t-1} + b_f) \\ i_t &= \sigma_g(W_{i} x_t + U_{i} h_{t-1} + b_i) \\ o_t &= \sigma_g(W_{o} x_t + U_{o} h_{t-1} + b_o) \\ \tilde{c}_t &= \sigma_c(W_{c} x_t + U_{c} h_{t-1} + b_c) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\ h_t &= o_t \odot \sigma_h(c_t) \end{align} }[/math]
where the initial values are [math]\displaystyle{ c_0 = 0 }[/math] and [math]\displaystyle{ h_0 = 0 }[/math] and the operator [math]\displaystyle{ \odot }[/math] denotes the Hadamard product (element-wise product). The subscript [math]\displaystyle{ t }[/math] indexes the time step.
Variables
- [math]\displaystyle{ x_t \in \mathbb{R}^{d} }[/math]: input vector to the LSTM unit
- [math]\displaystyle{ f_t \in {(0,1)}^{h} }[/math]: forget gate's activation vector
- [math]\displaystyle{ i_t \in {(0,1)}^{h} }[/math]: input/update gate's activation vector
- [math]\displaystyle{ o_t \in {(0,1)}^{h} }[/math]: output gate's activation vector
- [math]\displaystyle{ h_t \in {(-1,1)}^{h} }[/math]: hidden state vector also known as output vector of the LSTM unit
- [math]\displaystyle{ \tilde{c}_t \in {(-1,1)}^{h} }[/math]: cell input activation vector
- [math]\displaystyle{ c_t \in \mathbb{R}^{h} }[/math]: cell state vector
- [math]\displaystyle{ W \in \mathbb{R}^{h \times d} }[/math], [math]\displaystyle{ U \in \mathbb{R}^{h \times h} }[/math] and [math]\displaystyle{ b \in \mathbb{R}^{h} }[/math]: weight matrices and bias vector parameters which need to be learned during training
where the superscripts [math]\displaystyle{ d }[/math] and [math]\displaystyle{ h }[/math] refer to the number of input features and number of hidden units, respectively.
Activation functions
- [math]\displaystyle{ \sigma_g }[/math]: sigmoid function.
- [math]\displaystyle{ \sigma_c }[/math]: hyperbolic tangent function.
- [math]\displaystyle{ \sigma_h }[/math]: hyperbolic tangent function or, as the peephole LSTM paper[18][19] suggests, [math]\displaystyle{ \sigma_h(x) = x }[/math].
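The following is a minimal NumPy sketch of one forward step under the equations above; it is illustrative rather than a reference implementation. The parameter names (`W_f`, `U_f`, `b_f`, ...) mirror the notation above, while the dictionary layout and the small random initialization are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One forward step of an LSTM cell with a forget gate.

    x_t: input vector of shape (d,)
    h_prev, c_prev: previous hidden and cell state, each of shape (h,)
    params: dict with W_* of shape (h, d), U_* of shape (h, h), b_* of shape (h,)
            for the gates f, i, o and the cell candidate c (an assumed layout).
    """
    f_t = sigmoid(params["W_f"] @ x_t + params["U_f"] @ h_prev + params["b_f"])
    i_t = sigmoid(params["W_i"] @ x_t + params["U_i"] @ h_prev + params["b_i"])
    o_t = sigmoid(params["W_o"] @ x_t + params["U_o"] @ h_prev + params["b_o"])
    c_tilde = np.tanh(params["W_c"] @ x_t + params["U_c"] @ h_prev + params["b_c"])
    c_t = f_t * c_prev + i_t * c_tilde      # element-wise (Hadamard) products
    h_t = o_t * np.tanh(c_t)                # sigma_h chosen as tanh here
    return h_t, c_t

# Example: run a random sequence through the cell, starting from h_0 = c_0 = 0.
d, h, T = 4, 3, 10
rng = np.random.default_rng(0)
params = {}
for gate in "fioc":
    params[f"W_{gate}"] = rng.normal(scale=0.1, size=(h, d))
    params[f"U_{gate}"] = rng.normal(scale=0.1, size=(h, h))
    params[f"b_{gate}"] = np.zeros(h)

h_t, c_t = np.zeros(h), np.zeros(h)
for t in range(T):
    x_t = rng.normal(size=d)
    h_t, c_t = lstm_step(x_t, h_t, c_t, params)
print(h_t)
```

In practice the four gate computations are usually fused into a single matrix multiplication for efficiency, but the unfused form above matches the equations most directly.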
Peephole LSTM
The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM).[18][19] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[18] In most places, [math]\displaystyle{ c_{t-1} }[/math] is used instead of [math]\displaystyle{ h_{t-1} }[/math].
- [math]\displaystyle{ \begin{align} f_t &= \sigma_g(W_{f} x_t + U_{f} c_{t-1} + b_f) \\ i_t &= \sigma_g(W_{i} x_t + U_{i} c_{t-1} + b_i) \\ o_t &= \sigma_g(W_{o} x_t + U_{o} c_{t-1} + b_o) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_{c} x_t + b_c) \\ h_t &= o_t \odot \sigma_h(c_t) \end{align} }[/math]
Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. [math]\displaystyle{ i_t, o_t }[/math] and [math]\displaystyle{ f_t }[/math] represent the activations of, respectively, the input, output and forget gates at time step [math]\displaystyle{ t }[/math].
The 3 exit arrows from the memory cell [math]\displaystyle{ c }[/math] to the 3 gates [math]\displaystyle{ i, o }[/math] and [math]\displaystyle{ f }[/math] represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell [math]\displaystyle{ c }[/math] at time step [math]\displaystyle{ t-1 }[/math], i.e. the contribution of [math]\displaystyle{ c_{t-1} }[/math] (and not [math]\displaystyle{ c_{t} }[/math], as the picture may suggest). In other words, the gates [math]\displaystyle{ i, o }[/math] and [math]\displaystyle{ f }[/math] calculate their activations at time step [math]\displaystyle{ t }[/math] (i.e., respectively, [math]\displaystyle{ i_t, o_t }[/math] and [math]\displaystyle{ f_t }[/math]) also considering the activation of the memory cell [math]\displaystyle{ c }[/math] at time step [math]\displaystyle{ t - 1 }[/math], i.e. [math]\displaystyle{ c_{t-1} }[/math].
The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes [math]\displaystyle{ c_{t} }[/math].
The little circles containing a [math]\displaystyle{ \times }[/math] symbol represent an element-wise multiplication between its inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
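As a companion to the sketch above, a minimal NumPy version of one peephole step under the equations in this subsection might look as follows; the parameter dictionary layout is again an assumption, and [math]\displaystyle{ \sigma_h }[/math] defaults to the identity, as mentioned earlier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, params, sigma_h=lambda c: c):
    """One forward step of a peephole LSTM: the gates read c_{t-1} instead of
    h_{t-1}, and the cell candidate depends on x_t only, as in the equations above.
    params is an assumed dict of W_* (h, d), U_* (h, h) and b_* (h,) arrays."""
    f_t = sigmoid(params["W_f"] @ x_t + params["U_f"] @ c_prev + params["b_f"])
    i_t = sigmoid(params["W_i"] @ x_t + params["U_i"] @ c_prev + params["b_i"])
    o_t = sigmoid(params["W_o"] @ x_t + params["U_o"] @ c_prev + params["b_o"])
    c_t = f_t * c_prev + i_t * np.tanh(params["W_c"] @ x_t + params["b_c"])
    h_t = o_t * sigma_h(c_t)
    return h_t, c_t
```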
Peephole convolutional LSTM
Peephole convolutional LSTM.[20] The [math]\displaystyle{ * }[/math] denotes the convolution operator.
- [math]\displaystyle{ \begin{align} f_t &= \sigma_g(W_{f} * x_t + U_{f} * h_{t-1} + V_{f} \odot c_{t-1} + b_f) \\ i_t &= \sigma_g(W_{i} * x_t + U_{i} * h_{t-1} + V_{i} \odot c_{t-1} + b_i) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_{c} * x_t + U_{c} * h_{t-1} + b_c) \\ o_t &= \sigma_g(W_{o} * x_t + U_{o} * h_{t-1} + V_{o} \odot c_{t} + b_o) \\ h_t &= o_t \odot \sigma_h(c_t) \end{align} }[/math]
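A rough sketch of one peephole convolutional LSTM step is shown below, assuming SciPy is available for the 2D convolution. Treating [math]\displaystyle{ x_t }[/math], [math]\displaystyle{ h_t }[/math] and [math]\displaystyle{ c_t }[/math] as single-channel 2D grids and the [math]\displaystyle{ W }[/math], [math]\displaystyle{ U }[/math] weights as small 2D kernels is a simplification of the multi-channel formulation of the cited paper.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv(kernel, grid):
    # 2D "same"-size convolution; a stand-in for the * operator in the equations.
    return convolve2d(grid, kernel, mode="same")

def peephole_conv_lstm_step(x_t, h_prev, c_prev, p):
    """One step of a single-channel peephole convolutional LSTM (a simplification).
    x_t, h_prev, c_prev are 2D grids; W_*, U_* are small 2D kernels;
    V_*, b_* are grids the same shape as the state (element-wise terms)."""
    f_t = sigmoid(conv(p["W_f"], x_t) + conv(p["U_f"], h_prev) + p["V_f"] * c_prev + p["b_f"])
    i_t = sigmoid(conv(p["W_i"], x_t) + conv(p["U_i"], h_prev) + p["V_i"] * c_prev + p["b_i"])
    c_t = f_t * c_prev + i_t * np.tanh(conv(p["W_c"], x_t) + conv(p["U_c"], h_prev) + p["b_c"])
    o_t = sigmoid(conv(p["W_o"], x_t) + conv(p["U_o"], h_prev) + p["V_o"] * c_t + p["b_o"])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```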
Training
An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to [math]\displaystyle{ \lim_{n \to \infty}W^n = 0 }[/math] if the spectral radius of [math]\displaystyle{ W }[/math] is smaller than 1.[16][21]
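The effect of the spectral radius can be illustrated numerically. In the sketch below (not from the article), a vector standing in for a back-propagated gradient is repeatedly multiplied by a matrix rescaled to a chosen spectral radius; the matrix, the vector and the 100-step horizon are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))

# Rescale W so its spectral radius is 0.9 (< 1) or 1.1 (> 1).
for rho in (0.9, 1.1):
    W_scaled = W * (rho / max(abs(np.linalg.eigvals(W))))
    g = np.ones(n)                 # stand-in for a back-propagated gradient
    for _ in range(100):           # 100 time steps of back-propagation
        g = W_scaled.T @ g
    print(rho, np.linalg.norm(g))  # tiny for rho < 1 (vanishing), huge for rho > 1 (exploding)
```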
However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
CTC score function
Many applications use stacks of LSTM RNNs[22] and train them by connectionist temporal classification (CTC)[23] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
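As an illustration of this training setup (not taken from the cited works), the sketch below combines PyTorch's built-in nn.LSTM and nn.CTCLoss; the sequence length, feature size, class count and the dummy data are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 13 input features per frame, 27 output classes
# (26 labels plus the CTC blank at index 0). None of these come from the article.
T, N, F, H, C = 50, 4, 13, 64, 27

class LSTMWithCTCHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=F, hidden_size=H, num_layers=2)  # stacked LSTM
        self.proj = nn.Linear(H, C)            # per-frame class scores, incl. blank

    def forward(self, x):                      # x: (T, N, F)
        y, _ = self.lstm(x)                    # y: (T, N, H)
        return self.proj(y).log_softmax(-1)    # (T, N, C) log-probabilities

model = LSTMWithCTCHead()
ctc = nn.CTCLoss(blank=0)                      # CTC handles alignment internally
optim = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(T, N, F)                       # dummy input sequences
targets = torch.randint(1, C, (N, 10))         # dummy label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = ctc(model(x), targets, input_lengths, target_lengths)
loss.backward()                                # gradients via backpropagation through time
optim.step()
```

CTC sums over all possible alignments of each label sequence to the frame-level outputs, which is why no frame-by-frame segmentation of the training data is required.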
Alternatives
Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[24] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).
Success
There have been several success stories of training RNNs with LSTM units without a supervising teacher.
In 2018, Bill Gates called it a "huge milestone in advancing artificial intelligence" when bots developed by OpenAI were able to beat humans in the game of Dota 2.[9] OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit LSTM that sees the current game state and emits actions through several possible action heads.[9]
In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[8]
In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game StarCraft II.[10] This was viewed as significant progress towards artificial general intelligence.[10]
Applications
Applications of LSTM include:
- Robot control[7]
- Time series prediction[24]
- Speech recognition[25][26][27]
- Rhythm learning[19]
- Music composition[28]
- Grammar learning[29][18][30]
- Handwriting recognition[31][32]
- Human action recognition[33]
- Sign language translation[34]
- Protein homology detection[35]
- Predicting subcellular localization of proteins[36]
- Time series anomaly detection[37]
- Several prediction tasks in the area of business process management[38]
- Prediction in medical care pathways[39]
- Semantic parsing[40]
- Object co-segmentation[41][42]
- Airport passenger management[43]
- Short-term traffic forecast[44]
- Drug design[45]
- Market prediction[46]
Timeline of development
1991: Sepp Hochreiter analyzed the vanishing gradient problem and developed principles of the method in his German diploma thesis[16] advised by Jürgen Schmidhuber.
1995: "Long Short-Term Memory (LSTM)" is published in a technical report by Sepp Hochreiter and Jürgen Schmidhuber.[47]
1996: LSTM is published at NIPS'1996, a peer-reviewed conference.[14]
1997: The main LSTM paper is published in the journal Neural Computation.[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of LSTM block included cells, input and output gates.[48]
1999: Felix Gers, his advisor Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called the "keep gate") into the LSTM architecture,[49] enabling the LSTM to reset its own state.[48]
2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) into the architecture.[15] Additionally, the output activation function was omitted.[48]
2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.[18][50]
Hochreiter et al. used LSTM for meta learning (i.e. learning a learning algorithm).[51]
2004: First successful application of LSTM to speech by Schmidhuber's student Alex Graves et al.[52][50]
2005: First publication (Graves and Schmidhuber) of LSTM with full backpropagation through time and of bi-directional LSTM.[25][50]
2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.[24]
2006: Graves, Fernandez, Gomez, and Schmidhuber introduce a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences.[23] CTC-trained LSTM led to breakthroughs in speech recognition.[26][53][54][55]
Mayer et al. trained LSTM to control robots.[7]
2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.[56]
Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.[35]
2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves.[2] One was the most accurate model in the competition and another was the fastest.[57] This was the first time an RNN won international competitions.[50]
2009: Justin Bayer et al. introduced neural architecture search for LSTM.[58][50]
2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.[27]
2014: Kyunghyun Cho et al. put forward a simplified variant of the forget gate LSTM[49] called Gated recurrent unit (GRU).[59]
2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice.[53][54] According to the official blog post, the new model cut transcription errors by 49%.[60]
2015: Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles[49] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[61][62][12] Seven months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called the Residual neural network.[63] This has become the most cited neural network of the 21st century.[12]
2016: Google started using an LSTM to suggest messages in the Allo conversation app.[64] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.[5][65][66]
Apple announced in its Worldwide Developers Conference that it would start using the LSTM for quicktype[67][68][69] in the iPhone and for Siri.[70][71]
Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.[72]
2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[6]
Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference.[73][74][75] Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.
Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[55]
2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[9] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[8][50]
2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of StarCraft II.[10][50]
2021: According to Google Scholar, LSTM was cited more than 16,000 times in 2021 alone. This reflects applications of LSTM in many different fields including healthcare.[11]
See also
- Deep learning
- Differentiable neural computer
- Gated recurrent unit
- Highway network
- Long-term potentiation
- Prefrontal cortex basal ganglia working memory
- Recurrent neural network
- Seq2seq
- Time aware long short-term memory
- Time series
References
- ↑ 1.0 1.1 1.2 1.3 Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. https://www.researchgate.net/publication/13853244.
- ↑ 2.0 2.1 Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (5): 855–868. doi:10.1109/tpami.2008.137. ISSN 0162-8828. PMID 19299860.
- ↑ Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling". https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf.
- ↑ Li, Xiangang; Wu, Xihong (2014-10-15). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
- ↑ 5.0 5.1 Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2016-09-26). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
- ↑ 6.0 6.1 Ong, Thuy (4 August 2017). "Facebook's translations are now powered completely by AI". https://www.theverge.com/2017/8/4/16093872/facebook-ai-translations-artificial-intelligence.
- ↑ 7.0 7.1 7.2 Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks. 543–548. doi:10.1109/IROS.2006.282190. ISBN 978-1-4244-0258-8.
- ↑ 8.0 8.1 8.2 "Learning Dexterity". OpenAI Blog. July 30, 2018. https://blog.openai.com/learning-dexterity/.
- ↑ 9.0 9.1 9.2 9.3 Rodriguez, Jesus (July 2, 2018). "The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI". Towards Data Science. https://towardsdatascience.com/the-science-behind-openai-five-that-just-produced-one-of-the-greatest-breakthrough-in-the-history-b045bcdc2b69.
- ↑ 10.0 10.1 10.2 10.3 Stanford, Stacy (January 25, 2019). "DeepMind's AI, AlphaStar Showcases Significant Progress Towards AGI". Medium ML Memoirs. https://medium.com/mlmemoirs/deepminds-ai-alphastar-showcases-significant-progress-towards-agi-93810c94fbe9.
- ↑ 11.0 11.1 Schmidhuber, Jürgen (2021). "The 2010s: Our Decade of Deep Learning / Outlook on the 2020s". AI Blog (IDSIA, Switzerland). https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html.
- ↑ 12.0 12.1 12.2 Schmidhuber, Jürgen (2021). "The most cited neural networks all build on work done in my labs". AI Blog (IDSIA, Switzerland). https://people.idsia.ch/~juergen/most-cited-neural-nets.html.
- ↑ Elman, Jeffrey L. (March 1990). "Finding Structure in Time" (in en). Cognitive Science 14 (2): 179–211. doi:10.1207/s15516709cog1402_1. http://doi.wiley.com/10.1207/s15516709cog1402_1.
- ↑ 14.0 14.1 Hochreiter, Sepp; Schmidhuber, Juergen (1996). "LSTM can solve hard long time lag problems". Advances in Neural Information Processing Systems. https://dl.acm.org/doi/10.5555/2998981.2999048.
- ↑ 15.0 15.1 15.2 Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation 12 (10): 2451–2471. doi:10.1162/089976600300015015. PMID 11032042.
- ↑ 16.0 16.1 16.2 16.3 Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science, advisor: J. Schmidhuber.
- ↑ Calin, Ovidiu (14 February 2020). Deep Learning Architectures. Cham, Switzerland: Springer Nature. p. 555. ISBN 978-3-030-36720-6.
- ↑ 18.0 18.1 18.2 18.3 18.4 Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages". IEEE Transactions on Neural Networks 12 (6): 1333–1340. doi:10.1109/72.963769. PMID 18249962. ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf.
- ↑ 19.0 19.1 19.2 Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks". Journal of Machine Learning Research 3: 115–143. http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf.
- ↑ Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. Bibcode: 2015arXiv150604214S.
- ↑ Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies (PDF Download Available)". A Field Guide to Dynamical Recurrent Neural Networks.. IEEE Press. https://www.researchgate.net/publication/2839938.
- ↑ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007: 774–779.
- ↑ 23.0 23.1 Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Jürgen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376.
- ↑ 24.0 24.1 24.2 Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858. https://www.academia.edu/5830256.
- ↑ 25.0 25.1 Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks 18 (5–6): 602–610. doi:10.1016/j.neunet.2005.06.042. PMID 16112549.
- ↑ 26.0 26.1 Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). An Application of Recurrent Neural Networks to Discriminative Keyword Spotting. ICANN'07. Berlin, Heidelberg: Springer-Verlag. 220–229. ISBN 978-3540746935. http://dl.acm.org/citation.cfm?id=1778066.1778092.
- ↑ 27.0 27.1 Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech Recognition with Deep Recurrent Neural Networks". Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on: 6645–6649. doi:10.1109/ICASSP.2013.6638947. ISBN 978-1-4799-0356-6.
- ↑ Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). Learning the Long-Term Structure of the Blues. Lecture Notes in Computer Science. 2415. Springer, Berlin, Heidelberg. 284–289. doi:10.1007/3-540-46084-5_47. ISBN 978-3540460848.
- ↑ Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation 14 (9): 2039–2041. doi:10.1162/089976602320263980. PMID 12184841.
- ↑ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks 16 (2): 241–250. doi:10.1016/s0893-6080(02)00219-8. PMID 12628609.
- ↑ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
- ↑ Graves, Alex; Fernández, Santiago; Liwicki, Marcus; Bunke, Horst; Schmidhuber, Jürgen (2007). Unconstrained Online Handwriting Recognition with Recurrent Neural Networks. NIPS'07. USA: Curran Associates Inc.. 577–584. ISBN 9781605603520. http://dl.acm.org/citation.cfm?id=2981562.2981635.
- ↑ Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. (2011). "Sequential Deep Learning for Human Action Recognition". in Salah, A. A.; Lepri, B.. 2nd International Workshop on Human Behavior Understanding (HBU). Lecture Notes in Computer Science. 7065. Amsterdam, Netherlands: Springer. pp. 29–39. doi:10.1007/978-3-642-25446-8_4. ISBN 978-3-642-25445-1.
- ↑ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018-01-30). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111 [cs.CV].
- ↑ 35.0 35.1 Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID 17488755.
- ↑ Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID 17666763.
- ↑ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series". European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf. Retrieved 2018-02-21.
- ↑ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). Predictive Business Process Monitoring with LSTM neural networks. Lecture Notes in Computer Science. 10253. 477–492. doi:10.1007/978-3-319-59536-8_30. ISBN 978-3-319-59535-1.
- ↑ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". Proceedings of the 1st Machine Learning for Healthcare Conference 56: 301–318. PMID 28286600. PMC 5341604. Bibcode: 2015arXiv151105942C. http://proceedings.mlr.press/v56/Choi16.html.
- ↑ Jia, Robin; Liang, Percy (2016). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs.CL].
- ↑ Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018-05-22). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". Sensors 18 (5): 1657. doi:10.3390/s18051657. ISSN 1424-8220. PMID 29789447. PMC 5982167. Bibcode: 2018Senso..18.1657W. https://qilin-zhang.github.io/_pages/pdfs/Segment-Tube_Spatio-Temporal_Action_Localization_in_Untrimmed_Videos_with_Per-Frame_Segmentation.pdf.
- ↑ Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). "Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". 25th IEEE International Conference on Image Processing (ICIP). doi:10.1109/icip.2018.8451692. ISBN 978-1-4799-7061-2.
- ↑ Orsini, F.; Gastaldi, M.; Mantecchini, L.; Rossi, R. (2019). "Neural networks trained with WiFi traces to predict airport passenger behavior". 6th International Conference on Models and Technologies for Intelligent Transportation Systems. Krakow: IEEE. doi:10.1109/MTITS.2019.8883365. 8883365.
- ↑ Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.Y.; Liu, J. (2017). "LSTM network: A deep learning approach for Short-term traffic forecast". IET Intelligent Transport Systems 11 (2): 68–75. doi:10.1049/iet-its.2016.0208.
- ↑ Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G (2018). "Generative Recurrent Networks for De Novo Drug Design.". Mol Inform 37 (1–2). doi:10.1002/minf.201700111. PMID 29095571.
- ↑ Saiful Islam, Md.; Hossain, Emam (2020-10-26). "Foreign Exchange Currency Rate Prediction using a GRU-LSTM Hybrid Network" (in en). Soft Computing Letters 3: 100009. doi:10.1016/j.socl.2020.100009. ISSN 2666-2221.
- ↑ Sepp Hochreiter; Jürgen Schmidhuber (1995). "Long Short-Term Memory". Technical report. Wikidata Q98967430.
- ↑ 48.0 48.1 48.2 Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems 28 (10): 2222–2232. doi:10.1109/TNNLS.2016.2582924. PMID 27411231. Bibcode: 2015arXiv150304069G.
- ↑ 49.0 49.1 49.2 Gers, Felix; Schmidhuber, Jürgen; Cummins, Fred (1999). "Learning to forget: Continual prediction with LSTM". 9th International Conference on Artificial Neural Networks: ICANN '99. 1999. pp. 850–855. doi:10.1049/cp:19991218. ISBN 0-85296-721-7.
- ↑ 50.0 50.1 50.2 50.3 50.4 50.5 50.6 Schmidhuber, Juergen (10 May 2021). "Deep Learning: Our Miraculous Year 1990-1991". arXiv:2005.05744 [cs.NE].
- ↑ Hochreiter, S.; Younger, A. S.; Conwell, P. R. (2001). Learning to Learn Using Gradient Descent. Lecture Notes in Computer Science. 2130. 87–94. doi:10.1007/3-540-44668-0_13. ISBN 978-3-540-42486-4. http://www.bioinf.jku.at/publications/older/1504.pdf.
- ↑ Graves, Alex; Beringer, Nicole; Eck, Douglas; Schmidhuber, Juergen (2004). "Biologically Plausible Speech Recognition with LSTM Neural Nets.". Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp. 175–184.
- ↑ 53.0 53.1 Beaufays, Françoise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. http://googleresearch.blogspot.co.at/2015/08/the-neural-networks-behind-google-voice.html.
- ↑ 54.0 54.1 Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate" (in en-US). Research Blog. http://googleresearch.blogspot.co.uk/2015/09/google-voice-search-faster-and-more.html.
- ↑ 55.0 55.1 Haridy, Rich (August 21, 2017). "Microsoft's speech recognition system is now as good as a human". http://newatlas.com/microsoft-speech-recognition-equals-humans/50999.
- ↑ Wierstra, Daan; Foerster, Alexander; Peters, Jan; Schmidhuber, Juergen (2005). "Solving Deep Memory POMDPs with Recurrent Policy Gradients". International Conference on Artificial Neural Networks ICANN'07. https://people.idsia.ch/~juergen/lstm-policy-gradient-2010.html.
- ↑ Märgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition: 1383–1387. doi:10.1109/ICDAR.2009.256. ISBN 978-1-4244-4500-4.
- ↑ Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Juergen (2009). "Evolving memory cell structures for sequence learning". International Conference on Artificial Neural Networks ICANN'09, Cyprus.
- ↑ Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv:1406.1078 [cs.CL].
- ↑ "Neon prescription... or rather, New transcription for Google Voice" (in en). 23 July 2015. https://googleblog.blogspot.com/2015/07/neon-prescription-or-rather-new.html.
- ↑ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
- ↑ Srivastava, Rupesh K; Greff, Klaus; Schmidhuber, Juergen (2015). "Training Very Deep Networks". Advances in Neural Information Processing Systems 28 (Curran Associates, Inc.) 28: 2377–2385. http://papers.nips.cc/paper/5850-training-very-deep-networks.
- ↑ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Deep Residual Learning for Image Recognition". Las Vegas, NV, USA: IEEE. 770–778. doi:10.1109/CVPR.2016.90. ISBN 978-1-4673-8851-1. https://ieeexplore.ieee.org/document/7780459.
- ↑ Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. http://googleresearch.blogspot.co.at/2016/05/chat-smarter-with-allo.html.
- ↑ Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". Wired. https://www.wired.com/2016/09/google-claims-ai-breakthrough-machine-translation/. Retrieved 2017-06-27.
- ↑ "A Neural Network for Machine Translation, at Production Scale" (in en). http://ai.googleblog.com/2016/09/a-neural-network-for-machine.html.
- ↑ Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". https://www.theinformation.com/apples-machines-can-learn-too.
- ↑ Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy | ZDNet". ZDNet. http://www.zdnet.com/article/ai-big-data-and-the-iphone-heres-how-apple-plans-to-protect-your-privacy.
- ↑ "Can Global Semantic Context Improve Neural Language Models? – Apple" (in en-US). https://machinelearning.apple.com/2018/09/27/can-global-semantic-context-improve-neural-language-models.html.
- ↑ Smith, Chris (2016-06-13). "iOS 10: Siri now works in third-party apps, comes with extra AI features". http://bgr.com/2016/06/13/ios-10-siri-third-party-apps/.
- ↑ Capes, Tim; Coles, Paul; Conkie, Alistair; Golipour, Ladan; Hadjitarkhani, Abie; Hu, Qiong; Huddleston, Nancy; Hunt, Melvyn et al. (2017-08-20). "Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System" (in en). Interspeech 2017 (ISCA): 4011–4015. doi:10.21437/Interspeech.2017-1798. http://www.isca-speech.org/archive/Interspeech_2017/abstracts/1798.html.
- ↑ Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. – All Things Distributed". http://www.allthingsdistributed.com/2016/11/amazon-ai-and-alexa-for-all-aws-apps.html.
- ↑ "Patient Subtyping via Time-Aware LSTM Networks". http://biometrics.cse.msu.edu/Publications/MachineLearning/Baytasetal_PatientSubtypingViaTimeAwareLSTMNetworks.pdf.
- ↑ "Patient Subtyping via Time-Aware LSTM Networks". http://www.kdd.org/kdd2017/papers/view/patient-subtyping-via-time-aware-lstm-networks.
- ↑ "SIGKDD". http://www.kdd.org.
External links
- Recurrent Neural Networks with over 30 LSTM papers by Jürgen Schmidhuber's group at IDSIA
- Gers, Felix (2001). "Long Short-Term Memory in Recurrent Neural Networks". PhD thesis. http://www.felixgers.de/papers/phd.pdf.
- Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, Jürgen (Aug 2002). "Learning precise timing with LSTM recurrent networks". Journal of Machine Learning Research 3: 115–143. http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf.
- Abidogun, Olusola Adeniyi (2005). Data Mining, Fraud Detection and Mobile Telecommunications: Call Pattern Analysis with Unsupervised Neural Networks. Master's Thesis (Thesis). University of the Western Cape. hdl:11394/249. Archived (PDF) from the original on May 22, 2012, with two chapters devoted to explaining recurrent neural networks, especially LSTM.
- Monner, Derek D.; Reggia, James A. (2010). "A generalized LSTM-like training algorithm for second-order recurrent neural networks". Neural Networks 25 (1): 70–83. doi:10.1016/j.neunet.2011.07.003. PMID 21803542. PMC 3217173. http://www.cs.umd.edu/~dmonner/papers/nn2012.pdf. "High-performing extension of LSTM that has been simplified to a single node type and can train arbitrary architectures".
- Dolphin, R (12 November 2021). "LSTM Networks – A Detailed Explanation". Article. https://towardsdatascience.com/lstm-networks-a-detailed-explanation-8fae6aefc7f9.
- Herta, Christian. "How to implement LSTM in Python with Theano". Tutorial. http://christianherta.de/lehre/dataScience/machineLearning/neuralNetworks/LSTM.html.