A Tour of Recurrent Neural Network Algorithms for Deep Learning
Last Updated on August 14, 2019
Recurrent neural networks, or RNNs, are a type of artificial neural network that adds recurrent weights to the network, creating cycles in the network graph that allow it to maintain an internal state.
The promise of adding state to neural networks is that they will be able to explicitly learn and exploit context in sequence prediction problems, such as problems with an order or temporal component.
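The idea of recurrent weights carrying an internal state across time steps can be sketched as a minimal vanilla RNN cell. This is an illustrative sketch only (the weight names and dimensions are assumptions for the example, not from the post): the matrix `W_h` is the extra set of weights that feeds the previous hidden state back into the network at each step.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One recurrent update: new state from the current input and previous state."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

rng = np.random.default_rng(0)
input_dim, hidden_dim, steps = 3, 5, 4

# W_x maps the input; W_h is the recurrent ("additional") weight matrix.
W_x = rng.standard_normal((input_dim, hidden_dim)) * 0.1
W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # the internal state, carried across time steps
for t in range(steps):
    x_t = rng.standard_normal(input_dim)
    h = rnn_step(x_t, h, W_x, W_h, b)

print(h.shape)  # (5,)
```

Because `h` is passed back in at every step, the output at time `t` depends on the entire input history, which is what lets the network exploit order and temporal context.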
In this post, you are going to take a tour of recurrent neural networks used for deep learning.
After reading this post, you will know:
- How top recurrent neural networks used for deep learning work, such as LSTMs, GRUs, and NTMs.
- How top RNNs relate to the broader study of recurrence in artificial neural networks.
- How research in RNNs has led to state-of-the-art performance on a range of challenging problems.
Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.
Note: we’re not going to cover every possible recurrent neural network. Instead, we will focus on recurrent neural networks used for deep learning (LSTMs, GRUs, and NTMs) and the context needed to understand them.