Dropout with LSTM Networks for Time Series Forecasting
Last Updated on August 28, 2020
Long Short-Term Memory (LSTM) models are a type of recurrent neural network capable of learning sequences of observations.
This may make them well suited to time series forecasting.
An issue with LSTMs is that they can easily overfit training data, reducing their predictive skill.
Dropout is a regularization method where input and recurrent connections to LSTM units are probabilistically excluded from activation and weight updates while training a network. This has the effect of reducing overfitting and improving model performance.
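Conceptually, dropout zeroes a random subset of connections on each training update and rescales the survivors so that expected activations are unchanged (so-called inverted dropout); in Keras, the LSTM layer exposes this through its `dropout` (input connections) and `recurrent_dropout` (recurrent connections) arguments. A minimal NumPy sketch of the masking mechanism itself, assuming a dropout rate of 0.2:

```python
import numpy as np

# Sketch of inverted dropout: each unit is kept with probability (1 - rate),
# and kept units are scaled up by 1 / (1 - rate) so the expected value is preserved.
rng = np.random.default_rng(42)
rate = 0.2  # assumed dropout rate for illustration

activations = np.ones((4, 5))              # stand-in layer activations
mask = rng.random(activations.shape) >= rate  # True = keep, False = drop
dropped = activations * mask / (1.0 - rate)   # surviving units become 1.25, dropped become 0
```

At prediction time no masking is applied; the rescaling during training is what keeps train-time and test-time activations on the same scale.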
In this tutorial, you will discover how to use dropout with LSTM networks and design experiments to test for its effectiveness for time series forecasting.
After completing this tutorial, you will know:
- How to design a robust test harness for evaluating LSTM networks for time series forecasting.
- How to design, execute, and interpret the results from using input weight dropout with LSTMs.
- How to design, execute, and interpret the results from using recurrent weight dropout with LSTMs.
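Because a single LSTM run is stochastic, a robust test harness of the kind described above typically repeats each experiment several times with different random seeds and summarizes the distribution of RMSE scores rather than trusting one run. A minimal sketch of that loop, where `fit_and_evaluate` is a hypothetical placeholder (a real harness would train and score the LSTM inside it):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error between two sequences.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def fit_and_evaluate(seed):
    # Placeholder experiment: in practice, train an LSTM with this seed
    # and return its test-set RMSE. Here we use a noisy stand-in forecast.
    rng = np.random.default_rng(seed)
    y_true = np.sin(np.linspace(0, 3, 20))
    y_pred = y_true + rng.normal(scale=0.1, size=y_true.shape)
    return rmse(y_true, y_pred)

# Repeat the experiment and summarize the score distribution.
scores = [fit_and_evaluate(seed) for seed in range(10)]
print(f"RMSE mean={np.mean(scores):.3f} std={np.std(scores):.3f}")
```

Comparing mean and spread of the repeated scores, rather than single values, is what lets you judge whether a dropout configuration genuinely improves skill.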
Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Updated Apr/2019: Updated