Instability of Online Learning for Stateful LSTM for Time Series Forecasting
Some neural network configurations can result in an unstable model.
This can make such models hard to characterize and compare to other model configurations on the same problem using descriptive statistics.
One good example of a seemingly unstable model is the use of online learning (a batch size of 1) for a stateful Long Short-Term Memory (LSTM) model.
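As a rough illustration of that configuration, the sketch below fits a stateful LSTM in Keras with a batch size of 1. The data, network size, and number of epochs are arbitrary placeholders, not the model developed later in this tutorial.

```python
# Minimal sketch: a stateful LSTM fit with online learning (batch size of 1).
# The toy data and network size are assumptions for illustration only.
from numpy import array
from keras.models import Sequential
from keras.layers import Dense, LSTM

# toy supervised data: 100 samples, 1 time step, 1 feature
X = array([[i / 100.0] for i in range(100)]).reshape(100, 1, 1)
y = array([(i + 1) / 100.0 for i in range(100)])

batch_size = 1  # online learning: weights are updated after every single sample
model = Sequential()
# a stateful LSTM requires a fixed batch_input_shape of (batch size, time steps, features)
model.add(LSTM(4, batch_input_shape=(batch_size, 1, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# fit one epoch at a time so the internal state can be reset manually between epochs
for epoch in range(10):
	model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
	model.reset_states()
```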
In this tutorial, you will discover how to explore the results of a stateful LSTM fit using online learning on a standard time series forecasting problem.
After completing this tutorial, you will know:
- How to design a robust test harness for evaluating LSTM models on time series forecasting problems.
- How to analyze a population of results, including summary statistics, spread, and distribution of results.
- How to analyze the impact of increasing the number of repeats for an experiment (a minimal sketch follows this list).
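The sketch below shows the shape of that analysis: repeat an experiment several times and summarize the resulting population of scores with descriptive statistics. The evaluate_model() function is a hypothetical stand-in for a full fit-and-forecast procedure, and the scores it returns are simulated.

```python
# Minimal sketch: collect a population of results over repeated runs and summarize it.
from pandas import DataFrame
from numpy.random import normal

def evaluate_model():
	# hypothetical placeholder: would fit a stateful LSTM and return a test RMSE
	return 100.0 + normal(0.0, 10.0)

# repeat the experiment and gather the scores
repeats = 30
scores = [evaluate_model() for _ in range(repeats)]

# summary statistics describe the center and spread of the distribution of results
results = DataFrame({'rmse': scores})
print(results.describe())
# a box-and-whisker plot or histogram of results['rmse'] would show the spread visually
```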
Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Updated Apr/2019: Updated the link to the dataset.