How to Use Different Batch Sizes When Training and Predicting with LSTMs
Last Updated on August 14, 2019
Keras uses fast symbolic mathematical libraries, such as TensorFlow and Theano, as a backend.
A downside of these libraries is that the shape and size of your data must be defined once, up front, and held constant, regardless of whether you are training your network or making predictions.
On sequence prediction problems, it may be desirable to use a large batch size when training the network and a batch size of 1 when making predictions, so that the next step in the sequence can be predicted one sample at a time.
In this tutorial, you will discover how you can address this problem and even use different batch sizes during training and predicting.
After completing this tutorial, you will know:
- How to design a simple sequence prediction problem and develop an LSTM to learn it.
- How to vary an LSTM configuration for online and batch-based learning and predicting.
- How to vary the batch size used for training from that used for predicting.
Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
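Before diving in, here is a minimal sketch of the core idea: train a stateful LSTM with one fixed batch size, then transfer its learned weights into a second model built with a batch size of 1 for one-step prediction. The toy sequence, layer sizes, and epoch count below are illustrative assumptions, not the tutorial's exact configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy problem: learn to predict the next value in 0.0, 0.1, ..., 0.9
seq = np.arange(0.0, 1.0, 0.1)
X = seq[:-1].reshape(9, 1, 1)  # [samples, timesteps, features]
y = seq[1:].reshape(9, 1)

n_batch = 9  # train on all samples in one batch

# Training model: batch size fixed via batch_input_shape
model = Sequential([
    LSTM(10, batch_input_shape=(n_batch, 1, 1), stateful=True),
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")
model.fit(X, y, epochs=10, batch_size=n_batch, shuffle=False, verbose=0)

# Prediction model: identical architecture, but batch size 1
new_model = Sequential([
    LSTM(10, batch_input_shape=(1, 1, 1), stateful=True),
    Dense(1),
])
new_model.set_weights(model.get_weights())  # copy the learned weights across

# Now we can predict one step at a time
pred = new_model.predict(X[0].reshape(1, 1, 1), verbose=0)
```

The prediction model never calls `fit`; it only needs matching layer shapes so `set_weights` can copy the trained parameters across.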