How to Use Weight Decay to Reduce Overfitting of Neural Networks in Keras
Last Updated on August 25, 2020
Weight regularization provides an approach to reduce overfitting of a deep learning neural network model on the training data and improve its performance on new data, such as the holdout test set.
There are multiple types of weight regularization, such as L1 and L2 vector norms, and each requires a hyperparameter that must be configured.
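As a quick illustration, here is a minimal sketch of how these penalties are created in Keras; the factor 0.01 is purely illustrative and is the hyperparameter you must tune:

```python
# A minimal sketch of configuring L1 and L2 weight regularization in Keras.
# The regularization factor (0.01 here) is an illustrative value, not a recommendation.
from tensorflow.keras import regularizers

l1_reg = regularizers.l1(0.01)    # L1 norm: penalty on the sum of absolute weights
l2_reg = regularizers.l2(0.01)    # L2 norm (weight decay): penalty on the sum of squared weights
l1_l2_reg = regularizers.l1_l2(l1=0.01, l2=0.01)  # both penalties combined
```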
In this tutorial, you will discover how to apply weight regularization to improve the performance of an overfit deep learning neural network in Python with Keras.
After completing this tutorial, you will know:
- How to use the Keras API to add weight regularization to an MLP, CNN, or LSTM neural network (a minimal sketch follows this list).
- Examples of weight regularization configurations used in books and recent research papers.
- How to work through a case study to identify an overfit model and improve its test performance using weight regularization.
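As a preview of the first point, the sketch below shows how a regularizer attaches to the input weights of common layer types via the `kernel_regularizer` argument; the layer sizes and the factor 0.01 are illustrative only:

```python
# A minimal sketch of attaching an L2 penalty to the weights of MLP, CNN, and
# LSTM layers. LSTM layers also accept a separate recurrent_regularizer for
# the recurrent weights.
from tensorflow.keras.layers import Dense, Conv2D, LSTM
from tensorflow.keras.regularizers import l2

dense = Dense(32, activation='relu', kernel_regularizer=l2(0.01))          # MLP layer
conv = Conv2D(32, (3, 3), activation='relu', kernel_regularizer=l2(0.01))  # CNN layer
lstm = LSTM(32, kernel_regularizer=l2(0.01), recurrent_regularizer=l2(0.01))  # LSTM layer
```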
Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Updated Oct/2019: Updated for Keras 2.3 and TensorFlow 2.0.