How to Avoid Overfitting in Deep Learning Neural Networks
Last Updated on August 6, 2019
Training a deep neural network that can generalize well to new data is a challenging problem.
A model with too little capacity cannot learn the problem, whereas a model with too much capacity can learn it too well and overfit the training dataset. Both cases result in a model that does not generalize well.
A modern approach to reducing generalization error is to use a larger model than is strictly required, combined with regularization during training to keep the model's weights small. These techniques not only reduce overfitting, but they can also lead to faster optimization of the model and better overall performance.
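For example, a weight penalty such as L2 regularization (weight decay) can be attached to a layer so that large weights are penalized in the loss. The following is a minimal Keras sketch of this idea, not code from this post; the 20-feature input, the layer sizes, and the penalty strength of 0.01 are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the original post): an L2 weight
# penalty keeps the layer's weights small during training.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

model = Sequential([
    Input(shape=(20,)),  # assumed 20 input features
    # the L2 regularizer adds 0.01 * sum(weight ** 2) to the loss,
    # pushing this layer's weights toward small values
    Dense(128, activation='relu', kernel_regularizer=l2(0.01)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```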
In this post, you will discover the problem of overfitting when training neural networks and how it can be addressed with regularization methods.
After reading this post, you will know:
- Underfitting can easily be addressed by increasing the capacity of the network, but overfitting requires the use of specialized techniques.
- Regularization methods like weight decay provide an easy way to control overfitting for large neural network models.
- A modern recommendation for regularization is to use early stopping with dropout and a weight constraint, as sketched in the example after this list.
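As a rough illustration of that recommendation, here is a minimal Keras sketch combining early stopping, dropout, and a max-norm weight constraint. It is not code from this post; the layer sizes, dropout rate of 0.5, max-norm value of 3, patience of 10, and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): early stopping
# plus dropout plus a max-norm weight constraint in one small model.
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.callbacks import EarlyStopping

# synthetic binary classification data, just to make the example runnable
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = Sequential([
    Input(shape=(20,)),
    # max-norm keeps each unit's incoming weight vector below a fixed length
    Dense(128, activation='relu', kernel_constraint=MaxNorm(3)),
    # dropout randomly zeroes half of the activations during training
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# early stopping halts training when validation loss stops improving
stopper = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(X, y, validation_split=0.3, epochs=200, batch_size=32,
          callbacks=[stopper], verbose=0)
```

With restore_best_weights=True, the weights from the epoch with the best validation loss are kept, rather than those from the final epoch before stopping.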