Use Weight Regularization to Reduce Overfitting of Deep Learning Models
Last Updated on August 6, 2019
Neural networks learn a set of weights that best map inputs to outputs.
Large weights can be a sign of an unstable network, where small changes in the input lead to large changes in the output. This may indicate that the network has overfit the training dataset and will likely perform poorly when making predictions on new data.
A solution to this problem is to update the learning algorithm to encourage the network to keep the weights small. This is called weight regularization, and it can be used as a general technique to reduce overfitting of the training dataset and improve the generalization of the model.
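To make the idea concrete, here is a minimal sketch (not code from this post) of how a weight penalty changes the training objective: the loss measured on the data is augmented with a term proportional to the size of the weights, so the optimizer is pushed toward smaller weights. The function name, the `lam` hyperparameter, and its value are illustrative assumptions.

```python
import numpy as np

def penalized_loss(data_loss, weights, lam=0.01):
    # L2 weight penalty: sum of squared weights across all weight arrays
    penalty = sum(np.sum(w ** 2) for w in weights)
    # the optimizer now trades off fitting the data against keeping weights small
    return data_loss + lam * penalty
```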
In this post, you will discover weight regularization as an approach to reduce overfitting for neural networks.
After reading this post, you will know:
- Large weights in a neural network are a sign of a more complex network that has overfit the training data.
- Penalizing a network based on the size of the network weights during training can reduce overfitting.
- An L1 or L2 vector norm penalty can be added to the optimization of the network to encourage smaller weights (a minimal sketch follows this list).
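As a rough illustration of how such a penalty is attached in practice, the sketch below uses the Keras API, which the rest of this tutorial series typically relies on; the layer sizes and the penalty value 0.01 are illustrative assumptions, not values from the post.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

model = Sequential()
# L2 penalty on this layer's weights; use regularizers.l1(0.01) for an L1 penalty
model.add(Dense(32, input_dim=8, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(1, activation='sigmoid'))
# the regularization terms are added to the loss automatically during training
model.compile(optimizer='adam', loss='binary_crossentropy')
```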