A Gentle Introduction to Dropout for Regularizing Deep Neural Networks
Last Updated on August 6, 2019
Deep learning neural networks are likely to quickly overfit a training dataset with few examples.
Ensembles of neural networks with different model configurations are known to reduce overfitting, but require the additional computational expense of training and maintaining multiple models.
A single model can be used to simulate having a large number of different network architectures by randomly dropping out nodes during training. This is called dropout, and it offers a computationally cheap and remarkably effective regularization method that reduces overfitting and improves generalization in deep neural networks of all kinds.
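To illustrate the mechanism (the post itself does not include code), here is a minimal NumPy sketch of the common "inverted dropout" formulation, in which surviving activations are scaled up during training so no rescaling is needed at test time. The dropout rate of 0.5 and the layer size are arbitrary choices for the example.

```python
import numpy as np

rate = 0.5                       # probability of dropping each node (illustrative)
activations = np.random.rand(8)  # stand-in for one hidden layer's outputs

# During training: zero out a random subset of nodes and rescale the survivors
# so the expected magnitude of the layer's output is unchanged.
mask = np.random.rand(activations.shape[0]) >= rate
dropped = (activations * mask) / (1.0 - rate)

# During inference: dropout is disabled and the activations are used as-is.
print(dropped)
```

Because a different random mask is drawn on every training step, each update effectively trains a different "thinned" sub-network, which is what gives dropout its ensemble-like effect.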
In this post, you will discover the use of dropout regularization for reducing overfitting and improving the generalization of deep neural networks.
After reading this post, you will know:
- Large weights in a neural network are a sign of a more complex network that has overfit the training data.
- Probabilistically dropping out nodes in the network is a simple and effective regularization method.
- A large network with more training and the use of a weight constraint are suggested when using dropout (a sketch of this configuration follows this list).
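To make the last point concrete, below is a minimal sketch assuming the Keras API: a wider network with Dropout between layers and a max-norm weight constraint on each hidden layer. The layer widths, the dropout rate of 0.5, the max-norm value of 3, and the input size are illustrative assumptions, not values taken from this post.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.constraints import MaxNorm

# A larger network with dropout after each hidden layer and a max-norm
# constraint on the hidden layers' weights (all sizes/values illustrative).
model = Sequential([
    Dense(128, activation='relu', input_shape=(20,), kernel_constraint=MaxNorm(3)),
    Dropout(0.5),
    Dense(128, activation='relu', kernel_constraint=MaxNorm(3)),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```

The weight constraint caps how large the weights can grow to compensate for dropped nodes, which is why it is commonly paired with dropout.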
Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.