How to Choose Loss Functions When Training Deep Learning Neural Networks
Last Updated on August 25, 2020
Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm.
As part of the optimization algorithm, the error for the current state of the model must be estimated repeatedly. This requires choosing an error function, conventionally called a loss function, to estimate the model's loss so that the weights can be updated to reduce it on the next evaluation.
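To make this concrete, here is a minimal sketch (not from the article) of that loop for a one-weight linear model trained with stochastic gradient descent and mean squared error; all names (`w`, `lr`, `x`, `y`) are illustrative:

```python
import numpy as np

# Toy data: y = 3 * x plus a little noise, so the "true" weight is 3.0.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 3.0 * x + rng.normal(scale=0.1, size=20)

w = 0.0   # initial weight
lr = 0.1  # learning rate

for _ in range(100):
    y_pred = w * x
    loss = np.mean((y_pred - y) ** 2)     # estimate the error (MSE loss)
    grad = np.mean(2 * (y_pred - y) * x)  # gradient of the loss w.r.t. w
    w -= lr * grad                        # update the weight to reduce the loss

# w converges near the true weight, 3.0
```

Each pass estimates the loss, then moves the weight against the gradient; swapping in a different loss function changes what "error" means and therefore what the updates optimize.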
Neural network models learn a mapping from inputs to outputs from example data, and the choice of loss function must match the framing of the specific predictive modeling problem, such as classification or regression. Further, the configuration of the output layer must also be appropriate for the chosen loss function.
In this tutorial, you will discover how to choose a loss function for your deep learning neural network for a given predictive modeling problem.
After completing this tutorial, you will know:
- How to configure a model for mean squared error and variants for regression problems.
- How to configure a model for cross-entropy and hinge loss functions for binary classification.
- How to configure a model for cross-entropy and KL divergence loss functions for multi-class classification.
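The loss functions listed above can be sketched directly in NumPy. This is an illustrative sketch, not code from the tutorial: in a Keras model these losses are normally selected by name when compiling (e.g. `loss='mse'` or `loss='binary_crossentropy'`), but writing them out shows what each one measures. All function and variable names here are illustrative.

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Regression: average squared difference between targets and predictions.
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Binary classification: y_true in {0, 1}, y_pred a sigmoid probability.
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def hinge_loss(y_true, y_pred):
    # Binary classification with labels in {-1, +1} and raw (e.g. tanh) outputs;
    # predictions on the correct side of the margin contribute zero loss.
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred))

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    # Multi-class classification: one-hot targets, softmax probabilities.
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))

def kl_divergence(y_true, y_pred, eps=1e-12):
    # How much the predicted distribution diverges from the target distribution.
    y_true = np.clip(y_true, eps, 1.0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return np.mean(np.sum(y_true * np.log(y_true / y_pred), axis=-1))
```

Note that each loss assumes a matching output-layer activation: a linear output for MSE, a sigmoid for binary cross-entropy, a tanh for hinge loss, and a softmax for categorical cross-entropy and KL divergence.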