How to Reduce Variance in a Final Machine Learning Model
Last Updated on June 26, 2020
A final machine learning model is one trained on all available data and is then used to make predictions on new data.
A problem with most final models is that they suffer from variance in their predictions.
This means that each time you fit the model, you get a slightly different set of parameters, which in turn make slightly different predictions, sometimes more skillful and sometimes less skillful than you expected.
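To make this concrete, here is a minimal sketch of the effect, assuming scikit-learn is available; the synthetic dataset and the small neural network are chosen purely for illustration. The same model is fit several times on the same data, varying only the random seed, and its prediction for one new example changes from run to run.

```python
# Minimal sketch: repeated fits of the same stochastic model give different predictions.
# The dataset and model here are illustrative stand-ins, not a recommendation.
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

# a small synthetic regression problem standing in for "all available data"
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=1)
x_new = X[:1]  # one "new" row to predict on

# fit the same model several times; only the random seed differs
for seed in range(5):
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    model.fit(X, y)
    print(f"fit {seed}: prediction = {model.predict(x_new)[0]:.3f}")
```

Each run prints a slightly different prediction for the same input, which is exactly the variance in a final model that this post is about.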
This can be frustrating, especially when you are looking to deploy a model into an operational environment.
In this post, you will discover how to think about model variance in a final model and techniques that you can use to reduce the variance in predictions from a final model.
After reading this post, you will know:
- The problem with variance in the predictions made by a final model.
- How to measure model variance and how variance is addressed generally when estimating parameters.
- Techniques you can use to reduce the variance in predictions made by a final model.
Let’s get started.