Gentle Introduction to the Bias-Variance Trade-Off in Machine Learning
Last Updated on October 25, 2019
Supervised machine learning algorithms can best be understood through the lens of the bias-variance trade-off.
In this post, you will discover the Bias-Variance Trade-Off and how to use it to better understand machine learning algorithms and improve the performance of your models.
Kick-start your project with my new book Master Machine Learning Algorithms, including step-by-step tutorials and the Excel Spreadsheet files for all examples.
Let’s get started.
- Update Oct/2019: Removed discussion of parametric/nonparametric models (thanks Alex).
Overview of Bias and Variance
In supervised machine learning, an algorithm learns a model from training data.
The goal of any supervised machine learning algorithm is to best estimate the mapping function (f) for the output variable (Y) given the input data (X). The mapping function is often called the target function because it is the function that a given supervised machine learning algorithm aims to approximate.
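The idea of estimating the mapping function can be made concrete with a minimal sketch. Here the true target function f, the noise level, and the use of a simple least-squares line are all illustrative assumptions, not part of the original post; in practice f is unknown and we only observe noisy (X, Y) pairs:

```python
import numpy as np

# A hypothetical target function f. In a real problem this is unknown;
# the learning algorithm only sees the noisy observations below.
def f(x):
    return 2.0 * x + 1.0

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=100)
Y = f(X) + rng.normal(0, 1.0, size=100)  # output = f(X) + irreducible noise

# The algorithm's job: estimate f from (X, Y).
# Here, ordinary least squares fits a line as the approximation of f.
slope, intercept = np.polyfit(X, Y, deg=1)
print(slope, intercept)  # should land near the true values 2.0 and 1.0
```

With enough data and a model family that can represent f, the estimate recovers the target function up to the noise in the observations.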
The prediction error for any machine learning algorithm can be broken down into three parts: bias error, variance error, and irreducible error.
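The trade-off can also be illustrated empirically: refitting two models of different flexibility on many simulated training sets, then measuring how far the average prediction sits from the target (bias) and how much predictions scatter across training sets (variance). A minimal sketch, where the sine target function, noise level, sample sizes, and polynomial degrees are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(x)  # assumed target function for the simulation

x0 = 2.0        # test point at which bias and variance are measured
n_sets = 500    # number of simulated training sets
noise_sd = 0.3

preds_simple, preds_flexible = [], []
for _ in range(n_sets):
    X = rng.uniform(0, 4, size=30)
    Y = f(X) + rng.normal(0, noise_sd, size=30)
    # Low-flexibility model: a straight line (higher bias, lower variance).
    preds_simple.append(np.polyval(np.polyfit(X, Y, deg=1), x0))
    # High-flexibility model: degree-9 polynomial (lower bias, higher variance).
    preds_flexible.append(np.polyval(np.polyfit(X, Y, deg=9), x0))

def bias_sq_and_var(preds):
    preds = np.asarray(preds)
    # Squared bias: (average prediction - true value)^2.
    # Variance: spread of predictions across training sets.
    return (preds.mean() - f(x0)) ** 2, preds.var()

b_line, v_line = bias_sq_and_var(preds_simple)
b_poly, v_poly = bias_sq_and_var(preds_flexible)
print(b_line, v_line)  # line: larger squared bias, smaller variance
print(b_poly, v_poly)  # flexible polynomial: the reverse
```

The line cannot bend to follow the sine curve, so its average prediction is systematically off (bias), while the degree-9 polynomial tracks the target on average but swings with each new training sample (variance).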