A Gentle Introduction to Computational Learning Theory
Last Updated on September 7, 2020
Computational learning theory, and the closely related statistical learning theory, refer to mathematical frameworks for quantifying learning tasks and learning algorithms.
These are sub-fields of machine learning that a practitioner does not need to know in great depth in order to achieve good results on a wide range of problems. Nevertheless, having a high-level understanding of some of the more prominent methods can provide insight into the broader task of learning from data.
In this post, you will discover a gentle introduction to computational learning theory for machine learning.
After reading this post, you will know:
- Computational learning theory uses formal methods to study learning tasks and learning algorithms.
- PAC learning provides a way to quantify the computational difficulty of a machine learning task (a small worked example follows this list).
- VC Dimension provides a way to quantify the computational capacity of a machine learning algorithm.
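To give a flavor of the kind of result PAC learning produces, the sketch below evaluates the classic sample-complexity bound for a consistent learner over a finite hypothesis space, m >= (1/epsilon) * (ln|H| + ln(1/delta)). This is a minimal illustration, not code from this post: the function name and the example numbers are chosen here purely for demonstration.

```python
from math import log, ceil

def pac_sample_complexity(hypothesis_space_size, epsilon, delta):
    """Classic PAC bound for a consistent learner over a finite
    hypothesis space H: m >= (1/epsilon) * (ln|H| + ln(1/delta))
    examples suffice to find a hypothesis with true error at most
    epsilon, with probability at least 1 - delta."""
    return ceil((1.0 / epsilon) * (log(hypothesis_space_size) + log(1.0 / delta)))

# Example: 1,000 candidate hypotheses, target error of at most 5%,
# and an allowed failure probability of 5%.
print(pac_sample_complexity(1000, epsilon=0.05, delta=0.05))
```

With these illustrative numbers, the bound works out to 199 labeled examples, showing how the framework turns a vague notion of "task difficulty" into a concrete count of how much data is enough.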
Kick-start your project with my new book Probability for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.