A Gentle Introduction to Bayes Theorem for Machine Learning
Last Updated on December 4, 2019
Bayes Theorem provides a principled way of calculating a conditional probability.
It is a deceptively simple calculation, yet it can be used to compute conditional probabilities for events where intuition often fails.
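As a concrete illustration of how intuition can fail, consider a quick sketch of the calculation in Python. The diagnostic-test scenario and the specific probabilities below are hypothetical examples chosen for illustration, not numbers from this post:

```python
# A minimal sketch of Bayes Theorem: P(A|B) = P(B|A) * P(A) / P(B),
# with P(B) expanded via the law of total probability.

def bayes_theorem(p_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) given the prior P(A) and the two likelihoods."""
    p_not_a = 1.0 - p_a
    # P(B) = P(B|A) * P(A) + P(B|not A) * P(not A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    return (p_b_given_a * p_a) / p_b

# Hypothetical diagnostic test: 1% base rate, 95% sensitivity,
# 5% false positive rate.
p_disease_given_positive = bayes_theorem(0.01, 0.95, 0.05)
print(p_disease_given_positive)  # roughly 0.161
```

Even with a seemingly accurate test, the probability of disease given a positive result is only about 16%, because the low base rate dominates the calculation.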
Beyond its role in the field of probability, Bayes Theorem is widely used in machine learning, including in a probability framework for fitting a model to a training dataset, referred to as maximum a posteriori (MAP for short), and in developing models for classification predictive modeling problems such as the Bayes Optimal Classifier and Naive Bayes.
In this post, you will discover Bayes Theorem for calculating conditional probabilities and how it is used in machine learning.
After reading this post, you will know:
- What Bayes Theorem is and how to work through the calculation on a real scenario.
- What the terms in the Bayes Theorem calculation mean and the intuitions behind them.
- Examples of how Bayes Theorem is used in classifiers, optimization, and causal models.
Kick-start your project with my new book Probability for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.