A Gentle Introduction to Maximum a Posteriori (MAP) for Machine Learning
Density estimation is the problem of estimating the probability distribution for a sample of observations from a problem domain.
Typically, estimating the entire distribution is intractable, and instead we are happy with a point estimate that summarizes the distribution, such as its mean or mode.

Maximum a Posteriori, or MAP for short, is a Bayesian approach to estimating the distribution and model parameters that best explain an observed dataset.
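For concreteness (this equation is not in the excerpt above, but it is the standard formulation), MAP chooses the parameters theta that maximize the posterior, which by Bayes' theorem is proportional to the likelihood of the data weighted by the prior; the evidence P(X) is dropped because it does not depend on theta:

$$
\theta_{\text{MAP}} = \arg\max_{\theta} P(\theta \mid X) = \arg\max_{\theta} P(X \mid \theta)\, P(\theta)
$$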
This flexible probabilistic framework provides a Bayesian foundation for many machine learning algorithms, including important methods such as linear regression and logistic regression for predicting numeric values and class labels respectively. Unlike maximum likelihood estimation, it allows prior beliefs about candidate models to be incorporated explicitly and systematically.
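As a small illustration of that last point (a sketch of my own, not code from the original post), assume Gaussian noise with variance sigma^2 and a zero-mean Gaussian prior with variance tau^2 on the weights of a linear regression model. Under those assumptions the MAP estimate has the familiar ridge-regression closed form, while the maximum likelihood estimate is ordinary least squares; both variance values below are hypothetical.

```python
# Minimal sketch: MAP vs. maximum likelihood for linear regression,
# assuming Gaussian noise and a zero-mean Gaussian prior on the weights.
import numpy as np

rng = np.random.default_rng(1)

# synthetic data: y = 2*x + noise
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

sigma2 = 0.25        # assumed noise variance (hypothetical value)
tau2 = 1.0           # assumed prior variance on the weights (hypothetical value)
lam = sigma2 / tau2  # regularization strength implied by the prior

# maximum likelihood estimate (ordinary least squares)
w_mle = np.linalg.solve(X.T @ X, X.T @ y)

# MAP estimate: likelihood weighted by the Gaussian prior (ridge form)
w_map = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("MLE weight:", w_mle)
print("MAP weight:", w_map)
```

The only difference between the two estimates is the term contributed by the prior; as the prior variance grows (a weaker belief), the MAP solution approaches the maximum likelihood solution.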
In this post, you will discover a gentle introduction to Maximum a Posteriori estimation.
After reading this post, you will know:
- Maximum a Posteriori estimation is a probabilistic framework for solving the problem of density estimation.
- MAP involves calculating a conditional probability of observing the data given a model, weighted by a prior probability or belief about the model (see the sketch after this list).
- MAP provides an alternate probability framework to maximum likelihood estimation for machine learning.
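To make the "likelihood weighted by a prior" idea concrete, here is a small sketch of my own (not from the original post): estimating the bias of a coin from ten flips, assuming a Beta(alpha, beta) prior. The MAP estimate is the mode of the Beta posterior, while maximum likelihood simply uses the observed proportion; the counts and prior values are hypothetical.

```python
# Minimal sketch: MAP vs. maximum likelihood for a coin's bias,
# assuming a Beta(alpha, beta) prior on the probability of heads.
heads, tails = 7, 3        # hypothetical observed data
alpha, beta = 2.0, 2.0     # assumed prior: a mild belief that the coin is fair

# maximum likelihood: proportion of heads in the data
p_mle = heads / (heads + tails)

# MAP: mode of the Beta(alpha + heads, beta + tails) posterior
p_map = (heads + alpha - 1) / (heads + tails + alpha + beta - 2)

print("MLE estimate:", p_mle)  # 0.7
print("MAP estimate:", p_map)  # 0.666..., pulled toward the prior mean of 0.5
```

With only ten flips, the prior pulls the estimate toward 0.5; as more data is observed, the MAP and maximum likelihood estimates converge.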