A Gentle Introduction to Markov Chain Monte Carlo for Probability
Probabilistic inference involves estimating an expected value or density using a probabilistic model.
Often, exact inference is intractable with probabilistic models, and approximation methods must be used instead.
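As a concrete illustration (not from the original post), the simplest approximation method is Monte Carlo sampling: draw independent samples and average. The sketch below estimates the expected value of x^2 for x drawn uniformly from [0, 1], whose true value is 1/3; the sample size and seed are arbitrary choices.

```python
import random

# Monte Carlo estimate of E[x^2] where x ~ Uniform(0, 1).
# The true value is the integral of x^2 over [0, 1], which is 1/3.
random.seed(42)
n = 100_000
estimate = sum(random.random() ** 2 for _ in range(n)) / n
print(estimate)  # close to 1/3
```

Each draw here is independent of the others, which is exactly the property that breaks down in high dimensions and motivates Markov Chain Monte Carlo.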
Markov Chain Monte Carlo sampling provides a class of algorithms for systematic random sampling from high-dimensional probability distributions. Unlike Monte Carlo sampling methods, which draw independent samples from the distribution, Markov Chain Monte Carlo methods draw samples where the next sample depends on the current one; the resulting sequence of samples forms a Markov chain. This dependence allows the algorithms to home in on the quantity being approximated from the distribution, even with a large number of random variables.
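To make the "next sample depends on the current sample" idea concrete, here is a minimal sketch (my own illustration, not code from the post) of a random-walk Metropolis-Hastings sampler. The target is an unnormalized standard normal density; the step size and sample count are arbitrary assumptions.

```python
import math
import random

def target(x):
    # Unnormalized density of a standard normal; MCMC only needs
    # the density up to a normalizing constant.
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0       # current state of the chain
    chain = []
    for _ in range(n_samples):
        # The proposal depends on the current sample: a random-walk step.
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < target(proposal) / target(x):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis_hastings(50_000)
print(sum(chain) / len(chain))  # near 0.0, the mean of the target
```

Note that the chain's samples are correlated, which is the price paid for being able to sample from distributions known only up to a constant.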
In this post, you will discover a gentle introduction to Markov Chain Monte Carlo for machine learning.
After reading this post, you will know:
- Monte Carlo sampling is often ineffective, and may be intractable, for high-dimensional probabilistic models.
- Markov Chain Monte Carlo provides an alternative approach to random sampling from a high-dimensional probability distribution, where the next sample depends on the current sample.
- Gibbs Sampling and the more general Metropolis-Hastings algorithm are the two most common approaches to Markov Chain Monte Carlo sampling.
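Since Gibbs sampling is named above, here is a minimal sketch of it (my own illustration under stated assumptions): sampling a bivariate normal with correlation rho, where each full conditional is itself Gaussian, x | y ~ N(rho * y, 1 - rho^2) and symmetrically for y | x. The value rho = 0.8 and the sample count are arbitrary.

```python
import random

def gibbs(n_samples, rho=0.8, seed=0):
    # Gibbs sampling for a standard bivariate normal with correlation rho.
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0
    chain = []
    for _ in range(n_samples):
        # Each coordinate is resampled conditional on the other,
        # so the next sample depends on the current one.
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        chain.append((x, y))
    return chain

chain = gibbs(20_000)
# E[x * y] equals rho for this target, so the average should be near 0.8.
print(sum(x * y for x, y in chain) / len(chain))
```

Gibbs sampling is a special case of Metropolis-Hastings in which every proposal, drawn from a full conditional, is accepted.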