A Gentle Introduction to Linear Regression With Maximum Likelihood Estimation
Linear regression is a classical model for predicting a numerical quantity.
The parameters of a linear regression model can be estimated using a least squares procedure or by maximum likelihood estimation. Maximum likelihood estimation is a probabilistic framework for finding the probability distribution and parameters that best describe the observed data. Supervised learning can be framed as a conditional probability problem, and maximum likelihood estimation can then be used to fit the parameters of a model that best summarizes the conditional probability distribution; this is known as conditional maximum likelihood estimation.
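As a minimal sketch of this framing (the notation here is an illustration, not taken from the post itself): given inputs x_i and outputs y_i assumed independent and identically distributed, conditional maximum likelihood chooses the parameters theta that maximize the conditional log-likelihood of the observed outputs:

```latex
\hat{\theta} = \arg\max_{\theta} \sum_{i=1}^{n} \log p(y_i \mid x_i ; \theta)
```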
A linear regression model can be fit under this framework, and the resulting solution can be shown to be identical to that of the least squares approach.
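To make that equivalence concrete, here is a minimal, hypothetical sketch (not code from the original post) that fits a linear model by numerically minimizing the Gaussian negative log-likelihood and compares the result with the ordinary least squares solution from numpy:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 2.0 + 0.5 * x + Gaussian noise (hypothetical example values)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
X = np.column_stack([np.ones_like(x), x])  # design matrix with an intercept column
y = X @ np.array([2.0, 0.5]) + rng.normal(0.0, 1.0, size=50)

def neg_log_likelihood(params):
    # Gaussian negative log-likelihood for coefficients beta and noise std sigma
    # (for simplicity, sigma is left unconstrained in this sketch)
    beta, sigma = params[:-1], params[-1]
    residuals = y - X @ beta
    n = len(y)
    return 0.5 * n * np.log(2.0 * np.pi * sigma ** 2) + np.sum(residuals ** 2) / (2.0 * sigma ** 2)

# Fit by minimizing the negative log-likelihood over (beta0, beta1, sigma)
mle = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0, 1.0]))

# Fit the same model by ordinary least squares
ols_beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("MLE coefficients:", mle.x[:-1])  # approximately [2.0, 0.5]
print("OLS coefficients:", ols_beta)    # matches the MLE up to optimizer tolerance
```

The two sets of coefficients agree because the noise variance only scales the squared-error term in the negative log-likelihood, so the minimizing coefficients are the same in both cases.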
In this post, you will discover linear regression with maximum likelihood estimation.
After reading this post, you will know:
- Linear regression is a model for predicting a numerical quantity and maximum likelihood estimation is a probabilistic framework for estimating model parameters.
- The coefficients of a linear regression model can be estimated by minimizing a negative log-likelihood function under the maximum likelihood framework.
- The negative log-likelihood function can be used to derive the least squares solution to linear regression, as sketched after this list.
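As a one-step sketch of that derivation (assuming Gaussian noise with a fixed variance sigma^2), the negative log-likelihood of the data under a linear model with coefficients beta is:

```latex
-\sum_{i=1}^{n} \log \mathcal{N}\!\left(y_i \mid x_i^{\top} \beta, \sigma^2\right)
  = \frac{n}{2}\log\!\left(2\pi\sigma^2\right)
  + \frac{1}{2\sigma^2} \sum_{i=1}^{n} \left(y_i - x_i^{\top} \beta\right)^2
```

Since the first term and the factor 1/(2 sigma^2) do not depend on beta, minimizing the negative log-likelihood over beta is equivalent to minimizing the sum of squared errors, which is exactly the least squares objective.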