Articles About Machine Learning

How to Calculate Parametric Statistical Hypothesis Tests in Python

Last Updated on August 8, 2019 Parametric statistical methods are those that assume the data samples have a Gaussian distribution. In applied machine learning, we often need to compare data samples, specifically the means of the samples, perhaps to see whether one technique performs better than another on one or more datasets. To quantify this question and interpret the results, we can use parametric hypothesis testing methods such as the Student’s t-test and ANOVA. In this tutorial, you will […]

Read more
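As a quick illustration of the idea above (a minimal sketch on made-up samples, not code from the article itself), comparing sample means with SciPy's Student's t-test and one-way ANOVA looks roughly like this:

```python
# Minimal sketch: comparing the means of synthetic Gaussian samples with the
# Student's t-test (two samples) and one-way ANOVA (three samples) via SciPy.
from numpy.random import seed, randn
from scipy.stats import ttest_ind, f_oneway

seed(1)
data1 = 5 * randn(100) + 50   # sample from one "technique"
data2 = 5 * randn(100) + 51   # sample from another, slightly shifted
stat, p = ttest_ind(data1, data2)
print('t=%.3f, p=%.3f' % (stat, p))   # a small p-value suggests the means differ

data3 = 5 * randn(100) + 52
stat, p = f_oneway(data1, data2, data3)
print('F=%.3f, p=%.3f' % (stat, p))   # ANOVA across all three samples at once
```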

How to Transform Data to Better Fit The Normal Distribution

Last Updated on August 8, 2019 A large portion of the field of statistics is concerned with methods that assume a Gaussian distribution: the familiar bell curve. If your data has a Gaussian distribution, the parametric methods are powerful and well understood. This gives some incentive to use them if possible, even if your data does not have a Gaussian distribution. It is possible that your data does not look Gaussian or fails a normality test, but can be transformed […]

Read more
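For a flavour of what such a transform can look like (a minimal sketch, not taken from the article; the exponential sample is a stand-in for your own skewed data), SciPy's Box-Cox power transform can be applied like this:

```python
# Minimal sketch: making a right-skewed sample look more Gaussian with a
# Box-Cox power transform; Box-Cox requires strictly positive values.
from numpy.random import seed, exponential
from scipy.stats import boxcox, shapiro

seed(1)
skewed = exponential(scale=10, size=1000)     # strongly right-skewed sample
transformed, lmbda = boxcox(skewed)           # lambda chosen by maximum likelihood
print('chosen lambda: %.3f' % lmbda)
print('Shapiro-Wilk p before: %.4f' % shapiro(skewed)[1])
print('Shapiro-Wilk p after:  %.4f' % shapiro(transformed)[1])
```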

A Gentle Introduction to k-fold Cross-Validation

Last Updated on August 3, 2020 Cross-validation is a statistical method used to estimate the skill of machine learning models. It is commonly used in applied machine learning to compare and select a model for a given predictive modeling problem because it is easy to understand, easy to implement, and results in skill estimates that generally have a lower bias than other methods. In this tutorial, you will discover a gentle introduction to the k-fold cross-validation procedure for estimating the […]

Read more
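As a small sketch of the procedure (the model and dataset below are placeholders for illustration, not taken from the article), 10-fold cross-validation with scikit-learn looks roughly like this:

```python
# Minimal sketch: estimating model skill with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=1)
cv = KFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring='accuracy', cv=cv)
print('Accuracy: %.3f (+/- %.3f)' % (scores.mean(), scores.std()))
```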

A Gentle Introduction to the Bootstrap Method

Last Updated on August 8, 2019 The bootstrap method is a resampling technique used to estimate statistics on a population by sampling a dataset with replacement. It can be used to estimate summary statistics such as the mean or standard deviation. It is used in applied machine learning to estimate the skill of machine learning models when making predictions on data not included in the training data. A desirable property of the results from estimating machine learning model skill is […]

Read more
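A minimal sketch of the resampling idea (not the article's own code, and the sample is synthetic): draw many samples with replacement, compute the statistic on each, and summarize the spread of the estimates.

```python
# Minimal sketch: bootstrap estimate of the mean with a 95% percentile interval.
from numpy import mean, percentile
from numpy.random import seed, randn
from sklearn.utils import resample

seed(1)
data = 5 * randn(100) + 50                          # original sample
estimates = [mean(resample(data, replace=True, n_samples=len(data)))
             for _ in range(1000)]                  # bootstrap replicates
lower, upper = percentile(estimates, [2.5, 97.5])
print('mean %.2f, 95%% interval [%.2f, %.2f]' % (mean(estimates), lower, upper))
```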

Confidence Intervals for Machine Learning

Last Updated on August 8, 2019 Much of machine learning involves estimating the performance of a machine learning algorithm on unseen data. Confidence intervals are a way of quantifying the uncertainty of an estimate. They can be used to add a bounds or likelihood on a population parameter, such as a mean, estimated from a sample of independent observations from the population. Confidence intervals come from the field of estimation statistics. In this tutorial, you will discover confidence intervals and […]

Read more
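As a rough sketch of the idea (the accuracy and test-set size below are invented for the example), a Gaussian-approximation confidence interval for a classification accuracy can be computed like this:

```python
# Minimal sketch: 95% confidence interval for an observed classification
# accuracy, using the Gaussian approximation to the binomial proportion.
from math import sqrt

accuracy = 0.88    # accuracy observed on a held-out test set (made up)
n = 500            # number of test examples (made up)
z = 1.96           # critical value for ~95% coverage
interval = z * sqrt((accuracy * (1 - accuracy)) / n)
print('accuracy %.2f +/- %.3f' % (accuracy, interval))
```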

Prediction Intervals for Machine Learning

Last Updated on May 1, 2020 A prediction from a machine learning perspective is a single point that hides the uncertainty of that prediction. Prediction intervals provide a way to quantify and communicate the uncertainty in a prediction. They are different from confidence intervals that instead seek to quantify the uncertainty in a population parameter such as a mean or standard deviation. Prediction intervals describe the uncertainty for a single specific outcome. In this tutorial, you will discover the prediction […]

Read more
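Here is a minimal sketch (not from the article) of one simple way to attach a prediction interval to a linear regression prediction, assuming roughly Gaussian residuals; the data is synthetic.

```python
# Minimal sketch: ~95% prediction interval around a single linear regression
# prediction, using the standard deviation of the training residuals.
from numpy import std
from numpy.random import seed, randn
from sklearn.linear_model import LinearRegression

seed(1)
X = 20 * randn(1000, 1) + 100
y = X[:, 0] + (10 * randn(1000) + 50)
model = LinearRegression().fit(X, y)
interval = 1.96 * std(y - model.predict(X))   # spread of the residuals
yhat = model.predict([[100]])[0]
print('prediction %.2f, 95%% interval [%.2f, %.2f]'
      % (yhat, yhat - interval, yhat + interval))
```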

A Gentle Introduction to Statistical Tolerance Intervals in Machine Learning

Last Updated on August 8, 2019 It can be useful to have an upper and lower limit on data. These bounds can be used to help identify anomalies and set expectations for what to expect. A bound on observations from a population is called a tolerance interval. A tolerance interval comes from the field of estimation statistics. A tolerance interval is different from a prediction interval that quantifies the uncertainty for a single predicted value. It is also different from […]

Read more
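For a concrete feel (a sketch assuming Gaussian data, not the article's code), a two-sided parametric tolerance interval that aims to cover 95% of the population with 99% confidence can be built from chi-squared and Gaussian quantiles:

```python
# Minimal sketch: parametric two-sided Gaussian tolerance interval
# (95% coverage of the population, 99% confidence) on a synthetic sample.
from numpy import mean, sqrt, std
from numpy.random import seed, randn
from scipy.stats import chi2, norm

seed(1)
data = 5 * randn(100) + 50
n, dof = len(data), len(data) - 1
coverage, confidence = 0.95, 0.99
gauss_crit = norm.ppf(1 - (1 - coverage) / 2)     # quantile for the coverage
chi_crit = chi2.ppf(1 - confidence, dof)          # quantile for the confidence
k = sqrt(dof * (1 + 1 / n) * gauss_crit ** 2 / chi_crit)
lower = mean(data) - k * std(data, ddof=1)
upper = mean(data) + k * std(data, ddof=1)
print('tolerance interval: [%.2f, %.2f]' % (lower, upper))
```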

A Gentle Introduction to Estimation Statistics for Machine Learning

Last Updated on August 8, 2019 Statistical hypothesis tests can be used to indicate whether the difference between two samples is due to random chance, but cannot comment on the size of the difference. A group of methods referred to as “new statistics” are seeing increased use instead of or in addition to p-values in order to quantify the magnitude of effects and the amount of uncertainty for estimated values. This group of statistical methods is referred to as “estimation […]

Read more
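One of the effect-size measures used in estimation statistics is Cohen's d, the standardized difference between two means; here is a minimal sketch on synthetic samples (not code from the article):

```python
# Minimal sketch: Cohen's d effect size for two independent samples.
from numpy import mean, sqrt, var
from numpy.random import seed, randn

def cohens_d(a, b):
    # pooled standard deviation, then standardized mean difference
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * var(a, ddof=1) + (nb - 1) * var(b, ddof=1))
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

seed(1)
data1 = 10 * randn(1000) + 60
data2 = 10 * randn(1000) + 55
print("Cohen's d: %.3f" % cohens_d(data1, data2))
```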

A Gentle Introduction to Data Visualization Methods in Python

Last Updated on August 23, 2019 Sometimes data does not make sense until you can look at it in a visual form, such as with charts and plots. Being able to quickly visualize your data samples for yourself and others is an important skill both in applied statistics and in applied machine learning. In this tutorial, you will discover the five types of plots that you will need to know when visualizing data in Python and how to use them to […]

Read more
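As a quick sketch of common plot types (assuming line, bar, histogram, box-and-whisker, and scatter plots; the data below is made up), matplotlib covers them all:

```python
# Minimal sketch: five basic plot types in matplotlib on synthetic data.
from numpy.random import seed, randn
from matplotlib import pyplot

seed(1)
x = 20 * randn(1000) + 100
y = x + (10 * randn(1000) + 50)

pyplot.plot(x[:100]); pyplot.show()                       # line plot
pyplot.bar(['a', 'b', 'c'], [10, 30, 20]); pyplot.show()  # bar chart
pyplot.hist(x, bins=30); pyplot.show()                    # histogram
pyplot.boxplot(x); pyplot.show()                          # box-and-whisker plot
pyplot.scatter(x, y, s=2); pyplot.show()                  # scatter plot
```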

A Gentle Introduction to Statistical Data Distributions

Last Updated on August 8, 2019 A sample of data will form a distribution, and by far the most well-known distribution is the Gaussian distribution, often called the Normal distribution. The distribution provides a parameterized mathematical function that can be used to calculate the probability for any individual observation from the sample space. This distribution describes the grouping or the density of the observations, called the probability density function. We can also calculate the likelihood of an observation having a […]

Read more
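To make the density idea concrete, here is a minimal sketch (the mean and standard deviation are arbitrary) that evaluates and plots the Gaussian probability density and cumulative distribution functions with SciPy:

```python
# Minimal sketch: probability density and cumulative distribution of a
# Gaussian with mean 50 and standard deviation 5.
from numpy import arange
from scipy.stats import norm
from matplotlib import pyplot

mu, sigma = 50, 5
x = arange(30, 70, 0.1)                # sample space to evaluate
pdf = norm.pdf(x, mu, sigma)           # density of each value
cdf = norm.cdf(x, mu, sigma)           # probability of a value <= x
pyplot.plot(x, pdf, label='pdf')
pyplot.plot(x, cdf, label='cdf')
pyplot.legend()
pyplot.show()
```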