How to Report Classifier Performance with Confidence Intervals
Last Updated on August 14, 2020
Once you choose a machine learning algorithm for your classification problem, you need to report the performance of the model to stakeholders.
This is important so that you can set expectations for how the model will perform on new data.
A common mistake is to report the classification accuracy of the model alone.
In this post, you will discover how to calculate confidence intervals on the performance of your model to provide a calibrated and robust indication of your model’s skill.
Kick-start your project with my new book Statistics for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Classification Accuracy
The skill of a classification machine learning algorithm is often reported as classification accuracy.
This is the percentage of correct predictions out of all predictions made. It is calculated as follows:
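classification accuracy = correct predictions / total predictions * 100

As a quick sketch of this calculation (the example labels y_true and y_pred below are hypothetical, not taken from a real model), it can be computed in a few lines of Python:

```python
# Minimal sketch: computing classification accuracy as a percentage.
# y_true and y_pred are made-up example labels for illustration only.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 0]

# count the predictions that match the true labels
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

# accuracy = correct predictions / total predictions * 100
accuracy = correct / len(y_true) * 100
print('Accuracy: %.1f%%' % accuracy)  # prints: Accuracy: 75.0%
```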