How to Develop and Evaluate Naive Classifier Strategies Using Probability
Last Updated on September 25, 2019
A Naive Classifier is a simple classification model that assumes little to nothing about the problem; its performance provides a baseline against which all other models evaluated on a dataset can be compared.
There are different strategies that can be used for a naive classifier, and some are better than others, depending on the dataset and the choice of performance measure. The most common performance measure is classification accuracy, and common naive classification strategies include randomly guessing class labels, randomly selecting labels from the training dataset, and predicting the majority class label.
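As a minimal sketch, each of these strategies can be written as a small Python function; the function names and toy dataset below are illustrative assumptions, not code from the tutorial:

```python
# Sketches of three common naive classification strategies,
# assuming class labels are stored as a plain list of integers.
from random import choice

def random_guess(unique_labels):
    # randomly guess one of the possible class labels with equal probability
    return choice(unique_labels)

def random_select(train_labels):
    # randomly select a label from the training dataset,
    # implicitly following the class distribution
    return choice(train_labels)

def majority_class(train_labels):
    # always predict the most frequent label in the training dataset
    return max(set(train_labels), key=train_labels.count)

# example: an imbalanced binary training set with 25 percent class 1
train = [0] * 75 + [1] * 25
print(random_guess([0, 1]))   # 0 or 1 with equal probability
print(random_select(train))   # 0 with probability 0.75, 1 with 0.25
print(majority_class(train))  # always 0
```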
It is useful to develop a small probability framework to calculate the expected performance of a given naive classification strategy and to perform experiments to confirm the theoretical expectations. These exercises provide an intuition both for the behavior of naive classification algorithms in general and for the importance of establishing a performance baseline for a classification task.
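To make the framework concrete, for a binary problem where class 1 occurs with probability p, the expected accuracy of each strategy follows from P(correct) = sum over classes of P(predict class) * P(true class). The closed forms and the Monte Carlo check below are a sketch under that assumption, with p = 0.25 chosen for illustration:

```python
# Expected accuracy of each naive strategy on an assumed binary problem,
# confirmed with a simple Monte Carlo simulation.
from random import random, choice

p = 0.25  # assumed probability of the minority class (class 1)

# theoretical expected accuracies
guess = 0.5                  # random guess: 1/K for K equally likely predictions
select = p**2 + (1 - p)**2   # random select: sum of squared class priors
majority = max(p, 1 - p)     # majority class: the largest class prior
print('expected:  guess=%.3f select=%.3f majority=%.3f' % (guess, select, majority))

# Monte Carlo confirmation over many trials
trials = 100000
y_true = [1 if random() < p else 0 for _ in range(trials)]
acc_guess = sum(choice([0, 1]) == y for y in y_true) / trials
acc_select = sum((1 if random() < p else 0) == y for y in y_true) / trials
acc_majority = sum(y == 0 for y in y_true) / trials
print('simulated: guess=%.3f select=%.3f majority=%.3f' % (acc_guess, acc_select, acc_majority))
```

With p = 0.25, the simulated accuracies converge on the theoretical values of 0.5, 0.625, and 0.75, showing that the majority class strategy achieves the best expected accuracy of the three.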
In this tutorial, you will discover how to develop and evaluate naive classification strategies for machine learning.
After completing this tutorial, you will know:
- The performance of naive classification models provides a baseline by which all other models can be compared
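In practice, a convenient way to establish such a baseline is scikit-learn's DummyClassifier, which implements the strategies above directly; the synthetic dataset and strategy choices in this sketch are assumptions for illustration:

```python
# Baseline evaluation using scikit-learn's DummyClassifier on an
# assumed synthetic, imbalanced binary classification dataset.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# synthetic dataset with roughly 75 percent of examples in class 0
X, y = make_classification(n_samples=1000, weights=[0.75], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

# evaluate each naive strategy as a baseline
for strategy in ['uniform', 'stratified', 'most_frequent']:
    model = DummyClassifier(strategy=strategy, random_state=1)
    model.fit(X_train, y_train)
    yhat = model.predict(X_test)
    print('%s: %.3f' % (strategy, accuracy_score(y_test, yhat)))
```

Any proposed model that cannot outperform the best of these baselines (here, most_frequent) offers no real skill on the dataset.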