How to Evaluate the Skill of Deep Learning Models
Last Updated on August 14, 2020
I often see practitioners expressing confusion about how to evaluate a deep learning model.
This confusion is often apparent in questions like:
- What random seed should I use?
- Do I need a random seed?
- Why don’t I get the same results on subsequent runs?
In this post, you will discover the procedure that you can use to evaluate deep learning models and the rationale for using it.
You will also discover useful related statistics that you can calculate to present the skill of your model, such as standard deviation, standard error, and confidence intervals.
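As a quick illustration of those statistics, here is a minimal sketch, assuming you have already collected a list of test-set scores from repeated runs of the same model; the score values below are hypothetical and for illustration only.

```python
# A minimal sketch: summarize repeated evaluation scores with a mean,
# standard deviation, standard error, and an approximate 95% confidence interval.
from math import sqrt
import numpy as np

# hypothetical accuracies from repeated runs of the same model
scores = np.array([0.70, 0.72, 0.68, 0.71, 0.69])

mean = scores.mean()
std = scores.std(ddof=1)               # sample standard deviation
sem = std / sqrt(len(scores))          # standard error of the mean
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem  # ~95% confidence interval

print('Mean: %.3f, Std: %.3f, SE: %.3f' % (mean, std, sem))
print('95%% CI: [%.3f, %.3f]' % (ci_low, ci_high))
```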
Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
The Beginner’s Mistake
You fit the model to your training data and evaluate it on the test dataset, then report its skill. Perhaps you use k-fold cross-validation instead, then report the skill of the model.
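To make the approach concrete, here is a minimal sketch of that single fit-and-evaluate workflow, using a small Keras model on synthetic data as a stand-in for a real problem; the data, architecture, and hyperparameters are assumptions for illustration only.

```python
# A minimal sketch: fit once, evaluate once, and report a single score.
from numpy.random import rand, randint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# hypothetical synthetic data standing in for a real train/test split
X_train, y_train = rand(100, 10), randint(0, 2, 100)
X_test, y_test = rand(50, 10), randint(0, 2, 50)

# small binary classification network
model = Sequential([
    Dense(8, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, verbose=0)

# a single score like this hides the run-to-run variance caused by
# random weight initialization and other sources of randomness
_, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy: %.3f' % acc)
```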