How to Use Small Experiments to Develop a Caption Generation Model in Keras
Last Updated on September 3, 2020
Caption generation is a challenging artificial intelligence problem where a textual description must be generated for a photograph.
It requires both methods from computer vision to understand the content of the image and a language model from the field of natural language processing to turn that understanding into words in the right order. Recently, deep learning methods have achieved state-of-the-art results on examples of this problem.
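For illustration, one common way to combine these two parts is a merge-style model: a pre-trained convolutional network summarises the photo, a recurrent language model encodes the caption generated so far, and the two are merged to predict the next word. The Keras sketch below follows that pattern; the layer sizes, the 4,096-element feature vector, and the vocabulary and length values are placeholder assumptions, not the configuration developed in the tutorial.

```python
# A minimal sketch of a merge-style caption model, assuming photo features are
# pre-computed by a CNN (e.g. 4,096-element vectors) and captions arrive as
# padded sequences of word indices. All sizes are illustrative assumptions.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size = 7579   # assumed vocabulary size
max_length = 34     # assumed maximum caption length in words

# photo branch: project pre-computed CNN features into a smaller space
inputs1 = Input(shape=(4096,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)

# language branch: embed and encode the caption prefix
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)

# decoder: merge both branches and predict the next word in the caption
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)

model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```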
It can be hard to develop caption generation models on your own data, primarily because the datasets and the models are so large and take days to train. An alternative approach is to explore model configurations with a small sample of the full dataset.
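As a rough sketch of what a small sample can look like in practice, the snippet below draws a fixed, repeatable subset of photos from a captions file before any modelling is done. The Flickr8k.token.txt file name and its tab-separated "image#index&lt;TAB&gt;caption" layout are assumptions about the annotations file on disk; adjust them to whatever dataset you are actually using.

```python
# A minimal sketch of sampling a small, repeatable subset of a captioning
# dataset. The file name and line format below are assumptions.
from random import seed, sample

def load_caption_lines(filename):
    # read the raw annotations file, one caption per line
    with open(filename, 'r') as f:
        return f.read().strip().split('\n')

lines = load_caption_lines('Flickr8k.token.txt')
# collect the unique image identifiers (the text before the '#')
image_ids = sorted({line.split('\t')[0].split('#')[0] for line in lines})

seed(1)  # fix the sample so experiments are comparable between runs
subset = set(sample(image_ids, 100))  # e.g. work with only 100 photos
subset_lines = [line for line in lines
                if line.split('\t')[0].split('#')[0] in subset]
print('Images: %d, Captions: %d' % (len(subset), len(subset_lines)))
```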
In this tutorial, you will discover how you can use a small sample of a standard photo captioning dataset to explore different deep model designs.
After completing this tutorial, you will know:
- How to prepare data for photo captioning modeling.
- How to design a baseline and test harness to evaluate the skill of models and control for their stochastic nature (a sketch of such a harness follows this list).
- How to evaluate properties like model skill, feature extraction models, and word embeddings in order to lift model skill.
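On the second point, a simple way to control for the stochastic nature of neural networks is to repeat each experiment several times and summarise the distribution of scores rather than trusting a single run. The sketch below shows one such harness; the define_model, train_model and evaluate_model callables are hypothetical hooks for your own experiment, and BLEU is only one possible choice of score.

```python
# A minimal sketch of a repeated-evaluation test harness for controlling the
# stochastic nature of neural network training.
from numpy import mean, std

def run_experiment(define_model, train_model, evaluate_model, n_repeats=3):
    """Train and score one model configuration several times.

    The three callables are hypothetical hooks: define_model() builds a fresh
    model, train_model(model) fits it on the small training sample, and
    evaluate_model(model) returns a score such as BLEU on a held-out sample.
    """
    scores = []
    for i in range(n_repeats):
        model = define_model()          # fresh model each repeat
        train_model(model)              # fit on the small training sample
        scores.append(evaluate_model(model))
        print('Run %d: %.3f' % (i + 1, scores[-1]))
    # summarise the distribution instead of trusting a single run
    print('Mean: %.3f, Std: %.3f' % (mean(scores), std(scores)))
    return scores
```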