How to Use Word Embedding Layers for Deep Learning with Keras
Last Updated on September 3, 2020
Word embeddings provide a dense representation of words and their relative meanings.
They are an improvement over the sparse representations used in simpler bag-of-words models.
Word embeddings can be learned from text data and reused across projects. They can also be learned as part of fitting a neural network on text data.
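To make the dense-versus-sparse distinction concrete, here is a small NumPy sketch (the vocabulary and vector values are illustrative only; a real embedding's values are learned from data):

```python
import numpy as np

vocab = ["the", "cat", "sat"]  # toy vocabulary for illustration

# sparse one-hot representation: one slot per vocabulary word, mostly zeros,
# and the vector length grows with the size of the vocabulary
one_hot = np.eye(len(vocab))
print(one_hot[1])  # "cat" -> [0. 1. 0.]

# dense embedding: a short real-valued vector per word; here the values are
# random placeholders, whereas a trained embedding encodes relative meaning
embedding_dim = 2
embedding_matrix = np.random.rand(len(vocab), embedding_dim)
print(embedding_matrix[1])  # "cat" -> a 2-dimensional dense vector
```

Note that the one-hot matrix is as wide as the vocabulary, while the embedding dimension is a small fixed choice independent of vocabulary size.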
In this tutorial, you will discover how to use word embeddings for deep learning in Python with Keras.
After completing this tutorial, you will know:
- About word embeddings and that Keras supports word embeddings via the Embedding layer.
- How to learn a word embedding while fitting a neural network.
- How to use a pre-trained word embedding in a neural network.
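As a preview of the first bullet, a minimal sketch of the Keras Embedding layer is shown below (the vocabulary size, vector dimension, and input length are arbitrary choices for illustration):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# map a vocabulary of 50 word indices to 8-dimensional dense vectors,
# for input documents that are 4 words long
model = Sequential()
model.add(Embedding(input_dim=50, output_dim=8, input_length=4))
model.compile('rmsprop', 'mse')

# one "document" encoded as four integer word indices
data = np.array([[4, 20, 11, 3]])
vectors = model.predict(data)
print(vectors.shape)  # (1, 4, 8): one sample, four words, 8 values per word
```

The layer is simply a lookup table: each integer index selects one row of a weight matrix, and those weights are updated during training like any other layer.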
Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Updated Feb/2018: Fixed a bug due to a change in the underlying APIs.
- Updated Oct/2019: Updated for Keras 2.3 and TensorFlow 2.0.