Encoder-Decoder Deep Learning Models for Text Summarization
Last Updated on August 7, 2019
Text summarization is the task of creating short, accurate, and fluent summaries from larger text documents.
Recently, deep learning methods have proven effective for the abstractive approach to text summarization.
In this post, you will discover three different models that build on top of the effective Encoder-Decoder architecture developed for sequence-to-sequence prediction in machine translation.
After reading this post, you will know:
- The Facebook AI Research model that uses the Encoder-Decoder model with a convolutional neural network encoder.
- The IBM Watson model that uses the Encoder-Decoder model with pointing and hierarchical attention.
- The Stanford / Google model that uses the Encoder-Decoder model with pointing and coverage (a simplified sketch of the pointing idea follows this list).
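Both the IBM Watson and the Stanford / Google models use a pointing (or copy) mechanism, which lets the decoder copy words directly from the source document rather than always generating from a fixed vocabulary. As a rough illustration only, not the implementation from either paper, the snippet below blends a hypothetical vocabulary distribution with a hypothetical attention distribution using a generation probability p_gen; all of the values are made up.

```python
import numpy as np

# Hypothetical decoder outputs for a single time step (values are made up).
vocab_dist = np.array([0.1, 0.5, 0.2, 0.2])  # softmax over a toy 4-word vocabulary
attn_dist = np.array([0.7, 0.2, 0.1])        # attention over 3 source positions
source_ids = np.array([2, 0, 3])             # vocabulary ids of those source words

p_gen = 0.8  # probability of generating vs. copying (learned in practice)

# Scatter the attention mass onto the vocabulary ids it points to,
# then mix the "generate" and "copy" distributions.
copy_dist = np.zeros_like(vocab_dist)
np.add.at(copy_dist, source_ids, attn_dist)
final_dist = p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

print(final_dist)  # distribution used to choose the next summary word
```

The practical appeal of pointing is that rare or out-of-vocabulary words from the source document can still appear in the summary.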
Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Models Overview
We will look at three different models for text summarization, named for the research groups that developed them: Facebook AI Research, IBM Watson, and Stanford / Google.
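All three models extend the same Encoder-Decoder base, in which an encoder reads the source document and compresses it into state vectors, and a decoder generates the summary one word at a time. As a point of reference, here is a minimal sketch of that base in Keras; the vocabulary, embedding, and hidden sizes are placeholder values, and the attention, pointing, hierarchical, and coverage extensions that distinguish the three models are omitted.

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size = 10000  # placeholder vocabulary size
embed_dim = 128     # placeholder embedding size
hidden_dim = 256    # placeholder hidden size

# Encoder: reads the source document and compresses it into state vectors.
encoder_inputs = Input(shape=(None,))
encoder_embed = Embedding(vocab_size, embed_dim)(encoder_inputs)
_, state_h, state_c = LSTM(hidden_dim, return_state=True)(encoder_embed)

# Decoder: generates the summary word by word, seeded with the encoder states.
decoder_inputs = Input(shape=(None,))
decoder_embed = Embedding(vocab_size, embed_dim)(decoder_inputs)
decoder_outputs, _, _ = LSTM(hidden_dim, return_sequences=True, return_state=True)(
    decoder_embed, initial_state=[state_h, state_c]
)
word_probs = Dense(vocab_size, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], word_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

In practice, an attention mechanism is added so the decoder can look back at individual encoder positions rather than relying on a single summary vector, and each of the three models builds its own extensions on top of that attentive base.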