A Structured Self-attentive Sentence Embedding

Implementation of the paper A Structured Self-Attentive Sentence Embedding, published at ICLR 2017: https://arxiv.org/abs/1703.03130

USAGE:

For binary sentiment classification on the IMDB dataset, run: python classification.py "binary"

For multiclass classification on the Reuters dataset, run: python classification.py "multiclass"

You can change the model parameters in the model_params.json file. Other training parameters, such as the number of attention hops, can be configured in the config.json file, as sketched below.
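For illustration, a minimal sketch of reading such a configuration; the key names (attention_hops, use_embeddings) are assumptions for the example, and the actual names may differ in the repo's config.json:

```python
import json

# Load training configuration. The key names below are hypothetical --
# check config.json / model_params.json for the actual ones.
with open("config.json") as f:
    config = json.load(f)

attention_hops = config.get("attention_hops", 30)    # r in the paper
use_embeddings = config.get("use_embeddings", False)
```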

If you want to use pretrained GloVe embeddings, set the use_embeddings parameter to "True" (the default is "False"). Do not forget to download glove.6B.50d.txt and place it in the glove folder.
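A minimal sketch of loading that GloVe file into a word-to-vector map, assuming the standard whitespace-separated text format of the glove.6B files (the repo's actual loader may differ):

```python
import numpy as np

def load_glove(path="glove/glove.6B.50d.txt", vocab=None):
    """Build a word -> vector map, optionally restricted to a vocabulary."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], np.asarray(parts[1:], dtype=np.float32)
            if vocab is None or word in vocab:
                embeddings[word] = vec
    return embeddings
```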

Implemented:

  • Classification using self-attention (see the sketch after this list)
  • Regularization using the Frobenius norm
  • Gradient clipping
  • Visualizing the attention weights
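For intuition, here is a minimal PyTorch-style sketch of the structured self-attention mechanism and the Frobenius-norm penalty from the paper. The sizes (d_a = 350, hops = 30) follow the paper's defaults, and the module and function names are illustrative assumptions, not the repo's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    """Compute a (hops x hidden) sentence matrix M = A H from encoder states H."""
    def __init__(self, hidden_dim, d_a=350, hops=30):
        super().__init__()
        self.W_s1 = nn.Linear(hidden_dim, d_a, bias=False)  # W_s1 in the paper
        self.W_s2 = nn.Linear(d_a, hops, bias=False)        # W_s2 in the paper

    def forward(self, H):
        # H: (batch, seq_len, hidden_dim) -- e.g. BiLSTM hidden states.
        # Softmax over the token dimension gives one distribution per hop.
        A = F.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)
        A = A.transpose(1, 2)   # (batch, hops, seq_len)
        M = A @ H               # (batch, hops, hidden_dim)
        return M, A

def frobenius_penalty(A):
    """||A A^T - I||_F^2, averaged over the batch; pushes hops to be diverse."""
    I = torch.eye(A.size(1), device=A.device).unsqueeze(0)
    AAt = A @ A.transpose(1, 2)
    return ((AAt - I) ** 2).sum(dim=(1, 2)).mean()
```

The penalty is added to the classification loss with a small coefficient, and gradient clipping can then be applied during training with torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm).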
