How to Implement Scaled Dot-Product Attention from Scratch in TensorFlow and Keras
Having familiarized ourselves with the theory behind the Transformer model and its attention mechanism, we’ll start our journey of implementing a complete Transformer model by first seeing how to implement the scaled dot-product attention. The scaled dot-product attention is an integral part of multi-head attention, which, in turn, is an important component of both the Transformer encoder and decoder. Our end goal will be to apply the complete Transformer model to Natural Language Processing (NLP).
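As a quick reminder from the theory, for a matrix of queries $\mathbf{Q}$, keys $\mathbf{K}$, and values $\mathbf{V}$, where the queries and keys have dimensionality $d_k$, the scaled dot-product attention computes:

$$\text{attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \text{softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}^\mathsf{T}}{\sqrt{d_k}}\right)\mathbf{V}$$

The division by $\sqrt{d_k}$ keeps the dot products from growing too large in magnitude, which would otherwise push the softmax into regions of very small gradients.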
In this tutorial, you will discover how to implement scaled dot-product attention from scratch in TensorFlow and Keras.
After completing this tutorial, you will know:
- The operations that form part of the scaled dot-product attention mechanism
- How to implement the scaled dot-product attention mechanism from scratch in TensorFlow and Keras (a first sketch follows this list)
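Before working through the tutorial step by step, the sketch below shows one possible way the mechanism could be written as a plain function, to give a feel for what we will be building. The function name `scaled_dot_product_attention`, the optional `mask` argument, and the toy shapes are illustrative assumptions, not the final implementation developed later.

```python
import tensorflow as tf
from tensorflow import keras

def scaled_dot_product_attention(queries, keys, values, d_k, mask=None):
    # Score each query against every key and scale by the square root of the key dimensionality
    scores = tf.matmul(queries, keys, transpose_b=True) / tf.math.sqrt(tf.cast(d_k, tf.float32))
    # Optionally suppress masked positions with a very large negative value before the softmax
    if mask is not None:
        scores += -1e9 * mask
    # Turn the scores into attention weights that sum to one across the keys
    weights = keras.backend.softmax(scores)
    # Produce the output as a weighted sum of the values
    return tf.matmul(weights, values)

# Hypothetical toy shapes, chosen only for this sketch
batch_size, seq_length, d_k, d_v = 64, 5, 16, 16
queries = tf.random.normal((batch_size, seq_length, d_k))
keys = tf.random.normal((batch_size, seq_length, d_k))
values = tf.random.normal((batch_size, seq_length, d_v))
print(scaled_dot_product_attention(queries, keys, values, d_k).shape)  # (64, 5, 16)
```

In the rest of the tutorial, we will build this computation up operation by operation and package it in a form that can be reused inside the multi-head attention layer.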