BERT for Natural Language Inference simplified in PyTorch!
Introduction to BERT:
BERT stands for Bidirectional Encoder Representations from Transformers. It was introduced in 2018 by researchers at Google. BERT achieved state-of-the-art performance on most NLP tasks at the time and drew the attention of the data science community worldwide.
It is extensively used today by data science practitioners for various NLP tasks. Details about how the BERT model works can be found here.
Introduction to Natural Language Inference:
Natural Language Inference (NLI) is an NLP task where we are given two sentences, a premise and a hypothesis, and we have to predict whether the hypothesis is true, false, or undetermined with respect to the premise. We call these cases entailment (true), contradiction (false), and neutral (undetermined or unrelated). The following examples illustrate each case, and a short code sketch follows the list:
- Entailment: A person is riding a horse & A person is outdoors on a horse.
- Contradiction: A person is wearing blue & The same person is wearing red.
- Neutral: A person is riding a horse & A person is training a horse for a competition.
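To make the setup concrete, here is a minimal sketch (not from the original post) of framing NLI as three-way sentence-pair classification with BERT, using the Hugging Face transformers library. The label ordering is an illustrative assumption, and the classification head is randomly initialized here, so the output is only meaningful after fine-tuning on an NLI dataset such as SNLI:

```python
# Minimal sketch: NLI as sentence-pair classification with BERT.
# Assumes the Hugging Face `transformers` library is installed.
# The label order below is an assumption for illustration only.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # entailment / contradiction / neutral
)
model.eval()

premise = "A person is riding a horse"
hypothesis = "A person is outdoors on a horse"

# BERT sees the pair as one sequence: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

labels = ["entailment", "contradiction", "neutral"]  # assumed ordering
print(labels[logits.argmax(dim=-1).item()])
```

The key idea is that BERT encodes the premise and hypothesis together as a single sequence, so its self-attention layers can relate words across the two sentences; the `[CLS]` representation then feeds a three-way classifier.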