Articles About Natural Language Processing

Issue #116 – Fully Non-autoregressive Neural Machine Translation

04 Feb 2021 Issue #116 – Fully Non-autoregressive Neural Machine Translation Author: Dr. Patrik Lambert, Senior Machine Translation Scientist @ Iconic Introduction The standard Transformer model is autoregressive (AT), which means that the prediction of each target word is based on the predictions for the previous words. The output is generated from left to right, a process which cannot be parallelised because the prediction probability of a token depends on the previous tokens. In the last few years, new approaches have been […]
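To make the contrast concrete, here is a minimal sketch of the two decoding strategies; `model_step` and `model_parallel` are hypothetical stand-ins for a trained Transformer, not a real library API.

```python
# Hypothetical sketch: autoregressive (AT) vs. non-autoregressive (NAT) decoding.

def decode_autoregressive(model_step, bos_token, eos_token, max_len=50):
    """Generate left to right: each token depends on all previously
    generated tokens, so the loop cannot be parallelised."""
    tokens = [bos_token]
    for _ in range(max_len):
        next_token = model_step(tokens)  # one sequential model call per token
        tokens.append(next_token)
        if next_token == eos_token:
            break
    return tokens[1:]

def decode_non_autoregressive(model_parallel, source, target_len):
    """Predict every target position in a single parallel pass, conditioned
    only on the source sentence and a predicted target length."""
    return model_parallel(source, target_len)  # one call, all positions at once
```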

Read more

How to create your own Question and Answering API (Flask + Docker + BERT) using the Haystack framework

Introduction Note from the author: In this article, we will learn how to create your own Question and Answering (QA) API using Python, Flask, and the Haystack framework with Docker. The Haystack framework provides complete QA features which are highly scalable and customizable. In this article, the Medium Rules text will be used as the target document, and the model will be fine-tuned on it as well. Basic knowledge required: Elasticsearch & Docker. This article contains working code which can be directly built […]
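A minimal sketch of what such an endpoint can look like, assuming the Haystack 1.x API (ElasticsearchDocumentStore, BM25Retriever, FARMReader) and an Elasticsearch instance already populated with the target document; the host, port, index name, and reader model are illustrative choices, not necessarily the article's.

```python
# Minimal QA API sketch: Flask + Haystack 1.x (assumed API), Elasticsearch backend.
from flask import Flask, request, jsonify
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Connect to an Elasticsearch index that already holds the target documents.
document_store = ElasticsearchDocumentStore(host="localhost", index="document")
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/bert-base-cased-squad2")
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)

app = Flask(__name__)

@app.route("/qa", methods=["POST"])
def qa():
    question = request.json["question"]
    result = pipeline.run(
        query=question,
        params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
    )
    answers = [a.answer for a in result["answers"]]
    return jsonify({"question": question, "answers": answers})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```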

Read more

Issue #115 – Revisiting Low-Resource Neural Machine Translation: A Case Study

28 Jan 2021 Issue #115 – Revisiting Low-Resource Neural Machine Translation: A Case Study Author: Akshai Ramesh, Machine Translation Scientist @ Iconic Introduction Although deep neural models produce state-of-the-art results in many translation tasks, they are found to underperform phrase-based statistical machine translation in resource-poor conditions. The majority of research on low-resource neural machine translation (NMT) focuses on the exploitation of monolingual or parallel data involving other language pairs. Notably less attention has been paid to research on low-resource NMT […]

Read more

Issue #114 – Tagged Back-translation Revisited

21 Jan 2021 Issue #114 – Tagged Back-translation Revisited Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic Introduction In a previous post in our series, we examined tagged back-translation for Neural Machine Translation (NMT), whereby the back-translated data that is used to supplement parallel data is tagged before training. This led to improvements in the output over untagged data. Today’s blog post extends the work of Caswell et al. (2019), by taking a closer look at why and how adding […]
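As a reminder of the core idea, here is a minimal sketch of the tagging step from Caswell et al. (2019): a reserved token is prepended to every back-translated source sentence so the model can distinguish synthetic from genuine parallel data. The file names and the tag string are illustrative.

```python
# Sketch of tagged back-translation preprocessing (Caswell et al., 2019).
BT_TAG = "<BT>"  # reserved token marking synthetic (back-translated) sources

def tag_back_translated(in_path, out_path, tag=BT_TAG):
    """Prepend the tag to every back-translated source sentence."""
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(f"{tag} {line.strip()}\n")

# The tagged synthetic source is then concatenated with the genuine parallel
# source before training, e.g.:
# tag_back_translated("backtranslated.src", "backtranslated.tagged.src")
```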

Read more

Streamlit Web API for NLP: Tweet Sentiment Analysis

This article was published as a part of the Data Science Blogathon. Introduction Developing web apps for data models has always been a hectic task for non-web developers: a web API needs both a front-end and a back-end platform, which is not an easy task to build. But Python comes to the rescue with its fascinating frameworks like Streamlit, Flasgger, and FastAPI. These frameworks help us build web APIs very elegantly, without worrying about […]
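A minimal sketch of such a Streamlit app, using TextBlob as a stand-in sentiment model; the article's actual model may differ.

```python
# Streamlit front end for tweet sentiment analysis (TextBlob as a placeholder model).
import streamlit as st
from textblob import TextBlob

st.title("Tweet Sentiment Analysis")
tweet = st.text_area("Enter a tweet to analyse")

if st.button("Analyse"):
    # Polarity ranges from -1 (negative) to +1 (positive).
    polarity = TextBlob(tweet).sentiment.polarity
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    st.write(f"Sentiment: **{label}** (polarity = {polarity:.2f})")

# Run with: streamlit run app.py
```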

Read more

Implementation of Attention Mechanism for Caption Generation on Transformers using TensorFlow

Overview Learn about the state-of-the-art Transformer model; understand how to implement Transformers on the image captioning problem (seen previously) using TensorFlow; and compare the results of Transformers vs. attention models. Introduction We have seen that attention mechanisms (in the previous article) have become an integral part of compelling sequence modeling and transduction models in various tasks (such as image captioning), allowing the modeling of dependencies without regard to their distance in the input or output […]
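At the heart of the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k)V; a minimal TensorFlow sketch of that building block follows.

```python
# Scaled dot-product attention, the core Transformer operation, in TensorFlow.
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    matmul_qk = tf.matmul(q, k, transpose_b=True)     # (..., seq_q, seq_k)
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_logits = matmul_qk / tf.math.sqrt(dk)      # scale by sqrt(d_k)
    if mask is not None:
        scaled_logits += (mask * -1e9)                # block masked positions
    weights = tf.nn.softmax(scaled_logits, axis=-1)   # attention weights
    return tf.matmul(weights, v), weights
```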

Read more

ACL 2018 Highlights: Understanding Representations and Evaluation in More Challenging Settings

This post discusses highlights of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). It originally appeared on the AYLIEN blog. I attended ACL 2018 in Melbourne, Australia from July 15-20, 2018 and presented three papers. It is foolhardy to try to condense an entire conference into one topic; however, in retrospect, certain themes appear particularly pronounced. In 2015 and 2016, NLP conferences were dominated […]

Read more

Issue #113 – Optimising Transformer for Low-Resource Neural Machine Translation

14 Jan 2021 Issue #113 – Optimising Transformer for Low-Resource Neural Machine Translation Author: Dr. Jingyi Han, Machine Translation Scientist @ Iconic Introduction The lack of parallel training data has always been a big challenge when building neural machine translation (NMT) systems. Most approaches address the low-resource issue in NMT by exploiting more parallel or comparable corpora. Recently, several studies have shown that, instead of adding more data, optimising the NMT system itself can also improve translation quality for low-resource language […]
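As a hypothetical illustration of what "optimising the system" can mean in practice, the sketch below shrinks a Transformer-base configuration for a small dataset; the values are illustrative, not those of any particular study.

```python
# Illustrative hyperparameter comparison: standard Transformer-base vs. a
# reduced configuration one might try for a low-resource language pair.
transformer_base = dict(layers=6, heads=8, d_model=512, d_ff=2048, dropout=0.1)

low_resource = dict(
    layers=5,      # fewer encoder/decoder layers
    heads=2,       # fewer attention heads
    d_model=512,
    d_ff=1024,     # smaller feed-forward dimension
    dropout=0.3,   # stronger regularisation for small training sets
)
```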

Read more

Emotion Classification on Twitter Data Using Transformers

Introduction The world of natural language processing has recently been taken over by the invention of Transformers. Transformers are entirely different from conventional sequence-based networks. RNNs were the original workhorse for sequence-based tasks like text generation and text classification, and with the arrival of LSTM and GRU cells, the issue of capturing long-term dependencies in text was resolved. But training a model with LSTM cells is slow, as it cannot learn in parallel.
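A minimal sketch of Transformer-based emotion classification with the Hugging Face `pipeline` API; the model name is one publicly available emotion classifier and is an assumption, not necessarily the article's choice.

```python
# Emotion classification of tweets via a pretrained Transformer.
from transformers import pipeline

# Assumed model: a public DistilBERT fine-tuned for emotion classification.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
)

tweets = [
    "I can't believe we won the match!",
    "Stuck in traffic again, this day keeps getting worse.",
]
for tweet in tweets:
    print(tweet, "->", classifier(tweet))  # [{'label': ..., 'score': ...}]
```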

Read more