Sentiment Analysis: Predicting Sentiment Of COVID-19 Tweets

This article was published as a part of the Data Science Blogathon. Introduction: Hi folks, I hope you are doing well in these difficult times! We are all going through the unprecedented time of the coronavirus pandemic. Some people lost their lives, but many of us successfully fought off this new strain, i.e., COVID-19. The virus was declared a pandemic by the World Health Organization on 11 March 2020. This article will analyze various types of tweets gathered during the pandemic. […]

Read more

Machine Translation Weekly 69: One-Shot Learning in MT

This week I will discuss a paper about the one-shot vocabulary-learning abilities of machine translation systems. The title of the paper is Continuous Learning in Neural Machine Translation using Bilingual Dictionaries, and it will be presented at EACL in May this year. A very similar idea is also presented in the paper Facilitating Terminology Translation with Target Lemma Annotations, which will be presented at the same conference. One-shot learning is the ability to learn from a single example. In the context […]

Read more

Code Adam Optimization Algorithm From Scratch

Last Updated on February 21, 2021 Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A limitation of gradient descent is that a single step size (learning rate) is used for all input variables. Extensions to gradient descent like AdaGrad and RMSProp update the algorithm to use a separate step size for each input variable but may result in a step size that rapidly decreases […]
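As a preview of what the tutorial builds, here is a minimal sketch of a single Adam update in NumPy; the hyperparameter defaults (lr, beta1, beta2, eps) are the commonly used values from the original Adam paper, and the function name adam_step is our own, not necessarily the article's:

```python
import numpy as np

def adam_step(params, grads, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grads        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grads**2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1**t)                 # bias correction for the warm-up phase
    v_hat = v / (1 - beta2**t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step size
    return params, m, v
```

Note how the division by sqrt(v_hat) gives each parameter its own effective step size, which is what distinguishes Adam (like AdaGrad and RMSProp) from plain gradient descent.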

Read more

Simulated Annealing From Scratch in Python

Simulated Annealing is a stochastic global search optimization algorithm. This means that it makes use of randomness as part of the search process, which makes the algorithm appropriate for nonlinear objective functions where other local search algorithms do not operate well. Like the stochastic hill climbing local search algorithm, it modifies a single solution and searches the relatively local area of the search space until a local optimum is located. Unlike the hill climbing algorithm, it may accept worse solutions […]
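The core loop is compact; here is a minimal one-dimensional sketch of the idea (the Gaussian perturbation and the temp / (i + 1) cooling schedule are illustrative choices, not necessarily the article's exact implementation):

```python
import math
import random

def simulated_annealing(objective, x0, step_size, temp, n_iter):
    """Minimize objective from x0, occasionally accepting worse moves."""
    best = curr = x0
    best_eval = curr_eval = objective(x0)
    for i in range(n_iter):
        cand = curr + random.gauss(0, step_size)   # perturb the current solution
        cand_eval = objective(cand)
        if cand_eval < best_eval:                  # track the best solution seen so far
            best, best_eval = cand, cand_eval
        t = temp / float(i + 1)                    # cooling schedule: temperature decays over time
        # Metropolis criterion: always accept improvements;
        # accept worse candidates with probability exp(-delta / t)
        if cand_eval < curr_eval or random.random() < math.exp(-(cand_eval - curr_eval) / t):
            curr, curr_eval = cand, cand_eval
    return best, best_eval
```

For example, simulated_annealing(lambda x: x**2, x0=5.0, step_size=0.5, temp=10.0, n_iter=1000) should return a solution near 0.0.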

Read more

A Review of the Neural History of Natural Language Processing

This post discusses major recent advances in NLP, focusing on neural-network-based methods. It originally appeared on the AYLIEN blog and is the first post in a two-part series. The series expands on the Frontiers of Natural Language Processing session organized by Herman Kamper and me at the Deep Learning Indaba 2018. Slides of the entire session can be found here. The second post discusses […]

Read more

Issue #118 – EDITOR: a Repositioning Transformer with Soft Lexical Constraints

18 Feb 2021. Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic. Paper: EDITOR: an Edit-Based Transformer with Repositioning for Neural MT with Soft Lexical Constraints. On our blog a couple of weeks ago (issue #116), Patrik explored fully non-autoregressive machine translation, highlighting tricks such as dependency reduction that enabled quality to be maintained while retaining the speed-up gains over autoregressive MT. Today we revisit non-autoregressive translation (NAT), examining […]

Read more

No Free Lunch Theorem for Machine Learning

The No Free Lunch Theorem is often invoked in the fields of optimization and machine learning, frequently with little understanding of what it means or implies. The theorem states that all optimization algorithms perform equally well when their performance is averaged across all possible problems. It implies that there is no single best optimization algorithm. Because of the close relationship between optimization, search, and machine learning, it also implies that there is no single best machine learning algorithm for […]
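For reference, the formal statement from Wolpert and Macready (1997) can be written as follows; here f ranges over all possible objective functions, d_m^y is the sequence of m observed cost values, and a_1 and a_2 are any two algorithms (this formulation comes from the original paper, not from the article above):

```latex
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right) = \sum_{f} P\!\left(d_m^y \mid f, m, a_2\right)
```

In words: summed over every possible problem, any two algorithms have an identical probability of producing any given sequence of observed cost values.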

Read more

Dialogue Summarization: A Deep Learning Approach

This article was published as a part of the Data Science Blogathon. Dialogue Summarization: its types and methodology. [Image credit: Aseem Srivastava] Summarizing long pieces of text is a challenging problem. Summarization is done primarily in two ways: the extractive approach and the abstractive approach. In this work, we break the problem of meeting summarization down into extractive and abstractive components, which together generate a summary of the conversation. What is Dialogue Summarization? Humans are social animals; we exchange […]

Read more

Python: Get Number of Days Between Dates

Introduction: In this tutorial, we'll take a look at how to get the number of days between two dates in Python. We'll be using the built-in datetime module, which makes it easy to work with datetime objects in Python. Creating a datetime object: since datetime is a built-in module, you can access it right away by importing it at the top of your Python file. You can construct datetime objects in a few different ways: from datetime import datetime […]
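The key fact the tutorial relies on is that subtracting two date or datetime objects yields a timedelta, whose days attribute holds the difference in days; a minimal sketch (the example dates here are our own, not the tutorial's):

```python
from datetime import date

# Subtracting two date objects yields a timedelta
start = date(2020, 3, 11)   # example date: WHO pandemic declaration
end = date(2021, 2, 21)     # example date
delta = end - start
print(delta.days)  # 347
```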

Read more