An NLP Approach to Mining Online Reviews using Topic Modeling (with Python codes)

Introduction E-commerce has revolutionized the way we shop. That phone you’ve been saving up to buy for months? It’s just a search and a few clicks away. Items are delivered within a matter of days (sometimes even the next day!). For online retailers, there are no constraints related to inventory or space management. They can sell as many different products as they want. Brick-and-mortar stores can keep only a limited number of products due to the finite space […]
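
For a taste of the technique the full article walks through, here is a minimal LDA topic-modeling sketch using gensim; the toy reviews and hyperparameters are assumptions for illustration, not necessarily the article’s exact code.

```python
# A minimal, illustrative LDA topic-modeling sketch with gensim.
# The toy reviews and hyperparameters are assumptions for demonstration.
from gensim import corpora, models

reviews = [
    "battery life of this phone is great",
    "camera quality is poor but battery is fine",
    "fast delivery and great packaging",
]
tokenized = [r.lower().split() for r in reviews]

dictionary = corpora.Dictionary(tokenized)               # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in tokenized]  # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic in lda.print_topics():
    print(topic)
```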

Read more

Introduction to StanfordNLP: An Incredible State-of-the-Art NLP Library for 53 Languages (with Python code)

Introduction A common challenge I came across while learning Natural Language Processing (NLP) was: can we build models for non-English languages? The answer has been no for quite a long time. Each language has its own grammatical patterns and linguistic nuances, and there just aren’t many datasets available in other languages. That’s where Stanford’s latest NLP library steps in – StanfordNLP. I could barely contain my excitement when I read the news last week. The authors claimed StanfordNLP could support more […]
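
For a flavour of the library, here is a minimal StanfordNLP pipeline sketch; the example sentence is illustrative, and this is not necessarily the article’s exact code.

```python
# A minimal sketch of a StanfordNLP pipeline (illustrative, not the
# article's exact code).
# Run once to fetch the English models: stanfordnlp.download('en')
import stanfordnlp

nlp = stanfordnlp.Pipeline(lang='en')  # tokenization, POS, lemma, dep parse
doc = nlp("StanfordNLP brings state-of-the-art models to 53 languages.")

for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.lemma, word.upos)
```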

Read more

Building a Recommendation System using Word2vec: A Unique Tutorial with Case Study in Python

Overview Recommendation engines are ubiquitous nowadays and data scientists are expected to know how to build one. Word2vec is an ultra-popular word embedding technique used for a variety of NLP tasks. We will use word2vec to build our own recommendation system. Curious how NLP and recommendation engines combine? Let’s find out! Introduction Be honest – how many times have you used the ‘Recommended for you’ section on Amazon? Ever since I found out a few years back that machine […]
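
A minimal sketch of the core idea: treat each customer’s purchase history as a “sentence” of product IDs and train word2vec on it. The product IDs and hyperparameters below are illustrative assumptions, not the article’s dataset.

```python
# Treat purchase histories as sentences of product IDs and train word2vec,
# so products bought in similar contexts get similar vectors.
from gensim.models import Word2Vec

purchase_histories = [
    ["p1", "p7", "p3"],
    ["p7", "p3", "p9"],
    ["p1", "p9", "p4"],
]

# gensim >= 4 uses vector_size; older versions call this parameter `size`
model = Word2Vec(purchase_histories, vector_size=32, window=3, min_count=1)

# Nearest neighbours in embedding space double as recommendations.
print(model.wv.most_similar("p7"))
```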

Read more

A Comprehensive Guide to Attention Mechanism in Deep Learning for Everyone

Overview The attention mechanism has changed the way we work with deep learning algorithms. Fields like Natural Language Processing (NLP) and even Computer Vision have been revolutionized by the attention mechanism. We will learn how this attention mechanism works in deep learning, and even implement it in Python. Introduction “Every once in a while, a revolutionary product comes along that changes everything.” – Steve Jobs What does one of the most famous quotes of the 21st century have to do with […]
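
As a compact preview, here is one common formulation of the mechanism, scaled dot-product attention, in plain NumPy; the toy shapes are assumptions for illustration.

```python
# A minimal NumPy sketch of scaled dot-product attention:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = softmax(scores)        # attention weights sum to 1 per query
    return weights @ V               # weighted sum of values

Q = np.random.randn(2, 4)  # 2 queries of dimension 4 (toy shapes)
K = np.random.randn(3, 4)  # 3 keys
V = np.random.randn(3, 4)  # 3 values
print(attention(Q, K, V).shape)  # (2, 4)
```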

Read more

An Essential Guide to Pretrained Word Embeddings for NLP Practitioners

Overview Understand the importance of pretrained word embeddings. Learn about the two popular types of pretrained word embeddings – Word2Vec and GloVe. Compare the performance of pretrained word embeddings and embeddings learned from scratch. Introduction How do we make machines understand text data? We know that machines are supremely adept at working with numerical data, but they become sputtering instruments if we feed them raw text. The idea is to create a representation of words […]
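
As a quick illustration of using pretrained embeddings, here is a minimal sketch with gensim’s downloader; the model name is one of the pretrained sets gensim hosts, and the word choices are illustrative.

```python
# Load pretrained GloVe vectors via gensim's downloader (fetches ~100 MB
# on first use) and query them like a dictionary of vectors.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors

print(glove["king"].shape)                   # (100,)
print(glove.most_similar("king", topn=3))    # semantically close words
```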

Read more

How Do Part-of-Speech Tagging, Dependency Parsing, and Constituency Parsing Aid in Understanding Text Data?

Overview Learn about Part-of-Speech (POS) tagging, and understand dependency parsing and constituency parsing. Introduction “Knowledge of languages is the doorway to wisdom.” – Roger Bacon I was amazed that Roger Bacon gave the above quote in the 13th century, and it still holds true, doesn’t it? I am sure that […]
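
For a quick taste of POS tagging and dependency parsing, here is a minimal spaCy sketch (one possible tool, not necessarily the article’s choice); constituency parsing would need a different library such as NLTK or benepar.

```python
# A minimal spaCy sketch of POS tagging and dependency parsing.
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Knowledge of languages is the doorway to wisdom.")

for token in doc:
    # token.pos_ -> part-of-speech tag
    # token.dep_ -> dependency relation to the token's head
    print(token.text, token.pos_, token.dep_, token.head.text)
```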

Read more

Simple Text Multi Classification Task Using Keras BERT

This article was published as a part of the Data Science Blogathon. Introduction BERT is a really powerful language representation model that has been a big milestone in the field of NLP. It has greatly increased our capacity to do transfer learning in NLP, and it comes with great promise to solve a wide variety of NLP tasks. You will definitely gain useful knowledge by the end of this article, so keep reading. I am sure you will get good hands-on experience […]
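
As a hedged sketch of the task, multi-class BERT fine-tuning in Keras can look like this, using the Hugging Face transformers TF interface as one possible route; the article’s own BERT wrapper may differ, and the texts and labels below are toy assumptions.

```python
# Minimal multi-class text classification with BERT via transformers + Keras.
# Texts, labels, and hyperparameters are toy illustrations.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

texts = ["great product", "terrible support", "average experience"]
labels = [2, 0, 1]  # three illustrative classes

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)
model.compile(
    optimizer=tf.keras.optimizers.Adam(2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dict(enc), tf.constant(labels), epochs=1)
```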

Read more

Suppressing Mislabeled Data via Grouping and Self-Attention

Deep networks achieve excellent results on large-scale clean data but degrade significantly when learning from noisy labels. To suppress the impact of mislabeled data, this paper proposes a conceptually simple yet efficient training block, termed Attentive Feature Mixup (AFM), which pays more attention to clean samples and less to mislabeled ones via sample interactions in small groups… Specifically, this plug-and-play AFM first leverages a group-to-attend module to construct groups and assign attention weights to group-wise samples, and then […]
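
As a conceptual illustration of attention-weighted mixup (a sketch of the idea only, not the paper’s actual Group-to-Attend module), a pair of samples might be mixed like this, with hypothetical fixed scores standing in for learned attention:

```python
# Conceptual sketch: mix a pair of samples with attention weights so a
# suspect (possibly mislabeled) sample contributes less. Illustrative only.
import torch
import torch.nn.functional as F

def attentive_mixup(x1, x2, y1, y2, score1, score2):
    # Normalize the pair's attention scores so the mixing weights sum to 1.
    w = F.softmax(torch.stack([score1, score2]), dim=0)
    x_mix = w[0] * x1 + w[1] * x2  # interpolate the features
    y_mix = w[0] * y1 + w[1] * y2  # interpolate the (one-hot) labels
    return x_mix, y_mix

x1, x2 = torch.randn(8), torch.randn(8)  # toy feature vectors
y1 = torch.tensor([1.0, 0.0])            # assumed clean sample
y2 = torch.tensor([0.0, 1.0])            # possibly mislabeled sample
x_mix, y_mix = attentive_mixup(x1, x2, y1, y2,
                               torch.tensor(2.0),   # high score: trusted
                               torch.tensor(-1.0))  # low score: down-weighted
print(y_mix)  # dominated by the trusted sample's label
```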

Read more

Low-Variance Policy Gradient Estimation with World Models

In this paper, we propose World Model Policy Gradient (WMPG), an approach that reduces the variance of policy gradient estimates using learned world models (WMs). In WMPG, a WM is trained online and used to imagine trajectories… The imagined trajectories are used in two ways: first, to calculate a without-replacement estimator of the policy gradient; second, the return of the imagined trajectories is used as an informed baseline. We compare the proposed approach with AC and MAC on a set […]
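
As a conceptual illustration of the informed-baseline idea only (a sketch under assumed toy inputs, not the paper’s method), the mean return of imagined rollouts can be subtracted from the observed return before weighting the log-probability gradient:

```python
# Conceptual sketch: use the mean return of world-model rollouts as a
# baseline to reduce the variance of the policy gradient. Illustrative only.
import numpy as np

def policy_gradient_with_baseline(log_prob_grads, real_return, imagined_returns):
    baseline = np.mean(imagined_returns)  # baseline from imagined trajectories
    advantage = real_return - baseline    # lower-variance learning signal
    return advantage * np.mean(log_prob_grads, axis=0)

grads = np.random.randn(5, 3)  # toy per-step gradients of log pi(a|s)
print(policy_gradient_with_baseline(grads, 10.0, [8.0, 9.5, 11.0]))
```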

Read more

Teaching a GAN What Not to Learn

Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) project GANs onto supervised and semi-supervised learning frameworks by providing labelled data and using multi-class discriminators… In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi, who said, “The art of knowing is knowing what to ignore.” […]
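
For background on the label conditioning the abstract mentions, here is a minimal Keras sketch of a conditional GAN generator; the layer sizes, class count, and output shape are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of label conditioning in a conditional GAN generator:
# embed the class label and concatenate it with the noise vector, so the
# generator's output depends on the label. Sizes are toy assumptions.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10  # assumed number of labels
LATENT_DIM = 64   # assumed noise dimensionality

noise = layers.Input(shape=(LATENT_DIM,))
label = layers.Input(shape=(1,), dtype="int32")

label_emb = layers.Flatten()(layers.Embedding(NUM_CLASSES, LATENT_DIM)(label))
x = layers.Concatenate()([noise, label_emb])
x = layers.Dense(128, activation="relu")(x)
fake_image = layers.Dense(28 * 28, activation="tanh")(x)  # e.g. flat MNIST

generator = tf.keras.Model([noise, label], fake_image)
```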

Read more