Logistic Regression for Image Classification Using OpenCV

In a previous tutorial, we explored logistic regression as a simple but popular machine learning algorithm for binary classification, implemented in the OpenCV library. So far, we have seen how logistic regression may be applied to a custom two-class dataset that we generated ourselves. In this tutorial, you will learn how the standard logistic regression algorithm, inherently designed for binary classification, can be modified to cater to multi-class classification problems by applying it to an image classification task. After completing […]
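The usual way to stretch a binary classifier across several classes is the one-vs-rest scheme: train one binary model per class, then predict the class whose model is most confident. A minimal pure-Python sketch of that idea (the toy dataset, learning rate, and epoch count are made up for illustration; this is not the tutorial's OpenCV code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_binary(X, y, lr=0.5, epochs=200):
    """Plain stochastic gradient descent for binary logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def train_one_vs_rest(X, y, n_classes):
    """One binary classifier per class: class k vs. everything else."""
    return [train_binary(X, [1.0 if yi == k else 0.0 for yi in y])
            for k in range(n_classes)]

def predict(models, x):
    """Pick the class whose binary classifier is most confident."""
    scores = [sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
              for w, b in models]
    return scores.index(max(scores))

# Made-up 3-class data: one small feature cluster per class.
X = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0], [0.0, 5.0], [0.1, 5.1]]
y = [0, 0, 1, 1, 2, 2]
models = train_one_vs_rest(X, y, n_classes=3)
```

OpenCV's own LogisticRegression class handles the multi-class bookkeeping internally; the sketch above only shows the scheme it rests on.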

Read more

Highlights from Machine Translation and Multilinguality in October 2023

Here is my monthly summary of the papers on multilinguality and machine translation that I found most noteworthy during October 2023. There were 2,881 preprints in the computation and language category on arXiv (a new record), so chances are high that I missed some preprints I would have liked to read.

Navigating Cultural Chasms: Exploring and Unlocking the Cultural POV of Text-To-Image Models

A preprint from the Israeli Technion, Google Research, and Cambridge University studies cultural awareness […]

Read more

Highlights from Machine Translation and Multilinguality in November 2023

Here are a couple of articles that caught my attention in November.

Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles

A team from Johns Hopkins University published a preprint that belongs to the currently trendy genre: stuff we can do with LLMs. This time, it is about how to use them efficiently for domain-specific machine translation. It is known that few-shot prompting works much better than zero-shot prompting, but you need to select proper parallel examples. […]

Read more

Q-learning for beginners

The goal of this article is to teach an AI how to solve the ❄️Frozen Lake environment using reinforcement learning. We’re going to start from scratch and try to recreate the Q-learning algorithm by ourselves. We’ll not just understand how it works, but more importantly, why it was designed that way. By the end of this article, you’ll master the Q-learning algorithm and be able to apply it to other environments. It’s a cool mini-project that gives a better insight […]
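The core of the algorithm the article rebuilds is one line: nudge Q(s, a) toward the observed reward plus the discounted value of the best next action. A dependency-free sketch, where a hypothetical 5-cell corridor stands in for the Frozen Lake environment and all hyperparameters are made-up values:

```python
import random

random.seed(0)

# Made-up stand-in for Frozen Lake: a 5-cell corridor.
# States 0..4, actions 0 = left, 1 = right; reaching state 4 pays reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # one value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                       # training episodes
    state, done = 0, False
    for _ in range(200):                   # step cap per episode
        # Epsilon-greedy: explore sometimes, and break ties randomly.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(2)
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # The Q-learning temporal-difference update.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
        if done:
            break

greedy_policy = [row.index(max(row)) for row in Q[:GOAL]]
```

After training, the greedy policy walks straight toward the goal; swapping in the real Frozen Lake environment mainly means replacing the toy step function.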

Read more

Introduction to Linear Programming in Python

Linear programming is a technique to optimize any problem with multiple variables and constraints. It’s a simple but powerful tool every data scientist should master. Imagine you are a strategist recruiting an army. You have three resources (🌾food, 🪵wood, and 🪙gold) and three units (🗡️swordsmen, 🏹bowmen, and 🐎horsemen). Horsemen are stronger than bowmen, who are in turn stronger than swordsmen. The following table provides the cost and power of each unit:

Unit           🌾Food   🪵Wood   🪙Gold   Power
🗡️Swordsman       60       20        0      70
🏹Bowman          80       10       40      95
[…]
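The recruiting problem can be sketched as a linear program with scipy.optimize.linprog. The costs and power values come from the table above; the resource budgets (1000 food, 400 wood, 400 gold) are made-up numbers, and horsemen are left out because their stats fall outside the excerpt:

```python
from scipy.optimize import linprog

# Variables: x[0] = swordsmen, x[1] = bowmen.
c = [-70, -95]            # linprog minimizes, so negate power to maximize it
A_ub = [[60, 80],         # 🌾 food cost per unit
        [20, 10],         # 🪵 wood cost per unit
        [0, 40]]          # 🪙 gold cost per unit
b_ub = [1000, 400, 400]   # hypothetical resource budgets
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
best_power = -res.fun
```

Note that the optimal solution here recruits a fractional number of swordsmen; plain linear programming does not know soldiers come in whole units, which is exactly what the integer-programming post below picks up.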

Read more

Integer vs. Linear Programming in Python

Why is linear programming called that? Both terms are confusing: “linear” implies that nonlinear programming exists, and “programming” actually means “planning” in this context. In short, it has nothing to do with code, linear or not: it’s about optimizing variables under various constraints. In this article, we’re going to talk about another type of optimization: integer programming. We’ll see why a good understanding of the problem we face is necessary to choose the right solver. Finally, we will write a model […]
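To see what “integer” adds, here is a toy sketch reusing the army-recruiting example from the linear-programming post, with the same made-up budgets. Exhaustive enumeration stands in for a real solver (which would use branch-and-bound); it only works on tiny instances, but it makes the point:

```python
from itertools import product

POWER = (70, 95)                       # swordsman, bowman
COSTS = ((60, 80), (20, 10), (0, 40))  # food, wood, gold per unit
BUDGETS = (1000, 400, 400)             # made-up resource budgets

def feasible(units):
    return all(sum(c * u for c, u in zip(row, units)) <= b
               for row, b in zip(COSTS, BUDGETS))

# Enumerate every whole-number army small enough to afford.
best = max((u for u in product(range(20), repeat=2) if feasible(u)),
           key=lambda u: sum(p * n for p, n in zip(POWER, u)))
best_power = sum(p * n for p, n in zip(POWER, best))
```

The continuous optimum recruits about 3.33 swordsmen and 10 bowmen; rounding that down gives power 1160, while the true integer optimum, 6 swordsmen and 8 bowmen, reaches 1180. Rounding an LP solution is not enough in general, which is why integer programming needs its own solvers.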

Read more

Graph Attention Networks: Self-Attention for GNNs

Graph Attention Networks (GATs) are one of the most popular types of Graph Neural Networks. Instead of calculating static weights based on node degrees like Graph Convolutional Networks (GCNs), they assign dynamic weights to node features through a process called self-attention. The main idea behind GATs is that some neighbors are more important than others, regardless of their node degrees.

(Figure: node 4 is more important than node 3, which is more important than node 2.)

In this article, we will […]
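A single attention head of this kind can be sketched in a few lines of numpy. The graph, feature dimensions, and weights below are made-up random values; the 0.2 negative slope for LeakyReLU follows the original GAT paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: node 0 is connected to nodes 1-4; self-loops included.
adj = np.array([[1, 1, 1, 1, 1],
                [1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [1, 0, 0, 0, 1]])
H = rng.normal(size=(5, 3))   # input node features
W = rng.normal(size=(3, 2))   # shared linear projection
a = rng.normal(size=(4,))     # attention vector over [Wh_i || Wh_j]

Z = H @ W
# Raw attention logits: e_ij = LeakyReLU(a^T [Wh_i || Wh_j]).
logits = np.array([[np.concatenate([Z[i], Z[j]]) @ a for j in range(5)]
                   for i in range(5)])
logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
# Mask non-neighbors, then softmax row-wise so each node's attention
# over its neighborhood sums to 1: these are the dynamic weights.
logits = np.where(adj == 1, logits, -np.inf)
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)
H_out = alpha @ Z             # attention-weighted feature aggregation
```

Because the weights come from the features rather than from node degrees, two neighbors with the same degree can still receive very different attention, which is the contrast with GCNs drawn above.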