Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation

Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired… Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain […]
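To make the parity notion concrete, here is a minimal sketch (not the paper's method; variable names and the toy data are illustrative assumptions) that computes the statistical parity difference of a classifier's predictions across two groups:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(prediction = 1 | group A)
    rate_b = y_pred[group == 1].mean()  # P(prediction = 1 | group B)
    return abs(rate_a - rate_b)

# toy example: binary predictions and binary group membership
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))  # 0.5
```

A value of 0 would mean the positive-prediction rate is identical for both groups, i.e. perfect demographic parity for this metric.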

Read more

DENet: a deep architecture for audio surveillance applications

In recent years, both the scientific community and the market have shown great interest in the design of audio surveillance systems able to analyse an audio stream and identify events of interest; this is particularly true in security applications, where audio analytics can be profitably used as an alternative to video analytics systems, but also in combination with them. Within this context, in this paper we propose a novel recurrent convolutional neural network architecture, […]
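For readers unfamiliar with the recurrent-convolutional combination, here is a generic CRNN sketch for audio event classification; this is not the DENet architecture itself, and all layer sizes and the input shape are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SimpleCRNN(nn.Module):
    """Generic CNN front-end + GRU back-end over log-mel spectrogram frames."""
    def __init__(self, n_mels=64, n_classes=10, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                 # pool over frequency only
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(64 * (n_mels // 4), hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, 1, n_mels, time)
        h = self.conv(x)                          # (batch, 64, n_mels // 4, time)
        h = h.permute(0, 3, 1, 2).flatten(2)      # (batch, time, features)
        out, _ = self.gru(h)
        return self.fc(out[:, -1])                # classify from the last time step

logits = SimpleCRNN()(torch.randn(2, 1, 64, 100))  # two 100-frame clips
print(logits.shape)                                # torch.Size([2, 10])
```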

Read more

Hugging Face – Issue 5 – Dec 21st 2020

News Hugging Face Datasets Sprint 2020 This December, we had our largest community event ever: the Hugging Face Datasets Sprint 2020. It all started as an internal project gathering about 15 employees to spend a week working together to add datasets to the Hugging Face Datasets Hub backing the 🤗 datasets library. The library provides two main features for working with datasets: One-line dataloaders for many public datasets: with a simple command like
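For context, the one-line dataloader pattern the newsletter refers to looks like this with the 🤗 datasets library (the dataset name here is only an example, not the one the newsletter had in mind):

```python
from datasets import load_dataset

# download and cache a public dataset from the Hub in one line
dataset = load_dataset("squad")
print(dataset["train"][0])  # inspect the first training example
```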

Read more

Hugging Face – Special Edition – New Plans, Private Models and AutoNLP – Dec 23rd 2020

News Ho Ho Ho! Welcome to a special edition of the Hugging Face newsletter focused on new and upcoming commercial products. 👩‍🔬 Introducing Supporter plans for individuals, with private models 👩‍🔬 Hugging Face is built for, and by, the NLP community. We share our commitment to democratize NLP with hundreds of open source contributors, and model contributors all around

Read more

Python: Update All Packages With pip-review

Introduction Updating Python packages can be a hassle. There are many of them – it’s hard to keep track of all the newest versions, and even when you decide what to update, you still have to update each of them manually. To address this issue, pip-review was created. It lets you smoothly manage all available PyPI updates with simple commands. Originally a part of the pip-tools package, it now lives on as a standalone convenience wrapper around pip. In this […]
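A minimal sketch of the typical workflow (assuming pip-review has been installed in the active environment):

```
pip install pip-review      # install the wrapper
pip-review                  # list packages with available updates
pip-review --auto           # upgrade everything without prompting
pip-review --interactive    # confirm each upgrade individually
```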

Read more

Machine Translation Weekly 64: Non-autoregressive Models Strike Back

Half a year ago I featured here (MT Weekly 45) a paper that questions the contribution of non-autoregressive models to computational efficiency. It showed that a model with a deep encoder (that can be parallelized) and a shallow decoder (that works sequentially) reaches the same speed with much better translation quality than NAR models. A pre-print by Facebook AI and CMU published on New Year’s Eve, Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade, presents a new fully non-autoregressive […]

Read more

Univariate Function Optimization in Python

How to Optimize a Function with One Variable? Univariate function optimization involves finding the input that results in the optimal output of an objective function of a single variable. This is a common procedure in machine learning when fitting a model with one parameter or tuning a model that has a single hyperparameter. An efficient algorithm is required to solve optimization problems of this type that will find the best solution with the minimum number of evaluations of the objective function, […]
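A minimal sketch of univariate optimization using SciPy's bounded scalar minimizer (the quadratic objective and bounds are illustrative assumptions, not from the article):

```python
from scipy.optimize import minimize_scalar

def objective(x):
    return (x - 2.0) ** 2 + 1.0  # single-variable objective, minimum at x = 2

result = minimize_scalar(objective, bounds=(-5.0, 5.0), method="bounded")
print(result.x, result.fun)      # ~2.0, ~1.0
```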

Read more

GraphHop: An Enhanced Label Propagation Method for Node Classification

A scalable semi-supervised node classification method on graph-structured data, called GraphHop, is proposed in this work. The graph contains attributes of all nodes but labels of a few nodes… The classical label propagation (LP) method and the emerging graph convolutional network (GCN) are two popular semi-supervised solutions to this problem. The LP method is not effective in modeling node attributes and labels jointly and suffers from a slow convergence rate on large-scale graphs. GraphHop is proposed to address these shortcomings. With proper […]
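For reference, the classical LP baseline mentioned above can be sketched as an iterative spread of label distributions over a row-normalized adjacency matrix; this is a generic implementation, not GraphHop, and the toy graph is an assumption:

```python
import numpy as np

def label_propagation(adj, labels, n_classes, n_iter=50):
    """Classical LP: repeatedly average neighbors' label distributions.
    adj: (n, n) adjacency matrix; labels: length-n array, -1 for unlabeled."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1e-12)                 # row-normalized transition matrix
    F = np.full((n, n_classes), 1.0 / n_classes)     # uniform start for unlabeled nodes
    labeled = labels >= 0
    F[labeled] = np.eye(n_classes)[labels[labeled]]  # one-hot for labeled nodes
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = np.eye(n_classes)[labels[labeled]]  # clamp known labels
    return F.argmax(axis=1)

# toy chain graph: nodes 0 and 3 are labeled, nodes 1 and 2 are not
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(label_propagation(adj, np.array([0, -1, -1, 1]), n_classes=2))  # [0 0 1 1]
```

Note that this plain LP variant ignores node attributes entirely, which is exactly the shortcoming the excerpt attributes to it.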

Read more

Who’s a Good Boy? Reinforcing Canine Behavior using Machine Learning in Real-Time

In this paper we outline the development methodology for an automatic dog treat dispenser which combines machine learning and embedded hardware to identify and reward dog behaviors in real-time. Using machine learning techniques to train an image classification model, we identify three behaviors of our canine companions: “sit”, “stand”, and “lie down” with up to 92% test accuracy and 39 frames per second… We evaluate a variety of neural network architectures, interpretability methods, model quantization and optimization techniques to develop […]
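As a rough illustration of the real-time classification loop described above (not the authors' pipeline; the model choice, placeholder frames, and class ordering are assumptions):

```python
import time
import torch
from torchvision.models import mobilenet_v2

CLASSES = ["sit", "stand", "lie down"]        # the three behaviors from the paper
model = mobilenet_v2(num_classes=3).eval()    # untrained stand-in for the classifier

frames = torch.randn(100, 3, 224, 224)        # placeholder for 100 camera frames
start = time.time()
with torch.no_grad():
    for frame in frames:
        logits = model(frame.unsqueeze(0))
        behavior = CLASSES[logits.argmax(dim=1).item()]
fps = len(frames) / (time.time() - start)
print(f"last prediction: {behavior}, throughput: {fps:.1f} FPS")
```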

Read more

Few-Shot Learning with Class Imbalance

Few-shot learning aims to train models on a limited number of labeled samples given in a support set in order to generalize to unseen samples from a query set. In the standard setup, the support set contains an equal number of data points for each class… However, this assumption overlooks many practical considerations arising from the dynamic nature of the real world, such as class imbalance. In this paper, we present a detailed study of few-shot class imbalance along three axes: meta-dataset […]
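A minimal sketch of the episode-sampling setup described above, contrasting the standard balanced support set with a class-imbalanced one (the class count, shot counts, and toy labels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_support(labels, shots_per_class):
    """Build a support-set index list with a given number of shots per class."""
    idx = []
    for cls, k in shots_per_class.items():
        pool = np.flatnonzero(labels == cls)
        idx.extend(rng.choice(pool, size=k, replace=False))
    return np.array(idx)

labels = rng.integers(0, 3, size=300)                      # toy dataset with 3 classes

balanced   = sample_support(labels, {0: 5, 1: 5, 2: 5})    # standard 3-way 5-shot
imbalanced = sample_support(labels, {0: 1, 1: 5, 2: 9})    # same total, skewed per class
print(len(balanced), len(imbalanced))                      # 15 15
```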

Read more