Efficient semidefinite-programming-based inference for binary and multi-class MRFs

Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e., computing the partition function or a MAP estimate of the variables, is a foundational problem in probabilistic graphical models. Semidefinite programming (SDP) relaxations have long been a theoretically powerful tool for analyzing properties of probabilistic inference, but they have not been practical owing to the high computational cost of typical solvers for the resulting SDPs… In this paper, we propose an efficient method for computing the partition function or MAP estimate […]
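
The classical relaxation behind this line of work can be written down in a few lines. The sketch below is not the paper's efficient solver: it is the standard SDP relaxation of MAP inference for a binary pairwise MRF with coupling matrix `J`, followed by Goemans-Williamson-style randomized rounding; the function name and setup are illustrative.

```python
# Minimal sketch: standard SDP relaxation of binary MAP inference, max_x x^T J x
# over x in {-1, +1}^n, solved with cvxpy and rounded with random hyperplanes.
import numpy as np
import cvxpy as cp

def map_sdp_relaxation(J, n_rounds=50):
    """J: symmetric (n, n) coupling matrix; returns an approximate MAP x in {-1, +1}^n."""
    n = J.shape[0]
    X = cp.Variable((n, n), PSD=True)                 # relaxes the rank-1 matrix x x^T
    prob = cp.Problem(cp.Maximize(cp.trace(J @ X)),
                      [cp.diag(X) == 1])              # encodes x_i^2 = 1
    prob.solve()
    # Factor X = V V^T and round random hyperplanes to binary assignments.
    w, U = np.linalg.eigh(X.value)
    V = U @ np.diag(np.sqrt(np.clip(w, 0, None)))
    best_x, best_val = None, -np.inf
    for _ in range(n_rounds):
        x = np.sign(V @ np.random.randn(n))
        x[x == 0] = 1
        val = x @ J @ x
        if val > best_val:
            best_x, best_val = x, val
    return best_x
```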

Read more

Hands-On Tutorial on Stack Overflow Question Tagging

This article was published as part of the Data Science Blogathon. Background: I won't be lying if I assert that every developer/engineer/student has used the website Stack Overflow more than once in their journey. Widely considered one of the largest and most trusted websites for developers to learn and share their knowledge, it presently hosts more than 10,000,000 questions. In this post, we try to predict question tags based on the question text asked on […]
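
As a rough illustration of the task, a simple multi-label baseline can be built from TF-IDF features and a one-vs-rest classifier. The snippet below is a generic sketch with placeholder questions and tags, not necessarily the tutorial's exact pipeline.

```python
# Minimal multi-label tag prediction sketch: TF-IDF text features,
# one binary logistic-regression classifier per tag.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

questions = ["How do I reverse a list in Python?",
             "Segmentation fault when freeing a pointer in C"]
tags = [["python", "list"], ["c", "pointers"]]          # placeholder data

vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
X = vectorizer.fit_transform(questions)

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                             # one binary column per tag

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)                                           # one classifier per tag

pred = clf.predict(vectorizer.transform(["How to sort a Python dict by value?"]))
print(mlb.inverse_transform(pred))                      # predicted tag set(s)
```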

Read more

NeurIPS 2020: Moving toward real-world reinforcement learning via batch RL, strategic exploration, and representation learning

As human beings, we encounter unfamiliar situations all the time—learning to drive, living on our own for the first time, starting a new job. And while we can anticipate what to expect based on what others have told us or what we’ve picked up from books and depictions in movies and TV, it isn’t until we’re behind the wheel of a car, maintaining an apartment, or doing a job in a workplace that we’re able to take advantage of one […]

Read more

Research Collection – Reinforcement Learning at Microsoft

Reinforcement learning is about agents taking information from the world and learning a policy for interacting with it, so that they perform better. So, you can imagine a future where, every time you type on the keyboard, the keyboard learns to understand you better. Or every time you interact with some website, it understands better what your preferences are, so the world just starts working better and better at interacting with people. —John Langford, Partner Research Manager, MSR NYC. Fundamentally, […]

Read more

Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D

Understanding spatial relations (e.g., “laptop on table”) in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground truth information, which is critical for learning spatial relations… In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D. Rel3D enables quantifying the effectiveness of 3D information in predicting spatial relations on large-scale human data. Moreover, we propose minimally contrastive data […]

Read more

Reconstructing cellular automata rules from observations at nonconsecutive times

Recent experiments by Springer and Kenyon have shown that a deep neural network can be trained to predict the action of $t$ steps of Conway’s Game of Life automaton given millions of examples of this action on random initial states. However, training was never completely successful for $t>1$, and even when successful, a reconstruction of the elementary rule ($t=1$) from $t>1$ data is not within the scope of what the neural network can deliver… We describe an alternative network-like method, […]
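
For reference, the elementary ($t=1$) rule in question is easy to state in code. The sketch below applies one synchronous Game of Life update on a toroidal grid; composing it $t$ times gives the $t$-step action the network is trained to predict.

```python
# One synchronous update of Conway's Game of Life with wrap-around boundaries.
import numpy as np

def life_step(grid):
    """grid: 2-D array of 0/1 cells; returns the grid after one time step."""
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A cell is alive next step if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)
```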

Read more

Self-Explaining Structures Improve NLP Models

Existing approaches to explaining deep learning models in NLP usually suffer from two major drawbacks: (1) the main model and the explaining model are decoupled: an additional probing or surrogate model is used to interpret an existing model, and thus existing explaining tools are not self-explainable; (2) the probing model can only explain a model's predictions by operating on low-level features, computing saliency scores for individual words, but is clumsy with high-level text units such as phrases, […]
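
To make drawback (2) concrete, the snippet below sketches a typical post-hoc, gradient-based saliency probe that scores individual words; `model` and the embedding tensor are placeholders, and this is the kind of decoupled, word-level explainer the paper argues against, not the paper's own method.

```python
# Word-level saliency via input gradients (a generic post-hoc probe).
import torch

def word_saliency(model, embeddings, label):
    """embeddings: (seq_len, dim) word embeddings; returns one score per word."""
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(embeddings.unsqueeze(0))      # placeholder model: (1, n_classes)
    logits[0, label].backward()
    # Saliency of each word = L2 norm of the gradient w.r.t. its embedding.
    return embeddings.grad.norm(dim=1)
```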

Read more

Neural Prototype Trees for Interpretable Fine-grained Image Recognition

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, instead of generating post-hoc explanations that approximate a trained model… However, a large number of prototypes can be overwhelming. To reduce explanation size and improve interpretability, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in an interpretable decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in […]
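
A rough sketch of the core idea, under the simplifying assumption that each internal tree node holds a learnable prototype and routes a sample according to its similarity to the closest patch of the CNN feature map; the layer sizes and similarity function here are illustrative, not the exact ProtoTree formulation.

```python
# One prototype-based routing node of a soft decision tree (simplified sketch).
import torch
import torch.nn as nn

class PrototypeNode(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        # A 1x1 prototype living in the backbone's feature space.
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))

    def forward(self, feature_map):
        """feature_map: (batch, channels, H, W) from a CNN backbone."""
        proto = self.prototype.flatten(1)                  # (1, channels)
        patches = feature_map.flatten(2).transpose(1, 2)   # (batch, H*W, channels)
        dists = ((patches - proto) ** 2).sum(dim=-1)       # (batch, H*W)
        similarity = torch.exp(-dists.min(dim=1).values)   # closest-patch similarity
        return similarity                                  # soft probability of routing right
```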

Read more

Attributes Aware Face Generation with Generative Adversarial Networks

Recent studies have shown remarkable success in face image generation. However, most existing methods only generate face images from random noise and cannot generate face images according to specific attributes… In this paper, we focus on the problem of face synthesis from attributes, which aims at generating faces with specific characteristics corresponding to the given attributes. To this end, we propose a novel attribute-aware face image generation method with generative adversarial networks, called AFGAN. Specifically, we […]
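
A common way to condition a generator on attributes is to concatenate the attribute vector with the noise vector before the first layer. The sketch below shows that generic pattern in PyTorch with illustrative layer sizes; it is not the AFGAN architecture itself.

```python
# Generic attribute-conditioned generator: noise and binary attributes are
# concatenated and mapped to an image through a small MLP.
import torch
import torch.nn as nn

class AttributeConditionedGenerator(nn.Module):
    def __init__(self, noise_dim=100, n_attrs=40, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_attrs, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Tanh(),   # pixel values in [-1, 1]
        )

    def forward(self, z, attrs):
        return self.net(torch.cat([z, attrs], dim=1))

g = AttributeConditionedGenerator()
z = torch.randn(8, 100)                            # random noise
attrs = torch.randint(0, 2, (8, 40)).float()       # e.g. "smiling", "eyeglasses", ...
fake_images = g(z, attrs)                          # (8, 64*64*3)
```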

Read more

Bengali Abstractive News Summarization (BANS): A Neural Attention Approach

Abstractive summarization is the process of generating novel sentences based on the information extracted from the original text document while retaining the context. Due to abstractive summarization’s underlying complexities, most past research has focused on the extractive summarization approach… Nevertheless, with the triumph of the sequence-to-sequence (seq2seq) model, abstractive summarization has become more viable. Although a significant amount of notable research on abstractive summarization has been done for the English language, only a couple of works […]
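
At the core of such seq2seq models is an attention step in which the decoder state is compared against all encoder states to form a context vector for each generated word. The sketch below shows a generic dot-product version with illustrative names; it is not necessarily the paper's exact attention variant.

```python
# Generic dot-product attention for a seq2seq decoder step.
import torch
import torch.nn.functional as F

def attention_context(decoder_state, encoder_states):
    """decoder_state: (batch, d); encoder_states: (batch, src_len, d)."""
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = F.softmax(scores, dim=1)                  # attention over source positions
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # (batch, d)
    return context, weights
```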

Read more