Combining Event Semantics and Degree Semantics for Natural Language Inference

In formal semantics, there are two well-developed semantic frameworks: event semantics, which treats verbs and adverbial modifiers using the notion of event, and degree semantics, which analyzes adjectives and comparatives using the notion of degree. However, it is not obvious whether these frameworks can be combined to handle cases in which the phenomena in question are interacting with each other… Here, we study this issue by focusing on natural language inference (NLI). We implement a logic-based NLI system that combines […]
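For readers unfamiliar with the two frameworks, here is a minimal textbook-style illustration of the two notions (not necessarily the paper's exact formalization):

```latex
% "John ran quickly": neo-Davidsonian event semantics quantifies over an event e.
\[
  \exists e\,\bigl(\mathrm{run}(e) \wedge \mathrm{agent}(e, \mathrm{john}) \wedge \mathrm{quick}(e)\bigr)
\]

% "Ann is taller than Bob": degree semantics compares the maximal degrees
% to which each individual is tall.
\[
  \max\{d \mid \mathrm{tall}(\mathrm{ann}, d)\} \;>\; \max\{d \mid \mathrm{tall}(\mathrm{bob}, d)\}
\]
```

Inference then operates on such representations; for instance, the event formula for "John ran quickly" logically entails the one for "John ran".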

Read more

The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks

Contextual embeddings derived from transformer-based neural language models have shown state-of-the-art performance for various tasks such as question answering, sentiment analysis, and textual similarity in recent years. Extensive work shows how accurately such models can represent abstract, semantic information present in text… In this expository work, we explore a tangent direction and analyze such models’ performance on tasks that require a more granular level of representation. We focus on the problem of textual similarity from two perspectives: matching documents on […]
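As a rough sketch of the kind of similarity pipeline such models are typically used in, assuming the sentence-transformers package; the model name and texts below are illustrative choices, not the setup evaluated in the paper:

```python
# Minimal sketch: encode documents with a pretrained transformer encoder and
# compare them with cosine similarity. Assumes sentence-transformers is installed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

docs = [
    "The quarterly report shows revenue grew by 12 percent.",
    "Revenue increased twelve percent according to the quarterly report.",
    "The cat sat on the mat.",
]

# Encode each document into a fixed-size contextual embedding.
embeddings = model.encode(docs, convert_to_tensor=True)

# Pairwise cosine similarity; semantically close documents score higher.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```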

Read more

Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation

We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST)… Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures […]
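A minimal PyTorch sketch of the general dual-decoder idea, assuming a shared encoder and an extra attention module through which the ST decoder attends to the ASR decoder's hidden states; module names and dimensions are illustrative, not the authors' exact architecture:

```python
# Two decoders over one encoder; the ST branch additionally attends to the
# ASR branch's hidden states ("dual attention") and fuses them residually.
import torch
import torch.nn as nn

d_model, nhead = 256, 4

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers=2
)
asr_decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers=2
)
st_decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers=2
)
# Dual attention: ST hidden states (queries) attend to ASR hidden states.
dual_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

speech = torch.randn(2, 100, d_model)   # (batch, frames, features)
asr_tgt = torch.randn(2, 20, d_model)   # embedded ASR target prefix
st_tgt = torch.randn(2, 25, d_model)    # embedded ST target prefix

memory = encoder(speech)
asr_hidden = asr_decoder(asr_tgt, memory)     # transcription branch
st_hidden = st_decoder(st_tgt, memory)        # translation branch
fused, _ = dual_attn(st_hidden, asr_hidden, asr_hidden)
st_hidden = st_hidden + fused                 # residual fusion of the two branches
print(asr_hidden.shape, st_hidden.shape)
```

A real model would add token embeddings, output projections, and causal masks; the point here is only the extra attention path between the two decoders.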

Read more

MARNet: Multi-Abstraction Refinement Network for 3D Point Cloud Analysis

Representation learning from 3D point clouds is challenging due to their inherent permutation invariance and irregular distribution in space. Existing deep learning methods follow a hierarchical feature extraction paradigm in which high-level abstract features are derived from low-level features… However, they fail to exploit different granularities of information due to the limited interaction between these features. To this end, we propose the Multi-Abstraction Refinement Network (MARNet), which ensures an effective exchange of information between multi-level features to gain local […]
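As a generic sketch of what exchanging information between abstraction levels can look like (a common upsample-and-fuse pattern, not MARNet's specific refinement mechanism):

```python
# Propagate coarse, high-level features back to every point and fuse them
# with the fine, low-level per-point features via a small MLP.
import torch
import torch.nn as nn

batch, n_points, n_centroids = 2, 1024, 256
low = torch.randn(batch, n_points, 64)       # low-level, per-point features
high = torch.randn(batch, n_centroids, 128)  # high-level, per-centroid features

# Assign every point to a centroid index (random here, for brevity).
assignment = torch.randint(0, n_centroids, (batch, n_points))

idx = assignment.unsqueeze(-1).expand(-1, -1, high.size(-1))
high_per_point = torch.gather(high, 1, idx)       # (batch, n_points, 128)
fused = torch.cat([low, high_per_point], dim=-1)  # (batch, n_points, 192)
refined = nn.Sequential(nn.Linear(192, 128), nn.ReLU(), nn.Linear(128, 128))(fused)
print(refined.shape)  # (2, 1024, 128)
```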

Read more

Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common APIs

This article provides the motivation and overview of the Collective Knowledge framework (CK or cKnowledge). The CK concept is to decompose research projects into reusable components that encapsulate research artifacts and provide unified application programming interfaces (APIs), command-line interfaces (CLIs), meta descriptions and common automation actions for related artifacts… The CK framework is used to organize and manage research projects as a database of such components. Inspired by the USB “plug and play” approach for hardware, CK also helps to […]

Read more

Diverse Image Captioning with Context-Object Split Latent Spaces

Diverse image captioning models aim to learn one-to-many mappings that are innate to cross-domain datasets, such as those of images and texts. Current methods for this task are based on generative latent variable models, e.g. VAEs with structured latent spaces… Yet, the amount of multimodality captured by prior work is limited to that of the paired training data; the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in […]

Read more

c-lasso — a Python package for constrained sparse and robust regression and classification

We introduce c-lasso, a Python package that enables sparse and robust linear regression and classification with linear equality constraints. The underlying statistical forward model is assumed to be of the following form: \[ y = X\beta + \sigma\epsilon \qquad \textrm{subject to} \qquad C\beta = 0 \] Here, $X \in \mathbb{R}^{n\times d}$ is a given design matrix and the vector $y \in \mathbb{R}^{n}$ is a continuous or binary response vector… The matrix $C$ is a general constraint matrix. The vector $\beta \in […]
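A small NumPy sketch of data generated under this forward model, using the zero-sum constraint (C a single row of ones, common in compositional-data settings) as an illustrative choice; this is not an excerpt from the package itself:

```python
# Simulate y = X beta + sigma * epsilon with a sparse beta satisfying C beta = 0.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 100, 10, 0.5

X = rng.standard_normal((n, d))
C = np.ones((1, d))                # constraint: coefficients sum to zero

beta = np.zeros(d)
beta[:4] = [1.0, -1.0, 2.0, -2.0]  # sparse and satisfies C @ beta == 0

y = X @ beta + sigma * rng.standard_normal(n)
print("constraint residual:", (C @ beta).item())  # 0.0
```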

Read more

Image Inpainting with Learnable Feature Imputation

A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image. Several studies address this issue with feature re-normalization on the output of the convolution… However, these models use a significant amount of learnable parameters for feature re-normalization, or assume a binary representation of the certainty of an output. We propose (layer-wise) feature imputation of the missing input values to a convolution. In contrast to learned feature re-normalization, […]
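For context, here is a hedged sketch of the mask-based feature re-normalization baseline the paragraph alludes to (in the spirit of partial convolutions); it illustrates what the proposed feature imputation is contrasted with, not the proposed method itself:

```python
# Convolve only the known values, rescale each output by how much of its
# receptive field was known, and update the mask of known locations.
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None):
    """x: (B, C, H, W) features; mask: (B, 1, H, W) with 1 = known, 0 = missing."""
    kh, kw = weight.shape[-2:]
    pad = (kh // 2, kw // 2)
    out = F.conv2d(x * mask, weight, bias=None, padding=pad)
    # Count known pixels per window and re-normalize the output accordingly.
    window = F.conv2d(mask, torch.ones(1, 1, kh, kw), padding=pad)
    out = out * (kh * kw) / window.clamp(min=1e-8)
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    # A location is considered known afterwards if its window saw any known pixel.
    new_mask = (window > 0).float()
    return out * new_mask, new_mask

x = torch.randn(1, 3, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.3).float()
weight = torch.randn(8, 3, 3, 3)
out, new_mask = partial_conv2d(x, mask, weight)
print(out.shape, new_mask.shape)
```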

Read more

Python: How to Flatten a List of Lists

Introduction A list is the most flexible data structure in Python. A 2D list, commonly known as a list of lists, is a list object in which every item is itself a list – for example: [[1,2,3], [4,5,6], [7,8,9]]. Flattening a list of lists means converting a 2D list into a 1D list by un-nesting each item stored in the list of lists – i.e., converting [[1, 2, 3], [4, 5, 6], [7, 8, 9]] into [1, […]
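For example, two standard ways to do this in Python:

```python
# Flatten a list of lists into a single flat list.
from itertools import chain

nested = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# 1. Nested list comprehension.
flat = [item for sublist in nested for item in sublist]

# 2. itertools.chain, which lazily chains the sublists together.
flat_chain = list(chain.from_iterable(nested))

print(flat)        # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(flat_chain)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```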

Read more

Python: Slice Notation on Tuple

Introduction The term slicing in programming usually refers to obtaining a substring, sub-tuple, or sublist from a string, tuple, or list, respectively. Python offers an array of straightforward ways to slice not only these three but any iterable. An iterable is, as the name suggests, any object that can be iterated over. In this article, we'll go over everything you need to know about slicing tuples in Python. Slicing a Tuple in Python There are a couple of ways to […]
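For example, basic slice notation applied to a tuple:

```python
# tuple[start:stop:step] returns a new tuple; stop is exclusive.
numbers = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

print(numbers[2:5])   # (2, 3, 4)        elements at indices 2, 3, 4
print(numbers[:4])    # (0, 1, 2, 3)     from the start up to index 4 (exclusive)
print(numbers[::2])   # (0, 2, 4, 6, 8)  every second element
print(numbers[::-1])  # (9, 8, ..., 0)   reversed tuple
```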

Read more