Attention Can Reflect Syntactic Structure (If You Let It)

Abstract Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism. However, much of this work has focused almost exclusively on English, a language with rigid word order and little inflectional morphology. In this study, we present decoding experiments for multilingual BERT across 18 languages in order to test the generalizability of the claim that dependency syntax is reflected in attention patterns. We […]
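The kind of decoding probe at issue can be illustrated in a few lines: extract mBERT's attention matrices and, for a fixed head, predict each word's syntactic head as the position it attends to most, scoring against gold annotations. The sketch below assumes the HuggingFace transformers library and one subword per word; the paper's actual protocol (tree decoding, subword alignment, head selection) is more involved.

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Hypothetical sentence with gold dependency heads (1-indexed, 0 = root).
    words = ["The", "dog", "barks"]
    gold_heads = [2, 3, 0]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                      output_attentions=True)

    inputs = tokenizer(" ".join(words), return_tensors="pt")
    with torch.no_grad():
        attentions = model(**inputs).attentions  # per layer: (1, heads, seq, seq)

    layer, head = 7, 5                # arbitrary head to probe; real probes search all
    att = attentions[layer][0, head]  # (seq, seq) attention matrix

    # Word i sits at position i + 1, between [CLS] and [SEP] (one subword per
    # word assumed). Predict its head as the word it attends to most strongly.
    n = len(words)
    scored = [(i, h) for i, h in enumerate(gold_heads) if h != 0]  # skip the root word
    correct = sum(att[i + 1, 1:n + 1].argmax().item() + 1 == h for i, h in scored)
    print(f"UAS for layer {layer}, head {head}: {correct / len(scored):.2f}")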

Read more

“Laughing at you or with you”: The Role of Sarcasm in Shaping the Disagreement Space

Read more

Syntactic Nuclei in Dependency Parsing – A Multilingual Exploration

In the previous sections, we have shown how syntactic nuclei can be identified in the UD annotation and how transition-based parsers can be made sensitive to these structures in their internal representations through the use of nucleus composition. We now proceed to a set of experiments investigating the impact of nucleus composition on a diverse selection of languages.

5.1 Experimental Settings

We use UUParser (de Lhoneux et al., 2017; Smith

Read more

Does injecting linguistic structure into language models lead to better alignment with brain recordings?

Figure 1 shows a high-level outline of our experimental design, which aims to establish whether injecting structure derived from a variety of syntacto-semantic formalisms into neural language model representations can lead to better correspondence with human brain activation data. We utilize fMRI recordings of human subjects reading a set of texts. Representations of these texts are then derived from the activations of the language models. Following Gauthier and Levy (
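In broad strokes, alignment in this line of work is measured by fitting a regularized linear map between model representations and fMRI responses and scoring it on held-out texts. The sketch below uses scikit-learn with random placeholder arrays; the shapes, the ridge penalties, and the encoding (representation-to-brain) direction are illustrative assumptions, not the paper's exact protocol.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 768))  # placeholder LM representations: 100 texts x 768 dims
    Y = rng.normal(size=(100, 500))  # placeholder fMRI responses: 100 texts x 500 voxels

    # Cross-validated encoding model: predict brain activity from text
    # representations and score alignment as held-out predictive performance.
    scores = []
    for train, test in KFold(n_splits=5).split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
        scores.append(model.score(X[test], Y[test]))  # R^2 on held-out texts
    print(f"mean held-out R^2: {np.mean(scores):.3f}")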

Read more

Transition-based Graph Decoder for Neural Machine Translation

Abstract While a number of works have shown gains from incorporating source-side symbolic syntactic and semantic structure into neural machine translation (NMT), far fewer have addressed the decoding of such structure. We propose a general Transformer-based approach for tree and graph decoding based on generating a sequence of transitions, inspired by a similar RNN-based approach by Dyer et al. (2016). Experiments using the proposed decoder with Universal Dependencies syntax on English-German, German-English and English-Russian show improved performance over […]
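As a concrete illustration of the transition-based idea, the sketch below executes a sequence of arc-standard transitions into dependency arcs. This is a generic textbook transition system, not the paper's proposed decoder, whose transition set for trees and graphs may differ (e.g., to handle reentrancies).

    # Minimal arc-standard executor: maps a transition sequence to dependency
    # arcs (head, dependent). Illustrative only.
    def execute(words, transitions):
        stack, buffer, arcs = [], list(range(len(words))), []
        for action in transitions:
            if action == "SHIFT":
                stack.append(buffer.pop(0))
            elif action == "LEFT-ARC":   # second-from-top becomes dependent of top
                dep = stack.pop(-2)
                arcs.append((stack[-1], dep))
            elif action == "RIGHT-ARC":  # top becomes dependent of the item below it
                dep = stack.pop()
                arcs.append((stack[-1], dep))
        return arcs

    words = ["the", "cat", "sleeps"]
    seq = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC"]
    print(execute(words, seq))  # [(1, 0), (2, 1)]: "the" <- "cat" <- "sleeps"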

Read more

NLPBK at VLSP-2020 shared task: Compose transformer pretrained models for Reliable Intelligence Identification on Social network

In our model, we generate representations of the post message in three ways: syllable-level tokenized text through Bert4News, word-level tokenized text through PhoBERT, and syllable-level tokenized text through XLM. We simply concatenate these three representations with the corresponding post metadata features. This can be considered a naive model, but it has been shown to improve system performance (Tu et al. (2017), Thanh et al. (
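A minimal sketch of this fusion step, assuming 768-dimensional pooled encoder outputs and a 16-dimensional metadata vector (both dimensions, and the classifier head, are placeholders rather than the system's actual configuration):

    import torch
    import torch.nn as nn

    # Placeholder pooled outputs from the three encoders; in the real system
    # these would come from Bert4News, PhoBERT and XLM respectively.
    h_bert4news = torch.randn(1, 768)
    h_phobert = torch.randn(1, 768)
    h_xlm = torch.randn(1, 768)
    meta = torch.randn(1, 16)  # post metadata features (dimension assumed)

    fused = torch.cat([h_bert4news, h_phobert, h_xlm, meta], dim=-1)  # (1, 2320)

    # A simple classification head over the fused vector (reliable vs. not).
    classifier = nn.Sequential(nn.Linear(fused.size(-1), 256), nn.ReLU(),
                               nn.Linear(256, 2))
    logits = classifier(fused)
    print(logits.shape)  # torch.Size([1, 2])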

Read more

Speech Enhancement for Wake-Up-Word detection in Voice Assistants

With the aim of assessing the quality of the trained SE models, we use several trigger word detection classifier models, reporting the impact of the SE module on WUW classification performance. The WUW classifiers used here are LeNet, a well-known standard classifier that is easy to optimize [13]; and Res15, Res15-narrow and Res8, based on a reimplementation by Tang and Lin [26] of Sainath and Parada's Convolutional Neural Networks (CNNs) for

Read more

Covariance and Correlation in Python

Introduction Working with variables in data analysis always raises the question: how are the variables dependent on, linked to, and varying with each other? Covariance and correlation measures help establish this. We use covariance to measure how much two variables change together. We use correlation to determine how strongly two variables are related to each other. In this article, we'll learn how to calculate the […]
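As a quick preview of the calculations the article walks through, the snippet below computes both measures by hand and with NumPy's built-ins, np.cov and np.corrcoef (the sample values are made up for illustration):

    import numpy as np

    x = np.array([2.1, 2.5, 3.6, 4.0])
    y = np.array([8.0, 10.0, 12.0, 14.0])

    # Sample covariance: average co-deviation from the means (denominator n - 1).
    cov_manual = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)
    cov_matrix = np.cov(x, y)        # 2x2 matrix; off-diagonal entries are cov(x, y)

    # Pearson correlation: covariance rescaled by both standard deviations, in [-1, 1].
    corr_manual = cov_manual / (x.std(ddof=1) * y.std(ddof=1))
    corr_matrix = np.corrcoef(x, y)  # 2x2 matrix of correlations

    print(cov_manual, cov_matrix[0, 1])    # identical values
    print(corr_manual, corr_matrix[0, 1])  # identical values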

Read more