BCN2BRNO: ASR System Fusion for Albayzin 2020 Speech to Text Challenge
| # | AM | LM | Dev2 WER [%] | Test WER [%] |
|---|----|----|--------------|--------------|
| 1 | CNN-TDNNf | Alb | 14.1 | 15.5 |
| 2 | CNN-TDNNf | Alb + Wiki | 13.6 | 14.9 |
| 3 | CNN-TDNNf | Alb + Giga | 13.6 | 15.1 |
| 4 | CNN-TDNNf | Alb + Wiki + Giga | 13.5 | 15.0 |
With the aim of assessing the quality of the trained speech enhancement (SE) models, we use several trigger-word detection classifiers and report the impact of the SE module on wake-up word (WUW) classification performance. The WUW classifiers used here are LeNet, a well-known standard classifier that is easy to optimize [13], and Res15, Res15-narrow, and Res8, based on Tang and Lin's reimplementation [26] of Sainath and Parada's Convolutional Neural Networks (CNNs) for keyword spotting.
Working with variables in data analysis always raises the question: how are the variables dependent, linked, and varying against each other? Covariance and correlation are the measures that help establish this. Covariance captures how variables vary together: we use it to measure how much two variables change with each other. Correlation reveals the relationship between variables: we use it to determine how strongly linked two variables are to each other. In this article, we'll learn how to calculate the […]
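To make the two measures concrete, here is a minimal Python sketch with NumPy; the sample arrays are made up for illustration:

```python
# Minimal sketch: sample covariance and Pearson correlation with NumPy.
# The data arrays below are illustrative, not from the article.
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0, 4.8])
y = np.array([8.0, 10.0, 12.0, 14.0, 16.5])

# Sample covariance: np.cov returns the 2x2 covariance matrix,
# so [0, 1] picks out cov(x, y).
cov_xy = np.cov(x, y, ddof=1)[0, 1]

# Pearson correlation: covariance normalized by both standard
# deviations, which bounds the value to [-1, 1] and makes it unit-free.
corr_xy = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))

print(f"cov(x, y)  = {cov_xy:.3f}")
print(f"corr(x, y) = {corr_xy:.3f}")  # same as np.corrcoef(x, y)[0, 1]
```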
If someone had told me ten years ago, when I was a freshly graduated bachelor of computer science, that there would be models producing multilingual sentence representations that allow zero-shot model transfer, I would have hardly believed such a prediction. If they had added that the models would be total black boxes and we would not know why they worked, I would have thought they were insane. After all, one of the goals of the mathematization of stuff in science is to make […]
Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A problem with gradient descent is that it can bounce around the search space on optimization problems that have large amounts of curvature or noisy gradients, and it can get stuck in flat spots in the search space that have no gradient. Momentum is an extension to the gradient descent optimization algorithm that allows the search […]
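The update rule is easy to see in code. Here is a minimal Python sketch on a toy quadratic objective; the learning rate and momentum factor are illustrative choices, not recommendations:

```python
# Minimal sketch: gradient descent with momentum on f(x) = x^2.
def grad(x):
    return 2.0 * x  # gradient of the toy objective f(x) = x^2

x = 5.0          # starting point
velocity = 0.0   # running blend of past gradients
lr = 0.1         # step size (illustrative)
beta = 0.9       # momentum factor (illustrative)

for _ in range(200):
    # The velocity keeps a decayed memory of past steps: it damps the
    # bouncing on curved or noisy objectives and can carry the search
    # through flat regions where the raw gradient alone would stall.
    velocity = beta * velocity - lr * grad(x)
    x += velocity

print(f"x after 200 steps: {x:.6f}")  # close to the minimum at 0
```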
From a research point of view, games offer an amazing environment in which to develop new machine learning algorithms and techniques. And we hope, in due course, that those new algorithms will feed back not just into gaming, but into many other domains. Beyond the very technical machine learning techniques themselves, gaming is an environment in which we can explore the relationship between AI and people, and see how they can work in partnership. It's a very rich environment in […]
Having spent a big part of my career as a graduate student researcher and now a Data Scientist in the industry, I have come to realize that a vast majority of solutions proposed both in academic research papers and in the workplace are just not meant to ship: they just don't scale! And when I say scale, I mean handling real-world use cases, the ability to handle large amounts of data, and ease of deployment in a production […]
N-grams of texts are extensively used in text mining and natural language processing tasks. They are basically sets of co-occurring words within a given window; when computing the n-grams you typically move one word forward (although you can move X words forward in more advanced scenarios). For example, take the sentence "The cow jumps over the moon". If N=2 (known as bigrams), then the n-grams would be: "the cow", "cow jumps", "jumps over", "over the", "the moon". So […]
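A minimal Python sketch of that sliding window (whitespace tokenization is a simplifying assumption):

```python
# Minimal sketch: extract word n-grams by sliding a window of size n
# one token at a time. Splitting on whitespace is a deliberate shortcut.
def ngrams(text, n):
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("The cow jumps over the moon", 2))
# ['the cow', 'cow jumps', 'jumps over', 'over the', 'the moon']
```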
Term frequency (TF), often used in text mining, NLP, and information retrieval, tells you how frequently a term occurs in a document. In the context of natural language, terms correspond to words or phrases. Since every document is different in length, it is possible that a term would appear more often in longer documents than in shorter ones. Thus, term frequency is often divided by the total number of terms in the document as a way of normalization. […]
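As a sketch, the length-normalized variant can be computed like this in Python (the example document is made up):

```python
# Minimal sketch: term frequency normalized by document length.
from collections import Counter

def term_frequencies(tokens):
    counts = Counter(tokens)
    total = len(tokens)
    # Dividing each raw count by the document length makes
    # long and short documents comparable.
    return {term: count / total for term, count in counts.items()}

doc = "the cat sat on the mat".split()
print(term_frequencies(doc))
# {'the': 0.333..., 'cat': 0.166..., 'sat': 0.166..., ...}
```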
Inverse Document Frequency (IDF) is a weight indicating how rare a word is across documents: the more frequent its usage across documents, the lower its score, and the lower the score, the less important the word becomes. For example, the word the appears in almost all English texts and would thus have a very low IDF score, as it carries very little "topic" information. In contrast, if you take the word coffee, while it is common, it's not used as widely as […]
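A minimal Python sketch over a toy corpus, assuming the common textbook form idf(t) = log(N / df(t)), which is only one of several variants in use:

```python
# Minimal sketch: inverse document frequency over a toy corpus,
# using idf(t) = log(N / df(t)), one common (unsmoothed) variant.
import math

def inverse_document_frequencies(docs):
    n_docs = len(docs)
    vocab = {term for doc in docs for term in doc}
    return {
        # Terms appearing in many documents score near zero;
        # rarer terms get larger weights.
        term: math.log(n_docs / sum(term in doc for doc in docs))
        for term in vocab
    }

corpus = [
    "the cat drank coffee".split(),
    "the dog sat on the mat".split(),
    "the barista poured more coffee".split(),
]
idf = inverse_document_frequencies(corpus)
print(f"idf('the')    = {idf['the']:.3f}")     # 0.000: in every document
print(f"idf('coffee') = {idf['coffee']:.3f}")  # 0.405: in 2 of 3 documents
```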