Articles About Machine Learning

Disentangling Latent Space for Unsupervised Semantic Face Editing

Editing facial images created by StyleGAN is a popular research topic with important applications. By editing the latent vectors, it is possible to control facial attributes such as smile, age, etc… However, facial attributes are entangled in the latent space, which makes it very difficult to control a specific attribute independently without affecting the others. The key to developing clean semantic control is to completely disentangle the latent space and perform image editing in an unsupervised manner. In […]
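The basic editing operation the excerpt describes can be sketched in a few lines: move a latent code along a semantic direction. This is a minimal illustration only; the `smile_dir` vector here is a random stand-in, whereas in practice such directions come from a disentanglement method like the one the paper studies.

```python
import numpy as np

def edit_latent(z, direction, strength):
    """Move a latent code along a unit-norm semantic direction.

    Illustrative sketch: `direction` stands in for a learned semantic
    axis (e.g. "smile"); entanglement means a naive direction would
    also change other attributes.
    """
    d = direction / np.linalg.norm(direction)
    return z + strength * d

rng = np.random.default_rng(0)
z = rng.standard_normal(512)          # a StyleGAN-like latent code
smile_dir = rng.standard_normal(512)  # hypothetical "smile" direction
z_edit = edit_latent(z, smile_dir, strength=3.0)
print(z_edit.shape)  # (512,)
```

Because the direction is normalized, `strength` directly controls the edit magnitude in latent space.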

Read more

CompressAI: a PyTorch library and evaluation platform for end-to-end compression research

This paper presents CompressAI, a platform that provides custom operations, layers, models and tools to research, develop and evaluate end-to-end image and video compression codecs. In particular, CompressAI includes pre-trained models and evaluation tools to compare learned methods with traditional codecs… Multiple state-of-the-art models for learned end-to-end compression have thus been reimplemented in PyTorch and trained from scratch. We also report objective comparison results using PSNR and MS-SSIM metrics vs. bit-rate, using the Kodak image dataset as test […]
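One of the evaluation metrics mentioned, PSNR, is easy to compute from scratch; a minimal NumPy version (not CompressAI's own implementation) looks like this:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images in [0, 255]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))
```

In a rate-distortion evaluation like the one the paper reports, this value would be plotted against the codec's bit-rate per image.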

Read more

Domain Adaptation Using Class Similarity for Robust Speech Recognition

When only limited target domain data is available, domain adaptation can be used to improve the performance of a deep neural network (DNN) acoustic model by leveraging a well-trained source model together with the target domain data. However, suffering from domain mismatch and data sparsity, domain adaptation remains very challenging… This paper proposes a novel adaptation method for DNN acoustic models based on class similarity. Since the output distribution of a DNN model contains knowledge of the similarity among classes, which is applicable to both source and […]
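The idea of transferring class-similarity knowledge through a model's output distribution is commonly implemented as training against soft targets. The sketch below shows that generic mechanism (soft cross-entropy against source-model posteriors); it is an assumption-laden stand-in, not the paper's exact loss.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def soft_cross_entropy(student_logits, teacher_probs):
    """Cross entropy against soft targets.

    The teacher's full posterior carries inter-class similarity
    information that hard one-hot labels would discard.
    """
    return -np.mean(np.sum(teacher_probs * log_softmax(student_logits), axis=-1))

teacher = softmax(np.array([[4.0, 1.0, 0.5]]))  # source-model posteriors (toy)
student = np.array([[3.5, 1.2, 0.3]])           # target-model logits (toy)
print(round(float(soft_cross_entropy(student, teacher)), 3))
```

In practice this term is usually interpolated with the standard hard-label loss on the limited target-domain data.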

Read more

AML-SVM: Adaptive Multilevel Learning with Support Vector Machines

The support vector machine (SVM) is one of the most widely used and practical optimization-based classification models in machine learning because of its interpretability and flexibility to produce high-quality results. However, big data poses a particular difficulty for the most sophisticated but relatively slow variant of SVM, namely, the nonlinear SVM… The complexity of nonlinear SVM solvers and the number of elements in the kernel matrix grow quadratically with the number of samples in the training data. Therefore, […]
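The quadratic cost mentioned above comes from the dense n × n kernel matrix itself; a short sketch makes it concrete (RBF kernel chosen here as a typical example):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    """Dense RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    Its n x n size is exactly why nonlinear SVM memory/compute grows
    quadratically with the number of training samples.
    """
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negatives

X = np.random.default_rng(2).standard_normal((100, 5))
K = rbf_kernel_matrix(X)
print(K.shape)  # (100, 100)
```

Doubling the sample count quadruples the matrix: 100 samples give 10,000 entries, 200 samples give 40,000, which is what multilevel schemes such as the one in the paper aim to sidestep.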

Read more

Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization

Training RNNs to learn long-term dependencies is difficult due to vanishing gradients. We explore an alternative solution based on explicit memorization using linear autoencoders for sequences, which maximizes short-term memory and can be solved in closed form without backpropagation… We introduce an initialization scheme that pretrains the weights of a recurrent neural network to approximate the linear autoencoder of the input sequences, and we show how such pretraining can better support solving hard classification tasks […]
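The "closed form without backpropagation" part can be illustrated with the simplest case: a rank-k linear autoencoder, whose optimal encoder is given directly by an SVD (Eckart–Young). This is a sketch of the closed-form idea only, not the paper's exact sequence-unrolled construction.

```python
import numpy as np

def linear_autoencoder(X, k):
    """Closed-form linear autoencoder: the optimal rank-k encoder is the
    top-k right singular vectors of the centered data (no gradient steps).
    The decoder is simply the encoder's transpose.
    """
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return Vt[:k].T  # encoder, shape (d, k), orthonormal columns

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
W = linear_autoencoder(X, k=4)
recon = (X - X.mean(0)) @ W @ W.T + X.mean(0)
print(W.shape)  # (10, 4)
```

The paper's initialization scheme would then map such a closed-form solution onto the recurrent weights before gradient training begins.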

Read more

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

We consider a natural model of online preference aggregation, where sets of preferred items $R_1, R_2, \ldots, R_t$ along with a demand for $k_t$ items in each $R_t$, appear online. Without prior knowledge of $(R_t, k_t)$, the learner maintains a ranking $\pi_t$ aiming that at least $k_t$ items from $R_t$ appear high in $\pi_t$… This is a fundamental problem in preference aggregation with applications to, e.g., ordering product or news items in web pages based on user scrolling and click […]
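A natural cost for this setting is the depth in $\pi_t$ at which the $k_t$-th requested item appears. The helper below is an illustrative formalization of that cost, not the paper's exact objective:

```python
def coverage_depth(pi, R, k):
    """Smallest prefix length of ranking `pi` containing k items of R.

    Illustrative cost: the learner wants k_t items of R_t to appear as
    high as possible in pi_t, so a smaller depth is better.
    """
    found = 0
    for depth, item in enumerate(pi, start=1):
        if item in R:
            found += 1
            if found == k:
                return depth
    return len(pi)  # fewer than k items of R present in pi

pi = ["a", "b", "c", "d", "e"]
print(coverage_depth(pi, {"c", "e"}, 2))  # 5: need the full prefix to cover both
```

An online learner would update $\pi_t$ after each request to keep this depth small across the sequence of $(R_t, k_t)$ pairs.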

Read more

CODER: Knowledge infused cross-lingual medical term embedding for term normalization

We propose a novel medical term embedding method named CODER, which stands for mediCal knOwledge embeDded tErm Representation. CODER is designed for medical term normalization by providing close vector representations for terms that represent the same or similar concepts with multi-language support… CODER is trained on top of BERT (Devlin et al., 2018) with the innovation that token vector aggregation is trained using relations from the UMLS Metathesaurus (Bodenreider, 2004), which is a comprehensive medical knowledge graph with multi-language support. […]
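Term normalization with such embeddings reduces to nearest-neighbour retrieval by cosine similarity. The sketch below shows that retrieval step with made-up stand-in vectors and hypothetical concept IDs; real CODER embeddings and UMLS CUIs would take their place.

```python
import numpy as np

def normalize_term(query_vec, concept_vecs, concept_ids):
    """Map a term embedding to its closest concept by cosine similarity.

    Standard nearest-neighbour matching; the embedding quality (here
    faked) is what CODER itself is designed to provide.
    """
    q = query_vec / np.linalg.norm(query_vec)
    C = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    return concept_ids[int(np.argmax(C @ q))]

concepts = np.eye(3, 8)                        # stand-in concept embeddings
ids = ["concept-A", "concept-B", "concept-C"]  # hypothetical concept IDs
rng = np.random.default_rng(4)
query = concepts[1] + 0.05 * rng.standard_normal(8)  # noisy "synonym" vector
print(normalize_term(query, concepts, ids))  # concept-B
```

The multi-language support claimed for CODER means synonyms in different languages should land near the same concept vector, so this single retrieval step covers cross-lingual normalization too.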

Read more

Intriguing Properties of Contrastive Losses

Contrastive loss and its variants have become very popular recently for learning visual representations without supervision. In this work, we first generalize the standard contrastive loss based on cross entropy to a broader family of losses that share an abstract form of $\mathcal{L}_{\text{alignment}} + \lambda \mathcal{L}_{\text{distribution}}$, where hidden representations are encouraged to (1) be aligned under some transformations/augmentations, and (2) match a prior distribution of high entropy… We show that various instantiations of the generalized loss perform similarly under the […]
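One concrete instantiation of that abstract form can be sketched with an alignment term on positive pairs plus a Wang & Isola-style uniformity term as the distribution-matching part. This is an assumed instantiation for illustration, not necessarily one of the paper's:

```python
import numpy as np

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def alignment_loss(z1, z2):
    """(1) representations of two augmented views should coincide."""
    return float(np.mean(np.sum((z1 - z2) ** 2, axis=1)))

def distribution_loss(z, t=2.0):
    """(2) embeddings should match a high-entropy prior on the sphere
    (uniformity term used here as a stand-in for L_distribution)."""
    d2 = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(len(z), dtype=bool)
    return float(np.log(np.mean(np.exp(-t * d2[mask]))))

def generalized_loss(z1, z2, lam=1.0):
    # L = L_alignment + lambda * L_distribution (the abstract form)
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    return alignment_loss(z1, z2) + lam * distribution_loss(np.vstack([z1, z2]))

rng = np.random.default_rng(5)
z1 = rng.standard_normal((16, 32))
z2 = z1 + 0.1 * rng.standard_normal((16, 32))  # simulated augmented views
print(round(generalized_loss(z1, z2), 3))
```

The weight `lam` trades off how tightly positives are pulled together against how uniformly the batch spreads over the sphere.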

Read more

Time-dependent Performance Analysis of the 802.11p-based Platooning Communications Under Disturbance

Platooning is a critical technology for realizing autonomous driving. Each vehicle in a platoon adopts the IEEE 802.11p standard to exchange information through communications in order to maintain the string stability of the platoon… However, a vehicle in a platoon inevitably suffers from disturbances resulting from leader-vehicle acceleration/deceleration, wind gusts, and uncertainties in the platoon control system, i.e., aerodynamic drag and rolling resistance moment, etc. Disturbances acting on one vehicle may inevitably affect the following vehicles and cause the spacing error […]
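How a leader-side disturbance produces spacing errors down the string can be shown with a toy simulation: each follower applies PD control on its gap to the predecessor while the leader briefly brakes. This is a minimal dynamics sketch under assumed gains and does not model the 802.11p communication the paper analyses.

```python
import numpy as np

def simulate_platoon(n_vehicles=5, steps=200, dt=0.1, kp=0.5, kd=0.8):
    """Toy constant-spacing platoon under a leader braking disturbance.

    Each follower uses PD feedback on spacing error and relative speed
    (hypothetical gains); returns each vehicle's max spacing error.
    """
    desired_gap = 10.0
    pos = -desired_gap * np.arange(n_vehicles, dtype=float)
    vel = np.full(n_vehicles, 20.0)
    max_err = np.zeros(n_vehicles)
    for t in range(steps):
        acc = np.zeros(n_vehicles)
        acc[0] = -3.0 if 20 <= t < 40 else 0.0  # leader brakes for 2 s
        for i in range(1, n_vehicles):
            err = (pos[i - 1] - pos[i]) - desired_gap
            acc[i] = kp * err + kd * (vel[i - 1] - vel[i])
            max_err[i] = max(max_err[i], abs(err))
        vel += acc * dt
        pos += vel * dt
    return max_err

print(simulate_platoon().round(2))
```

String stability is precisely the question of whether these per-vehicle peak errors shrink or amplify toward the tail of the platoon, which communication delays and losses (the paper's subject) directly affect.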

Read more

Tapping Twitter Sentiments: A Complete Case-Study on 2015 Chennai Floods

Introduction We did this case study as a part of our capstone project at Great Lakes Institute of Management, Chennai. After we presented this study, we got an overwhelming response from our professors & mentors. Later, they encouraged us to share our work to help others learn something new. We’ve been following Analytics Vidhya for a while now. Everyone knows it’s probably the largest engine for sharing analytics knowledge. We tried and got lucky in connecting with their content team. So, […]

Read more