Self-Supervised Time Series Representation Learning by Inter-Intra Relational Reasoning

Self-supervised learning achieves superior performance in many domains by extracting useful representations from unlabeled data. However, most traditional self-supervised methods focus on exploring inter-sample structure, while less effort has been devoted to the underlying intra-temporal structure, which is important for time series data… In this paper, we present SelfTime: a general self-supervised time series representation learning framework, by exploring the inter-sample relation and intra-temporal relation of time series to learn the underlying structure feature on the […]
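Below is a minimal, self-contained PyTorch sketch of the two kinds of pretext signals this abstract refers to: inter-sample relations (are two augmented views from the same series?) and intra-temporal relations (how far apart are two segments of the same series?). The encoder, augmentation, relation heads, and sampling scheme are placeholders, not the paper's actual formulation.

```python
# Toy sketch of inter-sample and intra-temporal relational reasoning.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_len, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_len, 128), nn.ReLU(), nn.Linear(128, dim))
    def forward(self, x):                       # x: (batch, in_len)
        return self.net(x)

def jitter(x, sigma=0.03):                      # simple augmentation: additive noise
    return x + sigma * torch.randn_like(x)

enc = Encoder(in_len=128)
rel_head = nn.Linear(2 * 64, 1)                 # inter-sample: same-series vs. different-series
tmp_head = nn.Linear(2 * 64, 3)                 # intra-temporal: near / medium / far classes

x = torch.randn(32, 128)                        # a batch of univariate series (toy data)
z1, z2 = enc(jitter(x)), enc(jitter(x))

# Inter-sample relation: positives = two views of the same series,
# negatives = views of different series (roll the batch by one).
pos = torch.cat([z1, z2], dim=1)
neg = torch.cat([z1, z2.roll(1, dims=0)], dim=1)
pairs = torch.cat([pos, neg])
labels = torch.cat([torch.ones(32), torch.zeros(32)])
inter_loss = nn.functional.binary_cross_entropy_with_logits(
    rel_head(pairs).squeeze(1), labels)

# Intra-temporal relation: embed two 64-step segments of each series and
# predict a class derived from the distance between their start positions.
starts = torch.randint(0, 65, (32, 2))
seg = lambda col: torch.stack([x[i, int(starts[i, col]):int(starts[i, col]) + 64] for i in range(32)])
za = enc(nn.functional.pad(seg(0), (0, 64)))
zb = enc(nn.functional.pad(seg(1), (0, 64)))
dist_cls = torch.clamp((starts[:, 0] - starts[:, 1]).abs() // 22, max=2)
intra_loss = nn.functional.cross_entropy(tmp_head(torch.cat([za, zb], dim=1)), dist_cls)

loss = inter_loss + intra_loss                  # joint self-supervised objective
```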

Read more

Enhancing Diversity in Teacher-Student Networks via Asymmetric Branches for Unsupervised Person Re-identification

The objective of unsupervised person re-identification (Re-ID) is to learn discriminative features without labor-intensive identity annotations. State-of-the-art unsupervised Re-ID methods assign pseudo labels to unlabeled images in the target domain and learn from these noisy pseudo labels… The recently introduced Mean Teacher model is a promising way to mitigate label noise. However, during training, self-ensembled teacher-student networks quickly converge to a consensus, which leads to a local minimum. We explore the possibility of using an asymmetric structure inside neural […]
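As a point of reference for the Mean Teacher idea mentioned above, here is a minimal sketch of the self-ensembling step in which the teacher tracks an exponential moving average (EMA) of the student's weights; the asymmetric branches from the article are not reproduced, and the networks and features are placeholders.

```python
# Minimal Mean Teacher sketch: the teacher is an EMA of the student.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                        # teacher is never updated by gradients

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # theta_teacher <- m * theta_teacher + (1 - m) * theta_student
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

feats = torch.randn(16, 2048)                      # e.g. backbone features of person crops
consistency = nn.functional.mse_loss(student(feats), teacher(feats).detach())
consistency.backward()                             # update the student with your optimizer ...
ema_update(teacher, student)                       # ... then refresh the teacher by EMA
```

Because the teacher changes only slowly, it tends to agree more and more with the student, which is the consensus problem the article's asymmetric branches aim to counteract.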

Read more

Navigating the GAN Parameter Space for Semantic Image Editing

Generative Adversarial Networks (GANs) are currently an indispensable tool for visual editing, being a standard component of image-to-image translation and image restoration pipelines. Furthermore, GANs are especially useful for controllable generation since their latent spaces contain a wide range of interpretable directions, well suited for semantic editing operations… By gradually changing latent codes along these directions, one can produce impressive visual effects, unattainable without GANs. In this paper, we significantly expand the range of visual effects achievable with the state-of-the-art […]
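For readers unfamiliar with latent-direction editing, the following toy sketch shows the basic operation of shifting a latent code along a direction and decoding it; the generator and the direction are random placeholders, not the pretrained models or discovered directions from the paper.

```python
# Toy latent-direction traversal: z + alpha * direction, decoded at several alphas.
import torch
import torch.nn as nn

latent_dim = 512
generator = nn.Sequential(                 # stand-in for a pretrained GAN generator
    nn.Linear(latent_dim, 1024), nn.ReLU(), nn.Linear(1024, 3 * 32 * 32), nn.Tanh())

z = torch.randn(1, latent_dim)             # a sampled latent code
direction = torch.randn(latent_dim)        # an (assumed) interpretable direction
direction = direction / direction.norm()

# Gradually shift the latent code along the direction to produce an edit sequence.
frames = []
for alpha in torch.linspace(-3.0, 3.0, steps=7):
    edited = z + alpha * direction
    img = generator(edited).view(1, 3, 32, 32)
    frames.append(img)
```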

Read more

Seaborn Distribution/Histogram Plot – Tutorial and Examples

Introduction Seaborn is one of the most widely used data visualization libraries in Python, as an extension to Matplotlib. It offers a simple, intuitive, yet highly customizable API for data visualization. In this tutorial, we’ll take a look at how to plot a histogram in Seaborn. We’ll cover how to plot a histogram with Seaborn, how to change histogram bin sizes, as well as how to plot Kernel Density Estimation plots on top of histograms and show distribution data instead of […]
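As a taste of what the tutorial covers, here is a short example assuming Seaborn 0.11+ (where histplot is available) and the built-in tips dataset: a basic histogram, a custom bin count, and a KDE curve overlaid on the histogram.

```python
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")            # built-in example dataset

# Basic histogram of the total bill
sns.histplot(data=tips, x="total_bill")
plt.show()

# Control bin size via the number of bins (or pass binwidth=... instead)
sns.histplot(data=tips, x="total_bill", bins=30)
plt.show()

# Overlay a Kernel Density Estimate on the histogram
sns.histplot(data=tips, x="total_bill", kde=True)
plt.show()
```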

Read more

Matplotlib Bar Plot – Tutorial and Examples

Introduction Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it’s the go-to library for most. In this tutorial, we’ll take a look at how to plot a bar plot in Matplotlib. Bar graphs display numerical quantities on one axis and categorical variables on the other, letting you see how many occurrences there are for the different categories. Bar charts can be used for visualizing a time series, as well as […]
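For a quick preview, the snippet below draws a simple categorical bar plot; the category names and counts are made up purely for illustration.

```python
import matplotlib.pyplot as plt

categories = ["Mon", "Tue", "Wed", "Thu", "Fri"]
counts = [12, 17, 9, 22, 14]               # hypothetical occurrences per category

fig, ax = plt.subplots()
ax.bar(categories, counts, color="steelblue")
ax.set_xlabel("Day")
ax.set_ylabel("Occurrences")
ax.set_title("Occurrences per day")
plt.show()
```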

Read more

The human side of AI for chess

As artificial intelligence continues its rapid progress, equaling or surpassing human performance on benchmarks in an increasing range of tasks, researchers in the field are directing more effort to the interaction between humans and AI in domains where both are active. Chess stands as a model system for studying how people can collaborate with AI, or learn from AI, just as chess has served as a leading indicator of many central questions in AI throughout the field’s history. AI-powered chess […]

Read more

Project InnerEye evaluation shows how AI can augment and accelerate clinicians’ ability to perform radiotherapy planning 13 times faster

Up to half of the population in the United States and United Kingdom will be diagnosed with cancer at some point in their lives. Of those, half will be treated with radiotherapy (RT), often in combination with other treatments such as surgery, chemotherapy, and increasingly immunotherapy. Radiotherapy involves focusing high-intensity radiation beams to damage the DNA of deep-seated cancerous tumors while avoiding surrounding healthy organs (known as organs at risk or OARs). Around 40% of successfully treated cancer patients undergo […]

Read more

Machine Translation Weekly 60: Notes about WMT 2020 Shared Tasks

This week, I will follow up on last week’s post and comment on the news from this year’s WMT, which was co-located with EMNLP. As in every year, there were many shared tasks on various types of translation and on the evaluation of machine translation. News translation task: The news translation task is the oldest task at WMT and serves as a flagship task, providing long-term benchmarks for MT research. Test sets are created by manually translating recent news stories […]

Read more

Physics-informed neural networks for myocardial perfusion MRI quantification

Tracer-kinetic models allow for the quantification of kinetic parameters such as blood flow from dynamic contrast-enhanced magnetic resonance (MR) images. Fitting the observed data with multi-compartment exchange models is desirable, as they are physiologically plausible and resolve directly for blood flow and microvascular function… However, the reliability of model fitting is limited by the low signal-to-noise ratio, temporal resolution, and acquisition length. This may result in inaccurate parameter estimates. This study introduces physics-informed neural networks (PINNs) as a means to […]
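To illustrate the general PINN recipe referenced here (not the study's actual kinetic model, data, or training setup), the sketch below fits a network to toy concentration measurements while penalizing the residual of a simple Tofts-style ODE, dC/dt = Ktrans·c_a(t) − kep·C(t), with the kinetic parameters learned jointly; the arterial input function and data are placeholders.

```python
# Hedged PINN sketch: data-fit loss + physics (ODE residual) loss.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
log_ktrans = torch.zeros(1, requires_grad=True)     # learned kinetic parameters (log-space)
log_kep = torch.zeros(1, requires_grad=True)

t_data = torch.linspace(0, 1, 50).unsqueeze(1)      # acquisition times (toy)
c_data = torch.rand(50, 1)                          # measured concentrations (placeholder)
aif = lambda t: torch.exp(-3.0 * t) * t             # assumed arterial input function

opt = torch.optim.Adam(list(net.parameters()) + [log_ktrans, log_kep], lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    # Data term: match measured concentrations
    data_loss = nn.functional.mse_loss(net(t_data), c_data)
    # Physics term: ODE residual at collocation points, dC/dt via autograd
    t_col = torch.rand(200, 1, requires_grad=True)
    c = net(t_col)
    dc_dt = torch.autograd.grad(c, t_col, torch.ones_like(c), create_graph=True)[0]
    residual = dc_dt - (log_ktrans.exp() * aif(t_col) - log_kep.exp() * c)
    phys_loss = residual.pow(2).mean()
    (data_loss + phys_loss).backward()
    opt.step()
```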

Read more

Neural Representations for Modeling Variation in English Speech

Variation in speech is often represented and investigated using phonetic transcriptions, but transcribing speech is time-consuming and error-prone. To create reliable representations of speech that are independent of phonetic transcriptions, we investigate the extraction of acoustic embeddings from several self-supervised neural models… We use these representations to compute word-based pronunciation differences between non-native and native speakers of English, and evaluate these differences by comparing them with human native-likeness judgments. We show that Transformer-based speech representations lead to significant performance gains over […]
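A rough sketch of how such word-level pronunciation differences could be computed from self-supervised embeddings is shown below; the model choice (wav2vec 2.0 via torchaudio), mean pooling, and cosine distance are assumptions for illustration, not the paper's exact pipeline (which is evaluated against human native-likeness judgments).

```python
# Illustrative word-level pronunciation difference from pooled speech embeddings.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model().eval()

def word_embedding(waveform):                   # waveform: (1, samples) at 16 kHz
    with torch.no_grad():
        features, _ = model.extract_features(waveform)
    return features[-1].mean(dim=1).squeeze(0)  # mean-pool frames of the last layer

# Placeholder audio standing in for two recordings of the same word
native = torch.randn(1, 16000)
non_native = torch.randn(1, 16000)

diff = 1.0 - torch.nn.functional.cosine_similarity(
    word_embedding(native), word_embedding(non_native), dim=0)
print(f"pronunciation difference (cosine distance): {diff.item():.3f}")
```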

Read more