Deep Networks from the Principle of Rate Reduction

This repository is the official NumPy implementation of the paper Deep Networks from the Principle of Rate Reduction (2021) by Kwan Ho Ryan Chan* (UC Berkeley), Yaodong Yu* (UC Berkeley), Chong You* (UC Berkeley), Haozhi Qi (UC Berkeley), John Wright (Columbia), and Yi Ma (UC Berkeley). For the PyTorch version of ReduNet, please visit https://github.com/ryanchankh/redunet. What is ReduNet? ReduNet is a deep neural network constructed naturally by deriving the gradients of the Maximal […]
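For a taste of the objective behind the construction, here is a minimal NumPy sketch of the coding-rate function and the rate-reduction gap from the MCR² line of work that ReduNet builds on; the ε value and the toy data are purely illustrative, and the actual ReduNet layers are derived in the repository itself:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 logdet(I + d/(n*eps^2) Z Z^T), with Z of shape (d, n)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R: rate of the whole set minus the size-weighted rates of each class."""
    _, n = Z.shape
    total = coding_rate(Z, eps)
    compressed = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        compressed += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return total - compressed

# Toy check: two well-separated classes yield a large rate-reduction gap.
rng = np.random.default_rng(0)
Z = np.concatenate([rng.normal(0, 1, (8, 50)), rng.normal(5, 1, (8, 50))], axis=1)
y = np.array([0] * 50 + [1] * 50)
print(rate_reduction(Z, y))
```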

Read more

A GAN implemented with the Perceptual Simplicity and Spatial Constriction constraints

PS-SC GAN This repository contains the main code for training a PS-SC GAN (a GAN implemented with the Perceptual Simplicity and Spatial Constriction constraints) introduced in the paper Where and What? Examining Interpretable Disentangled Representations. The code for computing the TPL for model checkpoints from disentanglement_lib can be found in this repository. Abstract Capturing interpretable variations has long been one of the goals in disentanglement learning. However, unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement in the unsupervised setting. […]

Read more

Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition

ABINet Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition The official code of ABINet (CVPR 2021, Oral). ABINet uses a vision model and an explicit language model, trained end-to-end, to recognize text in the wild. The language model (BCN) achieves bidirectional language representation by simulating a cloze test, and additionally uses an iterative correction strategy. Runtime Environment We provide a pre-built docker image using the Dockerfile from docker/Dockerfile Running in Docker $ git clone git@github.com:FangShancheng/ABINet.git $ […]
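As a rough illustration of the iterative correction strategy described above, here is a hedged Python sketch; `vision_model`, `language_model`, and `fuse` are dummy stand-ins for the actual components in the repo, not the ABINet API:

```python
import numpy as np

# Hypothetical stand-ins; the real modules live in the ABINet repository.
def vision_model(image):
    # Per-position character logits from the vision branch (dummy values;
    # 10 positions and a 37-way charset are assumed for illustration).
    return np.random.randn(10, 37)

def language_model(logits):
    # BCN-style refinement: treat the current prediction as a noisy cloze
    # input and re-predict every position from bidirectional context (dummy).
    return logits + 0.1 * np.random.randn(*logits.shape)

def fuse(vision_logits, language_logits):
    # Simple average; ABINet uses a learned gated fusion.
    return 0.5 * (vision_logits + language_logits)

def recognize(image, num_iters=3):
    logits = vision_model(image)
    for _ in range(num_iters):  # iterative correction loop
        logits = fuse(logits, language_model(logits))
    return logits.argmax(axis=-1)

print(recognize(None))
```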

Read more

Prioritized Architecture Sampling with Monto-Carlo Tree Search

NAS-Bench-Macro This repository includes the benchmark and code for NAS-Bench-Macro in the paper “Prioritized Architecture Sampling with Monto-Carlo Tree Search”, CVPR 2021. NAS-Bench-Macro is a NAS benchmark on a macro search space. It consists of 6561 networks together with their test accuracies, parameter counts, and FLOPs on the CIFAR-10 dataset. Each architecture in NAS-Bench-Macro is trained from scratch in isolation. Benchmark All the evaluated architectures are stored in the file nas-bench-macro_cifar10.json with the following format: { arch1: { test_acc: [float, float, float], // the test accuracies of […]
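Assuming the JSON layout shown above, a small Python sketch for querying the benchmark might look like this; only the test_acc field is used, since the remaining fields are truncated in the excerpt:

```python
import json

# Load the benchmark file named in the README above.
with open("nas-bench-macro_cifar10.json") as f:
    bench = json.load(f)

# Per the format above, "test_acc" holds three test accuracies
# from independent training runs; average them per architecture.
def mean_test_acc(arch):
    accs = bench[arch]["test_acc"]
    return sum(accs) / len(accs)

# Rank all 6561 architectures by mean test accuracy.
best = max(bench, key=mean_test_acc)
print(best, mean_test_acc(best))
```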

Read more

Polygonal Building Segmentation by Frame Field Learning

Polygonization-by-Frame-Field-Learning This repository contains the code for our fast polygonal building extraction from overhead images pipeline. We add a frame field output to an image segmentation neural network to improve segmentation quality and provide structural information for the subsequent polygonization step. Figure 1: Close-up of our additional frame field output on a test image. Figure 2: Given an overhead image, the model outputs an edge mask, an interior mask, and a frame field for buildings. The total loss includes terms that […]
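As a loose sketch of how a total loss could combine the three outputs named above, here is an illustrative PyTorch snippet; the key names and the alignment term are assumptions for illustration, not the repository's actual loss implementation:

```python
import torch
import torch.nn.functional as F

def total_loss(pred, gt):
    """Illustrative only: `pred` holds probability masks 'interior'/'edge'
    of shape (N,1,H,W) and a 2-channel 'frame_field' of shape (N,2,H,W);
    `gt` holds the same masks plus a unit 'tangent' field along building
    contours. These names are assumptions, not the repo's interface."""
    l_int = F.binary_cross_entropy(pred["interior"], gt["interior"])
    l_edge = F.binary_cross_entropy(pred["edge"], gt["edge"])
    # Alignment term: push the predicted field to be parallel (up to sign)
    # to the ground-truth contour tangent wherever an edge pixel exists.
    ff = F.normalize(pred["frame_field"], dim=1)
    cos = (ff * gt["tangent"]).sum(dim=1)                  # (N, H, W)
    l_align = ((1.0 - cos.abs()) * gt["edge"].squeeze(1)).mean()
    return l_int + l_edge + l_align
```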

Read more

An unofficial PyTorch implementation of EventProp

This is an unofficial PyTorch implementation of EventProp, a method to compute exact gradients for Spiking Neural Networks. The repo currently contains code to train a 1-layer Spiking Neural Network with leaky integrate-and-fire (LIF) neurons for 10-way digit classification on MNIST. Implementation Details The implementation of EventProp itself is in models.py, in the form of the forward and backward methods of the SpikingLinear module, which compute the forward passes of a spiking layer and its adjoint layer. In particular, the manual_forward […]
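For orientation, here is a minimal clock-driven LIF forward pass in NumPy. Note this only illustrates the LIF dynamics themselves, not the repo's event-driven EventProp implementation, which integrates between spike events and computes gradients through the adjoint layer mentioned above:

```python
import numpy as np

def lif_forward(spikes_in, W, tau_mem=20.0, tau_syn=5.0, v_th=1.0, dt=1.0):
    """Toy discrete-time LIF layer (illustrative, not the repo's code).
    spikes_in: (T, n_in) binary spike trains; W: (n_in, n_out) weights."""
    T, _ = spikes_in.shape
    n_out = W.shape[1]
    v = np.zeros(n_out)        # membrane potentials
    i_syn = np.zeros(n_out)    # synaptic currents
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        i_syn += -dt / tau_syn * i_syn + spikes_in[t] @ W  # current decay + input
        v += dt / tau_mem * (i_syn - v)                    # leaky integration
        fired = v >= v_th
        spikes_out[t] = fired
        v[fired] = 0.0                                     # reset after a spike
    return spikes_out

# Quick demo with random Poisson-like input spikes and random weights.
rng = np.random.default_rng(1)
out = lif_forward((rng.random((100, 5)) < 0.1).astype(float),
                  rng.normal(size=(5, 3)))
print(out.sum(axis=0))  # output spike counts per neuron
```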

Read more

Leaderboard and Visualization for RLCard with python

This is the GUI support for the RLCard and DouZero projects. RLCard-Showdown provides evaluation and visualization tools to help understand the performance of the agents. It includes a replay module, where you can analyze replays, and a PvE module, where you can play against the AI interactively. Currently, we only support Leduc Hold’em and Dou Dizhu. The frontend is developed with React. The backend is based on Django and Flask. Have fun! Cite this work Zha, Daochen, et […]

Read more

Machine Translation Weekly 84: Order Agnostic Cross-Entropy

I tend to be a little biased against autoregressive models. The way they operate (say exactly one subword, think for a while, then say exactly one subword again) just does not sound natural to me. Moreover, with current models, a subword can be anything from a single character to a word as long as “Ausgußreiniger”. Non-autoregressive models generate everything in a single step. That does not seem really natural either, but at least they offer an interesting alternative. […]

Read more

Simplify Complex Numbers With Python

Most general-purpose programming languages have either no support or limited support for complex numbers. Your typical options are learning a specialized tool like MATLAB or finding a third-party library. Python is a rare exception because it comes with complex numbers built in. Despite the name, complex numbers aren’t complicated! They’re convenient for tackling practical problems that you’ll get a taste of in this tutorial. You’ll explore vector graphics and sound frequency analysis, but complex numbers can also help in drawing […]
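A quick taste of the built-in support:

```python
# Complex numbers are a built-in type: `j` marks the imaginary part.
z = 3 + 4j
print(z.real, z.imag)        # 3.0 4.0
print(abs(z))                # 5.0 -- the magnitude
print(z * z.conjugate())     # (25+0j)

# The cmath module provides math-module functions for complex arguments.
import cmath
print(cmath.phase(z))            # angle in radians
print(cmath.exp(1j * cmath.pi))  # ~(-1+0j), Euler's identity
```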

Read more

All You Need to know about BERT

This article was published as a part of the Data Science Blogathon. Introduction Machines understand language through language representations. These language representations are in the form of vectors of real numbers. Proper language representation is necessary for a better understanding of the language by the machine. Language representations are of two types: (i) context-free language representations such as GloVe and Word2vec, where the embedding for each token in the vocabulary is constant and doesn’t depend on the context of the word. […]
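A tiny sketch of what "context-free" means in practice: the token "bank" retrieves the same vector no matter the sentence it appears in (toy random embeddings stand in for trained GloVe/Word2vec vectors here):

```python
import numpy as np

# Toy context-free embedding table: one fixed vector per vocabulary token,
# regardless of the sentence it appears in.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=4) for w in ["river", "bank", "money"]}

def embed(sentence):
    return np.stack([embeddings[w] for w in sentence.split()])

# "bank" gets the identical vector in both sentences: context is ignored,
# which is exactly what contextual models like BERT improve on.
a = embed("river bank")[1]
b = embed("money bank")[1]
print(np.allclose(a, b))  # True
```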

Read more