TSCrunch: An optimal, byte-aligned, LZ+RLE hybrid encoder, designed to maximize decoding speed on NMOS 6502 and derived CPUs

TSCrunch is an optimal, byte-aligned, LZ+RLE hybrid encoder designed to maximize decoding speed on NMOS 6502 and derived CPUs, while achieving a decent compression ratio (for a byte cruncher, that is). TSCrunch was designed as the default asset cruncher for the upcoming game A Pig Quest and, as such, is optimized for in-memory level compression, but as of version 1.0 it can also create Commodore 64 […]
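The excerpt doesn't show TSCrunch's actual bitstream, but the general shape of a byte-aligned LZ+RLE decoder is easy to sketch. Below is a minimal Python sketch using a hypothetical token layout (a 2-bit tag plus a 6-bit length in each control byte, and a 1-byte back-reference offset); TSCrunch's real format and its 6502 decoder differ.

```python
# Minimal sketch of a byte-aligned LZ+RLE decode loop.
# Hypothetical token format for illustration only, not TSCrunch's bitstream.

def decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        ctrl = data[i]; i += 1
        if ctrl == 0x00:                       # end-of-stream marker
            break
        kind, count = ctrl >> 6, (ctrl & 0x3F) + 1
        if kind == 0:                          # literal run: copy bytes verbatim
            out += data[i:i + count]; i += count
        elif kind == 1:                        # RLE: repeat the next byte `count` times
            out += data[i:i + 1] * count; i += 1
        else:                                  # LZ match: copy from earlier output
            offset = data[i]; i += 1           # 1-byte back-reference distance
            start = len(out) - offset
            for _ in range(count):             # byte-by-byte so overlapping copies work
                out.append(out[start]); start += 1
    return bytes(out)

# decode(bytes([0x43, 65, 0x00])) -> b'AAAA' (one RLE token repeating 'A' four times)
```

Byte alignment is what buys the decoding speed: every token boundary falls on a byte, so a 6502 decoder never needs bit-level shifting, only indexed loads and stores.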

Read more

ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement

Recently, unconditional image synthesis has advanced significantly through the use of Generative Adversarial Networks (GANs). The task of inverting an image into the latent space of a trained GAN is of utmost importance, as it allows for the manipulation of real images while leveraging the rich semantics learned by the network. Recognizing the limitations of current inversion approaches, in this work we present a novel inversion scheme that extends current encoder-based inversion methods by introducing an iterative […]
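As a rough illustration of the residual-based iterative scheme the title and abstract describe, here is a minimal sketch in which an encoder predicts a latent residual from the input image and the current reconstruction, and the latent is refined over a few steps (w_{t+1} = w_t + E(x, G(w_t))). The names `encoder`, `generator`, `w_avg`, and `n_iters` are placeholders for illustration, not the paper's actual API.

```python
import numpy as np

def restyle_invert(x, encoder, generator, w_avg, n_iters=5):
    """Iterative refinement sketch: start from an average latent and
    repeatedly add the encoder's predicted residual.

    encoder:   maps a stacked (input image, current reconstruction) pair
               to a latent residual
    generator: maps a latent code to an image
    w_avg:     initial latent (e.g., the generator's mean latent)
    """
    w = w_avg.copy()                 # initial latent code
    y = generator(w)                 # initial reconstruction
    for _ in range(n_iters):
        # Residual predicted from the input and the current reconstruction,
        # concatenated along the channel axis (channel-first layout assumed).
        delta = encoder(np.concatenate([x, y], axis=0))
        w = w + delta                # w_{t+1} = w_t + E(x, G(w_t))
        y = generator(w)             # re-synthesize with the refined latent
    return w, y
```

The key design choice is that each refinement step is a single encoder forward pass, so a handful of iterations stays far cheaper than per-image latent optimization.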

Read more

AliceMind: ALIbaba’s Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab

This repository provides pre-trained encoder-decoder models and the related optimization techniques developed by Alibaba’s MinD (Machine IntelligeNce of Damo) Lab. The family of AliceMind:

- Language understanding model: StructBERT (ICLR 2020)
- Generative language model: PALM (EMNLP 2020)
- Cross-lingual language model: VECO (ACL 2021)
- Cross-modal language model: StructVBERT (CVPR 2020 VQA Challenge Runner-up)
- Structural language model: StructuralLM (ACL 2021)
- Chinese language understanding model with multi-granularity inputs: LatticeBERT (NAACL 2021)
- Pre-training […]

Read more