Hugging Face’s TensorFlow Philosophy
Introduction
Despite increasing competition from PyTorch and JAX, TensorFlow remains the most-used deep learning framework. It also differs from those other two libraries in some very important ways. In particular, it's quite tightly integrated with its high-level API Keras, […]
Introducing Skops
At Hugging Face, we are working on tackling various problems in open-source machine learning, including hosting […]
A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes
Introduction
Language models are becoming larger all the time. At the time of […]
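The core idea behind 8-bit matrix multiplication can be illustrated with a toy absmax quantization scheme. The following is a hedged sketch in plain NumPy, not the bitsandbytes implementation (which additionally handles outlier features with a mixed-precision decomposition): each input is scaled so its largest-magnitude entry maps to 127, multiplied in integer arithmetic, and the result is dequantized.

```python
import numpy as np

def absmax_quantize(x):
    """Quantize a float matrix to int8 with absmax scaling:
    the largest-magnitude entry maps to 127."""
    scale = 127.0 / np.max(np.abs(x))
    q = np.round(x * scale).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Toy 8-bit matmul: quantize both inputs, multiply in integer
    arithmetic (accumulating in int32 to avoid overflow), then
    divide out both scales to recover approximate float results."""
    qa, sa = absmax_quantize(a)
    qb, sb = absmax_quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc / (sa * sb)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
b = rng.normal(size=(8, 3))
approx = int8_matmul(a, b)
exact = a @ b
print(np.max(np.abs(approx - exact)))  # small quantization error
```

The accumulation dtype matters: int8 × int8 products can reach ±16,129, so casting to int32 before the matmul is what keeps the toy scheme correct.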
Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
This blog post will show how easy it is to fine-tune pre-trained Transformer models for your dataset using the Hugging Face Optimum library on Graphcore Intelligence Processing Units (IPUs). As an example, we will show a step-by-step guide and provide a notebook that takes a large, widely-used chest X-ray dataset and trains a vision transformer (ViT) model.
Introducing vision transformer (ViT)
Deploying 🤗 ViT on Vertex AI
In the previous posts, we showed how to deploy a Vision Transformer (ViT) model from 🤗 Transformers locally and on a Kubernetes cluster. This post will show you […]
Pre-Training BERT with Hugging Face Transformers and Habana Gaudi
In this tutorial, you will learn how to pre-train BERT-base from scratch using a Habana Gaudi-based DL1 instance on AWS to take advantage of the cost-performance benefits of Gaudi. We will use the Hugging Face Transformers, Optimum Habana, and Datasets libraries to pre-train a BERT-base model using masked-language modeling, one of the two original BERT pre-training tasks. […]
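The masked-language-modeling objective mentioned above can be sketched in a few lines of plain Python. This is a hedged illustration of the BERT-style 80/10/10 masking rule, not the tutorial's actual data pipeline (which uses `DataCollatorForLanguageModeling`); the token list and vocabulary here are hypothetical, not the real WordPiece vocabulary.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style masking sketch: select ~15% of positions; of those,
    80% become [MASK], 10% become a random vocabulary token, and 10%
    keep the original token. Labels record the original token at
    selected positions (the prediction targets); None elsewhere."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must recover this token
            r = rng.random()
            if r < 0.8:
                masked[i] = "[MASK]"
            elif r < 0.9:
                masked[i] = rng.choice(vocab)
            # else: leave the original token in place
    return masked, labels

vocab = ["the", "cat", "sat", "on", "mat", "dog"]
tokens = "the cat sat on the mat".split()
masked, labels = mask_tokens(tokens, vocab, seed=3)
```

Keeping 10% of selected tokens unchanged forces the model to produce useful representations for every input position, since it cannot assume an unmasked token is always correct.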
Stable Diffusion with 🧨 Diffusers
Stable Diffusion 🎨 …using 🧨 Diffusers
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It is trained on 512×512 images from a subset of the LAION-5B database. LAION-5B is the largest, freely accessible multi-modal dataset that currently exists. In this post, we want to show how to use Stable Diffusion with the 🧨 Diffusers library, explain how the model works and finally dive a bit deeper into how […]
OpenRAIL: Towards open and responsible AI licensing frameworks
Open & Responsible AI licenses (“OpenRAIL”) are AI-specific licenses enabling open access, use, and distribution of AI artifacts while requiring responsible use of those artifacts. OpenRAIL licenses could be for open and responsible ML what current open software licenses are to code and Creative Commons licenses are to general content: a widespread community licensing tool. Advances in machine learning and other […]