Introducing the Private Hub: A New Way to Build With Machine Learning
June 2023 Update: The Private Hub is now called Enterprise Hub. The Enterprise Hub is a hosted solution that combines the best of cloud-managed services (SaaS) and enterprise security. It lets customers deploy specific services like Inference Endpoints on a wide scope of compute options, from on-cloud to on-prem. It offers advanced user administration and access controls through
Read more

Proximal Policy Optimization (PPO)
⚠️ An updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here.
Read more

Train and Fine-Tune Sentence Transformers Models
This guide only applies to Sentence Transformers versions before v3.0. Read Training and Finetuning Embedding Models with Sentence Transformers v3 for an updated guide. Check out this tutorial with the Notebook Companion: Training or fine-tuning a Sentence Transformers model depends heavily on the available data and the target task.
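The pair-based losses in Sentence Transformers (for example, `CosineSimilarityLoss`) push the cosine similarity of two sentence embeddings toward a gold similarity score. As a minimal, library-free sketch of the metric those losses optimize (the helper name is illustrative, not the library's API):

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = dot(u, v) / (||u|| * ||v||)
    # Identical directions score 1.0; orthogonal vectors score 0.0.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

During fine-tuning, the loss compares this score against a labeled similarity for each sentence pair and backpropagates the difference through the encoder.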
Read more

Hugging Face’s TensorFlow Philosophy
Introduction

Despite increasing competition from PyTorch and JAX, TensorFlow remains the most-used deep learning framework. It also differs from those other two libraries in some very important ways. In particular, it’s quite tightly integrated with its high-level API, Keras,
Read more

Introducing Skops
At Hugging Face, we are working on tackling various problems in open-source machine learning, including hosting
Read more

A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes
Introduction

Language models are becoming larger all the time. At the time of
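A building block of 8-bit quantization schemes like the one in bitsandbytes is absmax quantization: scale each vector of weights so its largest magnitude maps to the int8 limit of 127, then round. A minimal pure-Python sketch of that step (function names are illustrative, not the bitsandbytes API):

```python
def absmax_quantize(values):
    # Scale so the largest absolute value maps to 127 (int8 max),
    # then round every entry to the nearest integer.
    scale = 127.0 / max(abs(v) for v in values)
    quantized = [round(v * scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float values from the int8 codes.
    return [q / scale for q in quantized]

weights = [0.5, -1.2, 3.4, -0.01]
q, s = absmax_quantize(weights)
approx = dequantize(q, s)  # close to the original weights, within rounding error
```

The rounding error is bounded by half a quantization step, which is why 8-bit inference can stay close to full-precision quality when the scaling is done per vector rather than per tensor.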
Read more

Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
This blog post will show how easy it is to fine-tune pre-trained Transformer models for your dataset using the Hugging Face Optimum library on Graphcore Intelligence Processing Units (IPUs). As an example, we will show a step-by-step guide and provide a notebook that takes a large, widely-used chest X-ray dataset and trains a vision transformer (ViT) model.

Introducing vision transformer (ViT)
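A ViT consumes an image as a sequence of fixed-size, non-overlapping patches, each of which becomes one token for the Transformer; the standard 224×224 input with 16×16 patches yields 196 tokens. A quick sketch of that arithmetic (the helper name is illustrative):

```python
def num_patches(image_size: int, patch_size: int) -> int:
    # A ViT splits a square image into non-overlapping square patches;
    # each patch is linearly embedded into one token of the sequence.
    per_side = image_size // patch_size
    return per_side * per_side

print(num_patches(224, 16))  # 196 tokens for the standard ViT input size
```

Because sequence length grows quadratically with image resolution, patch size is the main knob trading compute cost against spatial detail.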
Read more

Deploying 🤗 ViT on Vertex AI
In the previous posts, we showed how to deploy a Vision Transformer (ViT) model from 🤗 Transformers locally and on a Kubernetes cluster. This post will show you
Read more