Welcome Gemma 3: Google’s all-new multimodal, multilingual, long-context open LLM

Today Google releases Gemma 3, a new iteration of their Gemma family of models. The models range from 1B to 27B parameters, have a context window of up to 128k tokens, can accept images and text, and support 140+ languages. Try out Gemma 3 now 👉🏻 Gemma 3 Space

|  | Gemma 2 | Gemma 3 |
| --- | --- | --- |
| Size Variants | 2B, 9B, 27B | 1B, 4B, 12B, 27B |
| Context Window Length | 8k | 32k (1B), 128k (4B, 12B, 27B) |
| Multimodality (Images and Text) | ❌ | ❌ (1B), ✅ (4B, 12B, 27B) |

Read more

Xet is on the Hub

Want to skip the details and get straight to faster uploads and downloads with bigger files than ever before? Click here to read about joining the Xet waitlist (or head over to join immediately). Over the past few weeks, Hugging Face’s Xet Team took a major step forward by migrating the first Model and Dataset repositories off LFS and onto Xet storage. This marks one of many steps toward fulfilling Hugging Face’s vision for the Hub by empowering AI builders […]

Read more

AI Policy @🤗: Response to the White House AI Action Plan RFI

On March 14, we submitted Hugging Face’s response to the White House Office of Science and Technology Policy’s request for information on the White House AI Action Plan. We took this opportunity to (re-)assert the fundamental role that open AI systems and open science play in enabling the technology to be more performant and efficient, broadly and reliably adopted, and able to meet the highest standards of security. This blog post provides a summary of our response; the full text is available […]

Read more

Open R1: How to use OlympicCoder locally for coding

Everyone’s been using Claude and OpenAI as coding assistants for the last few years, but they hold less appeal once you look at the developments coming out of open-source projects like Open R1. If we look at the evaluation on LiveCodeBench below, we can see that the 7B parameter variant outperforms Claude 3.7 Sonnet and GPT-4o, models that are the daily drivers of many engineers in applications like Cursor and VSCode. Evals are great and all, but I want to […]

Read more

Analytics is important

Analytics and metrics are the cornerstone of understanding what’s happening with your deployment. Are your Inference Endpoints overloaded? How many requests are they handling? Having well-visualized, relevant metrics displayed in real-time is crucial for monitoring and debugging. We realized that our analytics dashboard needed a refresh. Since we debug a lot of endpoints ourselves, we’ve felt the same pain as our users. That’s why we sat down to plan and make several improvements to provide a better experience for you. […]

Read more

Training and Finetuning Reranker Models with Sentence Transformers v4

Sentence Transformers is a Python library for using and training embedding and reranker models for a wide range of applications, such as retrieval augmented generation, semantic search, semantic textual similarity, paraphrase mining, and more. Its v4.0 update introduces a new training approach for rerankers, also known as cross-encoder models, similar to what the v3.0 update introduced for embedding models. In […]
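To give a sense of what a reranker does at inference time: a cross-encoder scores each (query, passage) pair jointly and sorts passages by that score. Here is a minimal sketch of that reranking step, using a toy lexical-overlap scorer as a stand-in for a trained cross-encoder model (the function names and scorer below are illustrative assumptions, not the library’s API):

```python
# Sketch of the reranking step a cross-encoder performs: score each
# (query, passage) pair jointly, then sort passages by score.
# overlap_score is a toy stand-in for a trained model's relevance score.

def overlap_score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query terms present in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms) if q_terms else 0.0

def rerank(query: str, passages: list[str]) -> list[tuple[str, float]]:
    """Return passages sorted by descending relevance to the query."""
    scored = [(p, overlap_score(query, p)) for p in passages]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

passages = [
    "Sentence Transformers trains embedding models.",
    "Rerankers score query-passage pairs for semantic search.",
    "Paraphrase mining finds duplicate sentences.",
]
ranked = rerank("rerankers for semantic search", passages)
print(ranked[0][0])  # most relevant passage comes first
```

In a real pipeline, a trained cross-encoder replaces the toy scorer, and reranking is typically applied to the top-k candidates returned by a faster embedding-based retriever.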

Read more

Open R1: Update #4

Welcome DeepSeek-V3 0324

This week, a new model from DeepSeek silently landed on the Hub. It’s an updated version of DeepSeek-V3, the base model underlying the R1 reasoning model. There isn’t much information shared yet on this new model, but we do know a few things! What we know so far […]

Read more

🚀 Accelerating LLM Inference with TGI on Intel Gaudi

We’re excited to announce the native integration of Intel Gaudi hardware support directly into Text Generation Inference (TGI), our production-ready serving solution for Large Language Models (LLMs). This integration brings the power of Intel’s specialized AI accelerators to our high-performance inference stack, enabling more deployment options for the open-source AI community 🎉 ✨ What’s New? We’ve fully integrated Gaudi support into TGI’s main codebase in PR #3091. Previously, we maintained a separate fork for Gaudi devices […]
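For a rough sense of what deployment looks like, a TGI launch on a Gaudi machine might resemble the following sketch. The image tag, model id, and device flags here are illustrative assumptions, not a verified invocation; check the TGI documentation for the exact command:

```shell
# Hypothetical TGI launch on Gaudi; tag and flags are assumptions.
model=meta-llama/Llama-3.1-8B-Instruct   # example model id
volume=$PWD/data                         # cache model weights between runs

docker run --runtime=habana \
  -e HABANA_VISIBLE_DEVICES=all \
  -v "$volume:/data" -p 8080:80 \
  ghcr.io/huggingface/text-generation-inference:latest-gaudi \
  --model-id "$model"
```

Once the server is up, the usual TGI HTTP endpoints are served on port 8080, so existing clients should work unchanged.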

Read more

How Hugging Face Scaled Secrets Management for AI Infrastructure

Hugging Face has become synonymous with advancing AI at scale. With over 4 million builders deploying models on the Hub, the platform’s rapid growth necessitated a rethinking of how sensitive configuration data, or secrets, is managed. Last year, the engineering teams set out to improve the handling of their secrets and credentials. After evaluating tools like HashiCorp Vault, they ultimately chose […]

Read more