Analytics is important

Analytics and metrics are the cornerstone of understanding what’s happening with your deployment. Are your Inference Endpoints overloaded? How many requests are they handling? Having well-visualized, relevant metrics displayed in real time is crucial for monitoring and debugging. We realized that our analytics dashboard needed a refresh. Since we debug a lot of endpoints ourselves, we’ve felt the same pain as our users. That’s why we sat down to plan and make several improvements to provide a better experience for you. […]

Read more

Training and Finetuning Reranker Models with Sentence Transformers v4

Sentence Transformers is a Python library for using and training embedding and reranker models for a wide range of applications, such as retrieval-augmented generation, semantic search, semantic textual similarity, paraphrase mining, and more. Its v4.0 update introduces a new training approach for rerankers, also known as cross-encoder models, similar to what the v3.0 update introduced for embedding models. […]

Read more

Open R1: Update #4

Welcome DeepSeek-V3 0324. This week, a new model from DeepSeek silently landed on the Hub. It’s an updated version of DeepSeek-V3, the base model underlying the R1 reasoning model. There isn’t much information shared yet about this new model, but we do know a few things! What we know so far […]

Read more

🚀 Accelerating LLM Inference with TGI on Intel Gaudi

We’re excited to announce the native integration of Intel Gaudi hardware support directly into Text Generation Inference (TGI), our production-ready serving solution for Large Language Models (LLMs). This integration brings the power of Intel’s specialized AI accelerators to our high-performance inference stack, enabling more deployment options for the open-source AI community 🎉 ✨ What’s New? We’ve fully integrated Gaudi support into TGI’s main codebase in PR #3091. Previously, we maintained a separate fork for Gaudi devices […]

Read more

How Hugging Face Scaled Secrets Management for AI Infrastructure

Hugging Face has become synonymous with advancing AI at scale. With over 4 million builders deploying models on the Hub, the rapid growth of the platform necessitated a rethinking of how sensitive configuration data (secrets) is managed. Last year, the engineering teams set out to improve the handling of their secrets and credentials. After evaluating tools like HashiCorp Vault, they ultimately chose […]

Read more

The NLP Course is becoming the LLM Course!

Education has always been at the heart of Hugging Face’s mission to democratize AI, and we’re doubling down on that by giving hf.co/learn a big upgrade! Our NLP course has been a go-to resource for the open-source AI community for the past 3 years, and it’s now time for a refresh. We’re updating and expanding it to keep up with all the exciting stuff happening in AI (which is not easy when there are breakthroughs every week!) We felt the […]

Read more

Journey to 1 Million Gradio Users!

Five years ago, we launched Gradio as a simple Python library to let researchers at Stanford easily demo computer vision models with a web interface. Today, Gradio is used by more than 1 million developers each month to build and share AI web apps. This includes some of the most popular open-source projects of all time, like Automatic1111 […]

Read more

Welcome Llama 4 Maverick & Scout on Hugging Face

We are incredibly excited to welcome the next generation of large language models from Meta to the Hugging Face Hub: Llama 4 Maverick (~400B) and Llama 4 Scout (~109B)! 🤗 Both are Mixture of Experts (MoE) models with 17B active parameters. Released today, these powerful, natively multimodal models represent a significant leap forward. We’ve worked closely with Meta to ensure seamless integration into the Hugging Face ecosystem, including both transformers and TGI from day one. This is just the start […]

Read more