How to Build a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac for Healthcare

Simulation has long been a cornerstone of medical imaging, helping to address the data gap. In healthcare robotics, however, it has often been too slow, siloed, or difficult to translate into real-world systems. That’s now changing. With new advances in GPU-accelerated simulation and digital twins, developers can design, test, and validate robotic workflows entirely in virtual environments, reducing prototyping time from months to days […]

Read more

Apriel-H1: The Surprising Key to Distilling Efficient Reasoning Models

We converted our 15B reasoning model to a Mamba hybrid, achieving 2.1x throughput with minimal quality loss. The key? A non-obvious insight about what data to distill on, and why intuition fails here. When MiniMax published their M2 post-mortem in October explaining why they abandoned efficient attention at 230B scale, the narrative briefly became “efficient attention is dead.” Within days, Kimi Linear proved otherwise. The real lesson: it depends on your constraints. Our constraint was simple: we had a strong […]

Read more

Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks

While everyone (and their grandma 👵) is spinning up new ASR models, picking the right one for your use case can feel more overwhelming than choosing your next Netflix show. As of 21 Nov 2025, there are 150 Audio-Text-to-Text and 27K ASR models on the Hub 🤯 Most benchmarks focus on short-form English transcription (<30s) and overlook other important tasks, such as (1) multilingual performance and (2) model throughput, which can be a deciding factor for long-form audio like meetings […]

Read more

20x Faster TRL Fine-tuning with RapidFire AI

Hugging Face TRL now officially integrates with RapidFire AI to accelerate your fine-tuning and post-training experiments. TRL users can now discover, install, and run RapidFire AI as the fastest way to compare multiple fine-tuning/post-training configurations to customize LLMs without major code changes and without bloating GPU requirements. Why this matters: when fine-tuning or post-training LLMs, teams often do not have the time and/or budget to compare multiple configs, even though that can significantly boost eval metrics. RapidFire AI […]

Read more