How we leveraged distilabel to create an Argilla 2.0 Chatbot

Discover how to build a chatbot for a tool of your choice (Argilla 2.0 in this case) that can understand technical documentation and chat with users about it. In this article, we’ll show you how to leverage distilabel and fine-tune a domain-specific embedding model to create a conversational model that’s both accurate and engaging. We will: create a synthetic dataset from the technical documentation to fine-tune a domain-specific […]

Read more

SmolLM – blazingly fast and remarkably powerful

This blog post introduces SmolLM, a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset. It covers data curation, model evaluation, and usage. There is increasing interest in small language models that can operate on local devices. This trend involves techniques such as distillation or quantization to compress large models, as well as training small models from scratch on large datasets. These approaches enable novel applications while dramatically […]

Read more

TGI Multi-LoRA: Deploy Once, Serve 30 Models

Are you tired of the complexity and expense of managing multiple AI models? What if you could deploy once and serve 30 models? In today’s ML world, organizations looking to leverage the value of their data will likely end up in a fine-tuned world, building a multitude of models, each one highly specialized for a specific task. But how can you keep up with the hassle and cost of deploying a model for each use case? The answer is Multi-LoRA […]
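The idea in the teaser can be sketched concretely: Text Generation Inference (TGI) lets you launch a single base model together with a list of LoRA adapters, then select an adapter per request. A minimal sketch, assuming Docker and a GPU; the adapter repo IDs below are hypothetical placeholders, and the `--lora-adapters` launcher flag and per-request `adapter_id` parameter come from TGI’s multi-LoRA support:

```shell
# Serve one base model plus several LoRA adapters from a single deployment.
# Adapter IDs are placeholders -- substitute your own fine-tunes.
docker run --gpus all --shm-size 1g -p 8080:80 \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id mistralai/Mistral-7B-v0.1 \
  --lora-adapters my-org/adapter-summarize,my-org/adapter-sql

# At request time, route to a specific adapter via `adapter_id`:
curl http://localhost:8080/generate \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Translate this to SQL: ...",
       "parameters": {"adapter_id": "my-org/adapter-sql"}}'
```

Because the adapters share the base model’s weights, thirty specialized fine-tunes cost roughly one deployment instead of thirty.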

Read more

WWDC 24: Running Mistral 7B with Core ML

WWDC 24 is the moment Apple officially unveiled Apple Intelligence and reiterated their commitment to efficient, private, and on-device AI. During the keynote and the sessions that followed, they demonstrated Apple Intelligence, which powers a huge array of AI-enhanced features that show practical uses for everyday tasks. These are not *AI-for-the-sake-of-AI* shiny demos. These are time-saving, appropriate (and fun!) helpers that are deeply integrated with apps and the OS, that also offer developers a number of ways to include these […]

Read more

Llama 3.1 – 405B, 70B & 8B with multilinguality and long context

Llama 3.1 is out! Today we welcome the next iteration of the Llama family to Hugging Face. We are excited to collaborate with Meta to ensure the best integration in the Hugging Face ecosystem. Eight open-weight models (3 base models and 5 fine-tuned ones) are available on the Hub. Llama 3.1 comes in three sizes: 8B for efficient deployment and development on consumer-size GPUs, 70B for large-scale AI-native applications, and 405B for synthetic data, LLM as a Judge, or […]

Read more

Google releases Gemma 2 2B, ShieldGemma and Gemma Scope

One month after the release of Gemma 2, Google has expanded their set of Gemma models to include the following new additions: Gemma 2 2B – The 2.6B parameter version of Gemma 2, making it a great candidate for on-device use. ShieldGemma – A series of safety classifiers, trained on top of Gemma 2, for developers to filter inputs and outputs of their applications. Gemma Scope – A comprehensive, open suite of sparse autoencoders for Gemma 2 2B and 9B. […]

Read more