SLM Master Class Workshop:
Practical Model Distillation for Efficient Language Models
Workshop Highlights
Learn how to distill the intelligence of an open-source large language model (Llama 3.1 405B) into smaller, more efficient models: Llama 3.1 8B and Llama 3.2 3B.
Using an open-source dataset, we’ll demonstrate how to leverage the expertise of the 405B model to fine-tune the 3B/8B models for specialized medical tasks.
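To give a flavor of the data-generation step, here is a minimal sketch of querying the 405B teacher for answers to medical questions and saving prompt/response pairs for student fine-tuning. It assumes an OpenAI-compatible inference endpoint; the endpoint URL, model identifier, file names, and field names are illustrative placeholders rather than the workshop's exact materials.

```python
# Sketch: collect "teacher" answers from Llama 3.1 405B on a medical Q&A set,
# producing prompt/completion pairs for fine-tuning the 8B/3B students.
# Assumes an OpenAI-compatible endpoint; URL, model name, and file/field names
# are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="https://your-inference-endpoint/v1", api_key="YOUR_KEY")

with open("medical_questions.jsonl") as fin, open("distillation_data.jsonl", "w") as fout:
    for line in fin:
        question = json.loads(line)["question"]
        response = client.chat.completions.create(
            model="meta-llama/Llama-3.1-405B-Instruct",
            messages=[
                {"role": "system", "content": "You are a careful medical assistant."},
                {"role": "user", "content": question},
            ],
            temperature=0.2,
        )
        answer = response.choices[0].message.content
        # Each record pairs the original question with the teacher's answer.
        fout.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
```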
Discover techniques to maintain quality and accuracy while achieving:
- 2.8x increase in processing speed
- 93% reduction in operational costs
Follow a step-by-step guide to create your own production-ready 3B/8B models, tailored for specific use cases.
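As a preview of what that guide covers, the sketch below fine-tunes the 3B student on the teacher-generated pairs using Hugging Face TRL's SFTTrainer. It is one reasonable way to run supervised fine-tuning, not the workshop's exact recipe; checkpoint names, file paths, and hyperparameters are placeholders, and argument names and dataset formatting expectations can vary across TRL versions.

```python
# Sketch: supervised fine-tuning of the Llama 3.2 3B student on the
# teacher-generated pairs from the previous step. Hyperparameters and
# paths are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Teacher-generated {"prompt": ..., "completion": ...} records.
dataset = load_dataset("json", data_files="distillation_data.jsonl", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",  # swap in the 8B checkpoint for the larger student
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="llama-3.2-3b-medical",
        num_train_epochs=3,
        per_device_train_batch_size=2,
        learning_rate=2e-5,
    ),
)
trainer.train()
trainer.save_model("llama-3.2-3b-medical")
```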
By the end of this workshop, participants will have practical knowledge of SLM fine-tuning and distillation techniques, enabling them to create powerful, efficient, and domain-specific language models for their unique applications.
Don’t miss this opportunity to build these skills hands-on.