SLM Master Class Workshop:

Practical Model Distillation for Efficient Language Models

Date:
December 12th
Time:
1:00 PM ET
Location:
Zoom
Duration:
35 minutes + Q&A

Workshop Highlights

Model Distillation:

Learn how to distill the intelligence of an open-source large language model (Llama 3.1 405B) into smaller, more efficient models: Llama 3.1 8B and Llama 3.2 3B.
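For intuition, one classical formulation of knowledge distillation trains the student to match the teacher's temperature-softened output distribution. The minimal sketch below is illustrative only (the function names are ours, and the workshop's actual pipeline may differ, e.g. fine-tuning on teacher-generated data):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this pushes the student to mimic the teacher's full output
    distribution (its "dark knowledge"), not just the hard top-1 label.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss;
# a diverging student incurs a positive loss.
aligned = distillation_loss([4.0, 1.0, 0.5], [4.0, 1.0, 0.5])
diverged = distillation_loss([4.0, 1.0, 0.5], [0.5, 1.0, 4.0])
```

In practice this per-token loss is computed over a vocabulary of logits and combined with the usual cross-entropy on ground-truth labels; the sketch only shows the core objective.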

Real-world Application:

Using an open-source dataset, we'll demonstrate how to use the 405B model's outputs to fine-tune the 8B and 3B models for specialized medical tasks.

Performance Optimization:

Discover techniques to maintain quality and accuracy while achieving:

  • 2.8x increase in processing speed
  • 93% reduction in operational costs

Hands-on Experience:

Follow a step-by-step guide to create your own production-ready 3B/8B models, tailored for specific use cases.

By the end of this workshop, participants will have practical knowledge of SLM fine-tuning and distillation techniques, enabling them to create powerful, efficient, and domain-specific language models for their unique applications.

Don’t miss this opportunity to learn these techniques hands-on.