Enable AI Workloads with Hugging Face*
Overview
This workshop empowers developers to use Hugging Face* abstracted APIs effectively to build AI applications on Intel® hardware platforms. Whether you are a novice or an experienced AI engineer developing on Intel hardware for the first time, this session provides valuable insight into enabling AI workloads and the optimizations available through Hugging Face libraries such as Transformers and Accelerate.
Get a walk-through of fine-tuning, inference, and more specialized tasks such as working with distributed systems to scale workloads efficiently. Attendees can follow along with hands-on examples and demos, like the inference sketch after the list below, and they can:
- Gain practical experience with using Hugging Face to run AI workloads on Intel hardware.
- Unlock optimizations through Hugging Face APIs.
- Understand the full potential of using Hugging Face on Intel hardware.
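To give a flavor of the kind of hands-on example covered, here is a minimal sketch, assuming the Hugging Face Transformers library is installed; the distilgpt2 checkpoint and the prompt are illustrative placeholders, not workshop materials, and any text-generation checkpoint from the Hub can be used the same way.

```python
# Minimal sketch: run text-generation inference with the Transformers pipeline on a CPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",  # small, CPU-friendly placeholder checkpoint
    device=-1,           # -1 runs the pipeline on the CPU
)

# Generate a short continuation for an illustrative prompt.
result = generator("AI workloads on Intel hardware", max_new_tokens=30)
print(result[0]["generated_text"])
```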
Skill level: novice.
This course delivers:
- Practical skills in fine-tuning and deploying AI workloads using Hugging Face on Intel hardware.
- Strategies for using Hugging Face APIs to optimize AI model performance.
- Insights into distributed systems for efficient scaling of AI workloads, as in the Accelerate sketch after this list.
- An understanding of the synergies between Hugging Face libraries and Intel hardware capabilities.
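As one illustration of the distributed-scaling topic, the following is a minimal sketch, assuming PyTorch and the Hugging Face Accelerate library are installed. The tiny linear model and random data are stand-ins for a real checkpoint and dataset; the same script can be run unchanged on one device or launched across several with the `accelerate launch` command.

```python
# Minimal sketch: a training loop wrapped with Hugging Face Accelerate.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder model and synthetic data standing in for a real model and dataset.
model = torch.nn.Linear(16, 2)
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Accelerate moves everything to the right device(s) and handles distribution.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for features, labels in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(features), labels)
    accelerator.backward(loss)  # replaces loss.backward() for distributed runs
    optimizer.step()
```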