Unlock the Power of LLMs on AI PCs: Efficient Inferencing and Multimodal Chat Using Hugging Face*
Overview
In this hands-on session, participants delve into the cutting-edge concepts of large language model (LLM) inferencing and multimodal chat applications, using the powerful tools provided by Hugging Face*. Designed for AI enthusiasts and professionals, this session guides attendees through the practical implementation of LLMs on AI PCs, focusing on real-world applications and performance optimization.
Learn how to integrate text, image, and audio inputs to create sophisticated multimodal chat systems. The session also covers essential techniques for optimizing AI models, including quantization, to ensure efficient deployment on AI PC hardware. Participants leave with a comprehensive understanding of LLM inferencing and multimodal chat, and with the skills to build and optimize advanced AI applications using Hugging Face on AI PCs.
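As a taste of the quantization technique mentioned above, here is a minimal sketch using PyTorch's dynamic quantization API. The tiny model and layer sizes are illustrative stand-ins, not the session's actual models; full LLMs are quantized with the same idea of converting linear-layer weights to int8 for efficient CPU inference.

```python
import torch
import torch.nn as nn

# A small illustrative model standing in for an LLM's linear layers.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)

# Dynamic quantization converts Linear weights from float32 to int8,
# shrinking the model and speeding up inference on CPU.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # torch.Size([1, 128])
```

The quantized model produces the same output shape as the original while storing its weights in int8, which is the kind of footprint reduction that makes deployment on AI PC hardware practical.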
This session includes:
- Set up the development environment on AI PCs.
- Install and configure PyTorch* and Hugging Face libraries.
- Integrate text, image, and audio inputs for multimodal chat using the Hugging Face Transformers library.
- Load pretrained LLMs from the Hugging Face Model Hub and implement LLM inferencing in a multimodal chat application.
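The last step above, loading a pretrained LLM from the Hugging Face Model Hub and running inference, can be sketched as follows. The checkpoint name is a deliberately tiny demonstration model chosen here for speed; the session's actual models are not specified, so treat it as a placeholder for any causal LM on the Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative tiny checkpoint; swap in any causal LM from the Model Hub.
model_name = "sshleifer/tiny-gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a continuation.
inputs = tokenizer("AI PCs can run LLMs locally because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`generate` returns the prompt tokens followed by the newly generated tokens; decoding the full sequence gives the prompt plus the model's continuation.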
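For the multimodal side, image input can be wired into a chat flow with a Transformers pipeline that converts an image to text, which can then be fed to an LLM as part of the conversation. The captioning checkpoint below is one public model chosen for illustration; it is an assumption, not the session's prescribed model.

```python
from PIL import Image
from transformers import pipeline

# Illustrative public image-captioning model; the session's exact
# models are not specified.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# A solid-color test image stands in for a real user-supplied photo.
image = Image.new("RGB", (224, 224), color=(120, 180, 200))

result = captioner(image)
print(result)  # a list like [{"generated_text": "..."}]
```

In a chat application, the generated caption would be appended to the text prompt so the LLM can reason about what the user's image contains.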