Create Inventive Generative AI with AI PCs from Intel
Overview
The potential of AI PCs has barely been realized, and innovative opportunities abound. With the launch of Intel® Core™ Ultra processors, systems now include integrated NPUs alongside CPUs and GPUs. Using large language models (LLMs) as the basis for building generative AI (GenAI) solutions, this session explains NPU architecture, why LLMs matter, and how to use the Intel® NPU Acceleration Library.
This webinar is designed for developers at all experience levels who want to build AI development skills, learn the latest techniques for creating inventive GenAI solutions with LLMs, and explore the capabilities of NPUs and AI PCs from Intel. Code examples demonstrate how to optimize for the resources of AI PCs from Intel and take advantage of the unique features that make them well suited to AI development.
Topics covered include:
- Understand LLMs, the advantages of local inference, and the challenges involved.
- See how AI workloads are accelerated on Intel Core Ultra processors and how NPUs from Intel benefit these operations.
- Discover techniques for quickly prototyping LLMs on Intel Core Ultra processors with the Intel NPU Acceleration Library.
- Learn how to deploy models to the NPU with the OpenVINO™ toolkit and the NPU plug-in.
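The prototyping workflow described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the webinar's own code: the tiny stand-in network takes the place of a real LLM, and the `intel_npu_acceleration_library.compile` call only offloads to the NPU when the library and an Intel Core Ultra NPU are present, falling back to CPU execution otherwise.

```python
import torch
import torch.nn as nn

# Tiny stand-in model; in practice this would be an LLM (e.g. loaded
# from Hugging Face) rather than a toy two-layer network.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64)).eval()

try:
    # Compile the model for the NPU (requires intel-npu-acceleration-library
    # and an Intel Core Ultra system; illustrative usage).
    import intel_npu_acceleration_library as npu_lib
    model = npu_lib.compile(model, dtype=torch.int8)
except ImportError:
    # Library not installed: keep the plain CPU model so the sketch still runs.
    pass

with torch.no_grad():
    out = model(torch.randn(1, 64))
print(out.shape)
```

The same pattern applies to larger models: compile once, then run inference locally with the usual PyTorch calling convention.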
Skill level: All skill levels
Featured Software
- Download the OpenVINO toolkit and the NPU configurations.
- Download the Intel NPU Acceleration Library.
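As a quick setup sketch, both packages can be installed from PyPI (package names assumed from their public distributions), and OpenVINO can report whether an NPU device is visible on the system:

```shell
# Install the OpenVINO toolkit and the Intel NPU Acceleration Library
# (PyPI package names assumed).
pip install openvino intel-npu-acceleration-library

# List the devices OpenVINO can see; "NPU" appears on a properly
# configured AI PC with the NPU plug-in and drivers in place.
python -c "import openvino as ov; print(ov.Core().available_devices)"
```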