Develop AI with the Latest Intel Libraries and Tools
Overview
To keep pace with progress in AI development, Intel has modernized and enhanced its development tools, beginning with the SYCL*-based parallel offload platform. With the ability to migrate smoothly from CUDA*, developers gain freedom from vendor lock-in.
A solid open source performance foundation has been established with a combination of libraries, including Intel® oneAPI Deep Neural Network Library (oneDNN), Intel® oneAPI Data Analytics Library (oneDAL), and Intel® oneAPI Collective Communications Library (oneCCL), which together form a base for building machine learning and AI frameworks for the future. This session explores the OpenVINO™ toolkit and PyTorch*, how to use these frameworks, and the role that NumPy, Numba*, and the data parallel extensions for Python play in this environment.
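As a rough illustration of the array-based compute these frameworks build on, here is a minimal sketch of a fully connected layer in plain NumPy (the data parallel extensions for Python expose a similar array API that can target accelerators; this example uses standard NumPy only):

```python
import numpy as np

# Toy fully connected layer: y = relu(x @ W + b)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features each
W = rng.standard_normal((8, 3))   # weights for 3 output units
b = np.zeros(3)                   # bias

y = np.maximum(x @ W + b, 0.0)    # matrix multiply, bias add, ReLU
print(y.shape)                    # batch of 4 outputs, 3 units each
```

Libraries such as oneDNN accelerate exactly this kind of matrix multiplication and activation pattern on CPUs and GPUs.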
A demonstration of Intel® VTune™ Profiler completes the tour, showing how to resolve performance bottlenecks in AI workloads.
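For reference, a hotspots analysis of the kind shown in the demonstration can be collected from the command line roughly as follows (the script name and result directory here are placeholders):

```shell
# Collect a hotspots profile of an AI workload
vtune -collect hotspots -result-dir r000hs -- python train.py

# Print a summary of the collected result in the terminal
vtune -report summary -result-dir r000hs
```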
Topics covered include how to:
- Use scalable libraries to optimize data preparation, quantization, and convolutional neural network (CNN) and deep neural network (DNN) matrix calculations.
- Streamline the porting of CUDA code to SYCL.
- Perform open standards-based scalable GPU operations and SYCL accelerator offloading.
- Enhance predictive AI and generative AI (GenAI) in multi-node environments.
- Find and resolve performance bottlenecks in a variety of AI situations.
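To make the quantization topic above concrete, here is a minimal sketch of symmetric per-tensor int8 weight quantization in NumPy. This is a simplified illustration of the idea; production tools (for example, the quantization flows in the OpenVINO toolkit and PyTorch) use more sophisticated schemes:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()  # worst-case rounding error
print(q.dtype, err < scale)
```

Storing weights as int8 cuts memory traffic by 4x versus float32, at the cost of a bounded rounding error (at most half the scale per element), which is why quantization features so heavily in inference optimization.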
Learn why AI training and inference performance starts with a fully optimized software stack, and why getting the most from AI workloads requires a solid understanding of how the platform is used.
Skill level: Intermediate