AI Analytics Part 2: Enhance Deep-Learning Workloads on 3rd Generation Intel® Xeon® Scalable Processors
In This Series:
AI Analytics Part 1: Optimize End-to-End Data Science and Machine-Learning Acceleration
AI Analytics Part 3: Walk through the Steps to Optimize End-to-End Machine-Learning Workflows
This webinar looks at the Intel® oneAPI AI Analytics Toolkit (AI Kit) from the perspective of deep-learning workloads, covering the performance benefits and features that can enhance deep-learning training, inference, and workflows.
Join software engineer Louie Tsai for insights into the latest optimizations in Intel® Optimization for TensorFlow* and Intel® Optimization for PyTorch*, which take advantage of new acceleration instructions, including Intel® Deep Learning Boost (Intel® DL Boost) with BF16 support, on 3rd Generation Intel® Xeon® Scalable processors.
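In stock TensorFlow 2.x, the oneDNN-accelerated kernels discussed in the webinar are gated by an environment variable; as a minimal sketch (assuming a TensorFlow 2.x build that honors the `TF_ENABLE_ONEDNN_OPTS` flag; Intel Optimization for TensorFlow enables it by default), you can turn them on before importing the framework:

```python
import os

# Enable oneDNN-accelerated kernels. The flag is read when TensorFlow is
# imported, so it must be set first.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # import only after setting the flag
# On 3rd Gen Xeon Scalable hardware, BF16 mixed precision can then be
# requested with tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
```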
Topics covered:
- How to quantize a model from FP32 or BF16 to INT8, and how to analyze in depth the performance speedup across data types (FP32, BF16, and INT8)
- Model Zoo for Intel® architecture and low-precision tools included in the AI Kit
- Efficiencies when building machine-learning pipelines
Get the Software
Download the Intel® oneAPI AI Analytics Toolkit for Linux*.
Other Resources
- Get the Jupyter* Notebooks from the first demonstration—these notebooks help users analyze the performance benefit of using Intel Optimization for TensorFlow with the Intel® oneAPI Deep Neural Network Library (oneDNN).
- Read the latest AI Analytics blogs on Medium.
- Develop in the cloud—sign up for an Intel® DevCloud account, a free development sandbox with access to the latest Intel® hardware and oneAPI software.
- Subscribe to Code Together—an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Listen and subscribe today.
- Alexa (Say “Alexa, play the podcast Code Together”)
- FeedBurner*
- iTunes*
- Spotify*
- Stitcher
- SoundCloud*
- TuneIn
Louie Tsai
Software engineer, Intel Corporation
Louie is part of the Technical Computing, Analyzers, and Runtimes group at Intel. He is responsible for driving customer engagement with, and adoption of, Intel® Performance Libraries, leveraging the synergies between Python* and the Intel® Math Kernel Library (Intel® MKL). In addition, Louie works on embedded applications, with a particular focus on autonomous driving, and helps customers optimize their deep-learning workloads. Louie has a master's degree in computer science and information engineering from National Chiao Tung University.
Intel® oneAPI AI Analytics Toolkit
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.