Review AI concepts and use cases along with the fundamentals of deep learning, the objectives behind its use, and the challenges developers face during the development phase. Learn how to create deep learning applications seamlessly using the Intel® Distribution of OpenVINO™ toolkit. Explore the functions, features, and workflow of Intel® DevCloud.
Using a practical example, learn why it is crucial to optimize deep learning models for inference when developing deep learning applications. Explore in depth the optimization tools included in the Intel Distribution of OpenVINO toolkit, including Model Optimizer, the Post-Training Optimization Tool, and other supporting software.
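As an illustration of the Post-Training Optimization Tool workflow mentioned above, the JSON below sketches a typical quantization configuration. The file names (`model.xml`, `model.bin`, `engine.yml`) are placeholders, and the algorithm name and parameters follow the tool's documented configuration format; a real run would point these paths at an actual IR model and a dataset/engine definition.

```json
{
  "model": {
    "model_name": "model",
    "model": "model.xml",
    "weights": "model.bin"
  },
  "engine": {
    "config": "engine.yml"
  },
  "compression": {
    "algorithms": [
      {
        "name": "DefaultQuantization",
        "params": {
          "preset": "performance",
          "stat_subset_size": 300
        }
      }
    ]
  }
}
```

The `DefaultQuantization` algorithm collects activation statistics on a calibration subset (here 300 samples) and quantizes the model to INT8 without retraining.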
Get an introduction to the Inference Engine and learn how to use its streamlined workflow. Learn how to use all available compute resources efficiently with the heterogeneous and multi-device plug-ins, boosting performance and enabling hardware-agnostic inference with a write-once, deploy-anywhere approach.
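As a small sketch of the write-once, deploy-anywhere idea: the Inference Engine selects hardware through plug-in device strings such as `CPU`, `GPU`, `HETERO:...`, and `MULTI:...`. The helper functions below are hypothetical illustrations of how those strings are composed; in the real API the resulting string is what you pass as the device argument when compiling or loading a model.

```python
# Sketch: composing OpenVINO device strings for heterogeneous and
# multi-device inference. "HETERO", "MULTI", "CPU", and "GPU" are the
# documented plug-in identifiers; the helper functions themselves are
# illustrative, not part of the toolkit API.

def hetero(*devices: str) -> str:
    """HETERO runs layers a device cannot support on fallback devices,
    tried in the priority order given."""
    return "HETERO:" + ",".join(devices)

def multi(*devices: str) -> str:
    """MULTI load-balances whole inference requests across devices."""
    return "MULTI:" + ",".join(devices)

print(hetero("GPU", "CPU"))  # → HETERO:GPU,CPU
print(multi("CPU", "GPU"))   # → MULTI:CPU,GPU
```

Because only this one string changes between targets, the same application code can move from a CPU-only laptop to a GPU-plus-VPU edge device without modification.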
Explore the advantages and limitations of hardware platforms available for deep learning inference, including Intel® CPUs, integrated GPUs (iGPU), and the Intel® Movidius™ Myriad™ X VPU.
Learn about the various features of the Deep Learning Workbench. Use a sample application to review the features of the Intel Distribution of OpenVINO toolkit.