Deploying Intel® FPGAs for Deep Learning Inferencing with OpenVINO™ Toolkit (ODLADEPLOY)

29-Minute Online Course

Course Description

The Intel® FPGA Deep Learning Acceleration (DLA) Suite provides users with the tools and optimized architectures to accelerate inference using a variety of today’s common CNN topologies with Intel® FPGAs. In this training we will discuss how to use the Deep Learning Deployment Toolkit, a component of the OpenVINO™ toolkit, to optimize and deploy trained deep learning networks from Caffe* and TensorFlow* on FPGAs through high-level C++ APIs.
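The two-step flow described above can be sketched as follows. This is an illustrative sketch only: the model file names, directories, and sample binary are placeholder assumptions, not taken from the course, and exact flags may vary by toolkit version.

```shell
# Step 1: The Model Optimizer converts a trained Caffe* model into the
# toolkit's Intermediate Representation (an .xml topology plus .bin weights).
# File names here are placeholders.
python3 mo.py \
    --input_model squeezenet1.1.caffemodel \
    --input_proto deploy.prototxt \
    --data_type FP16 \
    --output_dir ir/

# Step 2: An Inference Engine sample loads the IR and runs it on the FPGA,
# with CPU fallback for unsupported layers via the HETERO device.
./classification_sample -m ir/squeezenet1.1.xml -i cat.jpg -d HETERO:FPGA,CPU
```

The `HETERO:FPGA,CPU` device string lets the FPGA execute the supported layers while the CPU handles the rest, which is the typical deployment pattern for the DLA Suite.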

At Course Completion

You will be able to:

  • Understand the contents of the Intel® FPGA Deep Learning Acceleration Suite
  • Accelerate deep neural network inference tasks on FPGAs with the Deep Learning Deployment Toolkit
  • Use the Model Optimizer, part of the Deep Learning Deployment Toolkit, to import trained models from popular frameworks such as Caffe* and TensorFlow*, and automatically prune, quantize, and fuse layers of the model for optimal execution on the FPGA
  • Use the Inference Engine, part of the Deep Learning Deployment Toolkit, to deploy trained networks on FPGAs through high-level C++ APIs
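The Inference Engine flow named above can be sketched in C++ as below. This is a hedged sketch, not course material: it assumes the OpenVINO™ Inference Engine Core API (exact class and method names differ between toolkit versions), and the file names and device string are placeholders.

```cpp
// Minimal Inference Engine sketch: load an IR produced by the Model
// Optimizer and run one inference on the FPGA with CPU fallback.
// Requires the OpenVINO Inference Engine headers and libraries.
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;

    // Read the Intermediate Representation (placeholder file names).
    auto network = ie.ReadNetwork("model.xml", "model.bin");

    // Load the network onto the FPGA; the HETERO device falls back to
    // the CPU for any layers the FPGA plugin does not support.
    auto exec_network = ie.LoadNetwork(network, "HETERO:FPGA,CPU");

    // Create an inference request and run it synchronously.
    auto request = exec_network.CreateInferRequest();
    request.Infer();

    // Output blobs can then be read back, e.g.:
    // auto out = request.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}
```

The same request object can be reused across frames, and `StartAsync()`/`Wait()` can replace `Infer()` when overlapping host work with FPGA execution.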

Skills Required

  • Familiarity with Caffe or TensorFlow frameworks
  • Understanding of how FPGAs can be used to accelerate deep neural networks




Class Schedule

Online | Anytime | Free