Course Description
The Intel® FPGA Deep Learning Acceleration (DLA) Suite provides users with the tools and optimized architectures to accelerate inference for a variety of today's common CNN topologies on Intel® FPGAs. In this training we will discuss how to use the Deep Learning Deployment Toolkit, a component of the OpenVINO™ toolkit, to optimize and deploy trained deep learning networks from Caffe* and TensorFlow* on FPGAs through high-level C++ APIs.
At Course Completion
You will be able to:
- Understand the contents of the Intel® FPGA Deep Learning Acceleration Suite
- Accelerate deep neural network inference tasks on FPGAs with the Deep Learning Deployment Toolkit
- Use the Model Optimizer, part of the Deep Learning Deployment Toolkit, to import trained models from popular frameworks such as Caffe* and TensorFlow*, and to automatically prune, quantize, and compress the model's layers for optimal execution on the FPGA
- Use the Inference Engine, part of the Deep Learning Deployment Toolkit, to deploy optimized networks on FPGAs through its high-level C++ API
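As an illustration of the last two points, the deployment flow can be sketched roughly as follows. This is a sketch only, not part of the course material: the model names and paths are hypothetical, and the API calls reflect the OpenVINO™ Inference Engine C++ API of the DLA-era releases (`Core::ReadNetwork`, `Core::LoadNetwork`, the `HETERO` plugin), which may differ in your installed version:

```cpp
// Sketch only: assumes the OpenVINO(TM) Inference Engine headers and an
// FPGA plugin are installed; file names and device strings are illustrative.
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Step 1 (done beforehand with the Model Optimizer): convert a trained
    // Caffe*/TensorFlow* model into an Intermediate Representation (IR),
    // e.g. a hypothetical invocation such as
    //   python3 mo.py --input_model alexnet.caffemodel --data_type FP16
    // producing an alexnet.xml / alexnet.bin pair.

    // Step 2: the Inference Engine loads the IR and targets the FPGA,
    // with the HETERO plugin falling back to the CPU for any layers the
    // FPGA plugin does not support.
    auto network    = core.ReadNetwork("alexnet.xml", "alexnet.bin");
    auto executable = core.LoadNetwork(network, "HETERO:FPGA,CPU");
    auto request    = executable.CreateInferRequest();
    request.Infer();  // run one synchronous inference
    return 0;
}
```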
Prerequisites
- Familiarity with the Caffe* or TensorFlow* frameworks
- An understanding of how FPGAs can be used to accelerate deep neural networks: https://www.altera.com/solutions/technology/artificial-intelligence/overview.html
We recommend completing the following courses:
Upon completing this course, we recommend the following courses (in no particular order):
Applicable Training Curriculum
This course is part of the following Intel FPGA training curriculum: