Description
The Intel® FPGA Deep Learning Acceleration (DLA) Suite provides users with the tools and optimized architectures needed to accelerate inference for many of today’s common Convolutional Neural Network (CNN) topologies on Intel® FPGAs. The Intel® FPGA DLA Suite, included as part of the OpenVINO™ toolkit, also makes it easy to write software that targets the FPGA for machine learning inference. The FPGA image, or architecture, used to accelerate deep learning algorithms on the FPGA can be customized and tuned for performance on a specific target deep learning topology. In this training, we discuss how to create performance-tuned custom FPGA bitstreams using simple architecture description files, without the need to write RTL code.