6.3. Preparing a Model
A model must be converted from a framework (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files before the Intel® FPGA AI Suite compiler (dla_compiler command) can ingest the model.
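If you are working with a model that is not available through the Open Model Zoo, you can run Model Optimizer on it yourself to produce the .bin and .xml pair. The following is a minimal sketch only, assuming a hypothetical frozen TensorFlow graph at ./my_model.pb with a 1x224x224x3 input; depending on your OpenVINO™ version, the entry point is the mo command or the mo.py script, and the arguments must be adjusted for your own model:

# Convert a frozen TensorFlow graph to OpenVINO IR (.xml and .bin).
# The model path, input shape, and output directory below are placeholders.
mo --input_model ./my_model.pb \
   --input_shape "[1,224,224,3]" \
   --output_dir $COREDLA_WORK/demo/models/my_model/FP32/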
The following commands download the ResNet-50 TensorFlow model and run Model Optimizer:
source ~/build-openvino-dev/openvino_env/bin/activate
omz_downloader --name resnet-50-tf \
    --output_dir $COREDLA_WORK/demo/models/
omz_converter --name resnet-50-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/
The omz_downloader command downloads the trained model to the $COREDLA_WORK/demo/models/ folder. The omz_converter command runs Model Optimizer, which converts the trained model into intermediate representation (.bin and .xml) files in the $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/ directory.
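After omz_converter completes, you can quickly confirm that the intermediate representation files were generated. A simple check (the file names shown in the comment reflect the typical Model Optimizer naming for this model and may differ between OpenVINO™ releases):

# List the FP32 IR output directory; expect a .xml/.bin pair such as
# resnet-50-tf.xml and resnet-50-tf.bin.
ls $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/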
The directory $COREDLA_WORK/demo/open_model_zoo/models/public/resnet-50-tf/ contains two useful files that do not appear in the $COREDLA_WORK/demo/models/ directory tree:
- The README.md file provides background information about the model.
- The model.yml file shows the detailed command-line information given to Model Optimizer (mo.py) when it converts the model to a pair of .bin and .xml files (see the example after this list).
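For example, to review the Model Optimizer settings recorded for this model, you can print the model.yml file:

# Show the Model Optimizer command-line information recorded for resnet-50-tf.
cat $COREDLA_WORK/demo/open_model_zoo/models/public/resnet-50-tf/model.yml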
For a list of OpenVINO™ Model Zoo models that the Intel® FPGA AI Suite supports, refer to the Intel® FPGA AI Suite IP Reference Manual.