6.3. Preparing a Model
A model must be converted from a framework (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files before the FPGA AI Suite compiler (dla_compiler command) can ingest it.
The following commands download the ResNet-50 TensorFlow model and run the OpenVINO™ Open Model Zoo tools to convert it:
source ~/build-openvino-dev/openvino_env/bin/activate
omz_downloader --name resnet-50-tf \
    --output_dir $COREDLA_WORK/demo/models/
omz_converter --name resnet-50-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/
The omz_downloader command downloads the trained model to the $COREDLA_WORK/demo/models/ folder. The omz_converter command runs the Model Optimizer, which converts the trained model into intermediate representation (.bin and .xml) files in the $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/ directory.
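To confirm that the conversion produced a loadable model, you can open the generated IR with the OpenVINO™ Python API. The following is a minimal sketch, assuming the default omz_converter output file name (resnet-50-tf.xml) and that the openvino Python package from your OpenVINO™ environment is importable:

import os

from openvino.runtime import Core

# Path to the IR produced by omz_converter.
# The file name resnet-50-tf.xml is an assumption based on the model name.
xml_path = os.path.join(
    os.environ["COREDLA_WORK"],
    "demo/models/public/resnet-50-tf/FP32/resnet-50-tf.xml",
)

core = Core()
model = core.read_model(xml_path)  # the matching .bin file is located automatically

# Quick sanity check: print the input and output tensor names and shapes.
for model_input in model.inputs:
    print("input: ", model_input.any_name, model_input.partial_shape)
for model_output in model.outputs:
    print("output:", model_output.any_name, model_output.partial_shape)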
The directory $COREDLA_WORK/demo/open_model_zoo/models/public/resnet-50-tf/ contains two useful files that do not appear in the $COREDLA_WORK/demo/models/ directory tree:
- The README.md file provides background information about the model.
- The model.yml file shows the detailed command-line arguments given to the Model Optimizer (mo.py) when it converts the model to a pair of .bin and .xml files.
For a list of the OpenVINO™ Model Zoo models that the FPGA AI Suite supports, refer to the FPGA AI Suite IP Reference Manual.
Troubleshooting OpenVINO™ Open Model Zoo Converter Errors
You might get the following error while running the omz_converter command on a TensorFlow model:
ValueError: Invalid filepath extension for saving. Please add either a '.keras' extension for the native Keras format (recommended) or a '.h5' extension. Use 'model.export(filepath)' if you want to export a SavedModel for use with TFLite/TFServing/etc.
If you get this error, you can follow a process similar to the following example, which converts the MobileNetV3 TensorFlow model to an OpenVINO model:
- Run the following Python code, which converts MobileNetV3 to the TensorFlow SavedModel format:
import os

import tensorflow as tf

COREDLA_WORK = os.environ.get("COREDLA_WORK")
DOWNLOAD_DIR = f"{COREDLA_WORK}/demo/models/"
OUTPUT_DIR = f"{COREDLA_WORK}/demo/models/"

# Set the image data format
tf.keras.backend.set_image_data_format("channels_last")

# Load the MobileNetV3Large model with the specified weights
model = tf.keras.applications.MobileNetV3Large(
    weights=str(
        f"{DOWNLOAD_DIR}/public/mobilenet-v3-large-1.0-224-tf/weights_mobilenet_v3_large_224_1.0_float.h5"
    )
)

# Save the model to the specified output directory
model.export(filepath=f"{OUTPUT_DIR}/mobilenet_v3_large_224_1.0_float.savedmodel")
- Run the following command to convert the TensorFlow SavedModel to the OpenVINO model format:
mo \
    --input_model=$COREDLA_WORK/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel \
    --model_name=mobilenet_v3_large_224_1.0_float \
    --input_shape=[1,224,224,3] \
    --layout nhwc
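If you prefer to drive this conversion from Python rather than the mo command line, recent OpenVINO™ releases expose the Model Optimizer through openvino.tools.mo.convert_model. The following is a minimal sketch, assuming that API is available in your OpenVINO™ version and that it accepts the same input_shape and layout options as the command line:

import os

from openvino.runtime import serialize
from openvino.tools.mo import convert_model  # availability depends on the OpenVINO version

COREDLA_WORK = os.environ["COREDLA_WORK"]

# Convert the TensorFlow SavedModel exported in the previous step.
ov_model = convert_model(
    f"{COREDLA_WORK}/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel",
    input_shape=[1, 224, 224, 3],
    layout="nhwc",
)

# Write the .xml and .bin pair next to the other converted models.
serialize(
    ov_model,
    f"{COREDLA_WORK}/demo/models/mobilenet_v3_large_224_1.0_float.xml",
)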