1. FPGA AI Suite Design Examples User Guide
2. FPGA AI Suite Design Examples
3. Design Example Components
4. [PCIE] Getting Started with the FPGA AI Suite PCIe*-based Design Example
5. [PCIE] Building the FPGA AI Suite Runtime
6. [PCIE] Running the Design Example Demonstration Applications
7. [PCIE] Design Example System Architecture for the Agilex™ 7 FPGA
8. [OFS-PCIE] Getting Started with Open FPGA Stack (OFS) for PCIe*-Attach Design Examples
9. [OFS-PCIE] Design Example Components
10. [HL-NO-DDR] Getting Started with the FPGA AI Suite DDR-Free Design Example
11. [HL-NO-DDR] Running the Hostless DDR-Free Design Example
12. [HL-NO-DDR] Design Example System Architecture
13. [HL-NO-DDR] Quartus® Prime System Console
14. [HL-NO-DDR] JTAG to Avalon MM Host Register Map
15. [HL-NO-DDR] Updating MIF Files
16. [HL-JTAG] Getting Started
17. [HL-JTAG] Design Example Components
18. [SOC] FPGA AI Suite SoC Design Example Prerequisites
19. [SOC] FPGA AI Suite SoC Design Example Quick Start Tutorial
20. [SOC] FPGA AI Suite SoC Design Example Run Process
21. [SOC] FPGA AI Suite SoC Design Example Build Process
22. [SOC] FPGA AI Suite SoC Design Example Quartus® Prime System Architecture
23. [SOC] FPGA AI Suite SoC Design Example Software Components
24. [SOC] Streaming-to-Memory (S2M) Streaming Demonstration
A. FPGA AI Suite Example Designs User Guide Archives
B. FPGA AI Suite Example Designs User Guide Revision History
6.1. [PCIE] Exporting Trained Graphs from Source Frameworks
6.2. [PCIE] Compiling Exported Graphs Through the FPGA AI Suite
6.3. [PCIE] Compiling the PCIe*-based Example Design
6.4. [PCIE] Programming the FPGA Device (Agilex™ 7)
6.5. [PCIE] Performing Accelerated Inference with the dla_benchmark Application
6.6. [PCIE] Running the Ported OpenVINO™ Demonstration Applications
8.2.1. [OFS-PCIE] Setup the OFS Environment for the FPGA Device
8.2.2. [OFS-PCIE] Exporting Trained Graphs from Source Frameworks
8.2.3. [OFS-PCIE] Compiling Exported Graphs Through the FPGA AI Suite
8.2.4. [OFS-PCIE] Compiling the OFS for PCIe* Attach Design Example
8.2.5. [OFS-PCIE] Programming the FPGA Green Bitstream
8.2.6. [OFS-PCIE] Performing Accelerated Inference with the dla_benchmark Application
16.1. [HL-JTAG] Prerequisites
16.2. [HL-JTAG] Building the FPGA AI Suite Runtime
16.3. [HL-JTAG] Building an FPGA Bitstream for the JTAG Design Examples
16.4. [HL-JTAG] Programming the FPGA Device
16.5. [HL-JTAG] Preparing Graphs for Inference with FPGA AI Suite
16.6. [HL-JTAG] Performing Inference on the Agilex™ 5 FPGA E-Series 065B Modular Development Kit
16.7. [HL-JTAG] Inference Performance Measurement
16.8. [HL-JTAG] Known Issues and Limitations
19.1. [SOC] Initial Setup
19.2. [SOC] Initializing a Work Directory
19.3. [SOC] (Optional) Create an SD Card Image (.wic)
19.4. [SOC] Writing the SD Card Image (.wic) to an SD Card
19.5. [SOC] Preparing SoC FPGA Development Kits for the FPGA AI Suite SoC Design Example
19.6. [SOC] Adding Compiled Graphs (AOT files) to the SD Card
19.7. [SOC] Verifying FPGA Device Drivers
19.8. [SOC] Running the Demonstration Applications
19.5.1. [SOC] Preparing the Agilex™ 5 FPGA E-Series 065B Modular Development Kit
19.5.2. [SOC] Preparing the Agilex™ 7 FPGA I-Series Transceiver-SoC Development Kit
19.5.3. [SOC] Preparing the Arria® 10 SX SoC FPGA Development Kit
19.5.4. [SOC] Configuring the SoC FPGA Development Kit UART Connection
19.5.5. [SOC] Determining the SoC FPGA Development Kit IP Address
19.5.1.1. [SOC] Confirming the Agilex™ 5 FPGA E-Series 065B Modular Development Kit Board Setup
19.5.1.2. [SOC] Programming the Agilex™ 5 FPGA Device with the JTAG Indirect Configuration (.jic) File
19.5.1.3. [SOC] Programming the Agilex™ 5 FPGA Device with the SRAM Object File (.sof)
19.5.1.4. [SOC] Connecting the Agilex™ 5 FPGA E-Series 065B Modular Development Kit to the Host Development System
19.5.2.1. [SOC] Confirming Agilex™ 7 FPGA I-Series Transceiver-SoC Development Kit Board Setup
19.5.2.2. [SOC] Programming the Agilex™ 7 FPGA Device with the JTAG Indirect Configuration (.jic) File
19.5.2.3. [SOC] Programming the Agilex™ 7 FPGA Device with the SRAM Object File (.sof)
19.5.2.4. [SOC] Connecting the Agilex™ 7 FPGA I-Series Transceiver-SoC Development Kit to the Host Development System
22.1. [SOC] FPGA AI Suite SoC Design Example Inference Sequence Overview
22.2. [SOC] Memory-to-Memory (M2M) Variant Design
22.3. [SOC] Streaming-to-Memory (S2M) Variant Design
22.4. [SOC] Top Level
22.5. [SOC] The SoC Design Example Platform Designer System
22.6. [SOC] Fabric EMIF Design Component
22.7. [SOC] PLL Configuration
23.1.1. [SOC] Yocto Recipe: recipes-core/images/coredla-image.bb
23.1.2. [SOC] Yocto Recipe: recipes-bsp/u-boot/u-boot-socfpga_%.bbappend
23.1.3. [SOC] Yocto Recipe: recipes-drivers/msgdma-userio/msgdma-userio.bb
23.1.4. [SOC] Yocto Recipe: recipes-drivers/uio-devices/uio-devices.bb
23.1.5. [SOC] Yocto Recipe: recipes-kernel/linux/linux-socfpga-lts_%.bbappend
23.1.6. [SOC] Yocto Recipe: recipes-support/devmem2/devmem2_2.0.bb
23.1.7. [SOC] Yocto Recipe: wic
19.6.2. [SOC] Preparing a Model
A model must be converted from its source framework (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files (the OpenVINO™ intermediate representation) before the FPGA AI Suite compiler (dla_compiler command) can ingest it.
The following commands download the ResNet-50 TensorFlow model and convert it with the OpenVINO™ Open Model Zoo tools:
source ~/build-openvino-dev/openvino_env/bin/activate
omz_downloader --name resnet-50-tf \
    --output_dir $COREDLA_WORK/demo/models/
omz_converter --name resnet-50-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/
The omz_downloader command downloads the trained model to the $COREDLA_WORK/demo/models/ folder. The omz_converter command runs the Model Optimizer, which converts the trained model into intermediate representation (.bin and .xml) files in the $COREDLA_WORK/demo/models/public/resnet-50-tf/FP32/ directory.
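As an optional sanity check (not part of the documented flow), you can confirm that the converted intermediate representation loads before handing it to the FPGA AI Suite compiler. The following minimal sketch assumes that the openvino_env virtual environment activated above provides the OpenVINO™ Python API and that omz_converter named the output files resnet-50-tf.xml and resnet-50-tf.bin; adjust the path if your output differs.

# Optional sanity check: confirm that the converted IR pair loads in OpenVINO.
# Assumes the openvino_env virtual environment is active and that omz_converter
# produced resnet-50-tf.xml / resnet-50-tf.bin (adjust the path otherwise).
import os

from openvino.runtime import Core

COREDLA_WORK = os.environ["COREDLA_WORK"]
xml_path = f"{COREDLA_WORK}/demo/models/public/resnet-50-tf/FP32/resnet-50-tf.xml"

core = Core()
model = core.read_model(xml_path)  # the matching .bin file is located automatically

# Print the input and output tensor names and shapes of the model.
for model_input in model.inputs:
    print("input: ", model_input.any_name, model_input.partial_shape)
for model_output in model.outputs:
    print("output:", model_output.any_name, model_output.partial_shape)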
The directory $COREDLA_WORK/demo/open_model_zoo/models/public/resnet-50-tf/ contains two useful files that do not appear in the $COREDLA_WORK/demo/models/ directory tree:
- The README.md file provides background information about the model.
- The model.yml file shows the detailed command-line arguments given to the Model Optimizer (mo.py) when it converts the model to a pair of .bin and .xml files.
For a list of the OpenVINO™ Model Zoo models that the FPGA AI Suite supports, refer to the FPGA AI Suite IP Reference Manual.
Troubleshooting OpenVINO™ Open Model Zoo Converter Errors
You might get the following error while running omz_converter on a TensorFlow model:
ValueError: Invalid filepath extension for saving. Please add either a '.keras' extension for the native Keras format (recommended) or a '.h5' extension. Use 'model.export(filepath)' if you want to export a SavedModel for use with TFLite/TFServing/etc.
If you get this error, you can follow a process similar to the following example, which converts the MobileNetV3 TensorFlow model to an OpenVINO™ model:
- Run the following Python code to convert MobileNetV3 to the TensorFlow SavedModel (.savedmodel) format:
import os

import tensorflow as tf

COREDLA_WORK = os.environ.get("COREDLA_WORK")
DOWNLOAD_DIR = f"{COREDLA_WORK}/demo/models/"
OUTPUT_DIR = f"{COREDLA_WORK}/demo/models/"

# Set the image data format
tf.keras.backend.set_image_data_format("channels_last")

# Load the MobileNetV3Large model with the specified weights
model = tf.keras.applications.MobileNetV3Large(
    weights=str(
        f"{DOWNLOAD_DIR}/public/mobilenet-v3-large-1.0-224-tf/weights_mobilenet_v3_large_224_1.0_float.h5"
    )
)

# Save the model to the specified output directory
model.export(filepath=f"{OUTPUT_DIR}/mobilenet_v3_large_224_1.0_float.savedmodel")
- Run the following command to convert the TensorFlow SavedModel to the OpenVINO™ model format:
mo \
    --input_model=$COREDLA_WORK/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel \
    --model_name=mobilenet_v3_large_224_1.0_float \
    --input_shape=[1,224,224,3] \
    --layout nhwc
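If your OpenVINO™ installation is version 2023.0 or newer, the same conversion can also be performed from Python with openvino.convert_model and openvino.save_model instead of the mo command. The following is a minimal sketch of that alternative, under the assumption that these API functions are available in your environment and that the SavedModel directory was created as shown in the previous step; the input argument mirrors the --input_shape value above, and the TensorFlow frontend keeps the NHWC layout of the SavedModel.

# Hedged alternative to the mo command above: convert the SavedModel directory
# with the OpenVINO Python API (assumes OpenVINO 2023.0 or later).
import os

import openvino as ov

COREDLA_WORK = os.environ["COREDLA_WORK"]
saved_model_dir = f"{COREDLA_WORK}/demo/models/mobilenet_v3_large_224_1.0_float.savedmodel"

# The input argument fixes the batch size, mirroring --input_shape=[1,224,224,3];
# the TensorFlow frontend preserves the NHWC layout of the SavedModel.
ov_model = ov.convert_model(saved_model_dir, input=[1, 224, 224, 3])

# Writes mobilenet_v3_large_224_1.0_float.xml and the matching .bin file.
ov.save_model(ov_model, f"{COREDLA_WORK}/demo/models/mobilenet_v3_large_224_1.0_float.xml")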