5.2. Compiling Exported Graphs Through the FPGA AI Suite
The FPGA AI Suite compiler compiles the network described by the .xml and .bin files (created by the Model Optimizer) for a specific FPGA AI Suite architecture file.
The compiler exports the compiled network to a .bin file in the same format required by the OpenVINO™ Inference Engine.
This .bin file contains the compiled network parameters for all the target devices (FPGA, CPU, or both), along with the weights and biases. The inference application imports this file at runtime.
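A typical compile command resembles the following sketch. The option names shown here (--march, --network-file, --foutput-format, --o, --batch-size) are representative and the architecture file and graph paths are placeholders; confirm the exact options and paths in the FPGA AI Suite Compiler Reference Manual before running the command:

    dla_compiler \
        --march <path to architecture file>.arch \
        --network-file <path to Model Optimizer output>.xml \
        --foutput-format=open_vino_hetero \
        --o <output directory>/compiled_graph.bin \
        --batch-size=1

The resulting compiled_graph.bin is the file that the inference application (for example, dla_benchmark) imports at runtime.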
The FPGA AI Suite compiler can also compile the graph to provide estimated area or performance metrics for a given architecture file, or to produce an optimized architecture file.
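For example, area and performance estimates can be requested with analysis options similar to the following sketch. No FPGA hardware is required for this step, and the option names shown are representative; confirm them in the FPGA AI Suite Compiler Reference Manual:

    dla_compiler \
        --march <path to architecture file>.arch \
        --network-file <path to Model Optimizer output>.xml \
        --fanalyze-area \
        --fanalyze-performance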
For more details about the FPGA AI Suite compiler, refer to the FPGA AI Suite Compiler Reference Manual.