Intel® FPGA AI Suite: PCIe-based Design Example User Guide

ID 768977
Date 9/06/2023
Public



5.2. Compiling Exported Graphs Through the Intel FPGA AI Suite

Use the Intel FPGA AI Suite compiler to compile the network described by the .xml and .bin files (created by the Model Optimizer) for a specific Intel FPGA AI Suite Architecture File.

The Intel FPGA AI Suite compiler compiles the network and exports it to a .bin file in the format required by the OpenVINO™ Inference Engine.
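For example, a compilation command might look similar to the following sketch. The option names (--march, --network-file, --foutput-format, --o) and the example architecture and model file names are illustrative assumptions; confirm the exact options and values in the Intel FPGA AI Suite Compiler Reference Manual for your release.

    # Compile the Model Optimizer output (.xml/.bin) against a specific Architecture File
    # and export a .bin file that the OpenVINO Inference Engine can import.
    dla_compiler \
        --march <arch_dir>/A10_Performance.arch \
        --network-file <model_dir>/resnet-50.xml \
        --foutput-format=open_vino_hetero \
        --o <output_dir>/resnet-50.bin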

This .bin file created by the compiler contains the compiled network parameters for all the target devices (FPGA, CPU, or both) along with the weights and biases. The inference application imports this file at runtime.

The Intel FPGA AI Suite compiler can also compile the graph and report estimated area or performance metrics for a given Architecture File, or it can produce an optimized Architecture File.
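For estimation-only runs, the compiler can be pointed at the same .xml file and Architecture File with analysis options enabled. The sketch below is illustrative; the --fanalyze-performance and --fanalyze-area option names are assumptions to verify against the Intel FPGA AI Suite Compiler Reference Manual, which also documents the options for producing an optimized Architecture File.

    # Estimate performance and area for the graph on a given Architecture File
    # (no deployable .bin is needed for this flow)
    dla_compiler \
        --march <arch_dir>/A10_Performance.arch \
        --network-file <model_dir>/resnet-50.xml \
        --fanalyze-performance \
        --fanalyze-area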

For more details about the Intel FPGA AI Suite compiler, refer to the Intel FPGA AI Suite Compiler Reference Manual.