Intel® FPGA AI Suite: PCIe-based Design Example User Guide

ID 768977
Date 9/06/2023
Public



7.1. System Overview

The system consists of the following components connected to a host system via a PCIe* interface as shown in the following figure.
  • A board with the FPGA device
  • On-board DDR memory

The FPGA image consists of the Intel FPGA AI Suite IP and additional logic that connects it to the PCIe* interface and DDR memory. The host can read and write to the DDR memory through the PCIe* port. In addition, the host can communicate with and control the Intel FPGA AI Suite instances through the PCIe* connection, which is also connected to the direct memory access (DMA) and CSR ports of the Intel FPGA AI Suite instances.
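This CSR access path can be pictured with a short host-side sketch. This is a minimal illustration only: it assumes the PCIe driver exposes the CSR region as a memory-mappable device node, and the device path, window size, and register offset used here are placeholders rather than names from the design example.

    // Hypothetical sketch: map the CSR region exposed by the PCIe driver and
    // write one 32-bit register. The device path, window size, and register
    // offset are illustrative placeholders, not names from the design example.
    #include <cstddef>
    #include <cstdint>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        // Placeholder device node assumed to expose the CSR BAR via mmap().
        int fd = open("/dev/fpga_pcie0", O_RDWR | O_SYNC);
        if (fd < 0) return 1;

        const size_t kCsrSpan = 0x1000;  // assumed size of the mapped CSR window
        void* base = mmap(nullptr, kCsrSpan, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { close(fd); return 1; }

        // volatile forces a real MMIO access rather than a cached or optimized one.
        volatile uint32_t* csr = static_cast<volatile uint32_t*>(base);
        csr[0x10 / sizeof(uint32_t)] = 0x1;  // hypothetical control register

        munmap(base, kCsrSpan);
        close(fd);
        return 0;
    }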

The Intel FPGA AI Suite IP accelerates neural network inference on batches of images. The process of executing a batch follows these steps (a host-side sketch of this flow appears below):

  1. The host writes a batch of images, weights, and configuration data to DDR memory; the weights can be reused across batches.
  2. The host writes to the Intel FPGA AI Suite CSR to start execution.
  3. Intel FPGA AI Suite computes the results of the batch and stores them in DDR.
  4. Once the computation is complete, Intel FPGA AI Suite raises an interrupt to the host.
  5. The host reads back the results from DDR.
Figure 2.  Intel FPGA AI Suite Example Design System Overview
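The following is a hedged sketch of how a host program might walk through these five steps. The DDR addresses, register offset, and helper functions (write_ddr, write_csr, wait_for_interrupt, read_ddr) are hypothetical stand-ins for whatever the actual runtime and driver provide; they are not part of the design example API.

    // Hedged sketch of the five-step batch flow described above. All helper
    // functions and addresses here are hypothetical placeholders.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical no-op stubs standing in for real DMA/CSR driver calls.
    static void write_ddr(uint64_t /*ddr_addr*/, const void* /*src*/, size_t /*bytes*/) {}
    static void read_ddr(uint64_t /*ddr_addr*/, void* /*dst*/, size_t /*bytes*/) {}
    static void write_csr(uint32_t /*offset*/, uint32_t /*value*/) {}
    static void wait_for_interrupt() {}

    void run_batch(const std::vector<uint8_t>& images,
                   const std::vector<uint8_t>& weights,
                   const std::vector<uint8_t>& config,
                   std::vector<uint8_t>& results,
                   bool weights_already_loaded) {
        // Illustrative DDR layout; real addresses come from the compiled graph.
        const uint64_t kWeightsAddr = 0x00000000;
        const uint64_t kConfigAddr  = 0x01000000;
        const uint64_t kInputAddr   = 0x02000000;
        const uint64_t kOutputAddr  = 0x03000000;

        // Step 1: write images, weights, and configuration data to DDR.
        // Weights can stay resident and be reused by later batches.
        if (!weights_already_loaded) {
            write_ddr(kWeightsAddr, weights.data(), weights.size());
        }
        write_ddr(kConfigAddr, config.data(), config.size());
        write_ddr(kInputAddr,  images.data(), images.size());

        // Step 2: write the CSR to start execution (hypothetical "start" register).
        write_csr(0x10, 0x1);

        // Steps 3-4: the IP computes results into DDR, then raises an interrupt.
        wait_for_interrupt();

        // Step 5: read the results back from DDR.
        read_ddr(kOutputAddr, results.data(), results.size());
    }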