FPGA AI Suite: Design Examples User Guide

ID 848957
Date 4/22/2025
Public



12.2.2. [HL-NO-DDR] On-Chip Memory Modules

The on-chip memory modules store input data and final inference results, and both are accessible through Avalon-MM interfaces. There are two modules: one stages input data and the other stages output inference results. These are referred to as the ingress and egress on-chip memories, respectively. The sizes of the on-chip memory modules are defined in Table 9.

  • Ingress On-Chip Memory

    This module is dedicated to storing the input data before it is processed by the FPGA AI Suite IP. It serves as the staging area for data that will be read by the ingress mSGDMA engine and streamed into the inference IP.

  • Egress On-Chip Memory

    This module is used to store the final inference results after they have been processed by the FPGA AI Suite IP. The egress mSGDMA engine writes the inference results from the FPGA AI Suite IP to this memory, making it available for retrieval and further use.

The following table lists the size allocated to each on-chip memory module, ensuring that the system has adequate storage for both input data and inference results:

Table 9.  On-Chip Memory Module Sizes

On-Chip Memory Module | Size (in bytes)
----------------------|------------------
Ingress               | 524288 (512 KiB)
Egress                | 131072 (128 KiB)