AN 886: Intel® Agilex™ Device Design Guidelines

ID 683634
Date 8/26/2022
Public


5.1.8.1.4. Interface Bandwidths

To identify which interface should be used to move data between the HPS and FPGA fabric, an understanding of the bandwidth of each interface is necessary. The figure below illustrates the peak throughput available between the HPS and FPGA fabric as well as the internal bandwidths within the HPS. The example shown assumes that the FPGA fabric operates at 400 MHz, the MPU operates at 1500 MHz, and the 64-bit external SDRAM operates at 3200 Mbits per second.

Figure 8. Intel® Agilex™ HPS Memory-Mapped Bandwidth
For abbreviations, refer to the figure in Overview of HPS Memory-Mapped Interfaces.
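As a rough check of these numbers, the peak throughput of a memory-mapped interface is its data width in bytes multiplied by its clock rate, and the SDRAM bandwidth is its data width multiplied by its per-pin data rate. The short C sketch below reproduces that arithmetic for the example clock rates above; the 128-bit HPS-to-FPGA width and 32-bit lightweight bridge width are assumed configurations, so substitute the widths from your own Platform Designer system.

```c
#include <stdio.h>

/* Peak memory-mapped throughput in GB/s for a synchronous bus:
 * (data width in bits / 8) * clock frequency. */
static double bus_gbps(unsigned width_bits, double clock_mhz)
{
    return (width_bits / 8.0) * clock_mhz * 1e6 / 1e9;
}

int main(void)
{
    /* Values from the example above; the bridge widths are assumed
     * configurations -- check your Platform Designer settings. */
    double h2f   = bus_gbps(128, 400.0);      /* HPS-to-FPGA bridge at the 400 MHz fabric clock */
    double lwh2f = bus_gbps(32, 400.0);       /* assumed 32-bit lightweight bridge              */
    double sdram = 64 * 3200.0e6 / 8.0 / 1e9; /* 64-bit SDRAM at 3200 Mb/s per pin              */

    printf("HPS-to-FPGA (128-bit @ 400 MHz): %.1f GB/s\n", h2f);
    printf("Lightweight HPS-to-FPGA (32-bit @ 400 MHz): %.1f GB/s\n", lwh2f);
    printf("External SDRAM (64-bit @ 3200 Mb/s): %.1f GB/s\n", sdram);
    return 0;
}
```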

Relative Latencies and Throughputs for Each HPS Interface

| Interface               | Transaction Use Case                                          | Latency | Throughput     |
|-------------------------|---------------------------------------------------------------|---------|----------------|
| HPS-to-FPGA             | MPU accessing memory in FPGA                                  | Medium  | Medium         |
| HPS-to-FPGA             | MPU accessing peripheral in FPGA                              | Medium  | Very Low       |
| Lightweight HPS-to-FPGA | MPU accessing register in FPGA                                | Low     | Low            |
| Lightweight HPS-to-FPGA | MPU accessing memory in FPGA                                  | Low     | Very Low       |
| FPGA-to-HPS             | FPGA master accessing non-cache coherent SDRAM                | High    | Medium         |
| FPGA-to-HPS             | FPGA master accessing the HPS on-chip RAM                     | Low     | High           |
| FPGA-to-HPS             | FPGA master accessing the HPS peripheral                      | Low     | Low            |
| FPGA-to-HPS             | FPGA master accessing coherent memory resulting in cache miss | High    | Medium         |
| FPGA-to-HPS             | FPGA master accessing coherent memory resulting in cache hit  | Low     | Medium-High    |
| FPGA-to-HPS             | FPGA master accessing the HPS directly                        | Medium  | High-Very High |

Note: For interfaces with no recommended configuration listed here, refer to the corresponding interface sections: "HPS-to-FPGA Bridge", "Lightweight HPS-to-FPGA Bridge", and "FPGA-to-HPS Bridge".

GUIDELINE: Avoid using the HPS-to-FPGA bridge to access peripheral registers in the FPGA from the MPU.

The HPS-to-FPGA bridge is optimized for bursting traffic, while peripheral accesses are typically short, word-sized accesses of only one beat. As a result, if peripherals are accessed through the HPS-to-FPGA bridge, the transaction can be stalled by other bursting traffic that is already in flight.
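For these single-register accesses, the lightweight HPS-to-FPGA bridge is the better fit. The following sketch shows one way a Linux application running on the MPU might perform such an access by mapping the lightweight bridge window through /dev/mem; the base address and register offset below are placeholders and must be taken from your system's address map.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholder addresses: take the lightweight bridge base and the
 * peripheral's offset from your Platform Designer address map. */
#define LWH2F_BRIDGE_BASE  0xF9000000u   /* assumed lightweight HPS-to-FPGA window */
#define MY_PERIPH_OFFSET   0x00000000u   /* hypothetical control register offset   */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of the lightweight bridge window into user space. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, LWH2F_BRIDGE_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Single-beat, word-sized accesses -- exactly what this bridge is for. */
    regs[MY_PERIPH_OFFSET / 4] = 0x1;                     /* write a control bit */
    printf("status = 0x%08x\n", regs[MY_PERIPH_OFFSET / 4]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```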

GUIDELINE: Avoid using the Lightweight HPS-to-FPGA bridge to access memory in the FPGA from the MPU.

The Lightweight HPS-to-FPGA bridge is optimized for non-bursting traffic, while memory accesses are typically performed as bursts (often 32 bytes, due to cache operations). As a result, if memory is accessed through the Lightweight HPS-to-FPGA bridge, throughput is limited.
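Conversely, bulk data belongs on the full HPS-to-FPGA bridge. As a hedged sketch, the code below maps an on-chip RAM in the FPGA through the HPS-to-FPGA window and copies blocks with memcpy(), which gives the bridge and interconnect the chance to carry larger, burst-friendly transactions; the window base, RAM offset, and size are placeholders for the values in your address map.

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholders: take the HPS-to-FPGA bridge base and the on-chip RAM's
 * offset and size from your Platform Designer address map. */
#define H2F_BRIDGE_BASE   0x80000000u  /* hypothetical window base        */
#define FPGA_RAM_OFFSET   0x00000000u  /* hypothetical on-chip RAM offset */
#define FPGA_RAM_SIZE     0x1000u

int main(void)
{
    static uint8_t local_buf[FPGA_RAM_SIZE];

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return 1;

    uint8_t *fpga_ram = mmap(NULL, FPGA_RAM_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, H2F_BRIDGE_BASE + FPGA_RAM_OFFSET);
    if (fpga_ram == MAP_FAILED) { close(fd); return 1; }

    /* Block copies through the HPS-to-FPGA bridge instead of individual
     * word accesses over the lightweight bridge. */
    memcpy(fpga_ram, local_buf, FPGA_RAM_SIZE);
    memcpy(local_buf, fpga_ram, FPGA_RAM_SIZE);

    munmap(fpga_ram, FPGA_RAM_SIZE);
    close(fd);
    return 0;
}
```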

GUIDELINE: Use soft logic in the FPGA (for example, a DMA controller) to move shared data between the HPS and FPGA. Avoid using the MPU and the HPS DMA controller for this use case.

When moving shared data between the HPS and FPGA, Intel® recommends doing so from the FPGA rather than using the MPU or the HPS DMA controller. If the FPGA must access cache-coherent data, it must do so through the FPGA-to-HPS bridge with the appropriate ACE-Lite cache extension signaling to issue cacheable transactions. If non-cache-coherent data must be moved to the FPGA or HPS, a DMA engine implemented in FPGA logic can move the data through the FPGA-to-HPS bridge, achieving the highest possible throughput. Although the HPS includes an internal DMA engine that can move data between the HPS and FPGA, its purpose is to assist peripherals that do not master memory, or to provide memory-to-memory data movement on behalf of the MPU.
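As an illustration of this division of labor, the sketch below shows how software on the MPU might program a simple DMA engine implemented in FPGA logic: the MPU only writes a few descriptor registers (reached, for example, through the lightweight bridge), while the DMA engine itself masters the FPGA-to-HPS bridge and moves the payload. The register map here is entirely hypothetical, and whether the transfers are cache coherent depends on how your DMA component drives the ACE-Lite cache extension signals.

```c
#include <stdint.h>

/* Hypothetical register map for a soft DMA controller in the FPGA,
 * reached through the lightweight HPS-to-FPGA bridge. The real offsets
 * come from your Platform Designer system. */
struct soft_dma_regs {
    volatile uint32_t src_addr;   /* source address (HPS or FPGA space) */
    volatile uint32_t dst_addr;   /* destination address                */
    volatile uint32_t length;     /* transfer length in bytes           */
    volatile uint32_t control;    /* bit 0 = start                      */
    volatile uint32_t status;     /* bit 0 = done                       */
};

/* Kick off a copy and wait for completion. The DMA engine itself masters
 * the FPGA-to-HPS bridge, so the payload never passes through the MPU. */
void soft_dma_copy(struct soft_dma_regs *dma,
                   uint32_t src, uint32_t dst, uint32_t len)
{
    dma->src_addr = src;
    dma->dst_addr = dst;
    dma->length   = len;
    dma->control  = 0x1;              /* start the transfer */
    while ((dma->status & 0x1) == 0)  /* poll for done      */
        ;
}
```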