22. [SOC] FPGA AI Suite SoC Design Example Quartus® Prime System Architecture
The FPGA AI Suite SoC design examples provide two variants that demonstrate FPGA AI Suite operation. Both designs are Platform Designer based systems, each instantiated by a single top-level Verilog RTL file. The two variants cover the two most common usage scenarios:
- Memory-to-Memory (M2M): In this variant, the following steps occur:
  - The Arm* processor host presents input data buffers, stored in system memory, to the FPGA AI Suite IP.
  - The FPGA AI Suite IP performs inference on these buffers.
  - The host system collects the inference results.
- Streaming-to-Memory (S2M): This variant is a superset of the M2M functionality. The S2M variant demonstrates sending streaming input source data into the FPGA AI Suite IP and then collecting the results. An Avalon® streaming input captures live input data, stores the data into system memory, and then automatically triggers FPGA AI Suite IP inference operations.
  You can use this variant as a starting point for larger designs that stream input data to the FPGA AI Suite IP with minimal host intervention.
On Agilex 7 devices, the S2M mode uses the FPGA AI Suite IP internal layout transform capability instead of the external demonstration layout transform described in [SOC] The Layout Transform IP as an Application-Specific Block. The internal transform capability allows a wider range of input bus widths and supports folding. For more information about the internal transform capability, refer to "Input Layout Transform Hardware" in the FPGA AI Suite IP Reference Manual.
Section Content
[SOC] FPGA AI Suite SoC Design Example Inference Sequence Overview
[SOC] Memory-to-Memory (M2M) Variant Design
[SOC] Streaming-to-Memory (S2M) Variant Design
[SOC] Top Level
[SOC] The SoC Design Example Platform Designer System
[SOC] Fabric EMIF Design Component
[SOC] PLL Configuration