2.5.4.1. Multiple Input Graphs
For graphs with more than one input, each input tensor is structured as described in the previous section. The input tensors must be packed together at the address offsets reported by the Intel® FPGA AI Suite compiler.
Unless you specify the --fno-transform-tables option, the compiler generates CSV files that describe the input and output tensors. Each row of the CSV file gives information about one input. For more details, refer to the Intel® FPGA AI Suite Compiler Reference Manual.
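The exact columns of these CSV files are documented in the Intel® FPGA AI Suite Compiler Reference Manual. The minimal C++ sketch below only illustrates consuming one row per input from such a file; the (name, offset) column layout it assumes is hypothetical and is not the actual file format.

// Minimal sketch: read one row per input from a compiler-generated CSV.
// The (name, offset) layout assumed here is hypothetical; consult the
// Intel FPGA AI Suite Compiler Reference Manual for the actual fields.
#include <cstdint>
#include <fstream>
#include <map>
#include <sstream>
#include <string>

std::map<std::string, uint64_t> readInputOffsets(const std::string& csvPath) {
    std::map<std::string, uint64_t> offsets;
    std::ifstream csv(csvPath);
    std::string line;
    while (std::getline(csv, line)) {
        std::istringstream row(line);
        std::string name, offsetField;
        if (std::getline(row, name, ',') && std::getline(row, offsetField, ',')) {
            offsets[name] = std::stoull(offsetField);  // one row describes one input
        }
    }
    return offsets;
}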
For multiple inputs, the input tensors are grouped by input number, with the batches of each input stored consecutively. For example, with 3 inputs and 2 batches, the input tensors are stored in the following order (a packing sketch follows the list):
- input 1, batch 1
- input 1, batch 2
- input 2, batch 1
- input 2, batch 2
- input 3, batch 1
- input 3, batch 2
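The following minimal C++ sketch illustrates this ordering by copying tensors into a buffer with the input number as the outer loop and the batch as the inner loop. It assumes back-to-back packing purely for illustration; a real application must place each tensor at the address offsets reported by the compiler.

// Minimal sketch: pack input tensors in the order shown above
// (all batches of input 1, then all batches of input 2, and so on).
// Back-to-back packing is an assumption for illustration only; use the
// compiler-reported offsets in a real application.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

void packInputs(uint8_t* dst,
                const std::vector<std::vector<const uint8_t*>>& inputs,  // [input][batch]
                const std::vector<size_t>& tensorBytes)                  // bytes per input tensor
{
    size_t offset = 0;
    for (size_t i = 0; i < inputs.size(); ++i) {          // outer loop: input number
        for (size_t b = 0; b < inputs[i].size(); ++b) {   // inner loop: batch
            std::memcpy(dst + offset, inputs[i][b], tensorBytes[i]);
            offset += tensorBytes[i];
        }
    }
}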