Multi Channel DMA Intel® FPGA IP for PCI Express User Guide
ID: 683821
Date: 8/19/2022
Public
1. Before You Begin
2. Introduction
3. Functional Description
4. Interface Overview
5. Parameters (H-Tile)
6. Parameters (P-Tile and F-Tile)
7. Designing with the IP Core
8. Software Programming Model
9. Registers
10. Troubleshooting/Debugging
11. Multi Channel DMA Intel FPGA IP for PCI Express User Guide Archives
12. Revision History for Multi Channel DMA Intel FPGA IP for PCI Express User Guide
8.4.2.2. Channel Management
Each network device is associated with one physical or virtual function and can support up to 512 channels. As part of network device (ifc_mcdma<i>) bring-up, all channels of that device are initialized and enabled. Whenever a packet arrives from the application, one of the Tx queues is selected to transfer it.
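As an illustration of what such a bring-up might look like in a Linux driver, the sketch below allocates a multi-queue network device whose Tx and Rx queue counts match the channel count. This is a hypothetical fragment, not the MCDMA driver source: the names ifc_mcdma_netdev_create, ifc_netdev_ops, and IFC_MAX_CHANNELS are invented for illustration, and only the kernel APIs used (alloc_etherdev_mqs, netif_set_real_num_tx_queues, register_netdev) are real.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define IFC_MAX_CHANNELS 512    /* per-function channel count, per the text above */

/* Stub transmit handler so the device can be registered (illustrative only). */
static netdev_tx_t ifc_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
    dev_kfree_skb_any(skb);
    return NETDEV_TX_OK;
}

static const struct net_device_ops ifc_netdev_ops = {
    .ndo_start_xmit = ifc_start_xmit,
};

static struct net_device *ifc_mcdma_netdev_create(void)
{
    /* One Tx and one Rx queue per DMA channel. */
    struct net_device *ndev =
        alloc_etherdev_mqs(0, IFC_MAX_CHANNELS, IFC_MAX_CHANNELS);

    if (!ndev)
        return NULL;
    ndev->netdev_ops = &ifc_netdev_ops;

    /* Advertise how many of the allocated queues are actually in use. */
    netif_set_real_num_tx_queues(ndev, IFC_MAX_CHANNELS);
    netif_set_real_num_rx_queues(ndev, IFC_MAX_CHANNELS);

    if (register_netdev(ndev)) {
        free_netdev(ndev);
        return NULL;
    }
    return ndev;
}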
Strategies for Queue Selection
- Linux default queue selection: The queue is selected by the default logic of the Linux kernel's multi-queue networking support.
- XPS (Transmit Packet Steering): This technique is part of the Linux kernel and provides a mechanism to map a set of cores to a Tx queue of the device. All packets coming from any of these cores are transferred through the mapped Tx queue; the mapping is configured through /sys/class/net/<device>/queues/tx-<n>/xps_cpus. For more information, refer to XPS: Transmit Packet Steering.
- MCDMA custom queue selection: This technique provides a mechanism to map multiple queues to each core. For each core, a separate table is maintained to track every transfer flow arriving on that core. Each entry holds a 4-tuple hash computed over the IP addresses and TCP ports of the flow's packets, together with the Tx queue allocated to that flow.
When a Tx packet is received from the upper layers, this table is looked up using the packet's hash. If a match is found, the corresponding queue is used for the transfer; otherwise, a new queue is allocated to the new flow, as the sketch below illustrates.
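The following C fragment is illustrative only and is not taken from the MCDMA driver; every name and size in it (flow_entry, select_tx_queue, alloc_tx_queue, FLOW_TABLE_SIZE, the hash mixing) is hypothetical. It models the idea described above: a per-core flow table keyed by a 4-tuple hash, where a hit reuses the Tx queue already bound to the flow and a miss binds the flow to a newly allocated queue.

#include <stdint.h>

#define MAX_CORES        64   /* assumed core count */
#define QUEUES_PER_CORE   4   /* assumed queues mapped per core */
#define FLOW_TABLE_SIZE 256   /* assumed per-core bucket count */

struct flow_entry {
    uint32_t saddr, daddr;    /* IPv4 source/destination addresses */
    uint16_t sport, dport;    /* TCP source/destination ports */
    int      in_use;
    int      txq;             /* Tx queue bound to this flow */
};

/* One flow table per core, as described above. */
static struct flow_entry flow_table[MAX_CORES][FLOW_TABLE_SIZE];
static int next_slot[MAX_CORES];

/* 4-tuple hash over the IP addresses and TCP ports of a packet. */
static uint32_t hash_4tuple(uint32_t saddr, uint32_t daddr,
                            uint16_t sport, uint16_t dport)
{
    uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
    h ^= h >> 16;
    h *= 0x45d9f3bu;          /* arbitrary mixing constant */
    h ^= h >> 16;
    return h;
}

/* Round-robin allocation among the queues mapped to this core. */
static int alloc_tx_queue(int core)
{
    int q = core * QUEUES_PER_CORE + next_slot[core];
    next_slot[core] = (next_slot[core] + 1) % QUEUES_PER_CORE;
    return q;
}

/*
 * Per-packet lookup: on a hit, reuse the queue already bound to the
 * flow; on a miss, bind the flow to a newly allocated queue.
 */
int select_tx_queue(int core, uint32_t saddr, uint32_t daddr,
                    uint16_t sport, uint16_t dport)
{
    uint32_t idx = hash_4tuple(saddr, daddr, sport, dport) % FLOW_TABLE_SIZE;
    struct flow_entry *fe = &flow_table[core][idx];

    if (fe->in_use &&
        fe->saddr == saddr && fe->daddr == daddr &&
        fe->sport == sport && fe->dport == dport)
        return fe->txq;       /* existing flow: reuse its queue */

    fe->saddr = saddr; fe->daddr = daddr;
    fe->sport = sport; fe->dport = dport;
    fe->txq   = alloc_tx_queue(core);   /* new flow: allocate a queue */
    fe->in_use = 1;
    return fe->txq;
}

A production implementation would also have to resolve hash collisions between distinct flows and reclaim entries when flows end; this sketch simply overwrites the colliding entry. In a Linux driver, logic of this kind typically sits behind the device's ndo_select_queue callback.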