5.4.3. 400G MAC/PCS Interface Registers Access Example
The 400G MAC/PCS interface registers are part of the F-Tile Ethernet Intel FPGA Hard IP register map.
In a design where multiple IP instances are accessed by a single F-Tile Global Avalon® Memory-Mapped Interface Intel® FPGA IP, you must use the corresponding base address to access each IP instance. The following table shows the base address for each Ethernet mode.
Note: The F-Tile PMA/FEC Direct PHY Intel® FPGA IP uses no PCS or MAC, so there is no need to access the 400G MAC/PCS registers. For other protocol IPs where the MAC or PCS is used, you can access the 400G MAC/PCS registers.
| Ethernet Mode | Base Address (Byte Address) |
|---|---|
| 25GE_0 | 0x00000 |
| 50GE_0 | 0x01000 |
| 100GE_0 | 0x02000 |
| 200GE_0 | 0x03000 |
| 400GE_0 | 0x04000 |
| 25GE_1 | 0x05000 |
| 25GE_2 | 0x06000 |
| 50GE_1 | 0x07000 |
| 25GE_3 | 0x08000 |
| 25GE_4 | 0x09000 |
| 50GE_2 | 0x0A000 |
| 100GE_1 | 0x0B000 |
| 25GE_5 | 0x0C000 |
| 25GE_6 | 0x0D000 |
| 50GE_3 | 0x0E000 |
| 25GE_7 | 0x0F000 |
| 25GE_8 | 0x10000 |
| 50GE_4 | 0x11000 |
| 100GE_2 | 0x12000 |
| 200GE_1 | 0x13000 |
| 25GE_9 | 0x14000 |
| 25GE_10 | 0x15000 |
| 50GE_5 | 0x16000 |
| 25GE_11 | 0x17000 |
| 25GE_12 | 0x18000 |
| 50GE_6 | 0x19000 |
| 100GE_3 | 0x1A000 |
| 25GE_13 | 0x1B000 |
| 25GE_14 | 0x1C000 |
| 50GE_7 | 0x1D000 |
| 25GE_15 | 0x1E000 |
Note: The table applies only to global Avalon® memory-mapped interface access. For local Avalon® memory-mapped interface access, refer to Ethernet Avalon® Memory-Mapped Interface Address Space in the F-Tile Ethernet Intel FPGA Hard IP User Guide.
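Equivalently, the global byte address of any 400G MAC/PCS register is the instance's base address from the table plus the register's offset within that instance. The following is a minimal sketch of that computation; C is used only for illustration and the helper name is hypothetical:

```c
#include <stdint.h>

/* Hypothetical helper: global byte address of a 400G MAC/PCS register
 * for one IP instance, formed as the instance's base address (from the
 * table above) plus the register offset within that instance. */
static inline uint32_t macpcs_global_addr(uint32_t instance_base,
                                          uint32_t reg_offset)
{
    return instance_base + reg_offset;
}

/* For example, a register at offset 0x084 in the 100GE_2 instance
 * (base 0x12000) sits at global byte address 0x12084. */
```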
As an example, consider a design with four IP instances accessed by a single F-Tile Global Avalon® Memory-Mapped Interface Intel® FPGA IP, where:
- The first IP is a 1x25 Gbps module placed in stream0, fracture st_x1_0, Ethernet mode 25GE_0
- The second IP is a 2x25 Gbps module placed in stream2 and stream3, fracture st_x2_1, Ethernet mode 50GE_1
- The third IP is a 1x50 Gbps module placed in stream4 and stream5, fracture st_x2_2, Ethernet mode 50GE_2
- The fourth IP is a 4x25 Gbps module placed in stream8 through stream11, fracture st_x4_2, Ethernet mode 100GE_2
Note: Use the F-Tile Channel Placement Tool to determine where each IP module is placed (in which streams and which fracture type), then refer to the Fracture Type and Ethernet Mode Mapping table to determine the Ethernet mode.
To read the RX PCS status register:
- Write register 0xffffc with value 0x2.
- Read the RX PCS status register value of each instance at its base address plus offset 0x084:
  - For the first IP instance, read register 0x0084
  - For the second IP instance, read register 0x7084
  - For the third IP instance, read register 0xA084
  - For the fourth IP instance, read register 0x12084
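This sequence can be scripted from host software. The following is a minimal sketch, assuming the global Avalon® memory-mapped interface is visible to the host as a memory-mapped region (for example, through a Nios® processor or a bridge); the `avmm_write32()`/`avmm_read32()` helpers, the `csr_base` pointer, and the `RX_PCS_STATUS_OFFSET` name are illustrative assumptions, while the 0xffffc write and the instance addresses come from the steps above.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumption: byte addresses from the steps above index 32-bit
 * registers within a host-visible memory-mapped region at csr_base. */
static inline void avmm_write32(volatile uint32_t *csr_base,
                                uint32_t byte_addr, uint32_t data)
{
    csr_base[byte_addr / 4u] = data;
}

static inline uint32_t avmm_read32(volatile uint32_t *csr_base,
                                   uint32_t byte_addr)
{
    return csr_base[byte_addr / 4u];
}

/* RX PCS status register offset within each instance, derived from the
 * addresses above (instance base address + 0x084). */
#define RX_PCS_STATUS_OFFSET 0x084u

void read_rx_pcs_status(volatile uint32_t *csr_base)
{
    /* Base addresses of the four example instances, from the table. */
    static const uint32_t base[4] = {
        0x00000u,  /* first IP:  25GE_0  */
        0x07000u,  /* second IP: 50GE_1  */
        0x0A000u,  /* third IP:  50GE_2  */
        0x12000u   /* fourth IP: 100GE_2 */
    };

    /* Step 1: write register 0xffffc with value 0x2. */
    avmm_write32(csr_base, 0xffffcu, 0x2u);

    /* Step 2: read the RX PCS status register of each instance. */
    for (int i = 0; i < 4; i++) {
        uint32_t status = avmm_read32(csr_base,
                                      base[i] + RX_PCS_STATUS_OFFSET);
        printf("IP instance %d: RX PCS status = 0x%08x\n",
               i + 1, (unsigned)status);
    }
}
```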