1. Overview
2. Implementing the Transceiver PHY Layer in L-Tile/H-Tile
3. PLLs and Clock Networks
4. Resetting Transceiver Channels
5. Stratix® 10 L-Tile/H-Tile Transceiver PHY Architecture
6. Reconfiguration Interface and Dynamic Reconfiguration
7. Calibration
8. Debugging Transceiver Links
A. Logical View of the L-Tile/H-Tile Transceiver Registers
2.1. Transceiver Design IP Blocks
2.2. Transceiver Design Flow
2.3. Configuring the Native PHY IP Core
2.4. Using the Stratix® 10 L-Tile/H-Tile Transceiver Native PHY Stratix® 10 FPGA IP Core
2.5. Implementing the PHY Layer for Transceiver Protocols
2.6. Unused or Idle Transceiver Channels
2.7. Simulating the Native PHY IP Core
2.8. Implementing the Transceiver PHY Layer in L-Tile/H-Tile Revision History
2.3.1. Protocol Presets
2.3.2. GXT Channels
2.3.3. General and Datapath Parameters
2.3.4. PMA Parameters
2.3.5. PCS-Core Interface Parameters
2.3.6. Analog PMA Settings Parameters
2.3.7. Enhanced PCS Parameters
2.3.8. Standard PCS Parameters
2.3.9. PCS Direct Datapath Parameters
2.3.10. Dynamic Reconfiguration Parameters
2.3.11. Generation Options Parameters
2.3.12. PMA, Calibration, and Reset Ports
2.3.13. PCS-Core Interface Ports
2.3.14. Enhanced PCS Ports
2.3.15. Standard PCS Ports
2.3.16. Transceiver PHY PCS-to-Core Interface Reference Port Mapping
2.3.17. IP Core File Locations
2.4.2.1. Receiver Word Alignment
2.4.2.2. Receiver Clock Compensation
2.4.2.3. Encoding/Decoding
2.4.2.4. Running Disparity Control and Check
2.4.2.5. FIFO Operation for the Enhanced PCS
2.4.2.6. Polarity Inversion
2.4.2.7. Data Bitslip
2.4.2.8. Bit Reversal
2.4.2.9. Byte Reversal
2.4.2.10. Double Rate Transfer Mode
2.4.2.11. Asynchronous Data Transfer
2.4.2.12. Low Latency
2.5.1.1. Transceiver Channel Datapath for PIPE
2.5.1.2. Supported PIPE Features
2.5.1.3. How to Connect TX PLLs for PIPE Gen1, Gen2, and Gen3 Modes
2.5.1.4. How to Implement PCI Express (PIPE) in Stratix® 10 Transceivers
2.5.1.5. Native PHY IP Core Parameter Settings for PIPE
2.5.1.6. fPLL IP Core Parameter Settings for PIPE
2.5.1.7. ATX PLL IP Core Parameter Settings for PIPE
2.5.1.8. Native PHY IP Core Ports for PIPE
2.5.1.9. fPLL Ports for PIPE
2.5.1.10. ATX PLL Ports for PIPE
2.5.1.11. Preset Mappings to TX De-emphasis
2.5.1.12. How to Place Channels for PIPE Configurations
2.5.1.13. Link Equalization for Gen3
2.5.1.14. Timing Closure Recommendations
3.1. PLLs
3.2. Input Reference Clock Sources
3.3. Transmitter Clock Network
3.4. Clock Generation Block
3.5. FPGA Fabric-Transceiver Interface Clocking
3.6. Double Rate Transfer Mode
3.7. Transmitter Data Path Interface Clocking
3.8. Receiver Data Path Interface Clocking
3.9. Channel Bonding
3.10. PLL Cascading Clock Network
3.11. Using PLLs and Clock Networks
3.12. PLLs and Clock Networks Revision History
4.1. When Is Reset Required?
4.2. Transceiver PHY Reset Controller Stratix® 10 FPGA IP Implementation
4.3. How Do I Reset?
4.4. Using PCS Reset Status Port
4.5. Using Transceiver PHY Reset Controller Stratix® 10 FPGA IP
4.6. Using a User-Coded Reset Controller
4.7. Combining Status or PLL Lock Signals with User Coded Reset Controller
4.8. Resetting Transceiver Channels Revision History
4.3.1.1. Resetting the Transmitter After Power Up
4.3.1.2. Resetting the Transmitter During Device Operation
4.3.1.3. Resetting the Receiver After Power Up
4.3.1.4. Resetting the Receiver During Device Operation (Auto Mode)
4.3.1.5. Clock Data Recovery in Manual Lock Mode
4.3.1.6. Special TX PCS Reset Release Sequence
5.1. PMA Architecture
5.2. Enhanced PCS Architecture
5.3. Stratix® 10 Standard PCS Architecture
5.4. Stratix® 10 PCI Express Gen3 PCS Architecture
5.5. PCS Support for GXT Channels
5.6. Square Wave Generator
5.7. PRBS Pattern Generator
5.8. PRBS Pattern Verifier
5.9. Loopback Modes
5.10. Stratix® 10 L-Tile/H-Tile Transceiver PHY Architecture Revision History
5.1.2.1.1. Programmable Differential On-Chip Termination (OCT)
5.1.2.1.2. Signal Detector
5.1.2.1.3. Continuous Time Linear Equalization (CTLE)
5.1.2.1.4. Variable Gain Amplifier (VGA)
5.1.2.1.5. Adaptive Parametric Tuning (ADAPT) Engine
5.1.2.1.6. Decision Feedback Equalization (DFE)
5.1.2.1.7. On-Die Instrumentation
5.2.1.1. TX Core FIFO
5.2.1.2. TX PCS FIFO
5.2.1.3. Interlaken Frame Generator
5.2.1.4. Interlaken CRC-32 Generator
5.2.1.5. 64B/66B Encoder and Transmitter State Machine (TX SM)
5.2.1.6. Scrambler
5.2.1.7. Interlaken Disparity Generator
5.2.1.8. TX Gearbox, TX Bitslip and Polarity Inversion
5.2.1.9. KR FEC Blocks
5.2.2.1. RX Gearbox, RX Bitslip, and Polarity Inversion
5.2.2.2. Block Synchronizer
5.2.2.3. Interlaken Disparity Checker
5.2.2.4. Descrambler
5.2.2.5. Interlaken Frame Synchronizer
5.2.2.6. 64B/66B Decoder and Receiver State Machine (RX SM)
5.2.2.7. 10GBASE-R Bit-Error Rate (BER) Checker
5.2.2.8. Interlaken CRC-32 Checker
5.2.2.9. RX PCS FIFO
5.2.2.10. RX Core FIFO
5.3.1.4.1. 8B/10B Encoder Control Code Encoding
5.3.1.4.2. 8B/10B Encoder Reset Condition
5.3.1.4.3. 8B/10B Encoder Idle Character Replacement Feature
5.3.1.4.4. 8B/10B Encoder Current Running Disparity Control Feature
5.3.1.4.5. 8B/10B Encoder Bit Reversal Feature
5.3.1.4.6. 8B/10B Encoder Byte Reversal Feature
5.3.2.1.1. Word Aligner Bitslip Mode
5.3.2.1.2. Word Aligner Manual Mode
5.3.2.1.3. Word Aligner Synchronous State Machine Mode
5.3.2.1.4. Word Aligner Deterministic Latency Mode
5.3.2.1.5. Word Aligner Pattern Length for Various Word Aligner Modes
5.3.2.1.6. Word Aligner RX Bit Reversal Feature
5.3.2.1.7. Word Aligner RX Byte Reversal Feature
5.3.2.6.1. Byte Deserializer Disabled Mode
5.3.2.6.2. Byte Deserializer Deserialize x2 Mode
5.3.2.6.3. Byte Deserializer Deserialize x4 Mode
5.3.2.6.4. Bonded Byte Deserializer
5.3.2.6.5. Byte Ordering Register-Transfer Level (RTL)
5.3.2.6.6. Byte Serializer Effects on Data Propagation at the RX Side
5.3.2.6.7. ModelSim Byte Ordering Analysis
6.1. Reconfiguring Channel and PLL Blocks
6.2. Interacting with the Reconfiguration Interface
6.3. Multiple Reconfiguration Profiles
6.4. Arbitration
6.5. Recommendations for Dynamic Reconfiguration
6.6. Steps to Perform Dynamic Reconfiguration
6.7. Direct Reconfiguration Flow
6.8. Native PHY IP or PLL IP Core Guided Reconfiguration Flow
6.9. Reconfiguration Flow for Special Cases
6.10. Changing Analog PMA Settings
6.11. Ports and Parameters
6.12. Dynamic Reconfiguration Interface Merging Across Multiple IP Blocks
6.13. Embedded Debug Features
6.14. Timing Closure Recommendations
6.15. Unsupported Features
6.16. Transceiver Register Map
6.17. Reconfiguration Interface and Dynamic Reconfiguration Revision History
7.5.1. Recalibrating a Duplex Channel (Both PMA TX and PMA RX)
7.5.2. Recalibrating the PMA RX Only in a Duplex Channel
7.5.3. Recalibrating the PMA TX Only in a Duplex Channel
7.5.4. Recalibrating a PMA Simplex RX Only (No PMA Simplex TX Used)
7.5.5. Recalibrating a PMA Simplex TX Only (No PMA Simplex RX Used)
7.5.6. Recalibrating the PMA Simplex RX in a Channel where PMA Simplex RX and TX are Merged
7.5.7. Recalibrating the PMA Simplex TX in a Channel where PMA Simplex RX and TX are Merged
7.5.8. Recalibrating the fPLL
7.5.9. Recalibrating the ATX PLL
7.5.10. Recalibrating the CMU PLL When it is Used as a TX PLL
A.4.1. Transmitter PMA Logical Register Map
A.4.2. Receiver PMA Logical Register Map
A.4.3. Pattern Generators and Checkers
A.4.4. Loopback
A.4.5. Optional Reconfiguration Logic PHY - Capability
A.4.6. Optional Reconfiguration Logic PHY - Control & Status
A.4.7. Embedded Streamer (Native PHY)
A.4.8. Static Polarity Inversion
A.4.9. Reset
A.4.10. CDR/CMU and PMA Calibration
3.11.4. Mix and Match Example
In the Stratix® 10 transceiver architecture, keeping the Native PHY IP core separate from the PLL IP cores provides great flexibility: PLLs can be shared across channels, and data rates can be reconfigured easily. The following design example illustrates PLL sharing as well as bonded and non-bonded clocking configurations.
Figure 171. Mix and Match Design Example
PLL Instances
In this example, two ATX PLL instances and three fPLL instances are used. Choose an appropriate reference clock for each PLL instance. The IP Catalog lists the available PLLs.
Use the following data rates and configuration settings for the PLL IP cores:
- Transceiver PLL Instance 0: ATX PLL with an output clock frequency of 6.25 GHz
  - Enable the Master CGB and bonding output clocks.
- Transceiver PLL Instance 1: fPLL with an output clock frequency of 5.15625 GHz
  - Select the Use as Transceiver PLL option.
- Transceiver PLL Instance 2: fPLL with an output clock frequency of 0.625 GHz
- Transceiver PLL Instance 3: fPLL with an output clock frequency of 2.5 GHz
  - Select the Enable PCIe clock output port option.
  - Select the Use as Transceiver PLL option.
  - Set Protocol Mode to PCIe Gen2.
- Transceiver PLL Instance 4: ATX PLL with an output clock frequency of 4 GHz
  - Enable the Master CGB and bonding output clocks.
  - Select the Enable PCIe clock switch interface option.
  - Set Number of Auxiliary MCGB Clock Input ports to 1.
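At the RTL level, each PLL instance contributes a serial or bonded clock net to the design. The fragment below is a minimal sketch of the three non-PCIe instances (Instances 0 through 2), assuming placeholder module, instance, and net names rather than the names of your generated IP variants, and assuming a 6-bit bonding clock bus; only the clock outputs discussed in this example are shown. Each output clock runs at half the lane data rate it serves because the PMA serializer operates at half rate. The two PCIe-related instances (3 and 4) are sketched after the Connection Guidelines.

// Sketch only: placeholder module, instance, and net names; only the clock
// outputs discussed in this example are shown. Each output clock frequency is
// half the lane data rate (12.5/2 = 6.25, 10.3125/2 = 5.15625, 1.25/2 = 0.625 GHz).
wire [5:0] ilkn_bonding_clks;  // Instance 0: ATX PLL master CGB bonding clocks (6.25 GHz)
wire       kr_serial_clk;      // Instance 1: fPLL x1 serial clock for 10GBASE-KR (5.15625 GHz)
wire       gige_serial_clk;    // Instance 2: fPLL x1 serial clock for Gigabit Ethernet (0.625 GHz)

atx_pll_ilkn pll_inst0 ( .tx_bonding_clocks (ilkn_bonding_clks)  // Master CGB and bonding outputs enabled
                         /* ..., reference clock and status ports */ );
fpll_kr      pll_inst1 ( .tx_serial_clk     (kr_serial_clk)      // Use as Transceiver PLL
                         /* ... */ );
fpll_gige    pll_inst2 ( .tx_serial_clk     (gige_serial_clk)
                         /* ... */ );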
Native PHY IP Core Instances
In this example, three Transceiver Native PHY IP core instances and two 10GBASE-KR PHY IP instances are used. Use the following data rates and configuration settings for the PHY IPs:
- 12.5 Gbps Interlaken with a bonded group of 10 channels
  - Select the Interlaken 10x12.5 Gbps preset from the Stratix® 10 Transceiver Native PHY IP core GUI.
- 1.25 Gbps Gigabit Ethernet with a non-bonded group of 2 channels
  - Select the GIGE-1.25Gbps preset from the Stratix® 10 Transceiver Native PHY IP core GUI.
  - Change the Number of data channels to 2.
- PCIe Gen3 with a bonded group of 8 channels
  - Select the PCIe PIPE Gen3x8 preset from the Stratix® 10 Transceiver Native PHY IP core GUI.
  - Under TX Bonding options, set the PCS TX channel bonding master to channel 5.
    Note: The PCS TX channel bonding master must be physically placed in channel 1 or channel 4 within a transceiver bank. In this example, the 5th channel of the bonded group is physically placed at channel 1 in the transceiver bank.
  - Refer to PCI Express (PIPE) for more details.
- 10.3125 Gbps 10GBASE-KR with a non-bonded group of 2 channels
  - Instantiate the Stratix® 10 1G/10GbE and 10GBASE-KR PHY IP twice, one instance per channel.
  - Refer to 10GBASE-KR PHY IP Core for more details.
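At the RTL level, the Interlaken, Gigabit Ethernet, and 10GBASE-KR instances above consume the clock nets produced by PLL Instances 0 through 2. The fragment below is a minimal sketch of those instantiations, assuming placeholder module and instance names, and assuming that the clock ports scale as six bonding clocks per bonded channel and one x1 serial clock per non-bonded channel; verify the actual names and widths against the generated RTL. The connections shown anticipate the Connection Guidelines below, and the PCIe Gen3 group is sketched separately after those guidelines.

// Sketch only: placeholder module and instance names; clock-port widths are
// assumptions. The clock nets come from the PLL sketch under PLL Instances.
ilkn_phy_x10 phy_ilkn ( .tx_bonding_clocks ({10{ilkn_bonding_clks}})  // same 6-bit master CGB bus fanned out to all 10 bonded channels
                        /* ..., data, reset, and reconfiguration ports */ );
gige_phy_x2  phy_gige ( .tx_serial_clk     ({2{gige_serial_clk}})     // one x1 clock per non-bonded channel
                        /* ... */ );
kr_phy       phy_kr_0 ( .tx_serial_clk     (kr_serial_clk)            // both 10GBASE-KR instances share fPLL Instance 1
                        /* ... */ );
kr_phy       phy_kr_1 ( .tx_serial_clk     (kr_serial_clk)
                        /* ... */ );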
Connection Guidelines for PLL and Clock Networks
- For the 12.5 Gbps Interlaken bonded group of 10 channels, connect the Native PHY's tx_bonding_clocks input port to the tx_bonding_clocks output port of Transceiver PLL Instance 0 (the ATX PLL at 6.25 GHz). Make this connection for all 10 bonded channels. This connection uses the master CGB and the x6/x24 clock lines to reach all the channels in the bonded group.
- Connect the tx_serial_clk port of each of the two 10GBASE-KR PHY IP instances to the tx_serial_clk port of Transceiver PLL Instance 1 (the fPLL at 5.15625 GHz). This connection uses the x1 clock line within the transceiver bank.
- Connect the tx_serial_clk port of the 1.25 Gbps Gigabit Ethernet non-bonded PHY IP instance to the tx_serial_clk port of Transceiver PLL Instance 2 (the fPLL at 0.625 GHz). Make this connection once for each of the two channels. This connection uses the x1 clock line within the transceiver bank.
- Connect the PCIe Gen3 bonded group of 8 channels as follows:
  - Connect the tx_bonding_clocks port of the PHY IP to the tx_bonding_clocks port of Transceiver PLL Instance 4. Make this connection for each of the 8 bonded channels.
  - Connect the pipe_sw_done port of the PHY IP to the pipe_sw port of Transceiver PLL Instance 4.
  - Connect the pll_pcie_clk port of Transceiver PLL Instance 3 to the PHY IP's pipe_hclk_in port.
  - Connect the tx_serial_clk port of Transceiver PLL Instance 3 to the mcgb_aux_clk0 port of Transceiver PLL Instance 4. This connection is required as part of the PCIe speed negotiation protocol.
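Putting the PCIe-related guidelines together, the fragment below sketches how the Gen3 bonded group might be stitched to Transceiver PLL Instances 3 and 4. The module, instance, and net names are placeholders, the bus widths (six bonding clocks per channel, a 2-bit rate-switch signal) are assumptions, and only the ports named in the guidelines above are shown; verify everything against the generated IP.

// Sketch only: PCIe Gen3 x8 clocking and rate-switch hookup, following the
// guidelines above. Placeholder names; widths are assumptions.
wire [5:0] pcie_bonding_clks;    // ATX PLL Instance 4: master CGB bonding clocks (4 GHz)
wire       pcie_aux_serial_clk;  // fPLL Instance 3: 2.5 GHz x1 clock used for speed negotiation
wire       pcie_hclk;            // fPLL Instance 3: pll_pcie_clk for the PIPE interface
wire [1:0] pcie_rate_sw;         // PHY pipe_sw_done -> PLL pipe_sw

fpll_pcie    pll_inst3 ( .tx_serial_clk     (pcie_aux_serial_clk),
                         .pll_pcie_clk      (pcie_hclk)
                         /* ..., reference clock and status ports */ );

atx_pll_pcie pll_inst4 ( .tx_bonding_clocks (pcie_bonding_clks),
                         .mcgb_aux_clk0     (pcie_aux_serial_clk),  // auxiliary MCGB clock input
                         .pipe_sw           (pcie_rate_sw)
                         /* ... */ );

pcie_phy_x8  phy_pcie  ( .tx_bonding_clocks ({8{pcie_bonding_clks}}),  // one copy of the 6-bit bus per bonded channel
                         .pipe_hclk_in      (pcie_hclk),
                         .pipe_sw_done      (pcie_rate_sw)
                         /* ..., PIPE, data, and reset ports */ );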