L-Tile and H-Tile Production Devices
Stratix® 10 devices support a transceiver tile architecture. A
tile consists of 24 transceiver channels and associated phase locked loops (PLLs), reference
clock buffers, and Hard IPs.
The range of capabilities in each tile type offers a customized solution
suited to various transceiver applications. The next section describes the L-tile in
greater detail. A
Stratix® 10 device contains one or
more tiles on the left and right sides of the device. The types of tiles within a device do not have to be identical.
Refer to the table "Transceiver Tile Variants—Comparison of Transceiver
Capabilities" in the Overview chapter of the
Stratix® 10 L- and H-Tile Transceiver PHY User Guide for additional information.
Figure 1. Transceiver Tile Layout Example
H-tile on the left side of the device.
In an L-tile, up to eight transceiver channels can be configured as GXT channels,
reaching data rates up to 26.6 Gbps. Similarly, in an H-tile, up to 16 channels can be
configured as GXT channels, reaching data rates up to 28.3 Gbps.
Each Stratix® 10 L-tile/H-tile transceiver bank
includes the following TX phase locked loops (PLLs):
Two Advanced Transmit (ATX) PLLs
Two Fractional PLLs (fPLLs)
Two Clock Multiplier Unit (CMU) PLLs (located in channel 1 and channel 4 of each bank)
Table 1. Transmitter PLLs in Stratix 10 L-Tile/H-Tile Devices
ATX PLL:
- LC tank based voltage controlled oscillator (VCO)
- Best jitter performance
- Supports fractional synthesis mode (in cascade mode only)
- Used for both bonded and non-bonded channel configurations
Fractional PLL (fPLL):
- Ring oscillator based VCO
- Used for both bonded and non-bonded channel configurations
Clock Multiplier Unit (CMU) PLL:
- Used as an additional clock source for non-bonded applications
The total number of TX PLLs per tile is:
Eight ATX PLLs (2 ATX PLLs per bank * 4 banks per tile)
Eight fPLLs (2 fPLLs per bank * 4 banks per tile)
Eight CMU PLLs (2 CMU PLLs per bank * 4 banks per tile)
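The per-tile totals above are simple arithmetic. As a sanity check, this short Python sketch computes them; the names are illustrative, not from any Intel tool:

```python
# Per-bank TX PLL counts from the text; a tile has four banks.
BANKS_PER_TILE = 4
PLLS_PER_BANK = {"ATX": 2, "fPLL": 2, "CMU": 2}

def pll_totals_per_tile():
    """Total number of each TX PLL type in one tile."""
    return {kind: n * BANKS_PER_TILE for kind, n in PLLS_PER_BANK.items()}

print(pll_totals_per_tile())  # {'ATX': 8, 'fPLL': 8, 'CMU': 8}
```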
Figure 2. Stratix 10 PLLs and Clock Networks in Two Banks of Stratix® 10 L-Tile/H-Tile
The ATX PLL, fPLL, and CMU PLLs can drive the x1 clock network to
support non-bonded transceivers. The ATX PLL and fPLL can drive the x6 clock network to
support bonded transceivers within the bank. The x6 clock network can drive the x24 clock
network in adjacent banks, allowing ATX PLLs and fPLLs to support up to 24 bonded
transceiver channels. The x1, x6, and x24 clock networks are described in the Transceiver Clock Network section.
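The clock-network rules above can be summarized as a small selection function. This is an illustrative Python sketch, not an Intel API; it only encodes the x1/x6/x24 choice described in this section:

```python
def select_clock_network(bonded: bool, num_channels: int,
                         spans_multiple_banks: bool) -> str:
    """Pick the narrowest transmitter clock network for a configuration."""
    if not bonded:
        return "x1"   # ATX PLL, fPLL, or CMU PLL can drive x1
    if num_channels > 24:
        raise ValueError("at most 24 bonded channels per tile")
    if spans_multiple_banks:
        return "x24"  # x6 drives x24 into adjacent banks (ATX PLL/fPLL only)
    return "x6"       # bonded channels within one bank (ATX PLL/fPLL only)
```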
Note: For further details on the clock generation block (CGB), refer to the "PLL
and Clock Networks" chapter in the
Stratix® 10 L- and H-Tile
Transceiver PHY User Guide.
1 The CMU PLL or Channel PLL of channel 1 and channel 4
can be used as a transmitter PLL or as a clock data recovery (CDR) block. The
channel PLL of all other channels (0, 2, 3, and 5) can only be used as a CDR.
1.1.1.1. ATX PLL
The ATX PLLs can be used for bonded and non-bonded applications. The ATX PLLs
can access the x1, x6, and x24 clock lines. There are spacing rules between two ATX PLLs running
at the same VCO frequency. You can find the VCO frequency in the PLL IP parameters in Platform Designer. For more details, refer to Transceiver
Clock Network and ATX PLL Spacing Requirements.
1.1.1.2. fPLL
The fPLLs can be used for bonded and non-bonded applications. The fPLLs can
access the x1, x6, and x24 clock lines. There are no spacing rules between fPLLs, regardless of
their VCO frequencies.
Figure 4. fPLL Block Diagram
1.1.1.3. CMU PLL
CMU PLLs can only be used for non-bonded applications and can only access the
x1 clock lines.
When using a CMU PLL in channel 1 or channel 4 of a bank, that channel is no
longer available to receive data, but the channel can still be used for transmitting data.
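The trade-off above (a CMU PLL used as a TX PLL costs its channel the receive path) can be expressed as a one-line rule. A hypothetical sketch, not an Intel API:

```python
def rx_available(channel: int, cmu_pll_used_as_tx_pll: bool) -> bool:
    """RX availability for a channel within a bank (channels 0-5).

    Only channels 1 and 4 have a CMU PLL that can serve as a TX PLL;
    using it that way disables that channel's receive path (TX still works).
    """
    if channel not in (1, 4):
        return True
    return not cmu_pll_used_as_tx_pll
```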
Figure 5. CMU PLL Block Diagram
1.1.2. Transmitter Clock Network
The transmitter clock network routes the clock from the transmitter PLL to
one or more transmitter channels. It provides two types of clocks to the transmitter channel:
High-Speed Serial Clock - high-speed clock for the serializer
Low-Speed Parallel Clock - low-speed clock for the serializer and the PCS
In a bonded channel configuration, both the serial clock and the parallel
clock are routed from the transmitter PLL to the transmitter channels. In a non-bonded channel
configuration, only the serial clock is routed to the transmitter channels, while the parallel
clock is generated locally within each channel.
To support various bonded and non-bonded clocking configurations, three types of transmitter
clock network lines are available:
x1 clock lines: Span a single bank within a tile and are used for
non-bonded channel clocking only
x6 clock lines: Span a single bank within a tile and are used for bonded channel clocking
x24 clock lines: Span all banks within a tile and are used for both PMA
bonded and PMA-PCS bonded transceiver channels
All clock lines are contained within a single tile and cannot span across multiple tiles.
Figure 6. x1 Clock Lines
Figure 7. x6 Clock Lines
Figure 8. x24 Clock Lines
There are two x24 lines available per tile:
x24 Up: Routes clocks to transceiver banks located above the current bank
x24 Down: Routes clocks to transceiver banks located below the current bank
When using the x24 lines, the maximum channel span is two banks above and two
banks below the master bank containing the instantiated TX PLL. If using the x24 clock lines
across all four banks within the tile, the TX PLL must be instantiated in one of the middle
banks to comply with the channel span requirements.
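The x24 span rule above can be checked mechanically. An illustrative sketch, assuming banks are numbered 0 through 3 within a tile:

```python
def x24_span_ok(master_bank: int, channel_banks: list) -> bool:
    """True if every channel sits within two banks of the master bank
    containing the TX PLL, as the x24 span rule requires."""
    return all(abs(b - master_bank) <= 2 for b in channel_banks)
```

With all four banks (0-3) in use, only a middle bank (1 or 2) satisfies the check, matching the requirement that the TX PLL be instantiated in one of the middle banks.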
Both the L-tile and H-tile contain the GXT clock network. The GXT clock
network allows an ATX PLL to drive up to six transmitter channels: four in its bank and two in
an adjacent bank. The GXT clock network is used for data rates above 17.4 Gbps.
Refer to GXT Channels for
details. See "Using the
ATX PLL for GXT Channels" in the
Stratix® 10 L- and H-Tile Transceiver PHY
User Guide for an in-depth discussion of the GXT clock network and ATX PLL usage.
Figure 9. Top ATX PLL GXT Network Reach (Right Side of Figure Below)
If the ATX PLL is in the upper triplet, its drive span is all four GXT
channels within its own bank and channels ch0 and ch1 of the bank above.
Figure 10. Bottom ATX PLL GXT Network Reach (Left Side of Figure Below)
If the ATX PLL is in the bottom triplet, its drive span is all four GXT
channels within its own bank and channels ch3 and ch4 of the bank below.
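The two reach descriptions above can be modeled together. This is a hypothetical sketch; it assumes the GXT-capable channels in a bank are ch0, ch1, ch3, and ch4, which matches the adjacent-bank channels named in the figures:

```python
def gxt_reach(bank: int, triplet: str) -> list:
    """(bank, channel) pairs an ATX PLL can drive over the GXT network:
    four GXT channels in its own bank plus two in the adjacent bank."""
    own = [(bank, ch) for ch in (0, 1, 3, 4)]        # assumed GXT channels
    if triplet == "top":
        return own + [(bank + 1, 0), (bank + 1, 1)]  # ch0/ch1 of bank above
    return own + [(bank - 1, 3), (bank - 1, 4)]      # ch3/ch4 of bank below
```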
The transceiver is calibrated at device power-on. The OSC_CLK_1 signal is used for device configuration and by the transceiver calibration
logic. OSC_CLK_1 must be driven by a free-running 25 MHz, 100
MHz, or 125 MHz clock source if the transceiver tiles are used. The internal FPGA oscillator
cannot be used for transceiver calibration.
The clock source must be stable at FPGA device configuration and should continue to run
during device operation.
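The OSC_CLK_1 requirements above reduce to a simple check. An illustrative sketch, not an Intel tool:

```python
VALID_OSC_CLK_1_MHZ = (25, 100, 125)   # allowed calibration clock frequencies

def osc_clk_1_ok(freq_mhz: int, free_running: bool,
                 uses_transceiver_tiles: bool) -> bool:
    """Check an OSC_CLK_1 source against the calibration requirements."""
    if not uses_transceiver_tiles:
        return True
    return free_running and freq_mhz in VALID_OSC_CLK_1_MHZ
```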
2. Tile Architecture Constraints
2.1. Transceiver Channel Placement
The Stratix® 10 product family
introduces several transceiver tile variants to support a wide variety of protocol applications.
Table 3. Channel Types
There are a total of 24 channels available per tile. You can configure
them as either GX channels or as a combination of GX and (up to 16) GXT channels as long as
the total does not exceed 24. You can use a GXT channel as a GX channel, but it would be
subject to all of the GX channel placement constraints.
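The channel-mix rule above (24 channels per tile, at most 16 of them GXT) is easy to validate. A minimal sketch:

```python
def channel_mix_ok(gx: int, gxt: int) -> bool:
    """True if a GX/GXT mix fits in one tile: at most 16 GXT channels,
    and at most 24 channels in total."""
    return gx >= 0 and 0 <= gxt <= 16 and gx + gxt <= 24
```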
Figure 18. Example Combination 4: 1 GXT and 3 GX Channels
4 If you use GXT channel
data rates, the VCCR_GXB and VCCT_GXB voltages must be set
to 1.12 V.
2.1.2. GX Channels
Stratix® 10 GX transceiver channels can support
data rates up to 17.4 Gbps for chip-to-chip applications, and 12.5 Gbps for backplane applications.
L-tile and H-tile
transceiver clocking architecture supports both bonded and non-bonded
transceiver channel configurations. Channel bonding is used to minimize the clock skew between
multiple transceiver channels. For
L-tile and H-tile
transceivers, the term bonding can refer to PMA bonding as well as PMA-PCS bonding.
2.1.2.1. Non-bonded GX Channels
Non-bonded channels can be placed anywhere within the transceiver tile.
Separate PHY IP cores, TX PLLs, and REFCLK sources are required for each tile,
even if the transceivers are running at the same data rate with the same functionality.
2.1.2.2. Bonded GX Channels
Bonding across multiple transceiver tiles is not supported. All bonded
channels must be placed within the same transceiver tile. A maximum of 24 channels can be bonded.
When PMA bonding is enabled, the channels do not need to be placed
contiguously in the transceiver tile. When both PMA and PCS bonding are enabled, the channels
must be placed contiguously in the transceiver tile and in ascending order.
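The placement rules above can be sketched as a checker. This is illustrative only; physical channels are assumed to be numbered 0 through 23 within a tile:

```python
def bonded_placement_ok(channels: list, pcs_bonded: bool) -> bool:
    """Validate bonded channel placement within one tile.

    PMA-only bonding allows any placement in the tile; PMA + PCS bonding
    additionally requires contiguous, ascending physical channels.
    """
    if not channels or len(channels) > 24:
        return False
    if any(c not in range(24) for c in channels):
        return False                     # bonding cannot cross tiles
    if not pcs_bonded:
        return True
    first = channels[0]
    return channels == list(range(first, first + len(channels)))
```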
Figure 19. x4 Configuration
The figure below shows one way of placing four bonded channels. In this case, the logical PCS master channel number 2 must be specified as physical channel 4 of the bank.
Figure 20. Mix and Match GX Channels Design Example
Example Stratix® 10 L-tile/H-tile design with Interlaken and 10GBASE-KR channels.
2.1.3. GXT Channels
For more information on the different channel types and the data rates supported
by them, refer to the table "Channel Types" in the Stratix® 10 L- and H-Tile Transceiver PHY User Guide.
2.1.4. Reference Clock Guidelines for L-Tile and H-Tile
The transmitter PLL and the clock data recovery (CDR) block need an input
reference clock source to generate the clocks required for transceiver operation. The input
reference clock must be stable and free-running at device power-up for proper PLL operation.
L-tile and H-tile
transceiver PLLs have five possible input reference clock sources, depending
on jitter requirements:
Dedicated reference clock pins
Receiver input pins
Reference clock network
PLL cascade output (fPLL only)
Core clock network (fPLL only)
Note: Each core clock network reference clock
pin cannot drive fPLLs located on multiple tiles.
Intel recommends using the
dedicated reference clock pins and the reference clock network for the best jitter performance.
For the best jitter performance, Intel recommends placing the reference clock as close as
possible to the transmitter PLL. The following protocols require the reference clock to be
placed in the same bank as the transmitter PLL:
OTU2e, OTU2, OC-192 and 10G PON
6G and 12G SDI
Note: For optimum performance of a
GXT channel, Intel recommends that the transmitter PLL reference clock come from a dedicated
reference clock pin in the same bank.
Figure 26. Input Reference Clock Sources
In Stratix® 10 devices, the FPGA fabric core clock network can be used as an input reference
clock source for the fPLL only.
The input reference clock is a differential signal. Intel recommends using the dedicated reference clock pin in the same bank
as the transmitter PLL for optimal jitter performance. The input reference clock must be
stable and free-running at device power-up for proper PLL operation and PLL calibration. If
the reference clock is not available at device power-up, then the PLL must be recalibrated when
the reference clock is available.
Figure 27. Dedicated Reference Clock Pins and Other Reference Clock Sources
In Stratix® 10 L-tile and H-tile devices, the
dedicated reference clock pins and the reference clock network can be used by the transmitter
PLLs (ATX PLL and fPLL).
Intel recommends that the remaining channels of the tile be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2
or Gen3 speeds. Using the ATX PLL to drive these channels helps achieve better performance.
The Quartus® Prime software issues a critical warning if an fPLL is used to
drive the remaining channels.
2.2. Unsupported Dynamic Reconfiguration Features
The following is a list of the unsupported dynamic reconfiguration features:
Reconfiguration from a bonded configuration to a non-bonded
configuration, or vice versa
Reconfiguration from a bonded protocol to another bonded protocol
Reconfiguration from PCIe (with Hard IP) to PCIe (without Hard IP), or
non-PCIe bonded protocol switching
2.3. Intel Stratix 10 L-Tile Transceiver to H-Tile Transceiver Migration
All of the L-tile transceiver constraints apply to H-tile transceivers as
well. The H-tile transceivers have no further restrictions than the L-tile transceivers, with
the exception of the GXT channels.
If you plan to use GXT channels in the H-tile, the VCCR_GXB and VCCT_GXB pins on that tile must be set to 1.12 V.
Note: When migrating from L-tile to H-tile
transceivers, use the Power
and Thermal Calculator (PTC)
tool to validate your regulator sizing.
The placement constraints for the GXT channels
are described in the GXT Channels section.
Optimal thermal performance can be achieved by reducing the power density
within the transceiver tile. Placing many high data rate channels next to each other results
in high power density areas within a tile. Following the general guideline of minimizing power
density results in a less complex and cheaper cooling solution for the FPGA.
For best thermal performance, you can minimize power density by picking transceiver channel
locations early on. Follow these guidelines when placing your transceiver channels within a tile:
Spread out channels as much as possible
If all channels in a tile are used, intersperse low and high data rate channels
The middle of the tile has the best thermal performance, followed by the
bottom and then the top of each tile when looking at the Pin Planner
The Power and Thermal Calculator (PTC) contains a Thermal worksheet to help you
determine the impact of transceiver placement on your thermal solution requirements. Prior to
finalizing your board design, you should analyze your transceiver channel placement using the
PTC to ensure it is thermally optimal.
Note: Contact your local FAE to have Intel run a
thermal analysis of your board design after you have determined placement of all transceiver channels.
3. PCIe Guidelines
3.1. PCIe Hard IP
There is one PCIe Hard IP available per transceiver tile.
3.1.1. Channel Placement for PCIe Hard IP
The PCIe lane 0 is always mapped to ch0 of the transceiver tile. Channel 0 of
the transceiver tile = Bank 0, Channel 0.
The PCIe x1, x2, x4, and x8 configurations always consume a total of eight channels.
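The mapping rules above can be sketched as follows. This is illustrative: the linear lane-to-channel mapping beyond lane 0 and the 16-channel cost of x16 are assumptions, not statements from this section:

```python
def pcie_channels_consumed(lanes: int) -> int:
    """Transceiver channels reserved by a PCIe Hard IP configuration."""
    if lanes not in (1, 2, 4, 8, 16):
        raise ValueError("unsupported PCIe width")
    return 8 if lanes <= 8 else 16   # x1-x8 always consume eight channels

def pcie_lane_to_channel(lane: int) -> int:
    """Assumed linear mapping; lane 0 is always tile channel 0."""
    return lane
```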
Only the bottom left transceiver tile supports Configuration via Protocol (CvP).
Figure 32. Transceiver Channel Usage for PCIe x1, x2, x4, x8, and x16
3.1.2. PLL Placement for PCIe Hard IP
If the PCIe Hard IP is configured as Gen1/Gen2 capable IP, the fPLL is used
as a transmitter PLL.
If the PCIe Hard IP is configured as Gen3 capable IP, then
fPLL is used as a transmitter PLL when running at Gen1/Gen2 speeds.
ATX PLL is used as a transmitter PLL when running at Gen3 speeds.
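The PLL selection described above can be summarized in a small helper. An illustrative sketch, not an Intel API:

```python
def pcie_hard_ip_tx_pll(capability: str, link_speed: str) -> str:
    """TX PLL used by the PCIe Hard IP, per the rules above."""
    if capability in ("Gen1", "Gen2"):
        return "fPLL"                  # Gen1/Gen2-capable IP always uses fPLL
    # Gen3-capable IP switches PLLs with the negotiated link rate:
    return "fPLL" if link_speed in ("Gen1", "Gen2") else "ATX"
```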
Figure 33. PLL Placement for Gen1 and Gen2 x1/x2/x4/x8
Figure 34. PLL Placement for Gen1 and Gen2 x16
Figure 35. PLL Placement for Gen3 x1/x2/x4/x8
Figure 36. PLL Placement for Gen3 x16
TX PLL Guidelines When Using PCIe
Intel recommends that the remaining channels of the tile be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2 or Gen3
speeds. Using the ATX PLL to drive these channels helps achieve better performance.
The Quartus® Prime software issues a critical warning if an fPLL is used to drive
the remaining channels.
3.2. PHY Interface for PCIe Express (PIPE)
This can be used when you want flexible channel placement or to integrate the
Stratix® 10 PCIe PHY with existing third-party PCIe IP.
3.2.1. Channel Placement for PIPE
Any transceiver channel running at a data rate above 6.5 Gbps that shares a tile with
an active PCI Express interface that is Gen2 or Gen3 capable and configured with
more than two lanes (Gen2/3 x4, x8, x16) may observe momentary bit errors during
a PCI Express rate change event (PCIe link training both up and down, that is, link
down and start of link training). Transceiver channels that share a tile with
active PCI Express interfaces that are only Gen1 capable are not impacted.
For details on channel placement for PIPE, refer to the section "How
to place channels for PIPE configurations" in the Stratix® 10 L- and
H-Tile Transceiver PHY User Guide.
When instantiating PIPE interfaces and PCIe Hard IP in the same transceiver
tile, be aware of the ATX PLL and ATX-fPLL spacing rules. For more details, refer to the
PLL Placement section.
TX PLL Guidelines When Using PCIe
Intel® recommends that the
remaining channels of the tile
be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2 or
Gen3 speeds. Using the ATX PLL to drive these channels helps achieve better
performance. The Quartus® Prime software issues a critical
warning if an fPLL is used to drive the remaining channels.