2.2.7. LPDDR5 and DDR5 reduced sequential write/read bandwidth when using Fabric Sync (fabric direct) mode in x16 2-channel configuration
Description
In LPDDR5 and DDR5 memory interfaces using the x16 2-channel configuration in Fabric Sync access mode, long-burst sequential write or read performance is limited to 50% of the available bandwidth per read/write sub-channel.
There is no performance degradation in the following configurations:
- LPDDR5 x16 1-channel when placed in the bottom GPIO-B sub-bank
- DDR5 x16 1-channel
- All impacted configurations listed below, when using random access or sequential access with mixed read-write traffic
- All supported configurations when using NoC access mode
Impacted configurations:
- DDR5 x32 in DIMM or component configuration
- LPDDR5 x32 configuration
- LPDDR5 and DDR5 x16 2-channel configuration, with Fabric Sync only
- LPDDR5 x32 1-channel
- LPDDR5 x16 1-channel when placed in the top GPIO-B sub-bank
Workaround
We recommend using Fabric Async mode for the impacted configurations. Accessing the interface through the NoC, or through Fabric Async mode with the user clock set to ¼ of the memory clock, provides full sequential bandwidth on a single sub-channel.
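To put the numbers in perspective, the following is a minimal, illustrative sketch comparing the effective sequential bandwidth of one x16 sub-channel in the two modes. The memory clock frequency and the simplified double-data-rate throughput model are hypothetical assumptions for illustration only; the 50% Fabric Sync limit and the ¼ user clock ratio are the figures stated above.

```python
# Illustrative estimate only. The clock frequency and the simplified
# double-data-rate model are hypothetical; the 50% Fabric Sync limit and
# the 1/4 user clock ratio come from the description and workaround above.

def sub_channel_bandwidth_gbps(mem_clk_mhz: float, data_width_bits: int,
                               efficiency: float) -> float:
    """Peak sub-channel bandwidth in Gbps under a simple DDR model."""
    # Simplified model: two data transfers per memory clock cycle.
    return mem_clk_mhz * 2 * data_width_bits * efficiency / 1000.0

MEM_CLK_MHZ = 1600.0   # hypothetical memory clock, for illustration
WIDTH_BITS = 16        # one x16 sub-channel

# Fabric Sync, long-burst sequential traffic: limited to 50% per sub-channel.
sync_bw = sub_channel_bandwidth_gbps(MEM_CLK_MHZ, WIDTH_BITS, efficiency=0.5)

# NoC access, or Fabric Async with user clock = 1/4 of the memory clock:
# full sequential bandwidth on a single sub-channel.
async_bw = sub_channel_bandwidth_gbps(MEM_CLK_MHZ, WIDTH_BITS, efficiency=1.0)
user_clk_mhz = MEM_CLK_MHZ / 4

print(f"Fabric Sync (sequential): {sync_bw:.1f} Gbps per sub-channel")
print(f"Fabric Async (user clock {user_clk_mhz:.0f} MHz): {async_bw:.1f} Gbps")
```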
Status
| Devices Affected | Planned Fix |
| --- | --- |
|  | None |