Intel® Agilex™ 5 FPGA and SoC FPGA D-Series
Manufactured on Intel 7 process technology, D-Series devices deliver 2X better performance-per-watt vs. 7 nm node competitors¹, combining high performance, low power, and small form factors for midrange FPGA applications.
Midrange FPGAs Optimized for Performance and Power Efficiency
D-Series devices are designed for applications across multiple markets, including communications infrastructure, broadcast, medical, test & measurement, industrial, artificial intelligence (AI), robotics, and more.
Highest Performance-per-watt for Midrange FPGA Applications
With Intel 7 process technology, 2nd Gen Intel® Hyperflex™ FPGA Architecture, and a high level of system integration, D-Series FPGAs achieve higher performance and lower power consumption.
Up to 2X better performance-per-watt vs. 7 nm node competitors¹
Up to 40% lower total power consumption vs. Intel® Stratix® 10 FPGAs
Up to 1.5X fabric performance compared to Intel® Stratix® 10 FPGAs
Built for Performance and Power Efficiency
D-Series devices offer industry-leading fabric performance-per-watt, built on Intel 7 technology and targeted at midrange FPGA applications requiring high performance, lower power consumption, smaller form factors, transceiver data rates below 28 Gbps, and lower logic densities (down to 100K LEs). Get the same fabric performance as Intel® Agilex™ 7 FPGA F-Series and I-Series devices with up to 15% lower total power.
Predictable and Flexible
D-Series devices leverage Intel's advanced manufacturing capabilities for supply resiliency, which means you can expect consistent best-in-class lead times and predictable, reliable delivery. A comprehensive set of flexible I/O options supports a variety of I/O standards, all configurable in a single D-Series FPGA.
Accelerate
Accelerate AI-based workloads with the industry's first FPGAs integrating Enhanced DSP with AI Tensor Blocks, delivering up to 56.5 INT8 TOPS for low-latency, high-throughput AI inference applications. You can optimize the AI resources in the device and create custom-sized inference IP with the FPGA industry's only single push-button flow, which incorporates AI frameworks such as Caffe, PyTorch, and TensorFlow; a rough illustration of the kind of INT8 model preparation such a flow consumes is sketched below.
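As a minimal, illustrative sketch (not Intel tooling), the PyTorch snippet below shows post-training INT8 quantization of a trained model, the integer precision the AI Tensor Blocks accelerate. The model here is a placeholder standing in for your own network.

```python
# Illustrative only: post-training INT8 quantization in PyTorch.
# This is a generic framework-side step, not the Intel push-button flow;
# the example network below is a hypothetical stand-in for a trained model.
import torch
import torch.nn as nn

# Placeholder network representing a trained FP32 model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Convert weights of the Linear layers to INT8 (dynamic quantization).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run a sample inference with the quantized model.
sample = torch.randn(1, 128)
print(quantized(sample).shape)  # torch.Size([1, 10])
```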
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.