FPGAi
Learn how FPGAs revolutionize AI applications with custom hardware acceleration, low latency, and energy efficiency.
Leading a New Era in AI with Altera FPGAs
The fusion of FPGAs and AI is not just an evolution — it's a revolution.
Altera®, an Intel company, is delivering high-performance FPGA fabric enhanced and infused with AI capabilities and tools for this new era of FPGAi. This tight coupling of programmable logic and artificial intelligence enables innovators to add intelligence to new custom solutions, empowering systems to make autonomous decisions, adapt to new data in real time, and process information quickly and efficiently to conquer the complexities of next-generation solutions.
Solution Capability: High-Performance FPGAs
The Agilex™ brand of high-performance devices accelerates innovators in the new era of FPGAi. With 2x better performance per watt vs. competing 7nm FPGAs1, users can meet growing demands for lower power consumption.
The Agilex™ 7 FPGA M-Series unleashes the industry’s highest memory bandwidth with over 1 TBps. Using in-package HBM2E (up to 32GB capacity) and hardened DDR5/LPDDR5 memory controller (supporting 5,600 Mbps)2, the device alleviates bottlenecks for memory-bound AI solutions like large language models.
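As a rough illustration of how a device like this can exceed 1 TBps, the peak-rate arithmetic below assumes commonly quoted interface rates (HBM2E at 3.2 Gb/s per pin over a 1,024-bit stack interface, DDR5-5600 over 64-bit DIMMs). These pin rates and counts are illustrative assumptions, not vendor figures; the footnoted 1.099 TBps reflects a specific measured configuration, while this simple sum of peak rates lands in the same ballpark.

```python
# Illustrative peak-bandwidth arithmetic (assumed pin rates, not vendor figures).

# HBM2E: 1,024-bit interface per stack; assume 3.2 Gb/s per pin.
hbm2e_stack_gbs = 3.2 * 1024 / 8       # GB/s per stack
hbm2e_total = 2 * hbm2e_stack_gbs      # two in-package stacks

# DDR5-5600: 64-bit DIMM at 5,600 MT/s transfers 8 bytes per beat.
ddr5_dimm_gbs = 5.6 * 8                # GB/s per DIMM
ddr5_total = 8 * ddr5_dimm_gbs         # eight DIMMs

peak_gbs = hbm2e_total + ddr5_total
print(f"HBM2E: {hbm2e_total:.1f} GB/s, DDR5: {ddr5_total:.1f} GB/s, "
      f"combined peak: {peak_gbs / 1000:.2f} TB/s")
```

The point of the sketch is that in-package HBM2E supplies the bulk of the bandwidth, with the hardened DDR5 controllers adding capacity and further throughput on top.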
Agilex™ FPGAs exemplify Altera® technological prowess, redefining efficiency and enabling transformative approaches to the needs of today's data-driven world.
The Agilex™ 5 product family contains the only FPGA fabric infused with AI, making it ideal for compute-bound AI networks. This mid-range device family incorporates digital signal processing (DSP) with AI Tensor Blocks to vastly improve the available compute. With up to 56 INT8 TOPs, Agilex™ 5 devices are well suited for low-power embedded edge solutions.
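Conceptually, the operation an AI-oriented DSP block hardens is the low-precision multiply-accumulate. The plain-Python sketch below shows the arithmetic only (8-bit operands, wide accumulator); it is not a model of the AI Tensor Block's actual microarchitecture, which performs many such operations per cycle in parallel.

```python
def int8_dot(a, b):
    """INT8 dot product with a wide accumulator, the core arithmetic
    that AI-oriented DSP blocks implement in hardened silicon."""
    assert all(-128 <= x <= 127 for x in list(a) + list(b)), "operands must be INT8"
    acc = 0  # accumulator kept much wider than 8 bits to avoid overflow
    for x, y in zip(a, b):
        acc += x * y
    return acc

# A 4-element INT8 multiply-accumulate:
result = int8_dot([127, -128, 3, 10], [1, 2, -3, 4])
print(result)  # -98
```

A TOPs rating counts how many of these 8-bit operations the fabric can sustain per second, which is why INT8 support translates directly into usable AI compute.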
Developer Usability: Common RTL and IP Flow
Intel® FPGA AI Suite offers a seamless flow for embedding AI inference into Altera® FPGAs. The suite takes advantage of the traditional RTL and IP flow and amplifies the fundamental benefits of FPGAs, such as reconfigurability and superior I/O management, allowing developers to create real-time, low-latency AI applications without sacrificing power efficiency.
The Intel® FPGA AI Suite integrates with Quartus® Prime Design Software and Platform Designer to simplify the incorporation of AI inference IP. It bridges the gap from AI model development to FPGA deployment, ensuring compatibility with leading frameworks such as TensorFlow and PyTorch through the OpenVINO toolkit.
Workload Agility: AI That Needs Flexibility
Altera® FPGAs offer a canvas of re-programmability for continuous AI innovation, allowing engineers to tailor solutions for new challenges. A diverse set of I/O protocols offers reconfigurability, enabling workload agility: efficiently handling applications from real-time data processing to AI-driven analytics without costly redesign.
FPGAi has the capacity to integrate with existing systems and future technologies, ensuring a sustainable approach to development. Industries can confidently stride towards a future where their technological investments are intelligent and responsible. Agility, flexibility, extensibility, sustainability, and longevity set FPGAi apart, promising a smarter and more adaptable world.
FPGAi Applications
Edge AI
FPGAs are especially suited for edge AI across industrial, medical, test and measurement, aerospace, defense, and automotive applications. Data at the edge can be diverse, and FPGAs bring additional advantages there: flexible I/O protocols, low latency, low power, and long device lifetimes.
Network
The network facilitates data transfers and communication between edge devices, cloud services, and other interconnected components. FPGAs are equipped with the latest generation of high-speed I/O standards to accelerate wireless and wireline networking usage. They can play an effective role as networks add intelligence to emerging applications, such as anomaly detection, wireless channel estimation, and wireless decoder convergence.
Cloud/Data Center/High Performance Computing
FPGAs have been widely deployed in cloud and data center environments to accelerate databases, genomics, and networking, and to accelerate AI inference tasks such as large language models, conversational AI, and recommendation systems. Neural network applications include anomaly detection at very high data rates in NICs, financial fraud detection, and high-speed trading. In addition, the high energy efficiency of FPGAs helps mitigate cooling costs and supports the development of greener AI technologies.
Watch the video about Myrtle.ai's turnkey software application VOLLO for low-latency inference ›
Read the solution brief about a new approach to Neuromorphic Computing ›
Explore Resources to Get You Started
Intel® FPGA AI Suite
Speed up your FPGA development for AI inference using frameworks such as TensorFlow or PyTorch and the OpenVINO toolkit, while leveraging robust and proven FPGA development flows with Intel® Quartus® Prime Design Software.
Learn more
Intel® Distribution of OpenVINO Toolkit
An open-source toolkit that makes it easier to write once and deploy anywhere.
Learn more
Need More Information?
Let us know how we can help with your questions.
Contact us
Discover More AI Resources
Why FPGAs Are Good for Implementing Edge AI and Machine Learning Applications
Read about emerging use cases of FPGA-based AI inference at the edge, custom AI applications, and Intel’s software and hardware solutions for edge FPGA AI.
FPGA vs. GPU for Deep Learning
While no single architecture works best for all machine and deep learning applications, FPGAs can offer distinct advantages over GPUs and other types of hardware.
Quantized Neural Networks for FPGA Inference
Low-precision quantization for neural networks meets AI application requirements by providing greater throughput in the same footprint or by reducing resource usage.
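To make the trade-off concrete, here is a minimal sketch of per-tensor symmetric quantization to INT8, the simplest scheme behind such flows. Production toolchains (for example, the OpenVINO post-training flow) refine this with calibration data and per-channel scales; the function names below are illustrative, not a real API.

```python
def quantize_int8(values):
    """Symmetric linear quantization: map the largest magnitude to 127."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from INT8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
print(q)                      # INT8 codes, e.g. [50, -127, 3, 100]
print(dequantize(q, scale))   # close to the original weights
```

Each weight now occupies 8 bits instead of 32, which is what yields the throughput and resource savings described above, at the cost of small rounding error in the recovered values.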
Partners Accelerating AI at the Edge
Watch these videos to learn how Altera’s partners can help you accelerate AI workloads on FPGAs.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.
Agilex™ 7 FPGA M-Series theoretical maximum bandwidth of 1.099 TBps with 2 banks of HBM2e using ECC as data and 8 DDR5 DIMMs as compared to Xilinx Versal HBM memory bandwidth of 1.056 TBps as of October 14, 2021, and to Achronix Speedster 7t memory bandwidth of 0.5 TBps as of October 14, 2021.