3. Intel® FPGA AI Suite Model Development Overview
Intel® FPGA AI Suite simplifies the development of artificial intelligence (AI) inference applications on Intel® FPGA devices. It helps software developers, ML engineers, and FPGA designers collaborate efficiently to create optimized FPGA AI platforms.
Utilities in Intel® FPGA AI Suite speed up FPGA development for AI inference by combining familiar, popular industry frameworks such as TensorFlow* or PyTorch* and the OpenVINO™ toolkit with robust, proven FPGA development flows in Intel® Quartus® Prime software.
The Intel® FPGA AI Suite tool flow works with OpenVINO™ toolkit, an open-source project for optimizing inference on a variety of hardware architectures. OpenVINO™ toolkit takes deep learning models from all the major deep learning frameworks (such as TensorFlow*, PyTorch*, or Keras*) and optimizes them for inference on targets that include various CPUs, CPU-GPU combinations, and FPGA devices.
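For example, you can convert a trained model to the OpenVINO™ intermediate representation (IR) and load it with the OpenVINO™ Python API. The following sketch is a minimal illustration only: the file names are placeholders, and the exact Python entry points (here, the `openvino` namespace from the 2023+ releases) vary between OpenVINO™ versions.

```python
# Minimal sketch: convert a trained model to OpenVINO IR and load it back.
# File names are placeholders; API details vary between OpenVINO releases
# (this uses the "openvino" Python namespace from the 2023+ releases).
import openvino as ov

core = ov.Core()

# Convert a framework model (ONNX in this case) into an in-memory OpenVINO model.
model = ov.convert_model("resnet50.onnx")

# Save the IR (.xml topology + .bin weights) so that downstream tools, such as
# the Intel FPGA AI Suite compiler, can consume it.
ov.save_model(model, "resnet50_ir.xml")

# The same IR can be read back and compiled for any available OpenVINO device.
ir_model = core.read_model("resnet50_ir.xml")
compiled = core.compile_model(ir_model, "CPU")
```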
With the Intel® FPGA AI Suite utilities, you can do the following tasks (see the compiler sketch after this list):
- Compile the IR from OpenVINO™ Model Optimizer to an FPGA bitstream.
- Estimate the performance of a graph or a partition of a graph.
- Estimate the FPGA area required by an architecture.
- Generate an optimized architecture, with the option of optimizing for a frame rate target value.
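As an illustration of the compile step, the Intel® FPGA AI Suite compiler is typically driven from the command line against an IR file and an architecture description. The sketch below wraps such an invocation in Python; the `dla_compiler` flag names are assumptions based on typical documented usage, and the file paths are placeholders, so confirm both against the tool help for your installed release.

```python
# Minimal sketch: invoke the Intel FPGA AI Suite compiler on an OpenVINO IR.
# Flag names and file paths below are assumptions/placeholders; check them
# against `dla_compiler --help` for your installed release.
import subprocess

cmd = [
    "dla_compiler",
    "--network-file", "resnet50_ir.xml",       # IR from OpenVINO Model Optimizer
    "--march", "A10_Performance.arch",         # target architecture description
    "--foutput-format=open_vino_hetero",       # output for the OpenVINO HETERO flow
    "--o", "resnet50_compiled.bin",            # compiled network output file
    "--fanalyze-performance",                  # also emit a performance estimate
]
subprocess.run(cmd, check=True)
```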
Intel® FPGA AI Suite supports models from the following frameworks (a PyTorch*-to-ONNX* export sketch follows this list):
- TensorFlow* 1
- TensorFlow* 2
- PyTorch*
- Keras*
- ONNX*
- Caffe*
- MXNet*
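For instance, a PyTorch* model is often exported to ONNX* before it is handed to OpenVINO™ Model Optimizer. The sketch below is a minimal example of that export; the model choice and file names are placeholders.

```python
# Minimal sketch: export a PyTorch model to ONNX so that OpenVINO Model Optimizer
# (or openvino.convert_model) can turn it into IR. Model choice and file names
# are placeholders for illustration.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # any trained torch.nn.Module
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```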
Intel® FPGA AI Suite IP supports common layer types such as the following:
- Fully Connected
- 2D Convolution
- Depthwise
- Scale-Shift
- Deconvolution
- Transpose Convolution
- ReLU
- pReLU
- Leaky ReLU
- Clamp
- H-Sigmoid
- H-Swish
- Max Pool
- Average Pool
- Softmax
For a complete list of supported layers, refer to "Intel FPGA AI Suite Layer / Primitive Ranges" in the Intel® FPGA AI Suite IP Reference Manual.
You can run layers that Intel® FPGA AI Suite does not support by transferring data between the FPGA device and another supported device, such as a CPU or GPU. If your goal is to fully port an AI model to an FPGA device, consider the performance tradeoff of switching devices during processing (a minimal heterogeneous-execution sketch follows).
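In the OpenVINO™ runtime, this kind of fallback is commonly expressed with the HETERO plugin, which assigns each layer to the first device in the priority list that supports it. The sketch below is a minimal illustration; it assumes that the Intel® FPGA AI Suite plugin registers a device named "FPGA", so verify the device name (for example, by printing `core.available_devices`) and the plugin setup for your installation.

```python
# Minimal sketch: heterogeneous execution with CPU fallback for layers that the
# FPGA plugin does not support. The "FPGA" device name is an assumption about
# how the Intel FPGA AI Suite OpenVINO plugin registers itself.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("resnet50_ir.xml")  # placeholder IR file name

# HETERO assigns each layer to the first listed device that supports it and
# falls back to the next device (here, the CPU) otherwise.
compiled = core.compile_model(model, "HETERO:FPGA,CPU")

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled(input_tensor)  # run one inference; returns output tensors
```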