Deep Learning Inference
After a neural network is trained, it is deployed to run inference: classifying, recognizing, and processing new inputs. Develop and deploy your application quickly, with low, deterministic latency, on a real-time performance platform, and simplify the acceleration of convolutional neural networks (CNNs) for applications in the data center and at the edge.
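To make the training/inference distinction concrete, here is a minimal inference sketch in NumPy. The weight values are placeholders standing in for a completed training run; inference is simply a forward pass over a new input, with no gradients and no weight updates.

    import numpy as np

    # Placeholder weights standing in for a trained model; in practice
    # these would be loaded from a checkpoint saved after training.
    W1 = np.random.randn(784, 128).astype(np.float32)
    b1 = np.zeros(128, dtype=np.float32)
    W2 = np.random.randn(128, 10).astype(np.float32)
    b2 = np.zeros(10, dtype=np.float32)

    def infer(x):
        # Forward pass only: hidden layer with ReLU, then class scores.
        h = np.maximum(x @ W1 + b1, 0.0)
        logits = h @ W2 + b2
        return int(np.argmax(logits))

    # A new, unseen input (e.g., a flattened 28x28 image).
    new_input = np.random.rand(784).astype(np.float32)
    print("predicted class:", infer(new_input))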
Intel® FPGAs in the Data Center and at the Edge
Field programmable gate arrays (FPGAs) are customizable integrated circuits containing logic elements, DSP blocks, on-die memory, and flexible I/O. These building blocks enable developers to implement custom functions directly in hardware.
Machine Learning on Intel® FPGAs
Take advantage of the flexibility of FPGAs to add in-line machine learning to any custom interface, with low, deterministic latency for real-time inference.
Frameworks in the Data Center
Design and deploy models on familiar Intel®-based architecture that offers competitive performance and cost efficiency across most AI frameworks.
Inference at the Edge
The Intel® Distribution of OpenVINO™ toolkit includes the Deep Learning Deployment Toolkit Beta, which contains everything you need to optimize your models for inference and for heterogeneous performance by taking advantage of execution across Intel® accelerators.
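As a sketch of how deployment looks in practice, the following assumes the Inference Engine Python API from later OpenVINO™ releases (module paths and method names have varied across versions, and the model and device names here are placeholders). The model is first converted to the toolkit's intermediate representation (IR) by the Model Optimizer; the HETERO plugin then splits execution across an FPGA and the CPU.

    import numpy as np
    from openvino.inference_engine import IECore

    # Read a model already converted to the toolkit's IR format
    # (model.xml / model.bin are placeholder file names).
    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Load for heterogeneous execution: supported layers run on the
    # FPGA, and the rest fall back to the CPU.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    # Run inference on a dummy input shaped to match the model.
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})

Swapping the device string (for example, to "CPU") retargets the same model without code changes, which is the kind of heterogeneous execution the toolkit is designed around.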