AI Inferencing for Computer Vision Solutions using OpenVINO™ Toolkit

With the Open Visual Inference and Neural Network Optimization (OpenVINO™) toolkit, you can develop applications and solutions that emulate human vision on Intel architecture. The toolkit supports deep learning inference based on convolutional neural networks (CNNs), extending computer vision workloads across Intel hardware to maximize performance.


What is the OpenVINO Toolkit?

The OpenVINO™ toolkit is designed to increase performance and reduce time to market for computer vision solutions. It simplifies access to the rich set of hardware options available from Intel, which can increase performance, reduce power consumption, and maximize hardware utilization, letting you do more with less and opening new design possibilities.

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across computer vision accelerators such as Intel® CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick (VPU), and FPGAs, using a common application programming interface (API)
  • Speeds time to market via a library of functions and pre-optimized kernels
  • Includes optimized calls for OpenCV* and OpenVX*
  • Includes the Intel® Deep Learning Deployment Toolkit to accelerate deep learning inference performance
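
The heterogeneous execution described above can be sketched with the classic (pre-2022) Inference Engine Python API. This is a minimal sketch, not a definitive implementation: the model paths are hypothetical, and the exact module layout varies by toolkit release.

```python
def load_for_device(model_xml, model_bin, device="CPU"):
    """Load an intermediate-representation (IR) model onto a chosen plugin.

    The same code targets different accelerators by changing `device`:
    "CPU", "GPU", "MYRIAD" (Intel Movidius Neural Compute Stick), or a
    heterogeneous target such as "HETERO:FPGA,CPU", which falls back to
    the CPU for layers the FPGA plugin cannot execute.
    Model paths and device names here are illustrative assumptions.
    """
    from openvino.inference_engine import IECore  # classic (pre-2022) API
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net, device_name=device)
```

Because the device is just a string parameter, an application can be retargeted from a CPU to a VPU or FPGA without changing its inference code.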


What is the Intel Deep Learning Deployment Toolkit?

The Intel Deep Learning Deployment Toolkit (Intel DL Deployment Toolkit) is included in the OpenVINO toolkit. It provides cross-platform tools for accelerated deep learning inference performance and consists of two components:

  • An inference engine with plugins for CPU, GPU, VPU, and FPGA, enabling multi-platform support
  • A model optimizer that converts models trained in deep learning frameworks such as Caffe* and TensorFlow* into an intermediate representation (IR)
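
The two-stage flow above can be sketched as follows, again using the classic (pre-2022) Inference Engine Python API. This is a hedged sketch under assumptions: the `mo.py` invocation, file names, and input data are illustrative, and attribute names (e.g. `input_info`) differ across toolkit releases.

```python
# Stage 1 (offline): the Model Optimizer converts a trained model into IR,
# producing an .xml topology file and a .bin weights file. A typical
# invocation (paths are hypothetical) looks like:
#   python3 mo.py --input_model frozen_model.pb --output_dir ir/

def run_inference(model_xml, model_bin, input_data):
    """Stage 2 (runtime): load the IR and run synchronous inference."""
    from openvino.inference_engine import IECore  # classic (pre-2022) API
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    exec_net = ie.load_network(network=net, device_name="CPU")
    input_blob = next(iter(net.input_info))  # name of the first input
    return exec_net.infer(inputs={input_blob: input_data})
```

The split matters for deployment: the expensive framework-specific conversion happens once at build time, while edge devices only ship the lightweight inference engine and the IR files.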

What is the Intel FPGA Deep Learning Acceleration Suite?

The Intel® FPGA Deep Learning Acceleration Suite (Intel FPGA DL Acceleration Suite) is pre-packaged with the Intel DL Deployment Toolkit within OpenVINO. It comprises a software graph compiler, libraries, and runtimes for machine learning researchers and developers seeking real-time AI inferencing optimized for performance, power, and cost on Intel FPGAs.

Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA

The IEI Mustang-F100-A10 is an Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA GX: a small-form-factor PCI Express* (PCIe*) acceleration card supported natively by the OpenVINO™ toolkit to deliver low-latency video inference for edge and cloud deployments. It is designed to add acceleration capabilities to PCIe host platforms and has been validated on the IEI TANK-870AI compact IPC for deployments with space and power constraints.


Intel® Programmable Acceleration Card with Intel Arria® 10 GX FPGA

Intel FPGA-based acceleration platforms include PCIe-based programmable acceleration cards, socket-based server platforms with integrated FPGAs, and other platforms supported by the Acceleration Stack for Intel® Xeon® CPU with FPGAs. Intel platforms are qualified and validated with several leading original equipment manufacturer (OEM) server providers to support large-scale FPGA deployment.


Intel® Arria® 10 GX FPGA Development Kit

The Intel Arria 10 GX FPGA Development Kit delivers a complete design environment, including all the hardware and software you need to start taking advantage of the performance and capabilities of Intel Arria 10 GX FPGAs.


OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.