Overview of Intel® Distribution of OpenVINO™ Toolkit
AI inference applies the capabilities a neural network learns during training to produce results on new data. The Intel Distribution of OpenVINO toolkit enables you to optimize, tune, and run AI inference using the included model optimizer and the runtime and development tools.
Discover the Capabilities
High Performance, Deep Learning
Convert and optimize models to achieve high performance for deep-learning inference applications.
Streamlined Development
Facilitate a smoother development process using the included inference tools for low-precision optimization and media processing, computer vision libraries, and preoptimized kernels.
Write Once, Deploy Anywhere
Deploy the same application across combinations of host processors and accelerators (CPUs, GPUs, VPUs) and environments: on-premises, on-device, in the browser, or in the cloud.
How It Works
Use the Open Model Zoo to find open-source, pretrained, and preoptimized models ready for inference, or use your own deep-learning model.
Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), a pair of files (.xml and .bin) that describe the network topology and contain the model's weights and biases as binary data.
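The split between the two IR files can be pictured with a toy example. This is an illustrative sketch only; the element names, attributes, and layer content below are simplified stand-ins, not the actual IR schema the Model Optimizer produces:

```python
import struct
import xml.etree.ElementTree as ET

# Illustrative sketch of the IR split: topology as XML, raw weights as binary.
weights = [0.5, -1.25, 3.0]

# .bin: tightly packed FP32 weight data, referenced by byte offset and size.
bin_data = struct.pack(f"<{len(weights)}f", *weights)

# .xml: the network topology, pointing into the .bin by offset/size.
layer = ET.Element("layer", name="fc1", type="FullyConnected")
ET.SubElement(layer, "weights", offset="0", size=str(len(bin_data)))

# Reading it back: the offset/size in the XML locate the blob in the .bin.
w = layer.find("weights")
off, size = int(w.get("offset")), int(w.get("size"))
restored = list(struct.unpack(f"<{size // 4}f", bin_data[off:off + size]))
print(restored)  # [0.5, -1.25, 3.0]
```

Keeping the topology in a small, human-readable file while the bulk weight data stays in a compact binary is what lets tools inspect or visualize a network without loading its full weights.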
Use the Post-training Optimization tool to optimize inference of deep-learning models by applying methods that do not require retraining or fine-tuning, such as post-training quantization.
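To illustrate the idea behind post-training quantization (the underlying arithmetic only, not the tool's actual per-layer algorithms), 8-bit quantization maps float tensors to integers using a scale and zero point derived from observed value ranges:

```python
# Minimal sketch of asymmetric 8-bit post-training quantization.

def quantize_params(lo, hi, bits=8):
    """Derive scale and zero point from an observed float range [lo, hi]."""
    qmax = 2 ** bits - 1
    scale = (hi - lo) / qmax
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Map floats to integers in [0, 255]."""
    return [max(0, min(255, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Map integers back to approximate floats."""
    return [(q - zero_point) * scale for q in qvalues]

weights = [-1.0, -0.5, 0.0, 0.75, 2.0]
scale, zp = quantize_params(min(weights), max(weights))
q = quantize(weights, scale, zp)
restored = dequantize(q, scale, zp)

# Each restored value differs from the original by at most half a scale step.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Because no gradients are involved, this kind of transformation can be applied to an already-trained model, which is why it needs no retraining or fine-tuning.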
The Neural Network Compression Framework (NNCF) provides a suite of advanced algorithms for optimizing neural-network inference with minimal accuracy drop.
Both Model Optimizer and the Post-training Optimization Tool are part of the Deep Learning Workbench, which is included in the Intel Distribution of OpenVINO toolkit. Use the Deep Learning Workbench to import a model, analyze its performance and accuracy, visualize the outputs, and optimize and prepare the model for deployment on various Intel® platforms.
Use the Inference Engine to run inference and output results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency.
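A minimal sketch of this step with the Inference Engine Python API as shipped in the 2021.x releases follows; it assumes an installed OpenVINO distribution and an already-converted model, and the file names and device name are placeholders:

```python
def run_inference(model_xml="model.xml", model_bin="model.bin", device="CPU"):
    """Load an IR pair and run one inference on random input data."""
    # Imported inside the function so the sketch can be defined without an
    # OpenVINO installation; requires the openvino package to actually run.
    from openvino.inference_engine import IECore
    import numpy as np

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    exec_net = ie.load_network(network=net, device_name=device)

    # Feed a random tensor shaped like the network's first input.
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    data = np.random.rand(*shape).astype(np.float32)

    # Returns a dict mapping output blob names to result arrays.
    return exec_net.infer(inputs={input_name: data})
```

Because the device is just a string argument ("CPU", "GPU", "MYRIAD", and so on), the same application code can target different hardware, which is the write-once, deploy-anywhere idea in practice.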
Get Started
OpenVINO™ Integration with TensorFlow*
Use the OpenVINO™ toolkit's inline optimizations and runtime for an enhanced level of TensorFlow* compatibility by adding the following two lines to your Python* code or Jupyter* Notebook:
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')  # for example 'CPU', 'GPU', or 'MYRIAD'
Accelerate inference across many AI models on a variety of Intel® silicon, such as:
- Intel® CPUs
- Intel® integrated graphics
- Intel® Movidius™ Vision Processing Units (VPU)
- Intel® Vision Accelerator Design with eight Intel® Movidius™ Myriad™ X VPUs
What You Can Do
See how developers use the Intel Distribution of OpenVINO toolkit on multiple Intel® architectures to enable new and enhanced use cases across industries, including manufacturing, health and life sciences, retail, security, and more.
Intel® Edge AI Certification Program
Free Tools & Training
Gain hands-on coding experience with edge AI tools, platforms, and pretrained models.
Create AI Applications
Complete hands-on learning exercises to build your own portfolio of completed AI solutions.
Solve Real Problems
Create business value for your customers and your organization.
Career Enhancement
Advance your career in AI with cutting-edge skills.
What's New in the 2021.4.2 LTS Release
- Support for the 12th generation Intel® Core™ processor family, built on the Intel 7 process with a performance hybrid architecture that improves multithreaded performance for compute-intensive workloads.
- Fixes, capability improvements, and updates for known issues in:
- Model Optimizer, specifically issues causing accuracy regression
- Inference Engine (plug-ins for Inference Engine Python* API, C API, GPU, Intel® Movidius™ Myriad™ VPU, HDDL, and Intel® Gaussian & Neural Accelerator)
- Added support for the 12th generation Intel® Core™ processor family, which enables Intel® Gaussian & Neural Accelerator (Intel® GNA) 3.0, the Intel GNA generation with native 2D convolutions
- Performance and accuracy improvements to testing, fixes for bugs that caused performance degradation for several models, and fixes for a heap-use-after-free and memory leaks
- Minor capability changes and bug fixes to the Open Model Zoo, specifically issues that affected the Accuracy Checker in the Deep Learning Workbench
- Additional Jupyter* Notebook tutorials:
- Use your laptop and webcam to run demonstrations for object detection and human pose estimation
- Apply 8-bit quantization with the Neural Network Compression Framework (NNCF) to optimize your Keras and TensorFlow or PyTorch models
- Learn how to show live inference, and optimize and quantize a segmentation model
- See all tutorials
Community and Support
Explore different ways to get involved and stay up to date with the latest announcements.
Awarded by the Embedded Vision Alliance*
The productive, smart path to freeing accelerated computing from the economic and technical burdens of proprietary alternatives.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.