Intel® Distribution of OpenVINO™ Toolkit
Deploy High-Performance Deep Learning Inference
The latest version (2022.1) of the Intel® Distribution of OpenVINO™ toolkit makes it easier for developers everywhere to start creating. This is the biggest upgrade since the original launch of the toolkit and offers more deep-learning models, device portability, and higher inferencing performance with fewer code changes. Get started quickly with pretrained models from the Open Model Zoo that are optimized for inference. And since the toolkit is compatible with the most popular frameworks, there is minimal disruption and maximum performance.
Get the Most Out of Your AI Deployment From Start to Finish
The OpenVINO toolkit makes it simple to adopt and maintain your code. Open Model Zoo provides optimized, pretrained models, and Model Optimizer API parameters make it easier to convert your model and prepare it for inferencing. The runtime (inference engine) allows you to tune for performance by compiling the optimized network and managing inference operations on specific devices. It also auto-optimizes through device discovery, load balancing, and inferencing parallelism across CPU, GPU, and more.
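For example, a minimal sketch of the runtime flow in Python (the model path, device name, and input values below are placeholders, assuming a network already converted to OpenVINO IR):
import numpy as np
from openvino.runtime import Core
core = Core()                                       # entry point to the runtime; discovers device plugins
model = core.read_model("model.xml")                # read the converted, optimized network
compiled_model = core.compile_model(model, "CPU")   # compile it for a specific device
infer_request = compiled_model.create_infer_request()
dummy_input = np.zeros(list(compiled_model.inputs[0].shape), dtype=np.float32)  # placeholder input data
infer_request.infer({0: dummy_input})               # run one synchronous inference
output = infer_request.get_output_tensor(0).data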
High-Performance Deep Learning
Train with inferencing in mind, starting with common frameworks like TensorFlow* and PyTorch* and leveraging the OpenVINO™ toolkit Neural Network Compression Framework (NNCF).
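A rough sketch of that flow with PyTorch and NNCF (the model choice and configuration values are illustrative assumptions, not a prescribed setup):
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model
model = torchvision.models.resnet50(pretrained=True)            # any PyTorch model works here
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},             # shape of one training sample
    "compression": {"algorithm": "quantization"},                # quantization-aware training
})
# In practice, register a calibration loader first via nncf.torch.register_default_init_args
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
# Fine-tune compressed_model with the usual PyTorch training loop, then export it for OpenVINO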
Streamlined Development
Import your model into OpenVINO with Model Optimizer, then use the Post-Training Optimization Tool for post-training quantization and optimization.
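As a hedged sketch of default post-training quantization with the POT Python API (the paths, calibration data, and subset size below are placeholders, and the data loader is deliberately simplified):
import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline
class CalibrationLoader(DataLoader):
    def __init__(self, samples):
        self._samples = samples
    def __len__(self):
        return len(self._samples)
    def __getitem__(self, index):
        return self._samples[index], None            # (data, annotation); no labels needed for this algorithm
samples = [np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(300)]   # placeholder calibration set
model = load_model({"model_name": "model", "model": "model.xml", "weights": "model.bin"})
engine = IEEngine(config={"device": "CPU"}, data_loader=CalibrationLoader(samples))
algorithms = [{"name": "DefaultQuantization", "params": {"target_device": "ANY", "stat_subset_size": 300}}]
pipeline = create_pipeline(algorithms, engine)
quantized_model = pipeline.run(model)                # produces an INT8 version of the model
save_model(quantized_model, save_path="quantized")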
Write Once, Deploy Anywhere
Deploy the same application across combinations of host processors and accelerators (CPUs, GPUs, VPUs) and environments (on-premises, on-device, in the browser, or in the cloud).
How it Works
Convert and optimize models trained using popular frameworks like TensorFlow, PyTorch, and Caffe*. Deploy across a mix of Intel® hardware and environments, on-premises and on-device, in the browser, or in the cloud.
What's New in Version 2022.1
Updated, Cleaner API
Adopt and maintain your code more easily. This version offers better alignment with TensorFlow conventions, requires fewer parameters, and minimizes conversions.
Broader Model Support
Optimize and deploy with ease across an expanded range of deep-learning models, including natural language processing (NLP), double precision, and computer vision.
Portability and Performance
See a performance boost quickly with automatic device discovery, load balancing, and dynamic inferencing parallelism across CPU, GPU, and more.
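For instance, a short sketch of device discovery and automatic selection (the model path is a placeholder, and the device list depends on the machine):
from openvino.runtime import Core
core = Core()
print(core.available_devices)                        # e.g. ['CPU', 'GPU'] after device discovery
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "AUTO")   # "AUTO" picks a device and handles fallback
# A multi-device target such as "MULTI:GPU,CPU" spreads inference requests across devices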
Explore the Capabilities and Get Started
Democratize deep learning and unleash a new wave of creativity with OpenVINO. Get started with the resources you need to learn, try samples, see performance, and get certified—on your own desktop or laptop.
Use the in-line optimizations and runtime in the OpenVINO toolkit for an enhanced level of TensorFlow compatibility. Add the following two lines of code to your Python* code or Jupyter* Notebook:
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
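As a usage sketch, the two lines drop into an ordinary TensorFlow script ahead of inference (the Keras model and the 'CPU' backend name below are illustrative assumptions):
import tensorflow as tf
import openvino_tensorflow
openvino_tensorflow.set_backend('CPU')               # route TensorFlow inference through the OpenVINO backend
model = tf.keras.applications.MobileNetV2(weights="imagenet")   # any TensorFlow model; the rest of the code is unchanged
predictions = model.predict(tf.zeros([1, 224, 224, 3]))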
Accelerate inference across many AI models on a variety of Intel® silicon, such as:
- Intel® CPUs
- Integrated Graphics from Intel
- Intel® Movidius™ Vision Processing Units (VPU)
- Intel® Vision Accelerator Design with eight Intel® Movidius™ Myriad™ X VPUs
Find Supported Models on GitHub*
Success Stories
How Vistry* Uses OpenVINO to Help Restaurants
Vistry* uses data from fast food restaurants to power their AI and IoT data analytics. This helps restaurants measure and improve their speed and quality of service from the moment a customer pulls into the parking lot to when they leave.
Pathr.ai* Gives Mall Operators Data-Driven Insights
With 10x performance gains enabled by OpenVINO, Pathr.ai* helps malls optimize lease rates and improve service.
Community and Support
Explore different ways to get involved and stay up-to-date with the latest announcements.
Get Started
Optimize, fine-tune, and run comprehensive AI inference using the included Model Optimizer, runtime, and development tools.
The productive, smart path to freedom for accelerated computing from the economic and technical burdens of proprietary alternatives.