Intel® Distribution of OpenVINO™ Toolkit
Deploy High-Performance, Deep Learning Inference
The latest version (2022.3 LTS) of the Intel® Distribution of OpenVINO™ toolkit makes it easier for developers everywhere to start innovating. This release delivers new performance enhancements, support for more deep learning models, broader device portability, and higher inferencing performance with fewer code changes.
Sign Up for Intel Distribution of OpenVINO Toolkit News
Keep up to date on the latest product releases, news, and tips.
Get the Most Out of Your AI Deployment from Start to Finish
The Intel Distribution of OpenVINO toolkit makes it simple to adopt and maintain your code. Open Model Zoo provides optimized, pretrained models, and Model Optimizer API parameters make it easier to convert a model and prepare it for inferencing. The runtime (inference engine) lets you tune for performance by compiling the optimized network and managing inference operations on specific devices. It also auto-optimizes through device discovery, load balancing, and inferencing parallelism across CPU, GPU, and more.
High-Performance Deep Learning
Train with inferencing in mind, starting with common frameworks like TensorFlow* and PyTorch* and using the Neural Network Compression Framework (NNCF) for the Intel Distribution of OpenVINO toolkit.
Import your model into OpenVINO using the Post-Training Optimization Tool for post-training quantization and optimization.
Write Once, Deploy Anywhere
Deploy your same application across combinations of host processors and accelerators (CPUs, GPUs, VPUs) and environments (on-premise, on-device, in the browser, or in the cloud).
How it Works
Convert and optimize models trained using popular frameworks like TensorFlow, PyTorch, and Caffe*. Deploy across a mix of Intel® hardware and environments, on-premise and on-device, in the browser, or in the cloud.
What's New in Version 2022.3 LTS
Broader Model and Hardware Support
Optimize and deploy with ease across an expanded range of deep learning models, including natural language processing (NLP). Access AI acceleration across a wider range of hardware.
Improved API and More Integrations
Adopting and maintaining your code is simpler: this version requires fewer code changes, offers more options for integrating with frameworks, and minimizes conversions.
Higher Performance and Portability
See a performance boost quickly with automatic device discovery, load balancing, and dynamic inference parallelism across CPUs, GPUs, and more.
Explore the Capabilities and Get Started
Democratize deep learning and unleash a new wave of creativity with OpenVINO. Get started with the resources you need to learn, try samples, see performance, and get certified—on your own desktop or laptop.
Intel® Geti™ Platform
Intel Geti is a commercial software platform that enables enterprise teams to develop vision AI models faster. Companies can build models with minimal data, and OpenVINO integration makes it easier to deploy those solutions at scale.
Optimize, fine-tune, and run comprehensive AI inference using the included Model Optimizer, runtime, and development tools.
A productive, smart path that frees accelerated computing from the economic and technical burdens of proprietary alternatives.
Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get:
• Early-access product updates and releases
• Exclusive invitations to webinars and events
• Training and tutorial resources
• Contest announcements
• Other breaking news