Intel® Distribution of OpenVINO™ Toolkit
What's New in Version 2022.1
This release provides functional bug fixes and capability changes over the previous 2021.4 LTS release. It empowers developers with performance enhancements, support for more deep-learning models, broader device portability, and higher inferencing performance with fewer code changes.
Note: This is a standard release intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for two years (one year of bug fixes and two years of security patches).
Updated, Cleaner API
Adopt and maintain your code more easily. This version offers better alignment with TensorFlow* conventions, requires fewer parameters, and minimizes conversion steps.
- The new OpenVINO API 2.0 aligns OpenVINO inputs and outputs with the original frameworks: input and output tensors use native framework layouts and element types (see the sketch after this list).
- The API parameters in Model Optimizer have been reduced to minimize complexity, and the performance of model conversion for Open Neural Network Exchange (ONNX*) models has been significantly improved.
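A minimal sketch of the API 2.0 flow in Python follows. The model path and input tensor are hypothetical placeholders, and reading the ONNX model directly assumes the built-in ONNX frontend available in this release:

```python
import numpy as np
from openvino.runtime import Core  # API 2.0 entry point

core = Core()

# Read a model directly; ONNX models can be loaded without offline conversion
model = core.read_model("model.onnx")  # hypothetical model path

# Compile the model for a target device (CPU here)
compiled_model = core.compile_model(model, "CPU")

# Inputs keep their native framework layout and element type
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example NCHW tensor

infer_request = compiled_model.create_infer_request()
infer_request.infer({0: input_data})
output = infer_request.get_output_tensor(0).data
print(output.shape)
```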
Broader Model Support
Optimize and deploy with ease across an expanded range of deep-learning models, including natural language processing (NLP), double-precision, and computer-vision models.
- With Dynamic Input Shapes capabilities on CPU, OpenVINO can adapt to multiple input dimensions in a single model, providing more complete NLP support (a sketch follows this list). Support for Dynamic Shapes on additional XPUs is expected in a future dot release.
- New models with a focus on NLP, a new category (Anomaly Detection), and support for conversion and inference of select PaddlePaddle* models:
- Pretrained models for anomaly segmentation focused on industrial inspection
- Speech denoising is now trainable, plus updates to speech recognition and speech synthesis
- Combined demonstration that includes noise reduction, speech recognition, question answering, translation, and text-to-speech
- Public models with a focus on NLP: ContextNet, Speech-Transformer, HiFi-GAN, Glow-TTS, FastSpeech2, and Wav2Vec
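As a sketch of the dynamic-shapes capability, the snippet below marks a sequence-length dimension as dynamic before compiling for CPU; the model path and dimension bounds are hypothetical:

```python
from openvino.runtime import Core, Dimension, PartialShape

core = Core()
model = core.read_model("nlp_model.xml")  # hypothetical NLP model path

# Fix the batch dimension and mark the sequence length as dynamic;
# Dimension(1, 512) bounds it, while Dimension() would leave it fully undefined
model.reshape(PartialShape([Dimension(1), Dimension(1, 512)]))

# Dynamic input shapes are supported on CPU in this release
compiled_model = core.compile_model(model, "CPU")
```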
Portability and Performance
See a performance boost quickly with automatic device discovery, load balancing, and dynamic inferencing parallelism across CPU, GPU, and more.
- The new AUTO plug-in self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance (see the first sketch after this list).
- Automatic batching functionality, enabled via code hints, automatically scales batch size based on the target XPU and available memory (see the second sketch after this list).
- Built with 12th generation Intel® Core™ processors (formerly code-named Alder Lake) in mind, supporting the hybrid architecture necessary to deliver enhancements for high-performance inferencing on CPUs and integrated GPUs.
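First, a minimal sketch of device auto-discovery, assuming a hypothetical model path: passing "AUTO" as the device name lets the runtime select a suitable device instead of hard-coding one.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # hypothetical model path

# "AUTO" asks the runtime to discover and select a suitable device,
# so the application does not hard-code its compute environment
compiled_model = core.compile_model(model, "AUTO")
```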
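Second, a sketch of the code-hint mechanism: the THROUGHPUT performance hint lets the runtime choose batching and stream parameters on the selected device. The GPU target and model path here are assumptions for illustration.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # hypothetical model path

# The THROUGHPUT hint lets the runtime pick batch size and stream counts;
# on supported GPUs this can engage automatic batching
compiled_model = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```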
Get Started
Optimize, fine-tune, and run comprehensive AI inference using the included Model Optimizer, runtime, and development tools.