Intel® Distribution of OpenVINO™ Toolkit

753640
12/21/2022

Introduction

This package contains the Intel® Distribution of OpenVINO™ Toolkit software version 2022.3 LTS for Linux*, Windows* and macOS*.

Available Downloads

  • Ubuntu 18.04 LTS*
      • Size: 45.2 MB
      • SHA1: E1ACCC7E978C279EEED6D73C006FCDD725ABFBDD
  • Ubuntu 20.04 LTS*
      • Size: 48.2 MB
      • SHA1: 721A8D4E84FD196F8B4AA9612598C3CD2ECECC54
  • Red Hat Enterprise Linux 8*
      • Size: 45.4 MB
      • SHA1: B430A58BA8805E9FD42FEA669C55FDD110EE4A11
  • macOS*
      • Size: 106.6 MB
      • SHA1: AFD17CAD4E3993F294ABBACC75CF4DD061E4CED2
  • Windows 11*, Windows 10*
      • Size: 88.9 MB
      • SHA1: ADC500AC50B58A1FE3F8E791D015BEE5C82566A9

Detailed Description

Introduction

The Intel® Distribution of OpenVINO™ toolkit is a comprehensive solution for optimizing and deploying AI inference in domains such as computer vision, automatic speech recognition, natural language processing, recommendation systems, and others. Based on the latest generations of artificial neural networks, including convolutional neural networks (CNNs) and recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel hardware.

The Intel® Distribution of OpenVINO™ toolkit:

  • Allows use of models trained with popular frameworks like TensorFlow, PyTorch, and more.
  • Optimizes the inference of deep learning models by applying methods that require no retraining or fine-tuning, such as post-training quantization.
  • Supports heterogeneous execution across Intel accelerators, using a common API for the Intel CPU, Intel Integrated Graphics, Intel Discrete Graphics, Intel® Gaussian & Neural Accelerator, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
  • Includes optimized calls for CV standards, including OpenCV* (available as a separate download) and OpenCL™.

 

New and Changed in 2022.3 LTS

This is a Long-Term Support (LTS) release. LTS releases ship every year and are supported for two years (one year of bug fixes and two years of security patches). See the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for details.

 

Major Features and Improvements Summary

  • The 2022.3 LTS release provides functional bug fixes and capability changes on top of the previous 2022.2 release. It empowers developers with new performance enhancements, more deep learning models, more device portability, and higher inferencing performance with fewer code changes.
  • Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware. 
    • Full support for 4th Generation Intel® Xeon® Scalable processor family (code name Sapphire Rapids) for deep learning inferencing workloads from edge to cloud.
    • Full support for Intel discrete graphics cards, such as the Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics scenarios.
    • Improved performance when leveraging throughput hint on CPU plugin for 12th and 13th Generation Intel® Core™ processor family (code named Alder Lake and Raptor Lake).
    • Enhanced “Cumulative throughput” and selection of compute modes added to AUTO functionality, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
  • Expanded model coverage - Optimize & deploy with ease across an expanded range of deep learning models.
    • Broader support for NLP models and use cases like text to speech and voice recognition. 
    • Continued performance enhancements for computer vision models, including StyleGAN2, Stable Diffusion, PyTorch RAFT, and YOLOv7.
    • Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.
    • New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation.
  • Improved API and More Integrations – Easier to adopt and maintain code. Requires fewer code changes, aligns better with frameworks, & minimizes conversion
    • Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and export to OpenVINO IR format without offline conversion. The new “--use_new_frontend” flag enables this preview – see the Model Optimizer section of the release notes for further details.
    • NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. Initial release supports PyTorch models.
    • Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to 2.7 for further refinements and significant improvements in performance for the latest Intel CPU and GPU processors.
    • Introducing C API 2.0 to support features introduced in OpenVINO API 2.0, such as dynamic shapes on CPU, pre- and post-processing APIs, and unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API but uses a different header file.
  • Note: Intel® Movidius™ VPU-based products are not supported in this release but will be added back in a future OpenVINO 2022.3.1 LTS update. In the meantime, please use OpenVINO 2022.1 for those products.
  • Note: Macintosh* computers with the M1* processor can now install OpenVINO and use the OpenVINO ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community supported; Intel provides no support for it, and it does not fall under the 2-year LTS support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html

 

System Requirements

Disclaimer: Certain hardware (including but not limited to GPU and GNA) requires manual installation of specific drivers to work correctly. Drivers might require updates to your operating system, including the Linux kernel; refer to their documentation. Operating system updates are the user's responsibility and are not part of the OpenVINO installation. For system requirements, see the System Requirements section of the Release Notes.

 

Installation instructions

You can choose how to install OpenVINO™ Runtime according to your operating system:

 

What's included in the download package

  • Runtime/Inference Engine

 

Helpful Links

NOTE: Links open in a new window.

This download is valid for the product(s) listed below.