Intel® Distribution of OpenVINO™ Toolkit



This package contains the Intel® Distribution of OpenVINO™ Toolkit software version 2024.1 for Linux*, Windows* and macOS*.

Available Downloads

  • CentOS 7 (1908)*
  • Size: 51.2 MB
  • SHA256: 8F1D8B7D51DD8364BEB330B8364C8C98B15AE70164E5D2843C6D0D71375B83FD
  • Debian Linux*
  • Size: 25 MB
  • SHA256: 916C33CA6902665F62DE80F25309E0B5BDC252225DA33213164C8E2000ABF035
  • Red Hat Enterprise Linux 8*
  • Size: 44.3 MB
  • SHA256: A6EB3A623B1AEB252A10AC57AAD118871E2907B87C4DBE318CAEBC04519C7B5B
  • Ubuntu 18.04 LTS*
  • Size: 44.3 MB
  • SHA256: BAC6A147EBD6D32A9E097C56652553663191FD5D784E5C11EE16A8D3C35A0718
  • Ubuntu 20.04 LTS*
  • Size: 47.2 MB
  • SHA256: F6DAF300D235458B22A03789F8CB4BC81CA9108A0B72C18480090B4EF84BF751
  • Ubuntu 20.04 LTS*
  • Size: 33.3 MB
  • SHA256: 7B8A88ACC9EF8E65E6B896D4BE4BCCCB9FEE7AC19FC20C62B4F99DB18BF15084
  • Ubuntu 22.04 LTS*
  • Size: 48.3 MB
  • SHA256: 69F15878F54D7B61EB54EB5B2631741F147E85383539F5436A6672FB07C459D2
  • macOS*
  • Size: 126.4 MB
  • SHA256: 4FEB824F610D65D8218183D3453C8DA6DB5EA641F858B5CB98413B675554898F
  • macOS*
  • Size: 30.8 MB
  • SHA256: 6997E398DC14F0E52B7A286374CC7A02FE6B3285CE52E2F6324FB5D928050A95
  • Windows 11*, Windows 10, 64-bit*
  • Size: 99.1 MB
  • SHA256: 4EE0C4036C91A3C1423C14F47E31B5B4C15082A6CFF3A5B7A63CF12DA39B70E6
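Each download can be checked against its published SHA256 before installation. A minimal sketch (the filename and file contents below are stand-ins; substitute the package you actually downloaded and the matching hash from the list above):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large archives are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Stand-in for the real download:
with open("package.tgz", "wb") as f:
    f.write(b"stand-in contents")

digest = sha256_of("package.tgz")
print(digest)  # compare against the published SHA256 (hex, case-insensitive)
```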

Detailed Description

What’s new

More Gen AI coverage and framework integrations to minimize code changes.

  • Mixtral* and URLNet* models optimized for improved performance on Intel® Xeon® processors.
  • Stable Diffusion* 1.5, ChatGLM3-6B*, and Qwen-7B* models optimized for improved inference speed on Intel® Core™ Ultra processors with integrated GPU.
  • Support for Falcon-7B-Instruct*, a GenAI Large Language Model (LLM) ready-to-use chat/instruct model with superior performance metrics.
  • New Jupyter* Notebooks added: YOLO V9*, YOLO V8* Oriented Bounding Boxes Detection (OBB), Stable Diffusion in Keras*, MobileCLIP*, RMBG-v1.4* Background Removal, Magika*, TripoSR*, AnimateAnyone*, LLaVA-NeXT*, and RAG system with OpenVINO™ and LangChain*.

Broader LLM model support and more model compression techniques.

  • LLM compilation time reduced through additional optimizations with compressed embedding. Improved 1st token performance of LLMs on 4th and 5th generations of Intel® Xeon® processors with Intel® Advanced Matrix Extensions (Intel® AMX).
  • Better LLM compression and improved performance with oneDNN, INT4, and INT8 support for Intel® Arc™ GPUs.
  • Significant memory reduction for select smaller GenAI models on Intel® Core™ Ultra processors with integrated GPU.

More portability and performance to run AI at the edge, in the cloud, or locally.

  • The preview NPU plugin for Intel® Core™ Ultra processors is now available in the OpenVINO open-source GitHub* repository, in addition to the main OpenVINO package on PyPI*.
  • The JavaScript* API is now more easily accessible through the npm repository, giving JavaScript developers seamless access to the OpenVINO API.
  • FP16 inference is now enabled by default for Convolutional Neural Networks (CNNs) on ARM* processors.

OpenVINO™ Runtime 


  • Unicode file paths for cached models are now supported on Windows*.
  • A Pad pre-processing API has been added to extend the input tensor at the edges with constants.
  • A fix for inference failures of certain image generation models has been implemented (fused I/O port names after transformation).
  • The compiler’s warnings-as-errors option is now enabled, raising the bar on code quality. Build warnings are not allowed for new OpenVINO code, and existing warnings have been fixed.

AUTO Inference Mode

  • Returning the ov::enable_profiling value from ov::CompiledModel is now supported.

CPU Device Plugin

  • 1st token performance of LLMs has been improved on the 4th and 5th generations of Intel® Xeon® processors with Intel® Advanced Matrix Extensions (Intel® AMX).
  • LLM compilation time and memory footprint have been improved through additional optimizations with compressed embeddings.
  • Performance of MoE (such as Mixtral), Gemma*, and GPT-J has been improved further.
  • Performance has been improved significantly for a wide set of models on ARM devices.
  • FP16 inference precision is now the default for all types of models on ARM devices.
  • CPU architecture-agnostic build has been implemented, to enable unified binary distribution on different ARM devices.

GPU Device Plugin

  • LLM first token latency has been improved on both integrated and discrete GPU platforms.
  • For the ChatGLM3-6B* model, average token latency has been improved on integrated GPU platforms.
  • For Stable Diffusion 1.5 FP16 precision, performance has been improved on Intel® Core™ Ultra processors.

NPU Device Plugin

  • NPU Plugin is now part of the OpenVINO GitHub repository. All the most recent plugin changes will be immediately available in the repo. Note that NPU is part of Intel® Core™ Ultra processors.
  • New OpenVINO™ notebook “Hello, NPU!” introducing NPU usage with OpenVINO has been added.
  • Microsoft Windows® 11 64-bit, version 22H2 or later, is required to run inference on NPU.

OpenVINO Python* API

  • RemoteTensors are now created without holding the Global Interpreter Lock (GIL). Holding the GIL blocks other Python threads, so releasing it during creation improves multithreaded performance, which is critical for RemoteTensors.
  • Packed data type BF16 on the Python API level has been added, opening a new way of supporting data types not handled by NumPy*.
  • ‘pad’ operator support for ov::preprocess::PrePostProcessorItem has been added.
  • ov.PartialShape.dynamic(int) definition has been provided.
  • Two new pre-processing APIs for scale and mean have been added.
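The packed BF16 support matters because NumPy has no native bfloat16 type. The sketch below illustrates the underlying representation only (it is not the OpenVINO API): BF16 is simply the top 16 bits of an IEEE float32, so packing truncates the mantissa and unpacking zero-fills it.

```python
import numpy as np

def f32_to_bf16_bits(x):
    """Truncate float32 values to packed BF16 stored as uint16."""
    return (np.asarray(x, dtype=np.float32).view(np.uint32) >> 16).astype(np.uint16)

def bf16_bits_to_f32(bits):
    """Expand packed BF16 (uint16) back to float32 by zero-filling the low bits."""
    return (bits.astype(np.uint32) << 16).view(np.float32)

vals = np.array([1.0, -2.5, 3.14159], dtype=np.float32)
packed = f32_to_bf16_bits(vals)
print(bf16_bits_to_f32(packed))  # close to the originals (~3 decimal digits)
```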

OpenVINO Node.js API

  • New methods to align JavaScript API with CPP API have been added, such as CompiledModel.exportModel(), core.import_model(), Core set/get property and Tensor.get_size(), and Model.is_dynamic().
  • Documentation has been extended to help developers start integrating JavaScript applications with OpenVINO™.

TensorFlow Framework Support

  • tf.keras.layers.TextVectorization tokenizer is now supported.
  • Conversion of models with Variable and HashTable (dictionary) resources has been improved.
  • 8 NEW operations have been added (marked as NEW in the operations list).
  • 10 operations have received complex tensor support.
  • Input tensor names for TF1 models have been adjusted to have a single name per input.
  • Hugging Face* model support coverage has increased significantly, due to:
    • fixed extraction of the input signature of a model in memory,
    • fixed reading of variable values for a model in memory.

PyTorch* Framework Support

  • ModuleExtension, a new type of extension for PyTorch models, is now supported (PR #23536).
  • 22 NEW operations have been added.
  • Experimental support for models produced by torch.export (FX graph) has been added (PR #23815).

OpenVINO Model Server

  • The OpenVINO™ Runtime backend used is now 2024.1.
  • OpenVINO™ models with the String data type on output are now supported, so OpenVINO™ Model Server can serve models with String inputs and outputs. Developers can take advantage of tokenization built into the model as its first layer, and rely on any post-processing embedded in the model that returns text. Check the demo on string input data with the universal-sentence-encoder model and the String output model demo.
  • MediaPipe* Python calculators have been updated to support relative paths for all related configuration and Python code files. Now, the complete graph configuration folder can be deployed in an arbitrary path without any code changes.
  • KServe* REST API support has been extended to properly handle the string format in JSON body, just like the binary format compatible with NVIDIA Triton*.
  • A demo showcasing a full RAG algorithm fully delegated to the model server has been added.
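For the extended KServe REST string handling, a request body carrying strings might look like the following sketch (the tensor name "texts" and the payload are illustrative, not taken from a specific demo):

```python
import json

# KServe v2 REST inference request: string data travels in the JSON body
# under the BYTES datatype, mirroring the binary format used by NVIDIA Triton.
request = {
    "inputs": [
        {
            "name": "texts",             # illustrative tensor name
            "shape": [2],
            "datatype": "BYTES",         # strings are carried as BYTES in KServe v2
            "data": ["first sentence", "second sentence"],
        }
    ]
}
body = json.dumps(request)
print(body)
```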

Neural Network Compression Framework

  • Model subgraphs can now be defined in the ignored scope for INT8 Post-training Quantization, nncf.quantize(), which simplifies excluding accuracy-sensitive layers from quantization.
  • A batch size of more than 1 is now partially supported for INT8 Post-training Quantization, speeding up the process. Note that it is not recommended for transformer-based models as it may impact accuracy. Here is an example demo.
  • Now it is possible to apply fine-tuning on INT8 models after Post-training Quantization to improve model accuracy and make it easier to move from post-training to training-aware quantization. Here is an example demo.

OpenVINO Tokenizers

  • TensorFlow support has been extended - TextVectorization layer translation:
    • Aligned existing ops with TF ops and added a translator for them.
    • Added new ragged tensor ops and string ops.
  • A new tokenizer type, RWKV is now supported:
    • Added Trie tokenizer and Fuse op for ragged tensors.
    • A new way to get OV Tokenizers: build a vocab from file.
  • Tokenizer caching has been redesigned to work with the OpenVINO™ model caching mechanism.
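The Trie tokenizer mentioned above is based on greedy longest-match lookup over a vocabulary trie. A minimal pure-Python sketch of the idea (the vocabulary here is illustrative; OpenVINO Tokenizers builds the real trie from a vocab file):

```python
class TrieTokenizer:
    def __init__(self, vocab):
        # vocab: dict mapping token string -> integer id
        self.root = {}
        for token, idx in vocab.items():
            node = self.root
            for ch in token:
                node = node.setdefault(ch, {})
            node["_id"] = idx  # marks the end of a valid token

    def encode(self, text):
        ids, i = [], 0
        while i < len(text):
            node, j, last = self.root, i, None
            # Walk the trie as far as the input allows, remembering the
            # longest complete token seen so far.
            while j < len(text) and text[j] in node:
                node = node[text[j]]
                j += 1
                if "_id" in node:
                    last = (j, node["_id"])
            if last is None:
                raise ValueError(f"no token matches input at position {i}")
            i, token_id = last
            ids.append(token_id)
        return ids

tok = TrieTokenizer({"a": 0, "ab": 1, "abc": 2, "b": 3, "c": 4})
print(tok.encode("abcab"))  # greedy longest match: "abc", "ab" -> [2, 1]
```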

Other Changes and Known Issues

Jupyter Notebooks

The default branch for the OpenVINO™ Notebooks repository has been changed from ‘main’ to ‘latest’. The ‘main’ branch of the notebooks repository is now deprecated and will be maintained until September 30, 2024.

The new branch, ‘latest’, offers a better user experience and simplifies maintenance due to significant refactoring and an improved directory naming structure.

Use the local file and OpenVINO™ Notebooks at GitHub Pages to navigate through the content.

The following notebooks have been updated or newly added:

Known Issues

Component - CPU Plugin

ID - N/A


Default CPU pinning policy on Windows has been changed to follow Windows’ policy instead of controlling CPU pinning in the OpenVINO plugin. This may introduce some performance variance on Windows. Developers can use ov::hint::enable_cpu_pinning to enable or disable CPU pinning explicitly.

Component - Hardware Configuration

ID - N/A


Reduced performance for LLMs may be observed on newer CPUs. To mitigate, modify the default settings in BIOS to change the system into a 2-NUMA-node system:

1. Enter the BIOS configuration menu.

2. Select EDKII Menu -> Socket Configuration -> Uncore Configuration -> Uncore General Configuration -> SNC.

3. The SNC setting is set to AUTO by default. Change the SNC setting to disabled to configure one NUMA node per processor socket upon boot.

4. After the system reboots, confirm the NUMA node setting using numactl -H. On a 2-socket system, expect to see only nodes 0 and 1, with the following node distance table:

node distances:
node   0   1
  0:  10  21
  1:  21  10


System Requirements

Disclaimer. Certain hardware (including but not limited to GPU and NPU) requires manual installation of specific drivers and/or other software components to work correctly and/or to utilize hardware capabilities at their best. This might require updates to the operating system, including but not limited to the Linux kernel; please refer to the relevant documentation for details. These modifications should be handled by the user and are not part of the OpenVINO installation. For system requirements, check the System Requirements section in the Release Notes.


Installation instructions

You can choose how to install OpenVINO™ Runtime according to your operating system:

What's included in the download package

  • OpenVINO™ Runtime/Inference Engine for C/C++

Helpful Links

NOTE: Links open in a new window.

This download is valid for the product(s) listed below.