Intel® Distribution of OpenVINO™ Toolkit

753640
4/9/2026

Introduction

This package contains the Intel® Distribution of OpenVINO™ Toolkit software version 2026.1 for Linux*, Windows*, and macOS*.

Available Downloads

  • Microsoft Windows*
      • Size: 187.8 MB
      • SHA256: D5C23B1EB54374E020B66446F39CC4009B168C196E3F4BAD8061F47EB1418FA4
  • Microsoft Windows*
      • Size: 736.1 MB
      • SHA256: F12CAB6A76B633F7EFFF089E5E192229E020DCA157E31EBC4DAE378B939F5ACF
  • macOS*
      • Size: 40.1 MB
      • SHA256: F7BB6777383BAD03B7437FE1E256BA469960DD6A595934D7FB2D9681A21E65F1
  • Ubuntu Family*
      • Size: 35.5 MB
      • SHA256: EC7D0147FFAFD5F196E805376469DBE9DE1CBC4CC8EFBD2A033F8C240EDE060B
  • Ubuntu 22.04 LTS*, Ubuntu Family*
      • Size: 93.5 MB
      • SHA256: 3B4D92FEC96860DFEA844CD7C23E190D76C243E75815491D53405B4CED892103
  • Ubuntu 24.04 LTS*, Linux*
      • Size: 95.6 MB
      • SHA256: 0F54D388CDCFC691162BC4FFA28792FC953B6C3F5A1B89CAC03D40C6284379D5
  • Linux*, Debian Linux*
      • Size: 32.7 MB
      • SHA256: 8646F9F20DF5410F905227D582877D4C19962A753C9A2E0AEA79BA4D29BC6A43
  • CentOS Linux Family*, Linux*
      • Size: 70.9 MB
      • SHA256: 39DEBD57818BB9F64589CB17642227A862766A76BA10C9A622663815FB350F51
  • Linux*, Red Hat Linux Family*
      • Size: 73.3 MB
      • SHA256: A63EAFC7A78D9DFB0C1AD597BAA4B0E5C6C87357DCBBA213C7AE8AD882A07B8B
  • Android*
      • Size: 71 MB
      • SHA256: E0A303E60720E71E1FE0A64DAD7067ECD3960A57EB825DC630A7C5E49216A93E
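After downloading, you can verify a package against the SHA256 checksum listed above. A minimal sketch in Python using only the standard library (the filename and expected digest in the commented example are placeholders; substitute the archive you downloaded and its listed checksum):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the uppercase hex SHA256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest().upper()

# Example usage (placeholder filename; use the checksum listed for your download):
# expected = "EC7D0147FFAFD5F196E805376469DBE9DE1CBC4CC8EFBD2A033F8C240EDE060B"
# assert sha256_of("openvino_toolkit.tgz") == expected, "checksum mismatch"
```

Reading in chunks keeps memory use flat even for the multi-hundred-megabyte archives listed above.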

Detailed Description

What’s New

  • More GenAI coverage and framework integrations to minimize code changes
    • New models supported on CPUs and GPUs: Qwen3-VL.
    • New models supported on CPUs: GPT-OSS 120B.
    • Preview: Introducing the OpenVINO backend for llama.cpp, which enables optimized inference on Intel CPUs, GPUs, and NPUs. Validated on GGUF models such as Llama-3.2-1B-Instruct-GGUF, Phi-3-mini-4k-instruct-gguf, Qwen2.5-1.5B-Instruct-GGUF, and Mistral-7B-Instruct-v0.3.
    • New notebook: Unified VLM chatbot with video file support and interactive model switching across Qwen3-VL, Qwen2.5-VL, and LLaVA-NeXT-Video.
  • Broader LLM model support and more model compression techniques

    • OpenVINO™ GenAI adds TaylorSeer Lite caching for image and video generation, accelerating diffusion-transformer inference across Flux, SD3, and LTX-Video pipelines, aligned with Hugging Face Diffusers.
    • LTX-Video generation on GPU achieves end-to-end acceleration through fusion of RMSNorm and RoPE operators, significantly improving video generation performance.
    • OpenVINO™ GenAI adds dynamic LoRA support for Qwen3-VL and other vision-language models with an LLM backbone, allowing developers to swap adapters at runtime and efficiently serve multiple model variants in production without reloading the base model.
    • Preview: The release-weights API for ov::Model enables memory reclamation during model compilation on NPUs, significantly lowering peak memory consumption for edge and client deployments. Set this property on the ov::Model before compilation; it is applied while the model is compiled.
  • More portability and performance to run AI at the edge, in the cloud, or locally
    • Introducing support for Intel® Core™ Series 3 processors (formerly codenamed Wildcat Lake) and Intel® Arc™ Pro B70 Graphics with 32GB memory for single-GPU inference on 20-30B parameter LLMs.
    • Prompt Lookup Decoding extended to vision-language pipelines, delivering significantly faster token generation for multimodal workloads on Intel CPUs and GPUs.
    • OpenVINO™ GenAI now has a smaller runtime footprint after eliminating ICU DLL dependencies from tokenization, leading to reduced memory usage, faster startup, and easier deployment.
    • OpenVINO GenAI introduces WhisperPipeline for Node.js via its NPM package, delivering production-ready speech recognition with word-level audio-to-text transcription.
    • OpenVINO™ Model Server enhances support for Qwen3-MOE and GPT-OSS-20b models, delivering improved performance, accuracy, and robust concurrent request handling with continuous batching. These pre-optimized models are available on Hugging Face for easy deployment. Additionally, the Model Server introduces image inpainting and outpainting capabilities via the /image endpoint for AI image editing.
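As a sketch of the GenAI text-generation workflow touched on above, OpenVINO™ GenAI's LLMPipeline is the documented Python entry point for running a locally exported or GGUF model. The model directory, device choice, and prompt below are assumptions, not values from this release:

```python
# Hypothetical sketch: generating text with OpenVINO GenAI's LLMPipeline.
# Requires `pip install openvino-genai` and a local model directory, e.g. one
# of the validated GGUF models such as Llama-3.2-1B-Instruct-GGUF.
MODEL_DIR = "Llama-3.2-1B-Instruct-GGUF"  # placeholder local model directory
DEVICE = "CPU"  # also "GPU" or "NPU", depending on your hardware

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imported lazily so the module loads even where the package is absent.
    import openvino_genai as ov_genai
    pipe = ov_genai.LLMPipeline(MODEL_DIR, DEVICE)
    return pipe.generate(prompt, max_new_tokens=max_new_tokens)

if __name__ == "__main__":
    print(generate("What is OpenVINO?"))
```

Switching the target hardware is a matter of changing the device string passed to the pipeline; the model itself does not need to be re-exported.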

For full details, see the 2026.1 release notes.

Installation instructions

You can choose how to install OpenVINO™ Runtime from an archive file according to your operating system.

What's included in the download package (Archive File)

  • Offers both C/C++ and Python APIs
  • Additionally includes code samples
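On Linux, installing from the archive typically amounts to extracting it and sourcing the bundled setupvars.sh script. A hedged sketch (the archive filename and install prefix below are placeholders; substitute the file you actually downloaded):

```shell
# Sketch: installing OpenVINO Runtime from an archive on Linux.
# The filename is a placeholder; use the archive you downloaded.
ARCHIVE=openvino_toolkit_ubuntu24_2026.1.tgz
if [ -f "$ARCHIVE" ]; then
    sudo mkdir -p /opt/intel
    tar -xzf "$ARCHIVE"
    sudo mv openvino_toolkit_* /opt/intel/openvino_2026
    # Expose the runtime's environment variables in the current shell.
    source /opt/intel/openvino_2026/setupvars.sh
else
    echo "Download the archive for your OS first (see Available Downloads)."
fi
```

Sourcing setupvars.sh must be repeated in each new shell session, or added to your shell profile.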
Helpful Links

NOTE: Links open in a new window.