Intel® Distribution of OpenVINO™ Toolkit

ID: 753640
Date: 4/10/2025

Introduction

This package contains the Intel® Distribution of OpenVINO™ Toolkit software version 2025.1 for Linux*, Windows* and macOS*.

Available Downloads

  • Debian Linux*
    • Size: 32.4 MB
    • SHA256: 3063A5F601F7A408D5739C68BEC1DB2D8B91EC3329295DF24066BEFA4D41437F
  • CentOS 7 (2003)*
    • Size: 55.9 MB
    • SHA256: 8C288D2B82BF847570B7B2ABC09E5BD1E954BDA1EA4792BF8729A92E031C2AFE
  • Red Hat Enterprise Linux 8*
    • Size: 60.6 MB
    • SHA256: D909FAA1666B11E4A6A0BE17DECEC20C9C609538C12364D86F0D48A5228983A5
  • Ubuntu 20.04 LTS*
    • Size: 63.8 MB
    • SHA256: CC182AFC22F02F7F660A5CDB3C1811EF982B5940E7D6300F6D615872F10094AC
  • Ubuntu 20.04 LTS*
    • Size: 36.1 MB
    • SHA256: 45F0BF85F2C7C300B1E7B38D5A32C4FEF56AB45555D7A5E2BCF0EEA6716C797A
  • Ubuntu 22.04 LTS*
    • Size: 55.1 MB
    • SHA256: 2B18022D00076F1C2288B519B6D142F33B7687DA0B3534DB36A04EFB59B9CA99
  • Ubuntu 24.04 LTS*
    • Size: 56.3 MB
    • SHA256: B4A7A17FED9AFF3314CBBB15242F0CE85E147F7FADB89B56B67379CA251A2936
  • macOS*
    • Size: 41.9 MB
    • SHA256: 96865735B100DF088123981CAE0D23E7BF252139248649AF0C4530464C475146
  • macOS*
    • Size: 36.9 MB
    • SHA256: C21A05EE934F6BC8BF8361A450646DBA25268221CBA6B38EF166A590A2B8162D
  • Windows 11*, Windows 10*
    • Size: 115.2 MB
    • SHA256: BBFC06E3A14F234B07ABEB96BD8A5AC0160EB9574C1BB3A476CBA35216D58DFF
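After downloading, you can confirm the archive was not corrupted or tampered with by computing its SHA256 digest and comparing it against the checksum listed above. A minimal sketch using Python's standard-library hashlib (the filename below is a placeholder, not an actual package name):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the uppercase hex SHA256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Example usage -- substitute the archive you actually downloaded and the
# matching checksum from the list above:
# assert sha256_of("openvino_archive.tgz") == "3063A5F601F7A408D5739C68..."
```

On Linux the same check can be done with the sha256sum command-line utility.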

Detailed Description

What's New

  • More GenAI coverage and framework integrations to minimize code changes.
    • New models supported: Phi-4 Mini, Jina CLIP v1, and BCE Embedding Base v1.
    • OpenVINO™ Model Server now supports VLM models, including Qwen2-VL, Phi-3.5-Vision, and InternVL2.
    • OpenVINO GenAI now includes image-to-image and inpainting features for transformer-based pipelines, such as Flux.1 and Stable Diffusion 3 models, enhancing their ability to generate more realistic content.
    • Preview: AI Playground now utilizes the OpenVINO GenAI backend to enable highly optimized inferencing performance on AI PCs.
  • Broader LLM model support and more model compression techniques.
    • Reduced binary size through optimization of the CPU plugin and removal of the GEMM kernel.
    • Optimization of new kernels for the GPU plugin significantly boosts the performance of Long Short-Term Memory (LSTM) models, used in many applications, including speech recognition, language modeling, and time series forecasting.
    • Preview: Token Eviction implemented in OpenVINO GenAI to reduce the memory consumption of the KV cache by eliminating unimportant tokens. The current implementation is beneficial for tasks that generate long sequences, such as chatbots and code generation.
    • NPU acceleration for text generation is now enabled in OpenVINO™ Runtime and OpenVINO™ Model Server to support the power-efficient deployment of VLM models on NPUs for AI PC use cases with low concurrency.
  • More portability and performance to run AI at the edge, in the cloud, or locally.
    • Support for the latest Intel® Core™ processors (Series 2, formerly codenamed Bartlett Lake), Intel® Core™ 3 Processor N-series and Intel® Processor N-series (formerly codenamed Twin Lake) on Windows.
    • Additional LLM performance optimizations on Intel® Core™ Ultra 200H series processors for improved 2nd token latency on Windows* and Linux*.
    • Enhanced performance and efficient resource utilization with the implementation of Paged Attention and Continuous Batching by default in the GPU plugin.
    • Preview: The new OpenVINO backend for ExecuTorch will enable accelerated inference and improved performance on Intel hardware, including CPUs, GPUs, and NPUs.

For full details, see the 2025.1 release notes.

Installation instructions

You can install OpenVINO™ Runtime from the archive file by following the instructions for your operating system.
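On Linux, the typical archive install extracts the package and sources the bundled environment script. The sketch below illustrates that flow; the archive and directory names are placeholders and vary by distribution and version, so use the names from the package you actually downloaded:

```shell
# Extract the downloaded archive (placeholder filename)
tar -xzf openvino_archive.tgz

# Move it to the conventional install location (placeholder directory name)
sudo mkdir -p /opt/intel
sudo mv openvino_extracted_dir /opt/intel/openvino_2025

# Set up environment variables for the current shell session
source /opt/intel/openvino_2025/setupvars.sh
```

The setupvars.sh step must be repeated in each new shell session (or added to your shell profile) before building or running applications against the toolkit.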

What's included in the download package (Archive File)

  • C/C++ and Python APIs
  • Code samples

