OpenVINO™ toolkit: An open source toolkit that makes it easier to write once, deploy anywhere.
What's New in Version 2023.1
This latest release of the OpenVINO™ toolkit brings new features that tap the full potential of generative AI. Generative AI coverage is expanded, enhancing the experience with frameworks like PyTorch*, whose models you can now import and convert automatically. Large language models (LLMs) gain runtime performance and memory optimizations, enabling models for chatbots, code generation, and more. OpenVINO is more portable and performant, running wherever you need it: at the edge, in the cloud, or locally.
Latest Features
Improved User Experience
The toolkit has expanded model coverage, reduced memory constraints, and continues to optimize performance to empower developers.
Products covered:
• PyTorch
• Optimum for Intel
Generative AI and LLM Enhancements
Paired with more model compression techniques, OpenVINO gives developers more options when exploring LLMs, including the most prominent models.
Features covered:
• Generative AI Support
• LLMs on GPUs
• Neural Network Compression Framework (NNCF)
More Portability and Performance
Develop once, deploy anywhere. OpenVINO enables developers to run AI at the edge, in the cloud, or locally.
Products covered:
• Integration with MediaPipe*
• Intel® Core™ Ultra (formerly code named Meteor Lake)
Stay Up-To-Date

Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get:
• Early access to product updates and releases
• Exclusive invitations to webinars and events
• Training and tutorial resources
• Contest announcements
• Other breaking news
Resources
Community and Support
Explore ways to get involved and stay up-to-date with the latest announcements.
Get Started
Optimize, fine-tune, and run comprehensive AI inference using the included model conversion, runtime, and development tools.
A productive, smart path to freedom from the economic and technical burdens of proprietary alternatives for accelerated computing.