OpenVINO™ toolkit: An open source toolkit that makes it easier to write once, deploy anywhere.
Deploy High-Performance Deep Learning Inference
A new version of the Intel® Distribution of OpenVINO™ toolkit is now available. The 2023.0 release makes it easier for developers everywhere to start innovating. This new release empowers developers with exciting new features, performance enhancements, increased model support, more device portability, and higher inferencing performance with fewer code changes.
Sign Up for OpenVINO Toolkit News
Keep up-to-date on the latest product releases, news, and tips.
How It Works
Convert and optimize models trained using popular frameworks like TensorFlow*, PyTorch*, and Caffe*. Deploy across a mix of Intel hardware and environments: on premises, on device, in the browser, or in the cloud.
Get started with OpenVINO and all the resources you need to learn, try samples, see performance, and more.
Explore software solutions across different industries.
Digital solutions in smart cities create efficient networks and services that benefit people and businesses.
AI optimizes customer experiences, forecasting, and inventory management, improving retail operations.
Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get:
• Early access to product updates and releases
• Exclusive invitations to webinars and events
• Training and tutorial resources
• Contest announcements
• Other breaking news
Community and Support
Explore ways to get involved and stay up-to-date with the latest announcements.
Optimize, fine-tune, and run comprehensive AI inference using the included model optimizer, runtime, and development tools.
A productive, smart path that frees accelerated computing from the economic and technical burdens of proprietary alternatives.