Intel® Distribution of OpenVINO™ Toolkit

How it Works

Convert and optimize models trained using popular frameworks like TensorFlow, PyTorch, and Caffe*. Deploy across a mix of Intel® hardware and environments: on premises, on-device, in the browser, or in the cloud.
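A minimal sketch of that convert-and-deploy workflow, assuming the 2022.1 openvino-dev tooling and an exported ONNX model; the file names, device, and input shape are placeholders:

    # First, convert the trained model to OpenVINO IR on the command line
    # (openvino-dev package):  mo --input_model model.onnx --output_dir ir/
    import numpy as np
    from openvino.runtime import Core

    core = Core()                                # enumerates available Intel devices
    model = core.read_model("ir/model.xml")      # load the converted IR
    compiled = core.compile_model(model, "CPU")  # compile for a target device
    data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy image-shaped input
    results = compiled([data])                   # run one synchronous inference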

What's New in Version 2022.1

Updated, Cleaner API

Adopt and maintain your code more easily. This version aligns more closely with TensorFlow conventions, requires fewer parameters, and minimizes conversion steps when moving from training frameworks to deployment.
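One way the streamlined API reduces conversion overhead is that the runtime can read some formats (for example ONNX) directly, without an offline Model Optimizer step. A hedged sketch, assuming an ONNX file named model.onnx and a single image-shaped input:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.onnx")         # ONNX loaded directly, no offline conversion
    compiled = core.compile_model(model, "CPU")
    data = np.zeros((1, 3, 224, 224), dtype=np.float32)   # placeholder input
    result = compiled([data])[compiled.output(0)]          # compiled model is directly callable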


Broader Model Support

Optimize and deploy with ease across an expanded range of deep learning models, including natural language processing (NLP), double-precision (FP64), and computer vision models.
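For NLP workloads in particular, dynamic input shapes let one compiled model accept variable sequence lengths. A minimal sketch under those assumptions; the IR path, single-input layout, and dimensions are illustrative:

    from openvino.runtime import Core, PartialShape

    core = Core()
    model = core.read_model("ir/nlp_model.xml")    # placeholder single-input NLP model
    model.reshape(PartialShape([1, -1]))           # batch of 1, dynamic sequence length
    compiled = core.compile_model(model, "CPU")    # one compiled model serves all lengths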


Portability and Performance

See a quick performance boost with automatic device discovery, load balancing, and dynamic inference parallelism across CPU, GPU, and more.
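These capabilities surface through the AUTO device and high-level performance hints. A hedged sketch of how they might be selected with the 2022.1 Python API; the IR path is a placeholder:

    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)                 # e.g. ['CPU', 'GPU'], discovered automatically
    model = core.read_model("ir/model.xml")
    # Let AUTO pick the best available device and tune scheduling for throughput.
    compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})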