
Intel® Distribution of OpenVINO™ toolkit Execution Provider for ONNX Runtime – Installation Now Made Easier

MaryT_Intel
Employee

ONNX is an open format to represent both deep learning and traditional models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners such as Microsoft, Facebook and Amazon Web Services.

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. ONNX helps solve the challenge of hardware dependency for AI models: the same model can be deployed to several hardware-accelerated targets.
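For example, here is a minimal sketch of exporting a PyTorch model to ONNX (the model, input shape, and opset version below are illustrative):

import torch
import torchvision

# Export a pretrained PyTorch model to ONNX so that any ONNX-compatible
# runtime or hardware backend can consume it.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input shape
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)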

Now let's add hardware, like Intel® processors, to this mix. To take full advantage of the Intel® processor running in your laptop or desktop, which often comes with a side of integrated GPU, you can leverage the OpenVINO Execution Provider for ONNX Runtime. Developers like yourself can use the power of the Intel® Distribution of OpenVINO™ toolkit through ONNX Runtime to accelerate inference on ONNX models, which can be exported or converted from AI frameworks like TensorFlow, PyTorch, Keras, and many more. The OpenVINO Execution Provider lets you run inference on ONNX models through the ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, ONNX Runtime delivers better inference performance on the same Intel® CPU, GPU, VPU, or FPGA hardware compared to generic acceleration. Best of all, you can get that better performance with just one line of code.
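That one line is the list of providers you pass when creating the inference session. Here is a minimal sketch using the standard ONNX Runtime Python API; the model filename and input shape are illustrative:

import numpy as np
import onnxruntime as ort

# The one-line change: request the OpenVINO Execution Provider when
# creating the session; ONNX Runtime falls back to CPU if it is unavailable.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"])

# Run inference as usual; the input name and shape depend on your model.
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})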

Now, theory aside: as a developer, you want installation to be quick and easy so you can start using the package as soon as possible. Previously, getting the OpenVINO Execution Provider for ONNX Runtime onto your machine involved multiple installation steps. To make your life easier, we now provide simple Python wheel packages that can be installed with pip install. In a matter of seconds, the OpenVINO Execution Provider for ONNX Runtime will be installed on your machine.

In our previous blog, you learned about the OpenVINO Execution Provider for ONNX Runtime in depth and tried out some of the object detection samples we created for different programming languages (Python, C++, C#). Now it's time to show you how easy it is to install the OpenVINO Execution Provider for ONNX Runtime on your Linux machine and get the faster inference for your ONNX deep learning models that you've been waiting for.

How to Install

Prerequisites

Before we go through installing the OpenVINO Execution Provider for ONNX Runtime wheel packages, make sure you have the following prerequisites:

Ubuntu or CentOS Linux machine

Python version 3.6, 3.7, 3.8, or 3.9

Intel® Distribution of OpenVINO™ toolkit (latest version)


Installation

Go to https://github.com/intel/onnxruntime/releases/latest to find the ONNX Runtime OpenVINO Execution Provider wheels in a zipped archive.

Download the wheel zip file with wget. For example:

wget https://github.com/intel/onnxruntime/releases/download/v4.0/ubuntu18-v4.0-python3.8-whl.zip

Unzip the archive:
unzip <whl.zip>

Example of the actual wheel file:
onnxruntime_openvino-1.8.0-cp36-cp36m-linux_x86_64.whl

Now, pip install the wheel:
pip3 install onnxruntime_openvino-1.8.0-cp36-cp36m-linux_x86_64.whl


To verify that the installation succeeded:
> pip3 list

onnxruntime-openvino          1.8.0
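
You can also confirm from Python that ONNX Runtime can see the provider (the exact list depends on your build):

python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"

The output should include OpenVINOExecutionProvider.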

You can check out the sample here.


Read about the sample below and try it out on your machine. The following example demonstrates how to create a separate Python conda environment with the Python version of your choice and then install the corresponding OpenVINO Execution Provider for ONNX Runtime pip wheel package.

Example
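
Here is a minimal sketch of that workflow, assuming Python 3.8; the environment name and wheel filename are illustrative, so adjust them to match your download:

# Create and activate an isolated conda environment whose Python version
# matches the wheel you downloaded (3.8 here).
conda create -n onnxruntime-openvino python=3.8 -y
conda activate onnxruntime-openvino

# Install the matching wheel from the unzipped archive.
pip3 install onnxruntime_openvino-1.8.0-cp38-cp38-linux_x86_64.whl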

Other ways to install OpenVINO Execution Provider for ONNX Runtime

There are also other ways to install the OpenVINO Execution Provider for ONNX Runtime. One is to build it from source, which also gives you access to the C++, C#, and Python APIs. Another is to pull the Docker image from Docker Hub; launching a container from that image gives you access to the Python API for the OpenVINO Execution Provider for ONNX Runtime.
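
As a sketch of the Docker route (the image name and tag below are illustrative; check the OpenVINO Execution Provider for ONNX Runtime listing on Docker Hub for the current ones):

# Pull the image and start an interactive container.
docker pull openvino/onnxruntime_ep_ubuntu18:latest
docker run -it --rm openvino/onnxruntime_ep_ubuntu18:latest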

 

Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.  See backup for configuration details.  No product or component can be absolutely secure. 

Your costs and results may vary. 

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.  

About the Author
Mary is the Community Manager for this site. She likes to bike, and do college and career coaching for high school students in her spare time.