Get Started

The following steps show how to initialize a built-in TensorFlow model for inferencing with the OpenVINO toolkit within an application.

Insert the following two lines of code in your TensorFlow application:

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')

Supported back-end options include:

  • CPU
  • GPU
  • MYRIAD
  • VAD-M
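Passing an unsupported backend name to set_backend raises a runtime error, so it can be useful to validate the name first. The helper below is a hypothetical sketch (validate_backend and SUPPORTED_BACKENDS are not part of the openvino_tensorflow API); the set of names comes from the list above.

```python
# Hypothetical helper: check a backend name against the supported
# options before calling openvino_tensorflow.set_backend().
SUPPORTED_BACKENDS = {"CPU", "GPU", "MYRIAD", "VAD-M"}

def validate_backend(name):
    """Return the name unchanged if supported, else raise ValueError."""
    if name not in SUPPORTED_BACKENDS:
        raise ValueError(
            f"Unsupported backend: {name!r}; "
            f"choose one of {sorted(SUPPORTED_BACKENDS)}"
        )
    return name
```

For example, validate_backend('GPU') returns 'GPU', while validate_backend('TPU') raises a ValueError listing the supported options.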

To change the hardware on which inferencing is done, call the same function again with a different backend name:

openvino_tensorflow.set_backend('<backend_name>')

To determine which inferencing hardware is supported on your system, use the following function:

openvino_tensorflow.list_backends()
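A common pattern is to combine the two calls above: query the available backends, then select a preferred one with a CPU fallback. The choose_backend helper below is a hypothetical sketch, not part of the openvino_tensorflow API; in a real application its second argument would be the result of openvino_tensorflow.list_backends().

```python
def choose_backend(preferred, available):
    """Return the first preferred backend that the system supports.

    `available` is assumed to be the list returned by
    openvino_tensorflow.list_backends(); CPU is used as the fallback
    because it is always supported.
    """
    for name in preferred:
        if name in available:
            return name
    return "CPU"

# Example: prefer MYRIAD, then GPU, on a system that reports only
# CPU and GPU support.
backend = choose_backend(["MYRIAD", "GPU"], ["CPU", "GPU"])
```

Here backend is 'GPU', and the selected name could then be passed to openvino_tensorflow.set_backend().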

Resources

  • Object Detection Sample Application: Illustrates how to perform object detection using OpenVINO integration with TensorFlow
  • Classification Sample Application: Illustrates how to perform classification using OpenVINO integration with TensorFlow
  • GitHub* Repository: Additional documentation on installation, minimum prerequisites, and more
  • OpenVINO Integration with TensorFlow Installer: Download and install the packages for use on your local edge devices