Intel® DevCloud

Get Started

The following steps show how to initialize a built-in TensorFlow model for inferencing with the OpenVINO toolkit within an application.

Insert the following two lines of code in your TensorFlow applications:

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')

Supported back-end options include:

  • CPU
  • GPU
  • VAD-M

To change the hardware on which inferencing is done, invoke the following function:

openvino_tensorflow.set_backend('<backend_name>')
To determine which inferencing back ends your system supports, use the following:

openvino_tensorflow.list_backends()
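Putting the calls above together, here is a minimal sketch. It assumes the openvino-tensorflow package is installed; the except branch is illustrative and simply keeps the script runnable when the package is absent, in which case inference stays on stock TensorFlow.

```python
# Hedged sketch: query the available back ends, then select one.
# Assumes the openvino-tensorflow package is installed; if it is not,
# the except branch keeps the script runnable and inference simply
# stays on the stock TensorFlow runtime.
try:
    import openvino_tensorflow as ovtf

    print(ovtf.list_backends())  # back ends available on this system
    ovtf.set_backend("CPU")      # route inferencing to the CPU back end
    active = "CPU"
except ImportError:
    active = "stock TensorFlow"  # package not installed

print("Inferencing on:", active)
```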
Operator Capability Manager (OCM)

This module checks each TensorFlow operator to determine which layers should be routed to the OpenVINO integration back ends and which layers should fall back to the stock TensorFlow runtime.
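The check the OCM performs can be sketched in plain Python. The operator names and the supported-op set below are hypothetical stand-ins, not the real per-back-end capability tables the OCM uses.

```python
# Toy sketch of the Operator Capability Manager idea: walk a model's
# operators and mark each one either for the OpenVINO back end or for
# fallback to stock TensorFlow. SUPPORTED_OPS is a hypothetical
# stand-in for the real capability tables.
SUPPORTED_OPS = {"Conv2D", "Relu", "MatMul", "MaxPool"}  # assumption

def mark_operators(ops):
    """Return {op_name: 'openvino' | 'tensorflow'} for each operator."""
    return {op: ("openvino" if op in SUPPORTED_OPS else "tensorflow")
            for op in ops}

marks = mark_operators(["Conv2D", "Relu", "CustomOp", "MatMul"])
print(marks)  # CustomOp falls back to TensorFlow
```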


Graph Partitioner

This module examines the nodes that the OCM marked for clustering and assigns them to clusters. Some clusters are dropped after further analysis. Each remaining cluster of operators is then encapsulated into a custom operator that runs on the OpenVINO integration.
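The partitioning step can be sketched as grouping consecutive OpenVINO-marked operators and dropping clusters judged not worth offloading. The min_cluster_size threshold below is a hypothetical stand-in for the real partitioner's analysis.

```python
# Toy sketch of the graph-partitioning step: group consecutive
# OpenVINO-marked operators into clusters, then drop clusters that are
# too small to be worth encapsulating as a custom operator.
def build_clusters(marked_ops, min_cluster_size=2):
    """marked_ops: list of (op_name, backend) in topological order."""
    clusters, current = [], []
    for op, backend in marked_ops:
        if backend == "openvino":
            current.append(op)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    # Drop clusters deemed not worth offloading (hypothetical rule).
    return [c for c in clusters if len(c) >= min_cluster_size]

ops = [("Conv2D", "openvino"), ("Relu", "openvino"),
       ("CustomOp", "tensorflow"), ("MatMul", "openvino")]
print(build_clusters(ops))  # [['Conv2D', 'Relu']]
```

The single MatMul cluster is dropped because it falls below the size threshold, so only the Conv2D/Relu pair is encapsulated.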


TensorFlow Importer
This module translates TensorFlow operators to OpenVINO integration operators and creates nGraph functions wrapped in a CNNNetwork to run on the toolkit back end.
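Conceptually, the importer maps each TensorFlow operator in a cluster to an OpenVINO counterpart. The lookup table below is a hypothetical illustration of that translation, not the toolkit's real mapping.

```python
# Toy sketch of the importer step: translate each TensorFlow operator
# in a cluster to an OpenVINO operator via a lookup table. This mapping
# is a hypothetical illustration, not the toolkit's real table.
TF_TO_OV = {"Conv2D": "Convolution", "Relu": "ReLU", "MatMul": "MatMul"}

def translate_cluster(tf_ops):
    """Translate a cluster of TF op names; fail if one is unmapped."""
    try:
        return [TF_TO_OV[op] for op in tf_ops]
    except KeyError as err:
        raise ValueError(f"no OpenVINO translation for {err}") from None

print(translate_cluster(["Conv2D", "Relu"]))  # ['Convolution', 'ReLU']
```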


Backend Manager
This module creates a back end to run the CNNNetwork. There are two types of back ends: a basic back end and a VAD-M back end. The basic back end supports CPU, iGPU, and MYRIAD. The VAD-M back end is used for the Intel® Vision Accelerator Design with eight VPUs (referred to as VAD-M or HDDL).
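The selection the Backend Manager makes can be sketched as follows. The device strings mirror the list above; the function and back-end names are illustrative, not the toolkit's real API.

```python
# Toy sketch of back-end selection as described above: VAD-M gets its
# own back end, while CPU, iGPU, and MYRIAD share the basic back end.
# Function and return values are hypothetical illustrations.
def create_backend(device):
    basic_devices = {"CPU", "GPU", "MYRIAD"}
    if device == "VAD-M":
        return "vadm-backend"    # multi-VPU (HDDL) back end
    if device in basic_devices:
        return "basic-backend"
    raise ValueError(f"unsupported device: {device}")

print(create_backend("VAD-M"))  # vadm-backend
print(create_backend("CPU"))    # basic-backend
```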


The Intel DevCloud has several sample applications and tutorials that illustrate how the OpenVINO integration with TensorFlow works.

Resources:

  • Object Detection Sample Application: Illustrates how to perform object detection using the OpenVINO integration with TensorFlow.
  • Classification Sample Application: Illustrates how to perform classification using the OpenVINO integration with TensorFlow.
  • GitHub* Repository: Additional documentation on installation, minimum prerequisites, and more.
  • OpenVINO Integration with TensorFlow Installer: Download and install the packages for use on your local edge devices.

Note: For maximum performance, efficiency, tooling customization, and hardware control, the built-in OpenVINO toolkit APIs and runtime are recommended.