Intel® Developer Cloud for the Edge
OpenVINO™ Integration with TensorFlow*
TensorFlow* developers can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a wide range of Intel® compute devices by adding just two lines of code. The Intel® Developer Cloud for the Edge comes preinstalled with OpenVINO integration with TensorFlow.
OpenVINO integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:
- Intel® CPUs
- Intel® integrated GPUs
- Intel® Movidius™ Vision Processing Units (VPU)
- Intel® Vision Accelerator Design with eight Intel® Movidius™ Myriad™ X VPUs
Get Started
To run inference on a TensorFlow model with the OpenVINO toolkit, insert the following two lines of code in your TensorFlow application:
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
Supported back-end options include:
- CPU
- GPU
- MYRIAD
- VAD-M
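Putting these pieces together, the following is a minimal end-to-end sketch (the MobileNetV2 model and the random input are illustrative placeholders; any standard TensorFlow model works unchanged):

import numpy as np
import tensorflow as tf
import openvino_tensorflow

# Route supported subgraphs to the OpenVINO back end; pick any back end
# from the list above that matches your hardware.
openvino_tensorflow.set_backend('CPU')

# Load a stock Keras model; no model changes are required.
model = tf.keras.applications.MobileNetV2(weights='imagenet')

# Inference runs as usual; OpenVINO accelerates the supported operators.
dummy_input = np.random.rand(1, 224, 224, 3).astype(np.float32)
predictions = model.predict(dummy_input)
print(predictions.shape)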
To change the hardware on which inferencing is done, invoke the following function:
openvino_tensorflow.set_backend('<backend_name>')
To determine what inferencing hardware is supported on your system, use the following:
openvino_tensorflow.list_backends()
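For example, a small sketch that selects the first available back end from a preference order (the order itself is an assumption for illustration, not a recommendation):

import openvino_tensorflow

# list_backends() reports the back ends usable on this system.
available = openvino_tensorflow.list_backends()

# Prefer the most specialized available device, falling back to CPU.
backend = next((b for b in ('VAD-M', 'MYRIAD', 'GPU', 'CPU') if b in available), 'CPU')
openvino_tensorflow.set_backend(backend)
print('Using back end:', backend)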
Architecture
The following diagram provides a high-level overview of the functionality of each module and how it transforms the original TensorFlow graph.
Operator Capability Manager (OCM)
This module runs capability checks on TensorFlow operators to determine which layers can be dispatched to OpenVINO integration back ends and which layers should fall back to the stock TensorFlow runtime.
Graph Partitioner
This module examines the nodes that the OCM marked for clustering and assigns them to clusters; some clusters are dropped after further analysis. Each surviving cluster of operators is then encapsulated in a custom operator that runs on the OpenVINO integration back end.
TensorFlow Importer
This module translates TensorFlow operators to OpenVINO integration operators and creates an nGraph function that is wrapped in a CNNNetwork to run on the toolkit back end.
Backend Manager
This module creates a back end to run the CNNNetwork. There are two types of back ends: the basic back end and the VAD-M back end. The basic back end supports CPU, iGPU, and MYRIAD devices. The VAD-M back end is used for the Intel® Vision Accelerator Design with eight VPUs (referred to as VAD-M or HDDL).
Subgraph Partitioning
OpenVINO integration with TensorFlow applies OpenVINO optimizations to the subgraphs whose operators the back end supports; unsupported operators fall back to stock TensorFlow. In the example graph, the first three of four nodes are supported by the OpenVINO toolkit and optimized at runtime, while the fourth, unsupported operator falls back to the original TensorFlow runtime.
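The clustering behavior can be illustrated with a simplified, self-contained sketch. This is not the actual implementation; the supported-operator set and the greedy grouping below are purely illustrative:

# Consecutive supported operators are grouped into clusters that would run
# on the OpenVINO back end; anything unsupported falls back to TensorFlow.
SUPPORTED = {'Conv2D', 'Relu', 'MaxPool'}  # illustrative subset

def partition(ops):
    clusters, current = [], []
    for op in ops:
        if op in SUPPORTED:
            current.append(op)
        else:
            if current:
                clusters.append(('OpenVINO', current))
                current = []
            clusters.append(('TensorFlow', [op]))
    if current:
        clusters.append(('OpenVINO', current))
    return clusters

# The four-node example above: three supported ops, then one unsupported op.
print(partition(['Conv2D', 'Relu', 'MaxPool', 'CustomOp']))
# -> [('OpenVINO', ['Conv2D', 'Relu', 'MaxPool']), ('TensorFlow', ['CustomOp'])]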
Resources
The Intel Developer Cloud for the Edge has several sample applications and tutorials that illustrate how the OpenVINO integration with TensorFlow works.
| Resource | Description |
|---|---|
| Object Detection Sample Application | Illustrates how to perform object detection using OpenVINO integration with TensorFlow |
| Classification Sample Application | Illustrates how to perform classification using OpenVINO integration with TensorFlow |
| GitHub* Repository | Additional documentation on installation, minimum prerequisites, and more |
| OpenVINO Integration with TensorFlow Installer | Download and install the packages for use on your local edge devices |
Note: For maximum performance, efficiency, tooling customization, and hardware control, the built-in OpenVINO toolkit APIs and runtime are recommended.