Converting TensorFlow Object Detection API Models for Inference on the Intel® Neural Compute Stick 2 (Intel® NCS 2)

Follow the Get Started Guide for the Intel® NCS 2 to install the OpenVINO™ toolkit and configure your Intel® NCS 2.

Note: The Get Started Guide and this article also apply to users of the original Intel® Movidius™ Neural Compute Stick.
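
The Model Optimizer expects a frozen inference graph (.pb file) as its input. If you only have a trained checkpoint, one way to produce the frozen graph is the export script provided by the TensorFlow Object Detection API; the following is a minimal sketch, assuming a TensorFlow 1.x checkout of the models repository, with the checkpoint prefix and output directory as placeholders for your own paths.

python3 object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path pipeline.config --trained_checkpoint_prefix model.ckpt-XXXX --output_directory exported_model

The exported_model directory then contains frozen_inference_graph.pb, along with a copy of pipeline.config; these are the files passed to mo_tf.py below.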

The mo_tf.py script is located in the ~/intel/openvino/deployment_tools/model_optimizer directory. Specify the following parameters when converting your model to Intermediate Representation (IR) for inference with the Intel® NCS 2:

--input_model <path_to_frozen.pb>

--tensorflow_use_custom_operations_config <path_to_subgraph_replacement_configuration_file.json>

  • The configuration files are located in the ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf directory. Choose the configuration file that matches the topology of your model. See How to Convert a Model for a list of configuration files.

--tensorflow_object_detection_api_pipeline_config <path_to_pipeline.config>

--reverse_input_channels

  • This parameter is required if you use the converted TensorFlow Object Detection API model with the Inference Engine sample applications, which feed images in BGR channel order, whereas TensorFlow Object Detection API models are trained on RGB images.

--data_type FP16

  • Specifies half-precision floating-point (FP16) format. The MYRIAD plugin used by the Intel® NCS 2 requires FP16 models.

Example of a Model Optimizer command:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_model.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --data_type FP16
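
After conversion, the Model Optimizer writes an .xml and a .bin file (frozen_model.xml and frozen_model.bin for the command above). The sketch below shows one way to load that IR on the Intel® NCS 2 with the Inference Engine Python API from this generation of the toolkit and run a single image through an SSD-style model; the image file name and the 0.5 confidence threshold are illustrative assumptions, not part of this article.

import cv2
from openvino.inference_engine import IECore

# Read the IR produced by the Model Optimizer (file names taken from the example command)
ie = IECore()
net = ie.read_network(model="frozen_model.xml", weights="frozen_model.bin")

# Load the network on the Intel NCS 2 through the MYRIAD plugin
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Resize one image to the network's NCHW input shape; OpenCV loads images in BGR order,
# which matches an IR that was converted with --reverse_input_channels
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape
image = cv2.imread("test_image.jpg")
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

# Run synchronous inference; SSD-style models return a [1, 1, N, 7] detection blob:
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max] with normalized coordinates
output_blob = next(iter(net.outputs))
results = exec_net.infer(inputs={input_blob: blob})
for _, class_id, confidence, x_min, y_min, x_max, y_max in results[output_blob].reshape(-1, 7):
    if confidence > 0.5:
        print(int(class_id), float(confidence), (x_min, y_min, x_max, y_max))

The object_detection_sample_ssd application included with the toolkit performs the same steps in C++ and can be pointed at the same IR with -d MYRIAD.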

Additional information regarding the Model Optimizer can be found in the OpenVINO™ toolkit documentation.