Follow the Get Started Guide for the Intel® NCS 2 to install the OpenVINO™ toolkit and configure your Intel® NCS 2.
Note: The Get Started Guide and this article also apply to users with the original Intel® Movidius™ Neural Compute Stick.
The mo_tf.py script is located in the ~/intel/openvino/deployment_tools/model_optimizer directory. Specify the following parameters when converting your model to Intermediate Representation (IR) for inference on the Intel® NCS 2:
--input_model <path_to_frozen.pb>
--tensorflow_use_custom_operations_config <path_to_subgraph_replacement_configuration_file.json>
--tensorflow_object_detection_api_pipeline_config <path_to_pipeline.config>
--reverse_input_channels
--data_type FP16
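The flags above can also be assembled programmatically before launching the conversion. Below is a minimal sketch; the helper name `build_mo_command` and the file paths are illustrative placeholders, not part of the OpenVINO™ toolkit:

```python
# Sketch: collect the Model Optimizer flags listed above into an argument
# list suitable for subprocess.run(). All paths are placeholders -- point
# them at your own frozen graph, support .json, and pipeline.config files.
import shlex

MO_SCRIPT = "~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"

def build_mo_command(frozen_pb, support_json, pipeline_config):
    """Return the mo_tf.py argument list for an Intel NCS 2 (FP16) conversion."""
    return [
        "python3", MO_SCRIPT,
        "--input_model", frozen_pb,
        "--tensorflow_use_custom_operations_config", support_json,
        "--tensorflow_object_detection_api_pipeline_config", pipeline_config,
        "--reverse_input_channels",  # swap RGB/BGR channel order at the input
        "--data_type", "FP16",       # the NCS 2 runs FP16 inference
    ]

cmd = build_mo_command("frozen_model.pb", "ssd_v2_support.json", "pipeline.config")
print(shlex.join(cmd))  # inspect, or pass `cmd` to subprocess.run()
```

Passing the list form directly to `subprocess.run(cmd)` avoids shell-quoting issues with paths that contain spaces.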
Example of a Model Optimizer command:
python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_model.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --data_type FP16
Additional information about the Model Optimizer is available in the OpenVINO™ toolkit documentation.