One of the two main tools in the Intel® Distribution of OpenVINO™ Toolkit is the Model Optimizer, a powerful conversion tool that takes the pre-trained models you’ve already created in frameworks like TensorFlow*, Caffe*, and ONNX*, converts them into a format the Inference Engine can use, and optimizes them for inference in the process.
Half-Precision Floating Point
When developing for the Intel® Neural Compute Stick 2 (Intel® NCS 2), Intel® Movidius™ VPUs, Intel® Arria® 10 FPGAs, or Intel® GPUs, you want to make sure that you use a model with FP16 precision. The Open Model Zoo (https://github.com/opencv/open_model_zoo), maintained by Intel and the open-source community as a repository of publicly available pre-trained models, has nearly three dozen FP16 models that can be used right away in your applications. If these don’t meet your needs, or you want to use a model that is not already in IR format, you can use the Model Optimizer to convert your model for the Inference Engine and the Intel® NCS 2.
Frameworks Supported by Model Optimizer
Note: The Model Optimizer currently supports the following frameworks:
- Caffe*
- TensorFlow*
- MXNet*
- ONNX*
- Kaldi*
The Model Optimizer also supports custom layers through extensions.
For more information see:
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer.html
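As an illustration only, an extension directory can be passed at conversion time with the Model Optimizer’s --extensions option; the model and paths below are placeholders:
python mo.py -m INPUT_MODEL --extensions /path/to/custom_extensions -o OUTPUT_DIR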
The Model Optimizer
The Model Optimizer is a Python script called mo.py, located in the deployment_tools\model_optimizer folder of the Intel® Distribution of OpenVINO™ Toolkit installation directory. By default, this is in the following location:
Windows*: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer
Ubuntu*: /opt/intel/openvino/deployment_tools/model_optimizer
The Model Optimizer converts models from their pre-trained frozen formats (for example, .pb for TensorFlow or .caffemodel and .prototxt for Caffe) by reading the models, defining them in a format the Inference Engine can read, and optimizing them for use with the Inference Engine.
The Model Optimizer is designed for Python* 3.6.5 or later, though it also supports Python 2.7. Although Python 3 should have been installed as part of the installation process for the Intel® Distribution of OpenVINO™ Toolkit, make sure that you have the proper version installed by running the following command in your Command Prompt (Windows*) or terminal (Linux* systems):
python --version
The command should return the version of Python installed. If you see Python 2.7.x installed, but want to use Python 3, use python3 instead of python in the following commands.
You’ll also need to make sure that you have the proper Python modules installed. A shell script is provided to automate this process. Navigate to the install_prerequisites folder inside the Model Optimizer directory and run it. On Windows, you may need to use an elevated (Administrator) Command Prompt.
Windows:
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites\"
install_prerequisites.bat
Ubuntu:
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
Note: This will set up the Model Optimizer for converting and optimizing models from any of the supported frameworks. If you would like to configure the Model Optimizer for only a single framework, such as TensorFlow, framework-specific scripts are also provided. Use the script for your framework, such as install_prerequisites_tf.sh (Ubuntu*) or install_prerequisites_tf.bat (Windows), as shown below.
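For example, to configure the Model Optimizer for TensorFlow only, run the TensorFlow-specific script from the same install_prerequisites folder:
Windows:
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites\"
install_prerequisites_tf.bat
Ubuntu:
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_tf.sh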
The Model Optimizer script has many flags for a variety of options, but for our use we will focus on the --data_type flag. The basic command to convert your model for use with the Intel® NCS 2 may look like the following:
python mo.py -m INPUT_MODEL -o OUTPUT_DIR --data_type FP16
The --data_type flag determines the data type for all intermediate tensors and weights. FP32 models are converted to FP16 models with this flag, making them compatible with the Intel® NCS 2 and other VPUs. You can check out the other options for the script by running the command with just the -h flag or by visiting https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html
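As an illustration, a conversion can also fix the input shape and rename the output files with the standard --input_shape and --model_name options; the file names below are placeholders, not part of the example later in this article:
python mo.py -m frozen_model.pb --input_shape [1,224,224,3] --model_name my_model_fp16 -o output_ir --data_type FP16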
Output
The Model Optimizer outputs an .xml file and a .bin file with the same name as your input model, unless the -n or --model_name flag has been set. The .xml file describes the network topology, and the .bin file contains the binary weight and bias data.
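The Inference Engine consumes this .xml/.bin pair directly. The following minimal sketch, assuming a 2020-era OpenVINO™ Python API where IECore provides read_network and using placeholder file names, reads an IR and lists its inputs and outputs:
from openvino.inference_engine import IECore

ie = IECore()
# Read the .xml/.bin pair produced by the Model Optimizer
net = ie.read_network(model="model.xml", weights="model.bin")
print("Inputs:", list(net.input_info))
print("Outputs:", list(net.outputs))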
Note: Make sure your network is supported by Intel® NCS 2 and other VPUs by checking the network compatibility list at https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_MYRIAD.html
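You can also confirm from code that an Intel® NCS 2 is visible to the Inference Engine before loading a network; a small sketch, assuming the same Python API as above:
from openvino.inference_engine import IECore

ie = IECore()
# With an Intel NCS 2 plugged in, the list should include 'MYRIAD'
print(ie.available_devices)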
Example Usage
This example converts the SqueezeNet V1.0 DNN downloaded from https://github.com/DeepScale/SqueezeNet from its native Caffe framework to an IR format compatible with the Intel® NCS 2 and places it in a directory under the current Windows user's profile:
python .\mo.py -m %USERPROFILE%\Documents\Models\SqueezeNet_v1.0\squeezenet_v1.0.caffemodel -o %USERPROFILE%\Documents\Models\FP16\squeezenet1.0\ --data_type FP16
This example will guide you through downloading a model using the Model Downloader (more information can be found at https://software.intel.com/en-us/articles/model-downloader-essentials), converting that model for use with an Intel® NCS 2, and using the converted model and an Intel® NCS 2 with a benchmarking sample included with the Intel® Distribution of OpenVINO™ Toolkit.
Model Downloader
The Model Downloader is a tool used for fetching models from the Intel Open Model Zoo, a collection of open pre-trained neural network models curated by Intel and the open-source community. You can find the Open Model Zoo at https://github.com/opencv/open_model_zoo. The Model Downloader is found in the OpenVINO™ toolkit installation directory:
Windows: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\
Ubuntu: /opt/intel/openvino/deployment_tools/tools/model_downloader/
Navigate to this directory and use the following command to download the squeezenet1.0 image classification network:
Windows:
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\"
python downloader.py --name squeezenet1.0
Ubuntu:
cd /opt/intel/openvino/deployment_tools/tools/model_downloader
python3 downloader.py --name squeezenet1.0
This will place the squeezenet1.0 .prototxt and .caffemodel files in the classification subfolder of the Model Downloader directory, specifically classification\squeezenet\1.0\caffe\.
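If you want to see which other models the downloader can fetch, it also provides a listing option; to the best of our knowledge the --print_all flag prints the names of all available models:
python downloader.py --print_all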
Next, navigate to the Model Optimizer folder and run the Model Optimizer on the model you’ve just downloaded. In this example, we place the output model in the current user’s profile folder on Windows and the current user’s home directory on Ubuntu:
Windows:
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer"
python mo.py -m ..\tools\model_downloader\classification\squeezenet\1.0\caffe\squeezenet1.0.caffemodel -o %USERPROFILE% --data_type FP16
Ubuntu:
cd /opt/intel/openvino/deployment_tools/model_optimizer
python mo.py -m ../tools/model_downloader/classification/squeezenet/1.0/caffe/squeezenet1.0.caffemodel -o ~ --data_type FP16
Remember that this specifies the output data type as FP16, compatible with GPUs, FPGAs, and VPUs such as the Intel® Neural Compute Stick 2. If you’re converting a model for use with an FP32 compatible device such as an Intel® CPU, use --data_type
FP32 or omit the --data_type flag altogether.
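As an illustration, a CPU-targeted conversion of the same SqueezeNet model (Windows paths as in the example above) might look like the following:
python mo.py -m ..\tools\model_downloader\classification\squeezenet\1.0\caffe\squeezenet1.0.caffemodel -o %USERPROFILE% --data_type FP32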
The model is now converted for use with the Intel® Distribution of OpenVINO™ toolkit Inference Engine. To see for yourself, run the benchmark_app sample included with the toolkit using the commands below. This command uses an image that you can find attached at the bottom of this article. Make sure you have your Intel® NCS 2 plugged into your computer. You’ll first run a script to set the proper environment variables:
Windows:
"C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"
cd %USERPROFILE%\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release
benchmark_app.exe -m %USERPROFILE%\squeezenet1.0.xml -i %USERPROFILE%\Downloads\president_reagan-62x62.png -d MYRIAD
Ubuntu:
source /opt/intel/openvino/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./benchmark_app -m ~/squeezenet1.0.xml -i ~/Downloads/president_reagan-62x62.png -d MYRIAD
Note: This article assumes that you’ve already built the samples included with the Intel® Distribution of OpenVINO™ toolkit. Refer to the OpenVINO™ toolkit documentation at https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html for instructions on how to build the samples.
The sample should return statistics on the model that you used. If the sample completes without error, your converted model should be ready to use in developing OpenVINO™ toolkit applications. Remember, if you’re using a GPU, FPGA, or VPU (MYRIAD or HDDL) device, use an FP16 model. If you’re using a CPU or GPU, use an FP32 model.
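If you would rather exercise the converted model from your own code, the sketch below loads the IR pair and runs a single inference on the Intel® NCS 2 through the OpenVINO™ Python API. It is a minimal, illustrative example: it assumes a 2020-era release where openvino.inference_engine exposes IECore with read_network, that OpenCV and NumPy are installed, and it reuses the file names from the example above.
from openvino.inference_engine import IECore
import cv2
import numpy as np

ie = IECore()
net = ie.read_network(model="squeezenet1.0.xml", weights="squeezenet1.0.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Load the network onto the Intel NCS 2 (MYRIAD plugin)
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Read and preprocess the image: resize to the network input size, HWC -> CHW, add a batch dimension
n, c, h, w = net.input_info[input_name].input_data.shape
image = cv2.imread("president_reagan-62x62.png")
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

# Run inference and print the five highest-scoring class indices
result = exec_net.infer({input_name: blob})
top5 = np.argsort(result[output_name].flatten())[::-1][:5]
print("Top-5 class indices:", top5)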