Get Started Guide

  • 2021.3
  • 10/21/2021
  • Public Content

Run OpenVINO™ Sample Applications in Docker* Container

Run the Sample Application

  1. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers
  2. Run the command below to start the Docker container as root:
    ./run_interactive_docker.sh amr-ubuntu2004-full-flavour-sdk:<TAG> root
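    If you want to confirm that the container is running, you can list the active containers from another terminal on the host; docker ps is a standard Docker command and is shown here only as an optional check:
    docker ps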
  3. Set up the OpenVINO™ environment:
    source /opt/intel/openvino/bin/setupvars.sh
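    As an optional check, you can verify that the OpenVINO™ variables were exported; INTEL_OPENVINO_DIR is the variable that setupvars.sh conventionally sets and is named here as an assumption:
    echo $INTEL_OPENVINO_DIR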
  4. Download the models and demos for the OpenVINO™ environment:
    1. Download the vehicle models:
      cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/
      ./downloader.py --name vehicle-detection-adas-0002
      ./downloader.py --name vehicle-license-plate-detection-barrier-0106
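      As an optional check, you can list the downloaded models; the downloader places them under the intel subfolder in per-precision directories, which is the same layout the demo commands below rely on:
      ls intel/vehicle-detection-adas-0002 intel/vehicle-license-plate-detection-barrier-0106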
    2. Build the demos:
      cd /opt/intel/openvino/deployment_tools/open_model_zoo/demos
      ./build_demos.sh
      cp /root/omz_demos_build/intel64/Release/object_detection_demo /usr/bin/
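      As an optional check, you can confirm that the demo binary is now available on the path:
      which object_detection_demo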
    3. Download the object detection models:
      cd /home/eiforamr/data_samples
      wget http://download.tensorflow.org/models/object_detection/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
      tar -xf ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
      mv ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03 shared_box_predictor
      rm ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
      chmod 755 -R shared_box_predictor
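      As an optional check, you can confirm that the extracted folder contains the frozen graph and the pipeline configuration that the Model Optimizer step below expects:
      ls shared_box_predictor/frozen_inference_graph.pb shared_box_predictor/pipeline.config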
  5. Run Inference Engine object detection on a pretrained network using the Single Shot MultiBox Detector (SSD) method. Run the detection demo application for CPU:
    object_detection_demo -i /opt/intel/openvino_2021.2.200/deployment_tools/open_model_zoo/models/intel/vehicle-license-plate-detection-barrier-0106/description/vehicle-license-plate-detection-barrier-0106.jpeg -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml -d CPU -at ssd --loop
    You should see an image with one license plate of a car recognized by the Neural Network.
  6. Run the detection demo application for GPU:
    object_detection_demo -i /opt/intel/openvino_2021.2.200/deployment_tools/open_model_zoo/models/intel/vehicle-license-plate-detection-barrier-0106/description/vehicle-license-plate-detection-barrier-0106.jpeg -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml -d GPU -at ssd --loop
    You should see the same image as the previous step, with one license plate of a car recognized by the Neural Network.
    There is a known issue: if you run object_detection_demo with the -d MYRIAD option, a core dump error is thrown when the demo ends.
    If errors occur, remove the following file and try again:
    rm -rf /tmp/mvnc.mutex
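    If you still want to try the MYRIAD device despite this known issue, the command follows the same pattern as the CPU and GPU runs above, with only the -d option changed; note that the MYRIAD plugin generally expects the FP16 variant of the model, so the FP16 path used below is an assumption and may need adjusting:
    object_detection_demo -i /opt/intel/openvino_2021.2.200/deployment_tools/open_model_zoo/models/intel/vehicle-license-plate-detection-barrier-0106/description/vehicle-license-plate-detection-barrier-0106.jpeg -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -d MYRIAD -at ssd --loop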
  7. Use the Model Optimizer to convert a TensorFlow Neural Network model:
    python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --reverse_input_channels --input_model /home/eiforamr/data_samples/shared_box_predictor/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/eiforamr/data_samples/shared_box_predictor/pipeline.config --output_dir /home/eiforamr/data_samples/shared_box_predictor_ie
    Expected output:
    [ SUCCESS ] Generated IR version 10 model.
    [ SUCCESS ] XML file: /data_samples/shared_box_predictor_ie/frozen_inference_graph.xml
    [ SUCCESS ] BIN file: /data_samples/shared_box_predictor_ie/frozen_inference_graph.bin
    [ SUCCESS ] Total execution time: 32.58 seconds.
    [ SUCCESS ] Memory consumed: 1207 MB.
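    As an optional check, you can list the generated Intermediate Representation files before running them in the next step:
    ls /home/eiforamr/data_samples/shared_box_predictor_ie/
    You should see frozen_inference_graph.xml and frozen_inference_graph.bin.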
  8. After the conversion is done, run the converted Neural Network with the Inference Engine for CPU:
    object_detection_demo -i /opt/intel/openvino_2021.2.200/deployment_tools/open_model_zoo/models/intel/vehicle-license-plate-detection-barrier-0106/description/vehicle-license-plate-detection-barrier-0106.jpeg -m /home/eiforamr/data_samples/shared_box_predictor_ie/frozen_inference_graph.xml -d CPU -at ssd --loop
    You should see an image with a car that is recognized by the Neural Network.
    Expected output:
    [ INFO ] InferenceEngine:
             API version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    [ INFO ] Parsing input parameters
    [ INFO ] Reading input
    [ INFO ] Loading Inference Engine
    [ INFO ] Device info:
    [ INFO ]         CPU
             MKLDNNPlugin version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    Loading network files
    [ INFO ] Batch size is forced to 1.
    [ INFO ] Checking that the inputs are as the demo expects
    [ INFO ] Checking that the outputs are as the demo expects
    [ INFO ] Loading model to the device
    To close the application, press CTRL+C here or switch to the output window and press ESC or the q key.
    To switch between min_latency and user_specified modes, press the TAB key in the output window.
  9. Run the Neural Network again with the Inference Engine for integrated GPU:
    object_detection_demo -i /opt/intel/openvino_2021.2.200/deployment_tools/open_model_zoo/models/intel/vehicle-license-plate-detection-barrier-0106/description/vehicle-license-plate-detection-barrier-0106.jpeg -m /home/eiforamr/data_samples/shared_box_predictor_ie/frozen_inference_graph.xml -d GPU -at ssd --loop
    You should see an image with a car that is recognized by the Neural Network.
    Expected output:
    [ INFO ] InferenceEngine:
             API version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    [ INFO ] Parsing input parameters
    [ INFO ] Reading input
    [ INFO ] Loading Inference Engine
    [ INFO ] Device info:
    [ INFO ]         GPU
             clDNNPlugin version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    Loading network files
    [ INFO ] Batch size is forced to 1.
    [ INFO ] Checking that the inputs are as the demo expects
    [ INFO ] Checking that the outputs are as the demo expects
    [ INFO ] Loading model to the device
    To close the application, press CTRL+C here or switch to the output window and press ESC or the q key.
    To switch between min_latency and user_specified modes, press the TAB key in the output window.

Troubleshooting

If the following error is encountered:
$ ./run_interactive_docker.sh amr-ubuntu2004-full-flavour-sdk:<TAG> eiforamr
bash: ./run_interactive_docker.sh: Permission denied
Give executable permission to the script:
$ chmod 755 run_interactive_docker.sh
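If you are not sure whether the script already has execute permission, you can inspect it first:
$ ls -l run_interactive_docker.sh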

Summary and Next Steps

In this tutorial, you learned how to run Inference Engine object detection on a pretrained network using the SSD method and how to run the detection demo application for CPU and GPU. You also learned how to use the Model Optimizer to convert a TensorFlow Neural Network model and, after the conversion, how to run the converted Neural Network with the Inference Engine for CPU and GPU.

Product and Performance Information

1 Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.