Containerize DLDT App: Linux*

Published: 09/04/2019  

Last Updated: 11/11/2019

This tutorial walks through the steps to containerize a sample DLDT application, such as those found in the Open Model Zoo. We will build all the software on a Linux* (Ubuntu* 18.04) host, then copy the built applications and software dependencies into a Linux (Ubuntu 18.04) container.

The following instructions are generalized to apply, in principle, to any DLDT application. We specifically tested this process with the Security Barrier Camera demo app; for certain steps, side instructions and notes guide you through building that demo.

Note: Since DLDT is open source and still in development, newer commits may not work exactly with this tutorial. To guarantee that this process works, run it with the same versions it was tested on:

  • DLDT: 2019 branch, commit c585b530c1b969930df61252057ccea2f72dfc76
  • Open Model Zoo: 2019 branch (should be latest commit but just for reference, commit f70b8743cbef6482391912d466e839783293c19b)

Step 1: Install Dependencies

sudo apt-get update
sudo apt-get install docker.io
sudo apt-get install git
sudo apt-get install python3-pip

Install OpenCV

  1. Download the OpenCV tar file. Version 4.1.2 was used for this tutorial.

Get OpenCV

  2. Extract the tar file.
tar xf opencv-4.1.2.tar.gz
  3. Install OpenCV by running the following commands:
cd opencv-4.1.2
mkdir build &&
cd    build &&
cmake -DCMAKE_INSTALL_PREFIX=/usr      \
      -DCMAKE_BUILD_TYPE=Release       \
      -DENABLE_CXX11=ON                \
      -DBUILD_PERF_TESTS=OFF           \
      -DWITH_XINE=ON                   \
      -DBUILD_TESTS=OFF                \
      -DENABLE_PRECOMPILED_HEADERS=OFF \
      -DCMAKE_SKIP_RPATH=ON            \
      -DBUILD_WITH_DEBUG_INFO=OFF      \
      -Wno-dev  ..                     &&
make

Note: if you are working behind a corporate proxy, then you will have to configure certain settings in order for Docker* to work behind the proxy.
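
For example, on a systemd-based host you can point the Docker daemon at your proxy with a drop-in file (a sketch; proxy.example.com:3128 is a hypothetical address, replace it with your actual proxy):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (hypothetical proxy address -- substitute your own)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After creating the file, run sudo systemctl daemon-reload && sudo systemctl restart docker. During docker build you may also need to forward the proxy into the build with the predefined --build-arg http_proxy= and --build-arg https_proxy= build arguments.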

Step 2: Build DLDT Inference Engine

You can follow these instructions to build the Inference Engine.

Note: Again, you should make sure to use the version of DLDT specifically mentioned above.

1. Clone DLDT and Install Submodules for Inference Engine

git clone https://github.com/opencv/dldt.git
cd dldt
git checkout c585b530c1b969930df61252057ccea2f72dfc76
cd inference-engine
git submodule init
git submodule update --recursive

2. Install Dependencies

./install_dependencies.sh

3. (Optional) Get OpenCL™ Support for GPU

Install OpenCL drivers. This is necessary for the Inference Engine GPU plugin to infer models; if you don't want to use the GPU plugin, use the -DENABLE_CLDNN=OFF CMake build option in step 4 and skip this step.

mkdir neo
cd neo
wget https://github.com/intel/compute-runtime/releases/download/19.04.12237/intel-gmmlib_18.4.1_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/19.04.12237/intel-igc-core_18.50.1270_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/19.04.12237/intel-igc-opencl_18.50.1270_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/19.04.12237/intel-opencl_19.04.12237_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/19.04.12237/intel-ocloc_19.04.12237_amd64.deb
sudo dpkg -i *.deb
cd ..
rm -rf neo

4. Build DLDT Inference Engine

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all)

5. Set Environment Variables

To build the sample DLDT apps from Open Model Zoo, you need to set environment variables that point to DLDT libraries so that the build script can find them when building the demo applications. Remember to replace INSTALL_DIR_DLDT with the path to the dldt repo and OPENCV_DIR with the path to the OpenCV build.

export InferenceEngine_DIR={INSTALL_DIR_DLDT}/inference-engine/build
export OpenCV_DIR={OPENCV_DIR}/build

Step 3: Build the DLDT Application

Now we actually build the sample DLDT apps from Open Model Zoo. This builds all of the demo applications in the model zoo, from which you can pick any to work with.

git clone -b 2019 --single-branch https://github.com/opencv/open_model_zoo.git
cd open_model_zoo/demos
./build_demos.sh

This script saves the demos to the directory $HOME/omz_demos_build/intel64/Release/.

Note: among these you can find the executable security_barrier_camera_demo.
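
As a quick sanity check, you can list the build output directory to confirm the demos were produced (a sketch, assuming the default output path used by build_demos.sh):

```shell
# List the built demo executables; security_barrier_camera_demo
# should appear among them if the build succeeded.
DEMO_DIR="$HOME/omz_demos_build/intel64/Release"
if [ -d "$DEMO_DIR" ]; then
  ls "$DEMO_DIR"
fi
```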

Step 4: Download Machine Learning Models

Included in the Open Model Zoo are pre-trained models that are used to run the demos. You can click into each of the demos from the Open Model Zoo page to find out which models each demo uses. To download the models, you will have to install the Python* modules "requests" and "pyyaml".

sudo -E pip3 install pyyaml requests

For the security barrier camera app: download the vehicle and license plate detection, vehicle attributes recognition, and license plate recognition models in both the FP32 and FP16 versions. Remember to replace INSTALL_DIR_MODELZOO with the path to the model zoo directory.

cd {INSTALL_DIR_MODELZOO}/model_downloader
./downloader.py --name vehicle-license-plate-detection-barrier-0106,vehicle-license-plate-detection-barrier-0106-fp16,vehicle-attributes-recognition-barrier-0039,vehicle-attributes-recognition-barrier-0039-fp16,license-plate-recognition-barrier-0001,license-plate-recognition-barrier-0001-fp16

Step 5: Aggregate Dependencies

  1. Create a new folder (path of your choice) and copy over the demo and ML models.
mkdir build_on_host
cd build_on_host

Remember to replace INSTALL_DIR_DLDT with the path to the dldt repo and INSTALL_DIR_MODELZOO with the path to the model zoo directory.

For security barrier camera demo:

cp $HOME/omz_demos_build/intel64/Release/security_barrier_camera_demo .
mkdir models && cd models
cp {INSTALL_DIR_MODELZOO}/model_downloader/Security/object_detection/barrier/0106/dldt/* .
cp {INSTALL_DIR_MODELZOO}/model_downloader/Security/object_attributes/vehicle/resnet10_update_1/dldt/* .
cp {INSTALL_DIR_MODELZOO}/model_downloader/Security/optical_character_recognition/license_plate/dldt/* .
cd ..
  2. Copy the library dependencies into a newly created folder lib inside the build_on_host folder from the previous step.
mkdir lib
cp -r {INSTALL_DIR_DLDT}/inference-engine/bin/intel64/Release/lib/* lib/.
cp {INSTALL_DIR_DLDT}/inference-engine/temp/opencv_4.1.0_ubuntu16/lib/* lib/.
cp {INSTALL_DIR_DLDT}/inference-engine/temp/tbb/lib/libtbb.so.2 lib/.
cp {INSTALL_DIR_DLDT}/inference-engine/thirdparty/clDNN/common/intel_ocl_icd/6.3/linux/Release/bin/x64/* lib/.
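
Before containerizing, it can help to verify that the bundled libraries actually satisfy the demo's dynamic-link dependencies (a sketch, run from inside build_on_host; ldd output lines containing "not found" indicate a library you still need to copy):

```shell
# Point the dynamic linker at the bundled lib/ folder and report any
# shared libraries the demo binary cannot resolve; no output means the
# bundle is complete.
if [ -x ./security_barrier_camera_demo ]; then
  LD_LIBRARY_PATH="$PWD/lib" ldd ./security_barrier_camera_demo | grep "not found" || true
fi
```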

Step 6: Build and Run Docker Container

The following steps can be generalized, with some tweaking, for any demo application. In this case, the commands are written for the security barrier camera demo.

Dockerfile

In order to build the Docker image, you will need a Dockerfile. Below is a premade Dockerfile for the security barrier camera demo. Save it as a file named Dockerfile in the build_on_host folder.

# Use Ubuntu 18.04
FROM ubuntu:18.04

# Install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends libgtk2.0-0 wget sudo libusb-1.0-0 \
&& apt install -y --no-install-recommends libdc1394-22 libavcodec57 libavformat57 libswscale4 libopenexr22

# Copy files over
COPY . /security-camera/.

# Copy libraries
COPY ./lib /lib

Additional files for Security Camera Demo

Figure 1. car_1.bmp

This file will also need to be placed in build_on_host.

Build the Docker Image

sudo docker build -t security-camera .

After this finishes running (it may take up to an hour), you will have a Docker image that contains all of the software and applications you need. Remember, you can modify the names and files for any respective demo application.

Run the Container

Option 1: No X11 Forwarding

Without X11 forwarding, you will not be able to pass in visual input or receive visual output.

sudo docker run -it security-camera

For security barrier camera demo: you won't be able to pass in a live camera stream as input or receive visual output after running the demo. Instead, you will have to pass in extra options to enable text output when running the demo. See the third step, Run the Demo Inside the Container. Also, there are different flags to pass to docker run depending on which processor (CPU, GPU, or VPU) you run the app on:

  • Running on CPU: no extra options necessary
  • Running on GPU: pass in the --privileged flag and (if you haven't already) install the OpenCL drivers
  • Running on VPU (MYRIAD): pass in --privileged -v /dev:/dev --network=host --device /dev/video0

Option 2: With X11 Forwarding

If you enable X11 forwarding, you will be able to pass in visual input and receive visual output.

xhost +local:root
sudo docker run -it --privileged --env="DISPLAY=:0" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /dev/dri:/dev/dri -v /dev:/dev --network=host --device /dev/video0 security-camera /bin/bash

For security barrier camera demo: you can use this same command to run the demo on CPU, GPU, and VPU (MYRIAD). With this, you can pass in a live camera stream as input and see the popup image or video that shows the result of the processing.

Run the Demo Inside the Container

If you didn't restructure the files and folders in your container, then your executable and models will be wherever you copied them; with the Dockerfile above, the contents of build_on_host land in /security-camera. Find out how to run your specific DLDT app by looking at its documentation in Open Model Zoo. Most likely you will run the executable and pass in parameters, including the paths to your models in the form of XML files.

For security barrier camera demo: there are three main aspects to running the app: the input, the processor, and the model. First, decide whether to run the application on an image, a video, or a live camera stream. Then decide which models to run and, for each model, which processor (CPU, GPU, or VPU) to run it on.

Note: You must use the FP16 version of the models when running on VPU/MYRIAD.

Now navigate to the location of your executable file to run it. The -i flag specifies the input (for an image or video, the path to the file; for a live camera stream, simply pass the value "cam"), the -d flag(s) specify the processor (CPU, GPU, or MYRIAD), and the -m flag(s) specify the model (XML file location). Also note that if you did not enable X11 forwarding, you will have to pass the -r and -no_show flags to enable text output when running the demo app (otherwise the program will complain that it is unable to open the display when it tries to show you visual output). For more information on the available options, run ./security_barrier_camera_demo -h.

An example of a command you can run if you did X11 forwarding:

Note: Remember, you can replace car_1.bmp with any image or video file, or with "cam" for a live camera stream.

cd security-camera
./security_barrier_camera_demo -i car_1.bmp -d CPU -d_va CPU -d_lpr CPU -m models/vehicle-license-plate-detection-barrier-0106.xml -m_va models/vehicle-attributes-recognition-barrier-0039.xml -m_lpr models/license-plate-recognition-barrier-0001.xml

An example of a command you can run if you didn't do X11 forwarding:

cd security-camera
./security_barrier_camera_demo -i car_1.bmp -d CPU -d_va CPU -d_lpr CPU -m models/vehicle-license-plate-detection-barrier-0106.xml -m_va models/vehicle-attributes-recognition-barrier-0039.xml -m_lpr models/license-plate-recognition-barrier-0001.xml -r -no_show

Note: If you want to run the application on an image or video file stored on the host, you can use the -v flag to mount a folder from the host, e.g. -v /home/usrnm/media:/security-camera/media

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.