Containerize DLDT App Build Process: Linux*

Published: 09/06/2019  

Last Updated: 11/11/2019

This tutorial walks through the steps to containerize a sample DLDT application, such as those found in the Open Model Zoo. Specifically, it covers building all of the software inside the container.

The following instructions are generalized so that, in theory, they apply to any DLDT application. We specifically tested this process with the Security Barrier Camera demo app. For certain steps, side instructions and notes guide you through building the Security Barrier Camera demo app.

Note: Since DLDT is open source and still in development, new commits that are rolled out may not work with this tutorial exactly. To guarantee that this process works, you should run it on the same versions that it was tested on:

  • DLDT 2019 branch, commit c585b530c1b969930df61252057ccea2f72dfc76
  • Open Model Zoo: 2019 branch (latest commit at the time of writing; for reference, commit f70b8743cbef6482391912d466e839783293c19b)

Step 1: Install Docker*

sudo apt-get update
sudo apt-get install docker.io

Note: If you are working behind a corporate proxy, then you will have to configure certain settings in order for Docker* to work behind the proxy.
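For example, on a systemd-based distribution such as Ubuntu, you can point the Docker daemon at your proxy with a drop-in unit file (the proxy address below is a placeholder; substitute your own):

```shell
# Create a systemd drop-in directory for the Docker service
sudo mkdir -p /etc/systemd/system/docker.service.d

# Set the proxy for the daemon (replace proxy.example.com:911 with your proxy)
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:911"
Environment="HTTPS_PROXY=http://proxy.example.com:911"
EOF

# Reload systemd and restart Docker so the settings take effect
sudo systemctl daemon-reload
sudo systemctl restart docker
```

You may also want to pass the same proxy variables as build arguments (e.g. --build-arg http_proxy=...) when running docker build so that downloads inside the container succeed.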

Step 2: Write the Dockerfile

The following five steps should all be performed within the Docker* container, i.e. automated with the Dockerfile (much like building these steps into a script).

For this guide, we used Ubuntu* 18.04 as the base image of the Docker* container.

FROM ubuntu:18.04

1. Install Dependencies

This step depends on the application you are trying to containerize and the tools you will need for that process. You will likely need to install packages like cmake and git. For example:

RUN apt-get update && apt-get install -y --no-install-recommends git

Furthermore, if you want to run the app on a GPU (as with the security barrier camera demo), you will need OpenCL™ support, i.e. you must download the Intel OpenCL drivers.

For Security Barrier Camera Demo: we installed git, sudo, wget, cpio, lsb-release, build-essential, and cmake.
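For reference, the dependency-installation step for the Security Barrier Camera demo could look something like the following in the Dockerfile (the package list is taken from the note above; this is a sketch, not the exact line we used):

```dockerfile
# Install build tools and utilities needed to fetch and build DLDT and the demos
RUN apt-get update && apt-get install -y --no-install-recommends \
        git sudo wget cpio lsb-release build-essential cmake && \
    rm -rf /var/lib/apt/lists/*
```

Cleaning up the apt lists at the end keeps the resulting image layer smaller.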

2. Build DLDT Inference Engine

Clone the DLDT repository from GitHub and follow these instructions to build the Inference Engine. The following is what we put in our Dockerfile.

Note: Again, make sure to use the specific version of DLDT mentioned above.

ARG DOWNLOAD_LINK_DLDT=https://github.com/opencv/dldt.git
ARG INSTALL_DIR=$HOME/security_camera
RUN mkdir -p $INSTALL_DIR && cd $INSTALL_DIR && \
    git config --global http.sslVerify "false" && \
    git clone $DOWNLOAD_LINK_DLDT && \
    cd $INSTALL_DIR/dldt/inference-engine && \
    git checkout c585b530c1b969930df61252057ccea2f72dfc76 && \
    git submodule init && \
    git submodule update --recursive && \
    ./install_dependencies.sh
RUN mkdir $INSTALL_DIR/dldt/inference-engine/build && \
    cd $INSTALL_DIR/dldt/inference-engine/build && \
    cmake -DCMAKE_BUILD_TYPE=Release .. && \
    make

We also need to install OpenCV for the demos to build correctly. We used OpenCV 4.1.2; to use a different version, modify the download link (and the version in the paths below) accordingly.

RUN cd $INSTALL_DIR/dldt/inference-engine/temp \
&& wget https://github.com/opencv/opencv/archive/4.1.2/opencv-4.1.2.tar.gz \
&& tar xf opencv-4.1.2.tar.gz \
&& rm opencv-4.1.2.tar.gz \
&& cd opencv-4.1.2 \
&& mkdir build \
&& cd build \
&& cmake -DCMAKE_INSTALL_PREFIX=/usr \
      -DCMAKE_BUILD_TYPE=Release \
      -DENABLE_CXX11=ON \
      -DBUILD_PERF_TESTS=OFF \
      -DWITH_XINE=ON \
      -DBUILD_TESTS=OFF \
      -DENABLE_PRECOMPILED_HEADERS=OFF \
      -DCMAKE_SKIP_RPATH=ON \
      -DBUILD_WITH_DEBUG_INFO=OFF \
      -Wno-dev .. \
&& make

3. Set Environment Variables

To build the sample DLDT apps from Open Model Zoo, you need to set environment variables that point to DLDT libraries so that the build script can find them when building the demo applications. Remember to set the variable INSTALL_DIR in your Dockerfile to the path to the DLDT repo. You may need to change the OpenCV_DIR based on the version of OpenCV being used.

ENV InferenceEngine_DIR=$INSTALL_DIR/dldt/inference-engine/build
ENV OpenCV_DIR=$INSTALL_DIR/dldt/inference-engine/temp/opencv-4.1.2/build
ENV LD_LIBRARY_PATH=$INSTALL_DIR/dldt/inference-engine/bin/intel64/Release/lib/:$INSTALL_DIR/dldt/inference-engine/thirdparty/clDNN/common/intel_ocl_icd/6.3/linux/Release/bin/x64

4. Build Demos

Now we actually build the sample DLDT apps from Open Model Zoo. This will build all the demos, from which you can pick any to work with.

ARG DOWNLOAD_LINK_OPEN_MODEL_ZOO=https://github.com/opencv/open_model_zoo.git
RUN cd $INSTALL_DIR && \
    git clone -b 2019 --single-branch $DOWNLOAD_LINK_OPEN_MODEL_ZOO && \
    export InferenceEngine_DIR=$INSTALL_DIR/dldt/inference-engine/build && \
    cd open_model_zoo/demos && \
    ./build_demos.sh

This script saves the demos to the directory $HOME/omz_demos_build/intel64/Release/.

Note: among these you can find the executable security_barrier_camera_demo.

5. Download Machine Learning Models

Included in the Open Model Zoo are pre-trained models that are used to run the demos. You can click into each demo on the Open Model Zoo page to find out which models it uses. To download the models while building the Docker image, however, you will have to install Python as well as the Python modules requests and pyyaml. For example:

RUN apt-get update && \
    apt-get install -y python3-pip python3-dev && \
    cd /usr/local/bin && \
    ln -s /usr/bin/python3 python && \
    pip3 install --upgrade pip && \
    python3 -m pip install requests pyyaml

Then, you can use the script provided at model_downloader/downloader.py to download the models. Each model you specify will download an XML file and a BIN file.

For the security barrier camera app: we downloaded the vehicle and license plate detection, vehicle attributes recognition, and license plate recognition models, each in both the FP32 and FP16 versions.

RUN cd $INSTALL_DIR/open_model_zoo/model_downloader \
&& ./downloader.py --name vehicle-license-plate-detection-barrier-0106,vehicle-license-plate-detection-barrier-0106-fp16,vehicle-attributes-recognition-barrier-0039,vehicle-attributes-recognition-barrier-0039-fp16,license-plate-recognition-barrier-0001,license-plate-recognition-barrier-0001-fp16

To simplify the steps to run the demo, you can copy the executable and model files into the $INSTALL_DIR directory.

RUN cd $INSTALL_DIR \
&& cp $HOME/omz_demos_build/intel64/Release/security_barrier_camera_demo . \
&& mkdir models && cd models \
&& cp $INSTALL_DIR/open_model_zoo/model_downloader/Security/object_detection/barrier/0106/dldt/* . \
&& cp $INSTALL_DIR/open_model_zoo/model_downloader/Security/object_attributes/vehicle/resnet10_update_1/dldt/* . \
&& cp $INSTALL_DIR/open_model_zoo/model_downloader/Security/optical_character_recognition/license_plate/dldt/* .

Finally, download a test image of a car to run with the demo:

RUN wget https://software.intel.com/sites/default/files/managed/36/68/car_1.bmp -P $INSTALL_DIR 

Step 3: Build and Run the Docker Container

The following three steps can be generalized, with some tweaking, for any demo application.

Build the Docker Image

docker build -t security-camera .

After this finishes running (it may take up to an hour), you will have a Docker image that contains all of the software and applications you need.

Run the Container

Option 1: No X11 Forwarding

Without X11 forwarding, you will not be able to pass in visual input or receive visual output.

docker run -it security-camera

For the security barrier camera demo: you won't be able to pass in a live camera stream as input or receive visual output after running the demo. Instead, you will have to pass extra options to enable text output when running the demo; see the third step, Run the Demo Inside the Container. Also, different flags must be passed to run the app on the different processors (CPU, GPU, and VPU):

  • Running on CPU: no extra options necessary
  • Running on GPU: pass the --privileged flag and (if you haven't already) install the Intel OpenCL drivers.
  • Running on VPU (MYRIAD): pass --privileged -v /dev:/dev --network=host --device /dev/video0
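Put together, the launch commands for each target device might look like the following (the flags are taken from the list above; the device paths assume default Linux device nodes on your host):

```shell
# CPU: no extra flags needed
docker run -it security-camera

# GPU: privileged mode so the container can reach the integrated GPU via OpenCL
docker run -it --privileged security-camera

# VPU (MYRIAD): expose host devices and the host network to reach the USB stick
docker run -it --privileged -v /dev:/dev --network=host --device /dev/video0 security-camera
```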

Option 2: With X11 Forwarding

If you enable X11 forwarding, you will be able to pass in visual input and receive visual output.

xhost +local:root
sudo docker run -it --privileged --env="DISPLAY=:0" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /dev/dri:/dev/dri -v /dev:/dev --network=host --device /dev/video0 security-camera /bin/bash

For the security barrier camera demo: you can use this same command to run the demo on CPU, GPU, and VPU (MYRIAD). With this, you can pass in a live camera stream as input and see the pop-up image or video that shows the result of the processing.

Run the Demo Inside the Container

If you didn't restructure the files and folders in your container, your executable should be at $HOME/omz_demos_build/intel64/Release/ and your models in subfolders of {INSTALL_DIR_MODELZOO}/model_downloader/. Find out how to run your specific DLDT app by looking at its documentation in Open Model Zoo. Most likely you will run the executable from the shell, passing in parameters that include the paths to your models in the form of XML files.

For the security barrier camera demo: there are three main aspects to running the app: the input, the processor, and the models. First, decide whether to run the application on an image, a video, or a live camera stream. Then decide which models to run and, for each model, which processor (CPU, GPU, or VPU) to run it on.

Note: You must use the FP16 version of the models when running on VPU/MYRIAD.

Now navigate to the location of your executable file to run it. The -i flag specifies the input (for an image or video, the path to the file; for a live camera stream, simply pass the value "cam"), the -d flag(s) specify the processor (CPU, GPU, or MYRIAD), and the -m flag(s) specify the model (XML file location). Note that if you did not enable X11 forwarding, you will have to pass the -r and -no_show flags to enable text output when running the demo app (otherwise the program will complain that it is unable to open the display when trying to show you visual results). For more information on the available options, run ./security_barrier_camera_demo -h.

An example of a command you can run if you did X11 forwarding:

cd security-camera
./security_barrier_camera_demo -i car_1.bmp -d CPU -d_va CPU -d_lpr CPU -m models/vehicle-license-plate-detection-barrier-0106.xml -m_va models/vehicle-attributes-recognition-barrier-0039.xml -m_lpr models/license-plate-recognition-barrier-0001.xml

An example of a command you can run if you didn't do X11 forwarding:

cd security-camera
./security_barrier_camera_demo -i car_1.bmp -d CPU -d_va CPU -d_lpr CPU -m models/vehicle-license-plate-detection-barrier-0106.xml -m_va models/vehicle-attributes-recognition-barrier-0039.xml -m_lpr models/license-plate-recognition-barrier-0001.xml -r -no_show
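If you are instead targeting the MYRIAD VPU, remember the FP16 note above. A hypothetical invocation might look like the following (the -fp16 file names are assumptions based on the model names passed to the downloader earlier; check the actual file names in your models directory):

```shell
cd security-camera
./security_barrier_camera_demo -i car_1.bmp \
    -d MYRIAD -d_va MYRIAD -d_lpr MYRIAD \
    -m models/vehicle-license-plate-detection-barrier-0106-fp16.xml \
    -m_va models/vehicle-attributes-recognition-barrier-0039-fp16.xml \
    -m_lpr models/license-plate-recognition-barrier-0001-fp16.xml
```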

Note: If you want to run the application on an image or video file stored on the host, you can use the -v flag to mount a folder from the host, e.g. -v /home/usrnm/media:/security_camera/media
