Edge Video Analytics Microservice

Version: 2022.1   Published: 12/14/2021  

Last Updated: 10/24/2022

Overview

Video Analytics refers to transforming video streams into insights through video processing, inference, and analytics operations. It is used in a wide range of business domains such as healthcare, retail, entertainment and industrial. The algorithms used for video analytics perform object detection, classification, identification, counting, and tracking on the input video stream.

This use case features interoperable containerized microservices for developing and deploying optimized video analytics pipelines built using Intel® DL Streamer as the inferencing backend. The pre-built container images provided by the package allow developers to replace the deep learning models and pipelines used in the container with their own. The microservices can be deployed independently or with the Edge Insights for Industrial (EII) software stack to perform video analytics on edge devices. 

Developers can save development and deployment time by using the pre-built Docker* image and by simply configuring the video analytics pipelines in the well-known JSON format.

Select Configure & Download to download the microservice and the software listed below.

Configure & Download


  • Time to Complete: 45 minutes
  • Programming Language: Python* 3
  • Available Software:
    • Edge Video Analytics Microservice
    • Deep Learning Streamer (Intel® DL Streamer) Pipeline Server
    • Intel® Distribution of OpenVINO™ toolkit 2021 Release

 

Target System Requirements

  • One of the following processors:
    • 6th to 11th generation Intel® Core™ processors
    • 1st to 3rd generation Intel® Xeon® Scalable processors
    • Intel Atom® processor with Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) 
  • At least 8 GB RAM.
  • At least 64 GB hard drive.
  • An Internet connection.
  • Ubuntu* 20.04 LTS Kernel 5.4†

 

Refer to OpenVINO™ Toolkit System Requirements for supported GPU and VPU processors.

† Use Kernel 5.8 for 11th generation Intel® Core™ processors.


How It Works 

Edge Video Analytics Microservice

This is a Python* microservice used for deploying optimized video analytics pipelines and is provided as a Docker image in the package. The pipelines run by the microservice are defined in GStreamer* and use Intel® DL Streamer Pipeline Server for inferencing; the Docker image uses Intel® DL Streamer Pipeline Server as a library. The microservice can be started in one of two modes: Edge Insights for Industrial (EII) mode, to deploy with the EII software stack, or Edge Video Analytics (EVA) mode, to deploy independently of the EII stack. 

Edge Video Analytics (EVA) Mode: Provides the same RESTful APIs as Intel® DL Streamer Pipeline Server to discover, start, stop, customize, and monitor pipeline execution, and supports MQTT and Kafka message brokers for publishing the inference results. For the REST API definition, refer to the RESTful Microservice interface.

Edge Insights for Industrial (EII) Mode: Supports EII Configuration Manager for pipeline execution and EII Message Bus for publishing of inference results, making it compatible with Edge Insights for Industrial software stack.
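In EVA mode, a pipeline request is a small JSON document posted to the REST API. The sketch below builds such a body in Python; the field names mirror the curl examples later in this guide, while the helper function and the sample host value are our own illustrative assumptions:

```python
import json

# Sketch: build the JSON body for a POST to
# http://<host>:8080/pipelines/<name>/<version> in EVA mode.
# build_request() is a hypothetical helper, not part of the microservice API.
def build_request(source_uri, mqtt_host, topic="vaserving", rtsp_path="vasserving"):
    return {
        "source": {"uri": source_uri, "type": "uri"},
        "destination": {
            # Inference metadata goes to the MQTT broker...
            "metadata": {"type": "mqtt", "host": mqtt_host, "topic": topic},
            # ...and annotated frames are served over RTSP.
            "frame": {"type": "rtsp", "path": rtsp_path},
        },
    }

body = build_request(
    "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true",
    "192.168.1.10:1883",  # example system IP; replace with your own
)
print(json.dumps(body, indent=2))
```

The same structure is used verbatim as the `--data-raw` payload of the curl requests in the Get Started steps below.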

 

Architecture Diagram

Figure 1: Architecture Diagram


Edge Video Analytics Microservice Resources

The following configuration files, scripts and tools used with the Edge Video Analytics Microservice are included in the Edge Video Analytics Resources zip file:     

  • The docker-compose.yml file configures, creates, and starts the containers for the Edge Video Analytics Microservice and the MQTT broker.
  • The pipelines and model_list folders contain the pipeline and model definition files included in the Edge Video Analytics Microservice Docker image. These files can be modified and used with the Docker image by volume mounting.
  • The tools/model_downloader tool downloads models from openvinotoolkit/open_model_zoo.
  • The mosquitto folder includes a file for configuring the MQTT broker required to view the inference results.

 

Container Engines and Orchestration

The package uses Docker and Docker Compose for automated container management. 

  • Docker is a container framework widely used in enterprise environments. It allows applications and their dependencies to be packaged together and run as a self-contained unit.
  • Docker Compose is a tool for defining and running multi-container Docker applications. 
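For reference, the shape of the compose file used here can be sketched roughly as below. This is illustrative only (the image tag, port mappings, and mount paths are assumptions); the docker-compose.yml shipped in the Edge Video Analytics Resources package is the authoritative version:

```yaml
# Illustrative sketch -- use the docker-compose.yml from the
# Edge Video Analytics Resources package for real deployments.
version: "3"
services:
  edge_video_analytics_microservice:
    image: intel/edge_video_analytics_microservice:<version>
    ports:
      - "8080:8080"   # REST API
      - "8554:8554"   # RTSP output
    volumes:
      # Volume-mount local pipeline and model definitions
      - ./pipelines:/home/pipeline-server/pipelines
      - ./models:/home/pipeline-server/models
  mqtt_broker:
    image: eclipse-mosquitto:latest
    ports:
      - "1883:1883"   # MQTT broker for inference metadata
```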

Get Started    

Step 1: Install the Microservice   

Select Configure & Download to download the microservice and then follow the steps below to install it.  

Configure & Download

  1. Open a new terminal, go to the downloaded folder and unzip the downloaded package:  
    unzip video_analytics.zip

     

  2.  Go to the video_analytics/ directory:
    cd video_analytics
      
  3. Change permission of the executable edgesoftware file: 
    chmod 755 edgesoftware
     
  4. Run the command below to install the microservice: 
    sudo ./edgesoftware install

     

  5. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download.

    Screenshot of Product Key

    Figure 2: Product Key

     

  6. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module.

    Screenshot of Installation Complete

    Figure 3: Installation Complete



     

  7. To verify the installation, list the Docker images that are downloaded using the following command:

    sudo docker images

     

  8. If the installation was successful, you will see results similar to:     

    Screenshot of Installation Successful

    Figure 4: Installation Successful



     

NOTE: 
Installation failure logs are available at /var/log/esb-cli/video_analytics_<version>, where <version> is the Edge Video Analytics Microservice version downloaded.
The Edge Video Analytics Microservice Docker image can also be pulled directly from Docker Hub using the following command:
docker pull intel/edge_video_analytics_microservice:<version> 

 

Step 2: Run the Edge Video Analytics Microservice

In this step, you will run the Edge Video Analytics Microservice with the sample object detection, object classification, object tracking, face detection, emotion recognition, action recognition, and ambient audio detection pipelines that are already included in the Docker image.

  1. Add file permissions: 
    • Go to the working directory Edge_Video_Analytics_Resources with the command:
      cd video_analytics/Video_Analytics_<version>/Edge_Video_Analytics_Resources/

       

    • Add file permissions to run the scripts with the commands: 

      sudo chmod +x docker/run.sh
      sudo chmod +x tools/model_downloader/model_downloader.sh 

  2. Download models:

    • Download the required models from Open Model Zoo in the working directory by running the command:
      sudo ./tools/model_downloader/model_downloader.sh --model-list models_list/models.list.yml

      The list of models can be viewed by opening the models_list/models.list.yml file.

    • Check that the download is successful by browsing to the models directory:

      Screenshot of Models Directory

      Figure 5: Models Directory


       

  3. Run the microservices:   

    • Start the application with the command: sudo docker-compose up 

    • Check for a success message in the terminal. 

      Screenshot of Success Message

      Figure 6: Success Message

       

    • Open a new terminal and check that the containers are running using the following command:

      sudo docker ps --format 'table {{.Image}}\t{{.Status}}\t{{.Names}}' 


      The command output should show the two Docker containers edge_video_analytics_microservice and eclipse-mosquitto with the status Up.

      Screenshot of Containers Running

      Figure 7: Containers Running


       

  4. Get the list of models and pipelines available in the container using REST request: 

    • Open a new terminal.

    • Run the following command to get the list of models available in the microservice:

      curl --location -X GET 'http://localhost:8080/models'
    • Run the following command to get the list of pipelines available in the microservice. Pipelines are displayed as a name/version tuple. The name reflects the action and version supplies more details of that action.

      curl --location -X GET 'http://localhost:8080/pipelines' 


      NOTE: If these steps fail due to proxy issues, refer to the Troubleshooting section.

  5. Send a REST request to run the object detection pipeline: 

    • Create the REST request by replacing <SYSTEM_IP_ADDRESS> in the curl command below with the IP address of your system. (If you are in the PRC region, refer to the second option below.)
      Refer to Customizing Video Analytics Pipeline Requests for understanding the POST REST request format.

      curl --location -X POST 'http://localhost:8080/pipelines/object_detection/person_vehicle_bike' \
      --header 'Content-Type: application/json' \
      --data-raw '{
        "source": {
            "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true",
            "type": "uri"
        },
        "destination": {
            "metadata": {
              "type": "mqtt",
              "host": "<SYSTEM_IP_ADDRESS>:1883",
              "topic": "vaserving"
            },
          "frame": {
            "type": "rtsp",
            "path": "vasserving"
          }
        }
      }'
      

       

    • For PRC users, use the local file source shown below and replace <SYSTEM_IP_ADDRESS> in the curl command below with the IP address of your system.
      Refer to Customizing Video Analytics Pipeline Requests for understanding the POST REST request format.

      curl --location -X POST 'http://localhost:8080/pipelines/object_detection/person_vehicle_bike' \
      --header 'Content-Type: application/json' \
      --data-raw '{
        "source": {
            "uri": "file:///home/pipeline-server/resources/classroom.mp4",
            "type": "uri"
        },
        "destination": {
            "metadata": {
              "type": "mqtt",
              "host": "<SYSTEM_IP_ADDRESS>:1883",
              "topic": "vaserving"
            },
          "frame": {
            "type": "rtsp",
            "path": "vasserving"
          }
        }
      }' 


      NOTE: For versions lower than 0.7.2, use the following for uri:   file:///app/resources/classroom.mp4
       

    • Open a new terminal and run the modified curl command. The REST request will return a pipeline instance ID (for example, a6d67224eacc11ec9f360242c0a86003), which can be used to query the state of the pipeline. Each subsequent request returns a new instance ID.

  6. Check the results:

    • The original terminal window that showed the microservice logs now shows the logs of the newly sent request, including the pipeline instance ID, state, and the RTSP stream link. The pipeline state will progress through Queued, Running, and Completed.
       

      Screenshot of Check Results

      Figure 8: Check Results


       

    • Check the state of the pipeline by sending GET requests using the pipeline instance ID: 

      curl --location -X GET 'http://localhost:8080/pipelines/object_detection/person_vehicle_bike/<Instance ID>/status'

      Where <Instance ID> is the pipeline instance.
      The response received from the GET request is similar to the following: 

      {
        "avg_fps": 27.014388149183596,
        "elapsed_time": 2.2934277057647705,
        "id": "a6d67224eacc11ec9f360242c0a86003",
        "start_time": 1641468940.5196402,
        "state": "RUNNING"
      } 


      NOTE: If the pipeline has already ended, then the state will be shown as "COMPLETED".
       

    • Open the output RTSP stream in VLC player to view the inferencing results overlaid on the input video after replacing <SYSTEM_IP_ADDRESS> in the following URL with your system IP address: rtsp://<SYSTEM_IP_ADDRESS>:8554/vasserving

      Note that the pipeline should be in the RUNNING state to view the output video. If the pipeline has already ended, start it again with the curl command and then view the video. 

      Screenshot of Inferencing Results

      Figure 9: Inferencing Results


       

  7. Alternate method to send REST requests:
    The REST request can also be sent through Postman application, using the same curl commands as in the previous steps. 
    To install the Postman application, perform the steps below:

    • Go to Postman Downloads

    • Extract the installation file to a desired location.  

    • After extraction, double-click the application and sign in with your existing account or create a new account to sign in. 
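The status responses returned by the GET requests in step 6 are plain JSON. As a rough sketch, the snippet below parses the sample payload shown above; the `is_done` helper and the set of terminal state names are our own assumptions for illustration:

```python
import json

# Sample payload mirroring the GET .../status response shown in step 6.
status_json = '''{
  "avg_fps": 27.014388149183596,
  "elapsed_time": 2.2934277057647705,
  "id": "a6d67224eacc11ec9f360242c0a86003",
  "start_time": 1641468940.5196402,
  "state": "RUNNING"
}'''

def is_done(status):
    # Hypothetical helper: treat these as terminal states
    # (state names assumed, not taken from the microservice docs).
    return status["state"] in ("COMPLETED", "ERROR", "ABORTED")

status = json.loads(status_json)
print(status["state"], round(status["avg_fps"], 1))
```

A polling client would simply repeat the GET request until `is_done` returns true, then open the RTSP stream or inspect the MQTT topic.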

 

Optional: Test with Video Analytics Test Module

After you have completed the installation, you can test your setup by running the Video Analytics test module. 

For details, refer to Intel® Edge Software Device Qualification (Intel® ESDQ) for Video Analytics

 


Tutorials

Tutorial 1: Run Object Detection Pipeline on GPU

This tutorial demonstrates how to change the inference device to run inference on the GPU. 
You will run the same object_detection pipeline again, but this time you will use the integrated GPU for detection inference by setting the detection-device parameter. 

  1. Complete all the steps in the Get Started section above.
  2. On your target device, check the output of the following command and get the device group:
    stat -c '%g' /dev/dri/render*
  3. Add the device group reported by the previous command to the docker-compose.yml file (109 below is an example): 
        group_add:
          - 109

     
  4. Create the REST request by replacing the <SYSTEM_IP_ADDRESS> in the curl command below with the IP address of your system.
    Please note the parameters section in the REST body.
     
    curl --location -X POST 'http://localhost:8080/pipelines/object_detection/person_vehicle_bike' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "source": {
          "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true",
          "type": "uri"
      },
      "destination": {
          "metadata": {
            "type": "mqtt",
            "host": "<SYSTEM_IP_ADDRESS>:1883",
            "topic": "vaserving"
          },
        "frame": {
          "type": "rtsp",
          "path": "vasserving"
        }
      },
        "parameters": {
            "detection-device": "GPU"
       }
    }' 

    NOTE: The GPU inference plug-in dynamically builds OpenCL kernels when it is first loaded resulting in a ~30s delay before inference results are produced.
     

  5. Open a new terminal and run the curl command. The REST request will return a pipeline instance ID (for example, a6d67224eacc11ec9f360242c0a86003) which can be used to query the state of the pipeline. 

  6. Check the results as explained in the Run the microservices step above. 
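The only difference from the CPU request in Get Started is the added parameters object. A minimal sketch of that delta (the dict literal mirrors the curl body above; building it in Python is our illustration, not part of the microservice):

```python
import json

# Base request, identical to the CPU run in the Get Started section.
request = {
    "source": {
        "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true",
        "type": "uri",
    },
    "destination": {
        "metadata": {"type": "mqtt", "host": "<SYSTEM_IP_ADDRESS>:1883", "topic": "vaserving"},
        "frame": {"type": "rtsp", "path": "vasserving"},
    },
}

# Tutorial 1's change: select the detection inference device.
request["parameters"] = {"detection-device": "GPU"}
print(json.dumps(request["parameters"]))
```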

 

Tutorial 2: Run Edge Video Analytics Microservice with USB Camera Input

In this tutorial, you will change the input source of the object detection pipeline to a USB camera.

  1. Complete all the steps in the Get Started section above.
  2. On Ubuntu, list all available video devices by running the following command: ls /dev/video*
  3. Create the curl request to use the video device as a source. For example, if the output of the command is /dev/video0, then make changes to the curl command source as below:
     
    curl --location -X POST 'http://localhost:8080/pipelines/object_detection/person_vehicle_bike' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "source": {
          "device": "/dev/video0",
          "type": "webcam"
      },
      "destination": {
          "metadata": {
            "type": "mqtt",
            "host": "<SYSTEM_IP_ADDRESS>:1883",
            "topic": "vaserving"
          },
        "frame": {
          "type": "rtsp",
          "path": "vasserving"
        }
      }
    }' 

     

  4. Open a new terminal and run the curl command. The REST request will return a pipeline instance ID (for example, a6d67224eacc11ec9f360242c0a86003) which can be used to query the state of the pipeline.

  5. Check the results as explained in the Run the microservices step above. 
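Comparing this tutorial with Get Started, only the source object of the request changes; the destination block stays the same. A sketch of the two source shapes (the `make_source` helper is ours, for illustration only):

```python
# File/URI source, as used in the Get Started section.
uri_source = {
    "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true",
    "type": "uri",
}

# USB camera source, as used in this tutorial.
webcam_source = {
    "device": "/dev/video0",  # first device reported by `ls /dev/video*`
    "type": "webcam",
}

def make_source(device=None, uri=None):
    # Hypothetical helper choosing between the two source shapes.
    if device is not None:
        return {"device": device, "type": "webcam"}
    return {"uri": uri, "type": "uri"}

print(make_source(device="/dev/video0"))
```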

 

Tutorial 3: Run New Pipeline with New Model in Edge Video Analytics Microservice

In this tutorial, you will create a new person_detection pipeline, which will use the model person-detection-retail-0013. The pipeline and model folders will be volume mounted into the Edge Video Analytics Microservice container without needing to rebuild the Docker image. You can follow the same steps to create any new pipeline.

  1. Complete all the steps in the Get Started section above.
  2. Download a new model. 
    To download a new model from the Open Model Zoo, add the model details to the models.list.yml file in Edge_Video_Analytics_Resources/models_list.
    For person-detection-retail-0013, add the entry shown below to the end of models.list.yml.
     
    - model: person-detection-retail-0013
      alias: object_detection
      version: person_detection
      precision: [FP16,FP32] 


    Then, run the below commands in a new terminal to download the models.
     

    cd video_analytics/Video_Analytics_<version>/Edge_Video_Analytics_Resources/
    # Where <version> is the Package version downloaded
    
    sudo ./tools/model_downloader/model_downloader.sh --model-list models_list/models.list.yml
    


    Once the command execution is complete, the downloaded model person-detection-retail-0013 should be available under the models/object_detection directory.
    If you are not using models from the Open Model Zoo, copy your models directly into the models directory.

    Screenshot of Download New Model

    Figure 10: Download New Model

     

  3. Create a new pipeline
    You will create a new pipeline under pipelines/object_detection. Since you are basing it on the existing person_vehicle_bike pipeline, make a copy of the person_vehicle_bike folder, rename it to person_detection and then update the pipeline.json to use the new model.
    In the same terminal (from path Edge_Video_Analytics_Resources), execute the command:

    sudo cp -r pipelines/object_detection/person_vehicle_bike/ pipelines/object_detection/person_detection


    Edit the template and description sections of the pipeline.json file in the newly copied pipelines/object_detection/person_detection folder. The lines below set the model path to models/object_detection/person_detection so that person-detection-retail-0013 is used for object detection.

    Define the template as: 

    	"template": [
    		"uridecodebin name=source",
    		" ! gvadetect model={models[object_detection][person_detection][network]} name=detection",
    		" ! gvametaconvert name=metaconvert ! gvametapublish name=destination",
    		" ! appsink name=appsink"
    	], 


    Define the description as: 

    "description": "Person Detection based on person-detection-retail-0013",


    Please refer to Defining Media Analytics Pipelines for understanding the pipeline template and defining your own pipeline.

  4. Run Microservice with added Model and Pipeline
    Start the application with the command:

    sudo docker-compose up 



    This step will volume mount the models and pipelines directories. You can check that the containers are running using the sudo docker ps command in a separate terminal.
     

  5. Check the new pipeline and model
    Open a new terminal and run the following command to get the list of models available:

    curl --location -X GET 'http://localhost:8080/models'


    Run the following command to get the list of pipelines available. Pipelines are displayed as a name/version tuple. The name reflects the action and version supplies more details of that action.

    curl --location -X GET 'http://localhost:8080/pipelines'

    You should see the newly added pipeline and model in the lists.
     

  6. Run pipeline for object_detection/person_detection
    Open a new terminal and run the below curl command for the object_detection/person_detection pipeline:

    curl --location -X POST 'http://localhost:8080/pipelines/object_detection/person_detection' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "source": {
          "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/store-aisle-detection.mp4?raw=true",
          "type": "uri"
      },
      "destination": {
          "metadata": {
            "type": "mqtt",
            "host": "<SYSTEM_IP_ADDRESS>:1883",
            "topic": "vaserving"
          },
        "frame": {
          "type": "rtsp",
          "path": "vasserving"
        }
      }
    }' 

    Where <SYSTEM_IP_ADDRESS> is the IP address of the system.

    The above curl command returns the pipeline instance ID. Use this ID to get the status by replacing <pipeline instance> in the below command:

    curl --location --request GET 'http://localhost:8080/pipelines/object_detection/person_detection/<pipeline instance>/status'  

     

    Screenshot of Run Pipeline for Person Detection

    Figure 11: Run Pipeline for Person Detection



    The microservice logs will show that the object_detection/person_detection pipeline instance was created, the pipeline state changed to RUNNING and ended.
     

    Pipeline Logs

    Figure 12: Pipeline Logs
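The {models[object_detection][person_detection][network]} placeholder in the template resolves to a model file path at load time: the alias and version from models.list.yml become lookup keys, and network points at the model file. A rough stand-in for that substitution, using Python's own format-field indexing (the dictionary contents and file path are illustrative assumptions):

```python
# Minimal stand-in for resolving the template placeholder
# {models[object_detection][person_detection][network]}.
# The path below is illustrative; the real value comes from the
# downloaded model files under models/object_detection/person_detection.
models = {
    "object_detection": {
        "person_detection": {
            "network": "models/object_detection/person_detection/FP32/person-detection-retail-0013.xml"
        }
    }
}

template = "uridecodebin name=source ! gvadetect model={models[object_detection][person_detection][network]} name=detection"
resolved = template.format(models=models)
print(resolved)
```

This is why the alias/version pair chosen in models.list.yml (object_detection/person_detection) must match the keys used in the pipeline template.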

     

Tutorial 4: Install Edge Video Analytics Helm Chart 

Note: This tutorial is supported from release 2022.1 onwards. 

Note: This tutorial assumes you have a Kubernetes cluster available. 

In this tutorial, you will deploy the Edge Video Analytics Microservice on a Kubernetes cluster using Helm charts. The Helm chart packages the Edge Video Analytics Microservice and an MQTT broker for a sample deployment. To download the Helm chart, use the customized download option. 

  1. Complete all the steps in the Get Started Guide. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module. 
    Screenshot of installing the helm chart. Each module is listed with the "Success" following them.
    Figure 13: Successful Installation
  2. Copy the models, pipelines and resources folder to /opt/intel/evam/ on the Kubernetes worker nodes. 
  3. Once the above prerequisites are completed, install the chart as follows: helm install evam ./evam-chart-0.7.2 
    Screenshot of installing the helm chart with command.
    Figure 14: Chart Installation
  4. Get the IP addresses of the pods running in the cluster: 
    kubectl get pods -o wide
Screenshot of retrieving the IP addresses with the command.
Figure 15: Retrieving IP Address

 

  5. Update the pipeline request command to trigger a pipeline. Replace EVAM_POD_IP, MQTT_POD_IP with the IP addresses above.  
    curl --location -X POST 'http://<EVAM_POD_IP>:8080/pipelines/object_detection/person_vehicle_bike' \ 
    --header 'Content-Type: application/json' \ 
    --data-raw '{ 
      "source": { 
          "uri": "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4?raw=true", 
          "type": "uri" 
      }, 
      "destination": { 
          "metadata": { 
            "type": "mqtt", 
            "host": "<MQTT_POD_IP>:1883", 
            "topic": "vaserving" 
          } 
      } 
    }'  

    Note: For PRC users, change the pipeline request URI to file:///app/resources/classroom.mp4.

  6. Get the inference output on the MQTT subscriber:  
    docker run -it --entrypoint mosquitto_sub eclipse-mosquitto:latest --topic vaserving -p 1883 -h <MQTT_POD_IP>
    Screenshot of getting the inference output with the command.
    Figure 16: Inference Output
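Note that the Helm pipeline request above publishes metadata only; it omits the RTSP "frame" destination used in the earlier tutorials. A sketch of deriving that body from the full Get Started destination (the filtering helper is our own illustration):

```python
# Full destination, as in the Get Started request.
full_destination = {
    "metadata": {"type": "mqtt", "host": "<MQTT_POD_IP>:1883", "topic": "vaserving"},
    "frame": {"type": "rtsp", "path": "vasserving"},
}

# The Helm example keeps only the MQTT metadata sink.
k8s_destination = {k: v for k, v in full_destination.items() if k != "frame"}
print(k8s_destination)
```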

     

 


Summary and Next Steps 

By following this guide, you learned how to perform video analytics for your use case using the Edge Video Analytics Microservice. 


Learn More 

To continue learning, see the following guides and software resources: 

 


Troubleshooting    

  • Make sure you have an active internet connection during the full installation. If you lose Internet connectivity at any time, the installation might fail.
  • If installation fails while downloading and extracting the images from the Intel registry, re-run the installation command sudo ./edgesoftware install and provide the Product Key again.
    Screenshot of Troubleshooting: Product Key
     
  • Make sure you are using a fresh Ubuntu* installation. Earlier software, especially Docker and Docker Compose, can cause issues.
  • In a proxy environment, if the proxy is set only for a single user (i.e., in the .bashrc file), some component installations may fail or the installation may hang. Make sure you have set the proxy in /etc/environment.

  • If your system is in a proxy network, add the proxy details in the environment section in the docker-compose.yml file.

    HTTP_PROXY=http://<IP:port>/
    HTTPS_PROXY=http://<IP:port>/
    NO_PROXY=localhost,127.0.0.1 


    Enter the proxy details in docker-compose.yml as shown below:

    Screenshot of Troubleshooting: Proxy Settings
     

  • If proxy details are missing, the container fails to fetch the source video file required for running the pipelines and to install the required packages inside the container.
    Screenshot of Troubleshooting: Proxy Missing


    Run the command sudo -E docker-compose up (Refer to Step 8 of Install Edge Video Analytics Microservice).

  • In custom mode, Helm CLI installation might fail if the target device is behind a proxy. To fix it, update the snap proxy details. 

    sudo snap set system proxy.http=http://<IP:port>/ 
    sudo snap set system proxy.https=http://<IP:port>/ 

     

  • Stop or remove the containers if you see conflict errors. 
    To remove edge_video_analytics_microservice and mqtt_broker, use either of the options below: 

    • Either, run the following commands:

      sudo docker stop edge_video_analytics_microservice mqtt_broker
      sudo docker rm edge_video_analytics_microservice mqtt_broker

       

    • Or run sudo docker-compose down from the path: video_analytics/Video_Analytics_<VERSION>/Edge_Video_Analytics_Resources

      Screenshot of Troubleshooting: Remove Containers

 

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.  

Product and Performance Information

¹ Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.