Multi-Camera Detection of Social Distancing Reference Implementation

Published: 05/04/2022  

Last Updated: 05/24/2022

Overview

Create an end-to-end video analytics pipeline to detect people and calculate the social distance between people from multiple input video feeds. Multi-Camera Detection of Social Distancing demonstrates how to use the Video Analytics Microservice in an application and store the data to InfluxDB*. This data can be visualized on a Grafana* dashboard.

Select Configure & Download to download the reference implementation and the software listed below.  

Configure & Download

Screenshot of social distancing


Time to Complete: 45 minutes

Programming Language: Python* 3.6

Available Software:

  • Video Analytics Microservice
  • Video Analytics Serving
  • Intel® Distribution of OpenVINO™ toolkit 2021 Release

Target System Requirements

  • One of the following processors:
    • 6th to 11th Generation Intel® Core™ processors
    • 1st to 3rd generation of Intel® Xeon® Scalable processors
    • Intel Atom® processor with Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
  • At least 8 GB RAM.
  • At least 64 GB hard drive.
  • An Internet connection.
  • One of the following operating systems:
    • Ubuntu* 18.04.3 LTS with Kernel 5.0†
    • Ubuntu* 20.04 LTS with Kernel 5.4†

 

Refer to OpenVINO™ Toolkit System Requirements for supported GPU and VPU processors.

† Use Kernel 5.8 for 11th generation Intel® Core™ processors.

 

How It Works

This reference implementation demonstrates how to use the Video Analytics Microservice in an application to create a social distancing detection use case. It consists of the pipeline and model config files, which are volume-mounted into the Video Analytics Microservice Docker container, and a docker-compose.yml file for starting the containers. The results of the pipeline execution are routed to the MQTT broker and can be viewed from there. The inference results are also consumed by the vision algorithms in the mcss-eva Docker image for population density detection, social distance calculation, and so on.

Container Engines and Orchestration

The package uses Docker* and Docker Compose for automated container management.

  • Docker is a container framework widely used in enterprise environments. It allows applications and their dependencies to be packaged together and run as a self-contained unit.
  • Docker Compose is a tool for defining and running multi-container Docker applications.

 

End-to-End Video Analytics

A multi-camera surveillance solution demonstrates an end-to-end video analytics application that detects people and calculates the social distance between them from multiple input video feeds. It uses the Video Analytics Microservice to ingest input videos and perform deep learning inference. Based on the inference output data published on the MQTT topic, it then calculates the social distance violations and serves the inference results to a webserver.

The steps below are performed by the application:

  • The video files for inference must be present in the resources folder.
  • When the Video Analytics Microservice starts, the models, pipelines, and resources folders are mounted in the microservice. REST APIs are available to interact with the microservice.
  • The MCSS microservice creates and sends a POST request to the Video Analytics Microservice to start the video analytics pipeline.
  • On receiving the POST request, the edge video analytics pipeline starts an instance of the requested pipeline and publishes the inference results to the MQTT broker. The pipeline uses the person-detection-retail-0013 model from Open Model Zoo to detect people in the video streams.
  • The MCSS microservice subscribes to the MQTT messages, receives the inference output metadata, and calculates the Euclidean distance between all the people.
  • Based on the above measurements, it checks whether any people are violating the social distancing threshold, i.e., are less than N pixels apart.
  • The input video frames, inference output results, and social distancing violations are served to a webserver for viewing.
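The violation check in the steps above can be sketched as a pure function over the inference metadata. This is a minimal illustration, not the packaged MCSS code: the bounding-box field names (`x_min`, `y_min`, etc.) and the flat detection layout are assumptions, and the real microservice consumes this metadata from MQTT rather than from an in-memory list.

```python
import math

def find_violations(detections, min_distance_px):
    """Return index pairs of people closer than min_distance_px.

    Each detection is assumed to carry a pixel-space bounding box;
    the distance is measured between box centroids.
    """
    centroids = [
        ((d["x_min"] + d["x_max"]) / 2.0, (d["y_min"] + d["y_max"]) / 2.0)
        for d in detections
    ]
    violations = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            dx = centroids[i][0] - centroids[j][0]
            dy = centroids[i][1] - centroids[j][1]
            if math.hypot(dx, dy) < min_distance_px:  # Euclidean distance
                violations.append((i, j))
    return violations

# Example: two people 100 px apart, one far away.
people = [
    {"x_min": 0,   "y_min": 0, "x_max": 40,  "y_max": 80},
    {"x_min": 100, "y_min": 0, "x_max": 140, "y_max": 80},
    {"x_min": 600, "y_min": 0, "x_max": 640, "y_max": 80},
]
print(find_violations(people, min_distance_px=150))  # → [(0, 1)]
```

In the reference implementation this comparison happens per frame on the metadata received over MQTT, and the flagged pairs are what the webserver highlights as violations.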
Solution Flow Diagram
Figure 1: Solution Flow Diagram

 

Get Started

Step 1: Access the Reference Implementation

 

Select Configure & Download to download the reference implementation.  

Configure & Download

Go to the Multi-Camera Detection of Social Distancing component directory from the terminal by running the command:

cd video_analytics/Video_Analytics_<version>/MCSD_Resources/

where <version> is the package version downloaded.

Step 2: Download the Input Video

The application works best with input feeds from cameras placed at eye level.

Download sample video at 1280x720 resolution, rename the file by replacing the spaces with the _ character (for example, Pexels_Videos_2670.mp4), and place it in the following directory:

video_analytics/Video_Analytics_<version>/MCSD_Resources/app/resources, where <version> is the Video Analytics package version selected while downloading.

(Data set subject to this license. The terms and conditions of the dataset license apply. Intel® does not grant any rights to the data files.)

To use multiple videos or any other video, download the video files and place them under the resources directory.
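The renaming step above can be scripted when you download several clips. The sketch below simply replaces spaces with underscores in every .mp4 filename under a given folder; the function name and default behavior are illustrative, not part of the package.

```shell
#!/bin/sh
# Sketch: replace spaces with underscores in .mp4 filenames under a folder,
# e.g. "Pexels Videos 2670.mp4" -> "Pexels_Videos_2670.mp4".
rename_videos() {
    dir="$1"
    for f in "$dir"/*.mp4; do
        [ -e "$f" ] || continue                       # no .mp4 files: nothing to do
        base=$(basename "$f")
        renamed=$(printf '%s' "$base" | tr ' ' '_')
        [ "$base" = "$renamed" ] || mv "$f" "$dir/$renamed"
    done
}

# Usage:
# rename_videos video_analytics/Video_Analytics_<version>/MCSD_Resources/app/resources
```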

 

Step 3: Download the object_detection Model

The model to download is defined in the models.list.yml file in the models_list folder.
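The list file follows the model-download convention used by Video Analytics Serving, where each entry names an Open Model Zoo model and the alias/version path it is stored under. A minimal, illustrative sketch is shown below; the keys and values are assumptions, so check the file shipped with the package rather than copying this:

```yaml
# Hypothetical sketch of models_list/models.list.yml
- model: person-detection-retail-0013
  alias: object_detection
  version: person_detection
  precision: [FP16, FP32]
```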

Execute the below commands to download the required object_detection model (person-detection-retail-0013) from Open Model Zoo:

sudo chmod +x ../Edge_Video_Analytics_Resources/tools/model_downloader/model_downloader.sh
sudo ../Edge_Video_Analytics_Resources/tools/model_downloader/model_downloader.sh --model-list models_list/models.list.yml

You will see output similar to: 
 

Screenshot of Download Model
Figure 2: Download Model

 

Step 4: Review the Pipeline for the Reference Implementation

The pipeline for this reference implementation is in the /MCSD_Resources/pipelines/ folder. It uses the person-detection-retail-0013 model downloaded in the previous step. The pipeline template is defined below:

	"template": [
		"{auto_source} ! decodebin",
		" ! gvadetect model={models[object_detection][person_detection][network]} name=detection",
		" ! gvametaconvert name=metaconvert ! gvametapublish name=destination",
		" ! appsink name=appsink"
	] 

 

The pipeline uses standard GStreamer elements for the input source and for decoding the media files, gvadetect to detect objects, gvametaconvert to produce JSON from the detections, and gvametapublish to publish results to the MQTT destination. The model identifier for the gvadetect element is updated to the new model. The downloaded model is in the models/object_detection/person_detection directory.

Refer to Defining Media Analytics Pipelines for understanding the pipeline template and defining your own pipeline.
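For reference, the POST request mentioned in How It Works follows the generic Video Analytics Serving REST convention: a request to a pipeline endpoint such as /pipelines/object_detection/person_detection, with a JSON body naming a source and a destination. The field values and in-container path below are assumptions for illustration only; the MCSS microservice constructs the real request internally.

```json
{
  "source": {
    "uri": "file:///home/video-analytics-serving/resources/Pexels_Videos_2670.mp4",
    "type": "uri"
  },
  "destination": {
    "type": "mqtt",
    "host": "<HOST_IP>:1883",
    "topic": "person_detection"
  }
}
```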


Step 5: Run the Application

Set the HOST_IP environment variable with the command:

export HOST_IP=$(hostname -I | cut -d' ' -f1)

 

Run the application by executing the command:

sudo -E docker-compose up -d


This will volume mount the pipelines and models folders to the edge video analytics microservice.
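The mounts this refers to are declared in the docker-compose.yml shipped with the package. A trimmed, illustrative fragment might look like the following; the in-container paths and the environment entries are assumptions, so consult the shipped file rather than copying this:

```yaml
services:
  edge_video_analytics_microservice:
    # image/tag as defined in the package's docker-compose.yml
    volumes:
      - ./pipelines:/home/pipeline-server/pipelines   # pipeline definitions
      - ./models:/home/pipeline-server/models         # downloaded OMZ models
      - ./resources:/home/pipeline-server/resources   # input video files
    environment:
      - HOST_IP=${HOST_IP}
```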

To check the application logs, run the command:

sudo docker logs -f mcss-eva

 

Step 6: View the Output

To view the output streams, open a browser and go to http://<System IP>:5000.

Screenshot of View Output Stream
Figure 3: View Output Stream

 

NOTE: To rerun the application, execute the command: sudo docker restart mcss-eva

NOTE: If the application fails on a proxy-enabled network, or if the sudo docker-compose up command fails, refer to the Troubleshooting section of this document.


Summary and Next Steps

By following this guide, you learned how to perform video analytics for your use case using the video analytics microservice.

As a next step, follow the Video Analytics tutorials. 


Learn More

To continue learning, see the following guides and software resources:

 

Troubleshooting

  • Make sure you have an active Internet connection during the full installation. If you lose Internet connectivity at any time, the installation might fail.
  • If the installation fails during the download and extraction steps while pulling the images from the Intel registry, re-run the installation command sudo ./edgesoftware install and provide the product key.
    Screenshot of Troubleshooting: Product Key
     
  • Make sure you are using a fresh Ubuntu* installation. Earlier software, especially Docker and Docker Compose, can cause issues.
  • In a proxy environment, if the proxy is set only for a single user (i.e., in the .bashrc file), some components may fail to install or the installation may hang. Make sure you set the proxy in /etc/environment.

  • If your system is in a proxy network, add the proxy details in the environment section in the docker-compose.yml file.

    HTTP_PROXY=http://<IP:port>/
    HTTPS_PROXY=http://<IP:port>/
    NO_PROXY=localhost,127.0.0.1


    Enter the proxy details in docker-compose.yml as shown below:
    Screenshot of Troubleshooting: Proxy Settings

     

  • If proxy details are missing, the container fails to fetch the source video file required for running the pipelines and to install the required packages inside the container.
    Screenshot of Troubleshooting: Proxy Missing

    After adding the proxy details, run the command sudo -E docker-compose up again.

  • Stop or remove the containers if Docker reports conflict errors.
    To remove the edge_video_analytics_microservice and mqtt_broker containers, use either of the options below:

    • Either, run the following commands:

      sudo docker stop edge_video_analytics_microservice mqtt_broker
      sudo docker rm edge_video_analytics_microservice mqtt_broker

       

    • Or run sudo docker-compose down from the path: video_analytics/Video_Analytics_<VERSION>/Edge_Video_Analytics_Resources

      Screenshot of Troubleshooting: Remove Containers

 

 

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.  

 

 

Product and Performance Information

1. Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.