Multi-Camera Detection of Social Distancing
Overview
Social distancing is one of the most effective non-pharmaceutical ways to prevent the spread of disease. This tutorial presents a solution that uses computer vision inference with the Intel® Distribution of OpenVINO™ toolkit to measure the distance between people and store the data in InfluxDB*. The data can then be visualized on a Grafana* dashboard.
How It Works
This multi-camera surveillance solution demonstrates an end-to-end analytics pipeline that detects people and calculates the social distance between them across multiple input feeds. Frames are transformed, scaled, and normalized into BGR images that can be fed to the inference engine in the Intel® Distribution of OpenVINO™ toolkit. The following steps are performed for the inference; a minimal sketch of this logic follows the list.
- Apply Intel's person detection model, person-detection-retail-0013, to detect people in all the video streams.
- Compute the Euclidean distance between every pair of people detected in the above step.
- Based on these measurements, check whether any two people are closer than the configured minimum distance of N pixels.
- Store the total count of social distancing violations in InfluxDB.
- Visualize the stored InfluxDB data on a Grafana dashboard.
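The following is a minimal sketch of that logic for a single feed, not the reference implementation itself. It assumes the OpenVINO™ 2021 Python Inference Engine API (IECore) and the influxdb Python client; names such as detect_people, count_violations, and the violations measurement are illustrative.

# Illustrative sketch only -- the actual application ships in the component directory.
import math
import cv2
from openvino.inference_engine import IECore
from influxdb import InfluxDBClient

MODEL_XML = "intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml"
MIN_SOCIAL_DIST = 150  # minimum allowed distance N, in pixels

ie = IECore()
net = ie.read_network(model=MODEL_XML)
exec_net = ie.load_network(network=net, device_name="CPU")
input_name = next(iter(net.input_info))
_, _, h, w = net.input_info[input_name].input_data.shape  # NCHW input layout

client = InfluxDBClient(host="localhost", port=8086, database="McssCovid")

def detect_people(frame):
    """Resize and transpose the BGR frame, run inference, return person centroids."""
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[None, ...]
    result = exec_net.infer({input_name: blob})
    detections = next(iter(result.values()))[0][0]  # rows: [id, label, conf, x_min, y_min, x_max, y_max]
    fh, fw = frame.shape[:2]
    centroids = []
    for _, _, conf, x_min, y_min, x_max, y_max in detections:
        if conf > 0.5:
            centroids.append(((x_min + x_max) / 2 * fw, (y_min + y_max) / 2 * fh))
    return centroids

def count_violations(centroids):
    """Count pairs of people closer than MIN_SOCIAL_DIST pixels (Euclidean distance)."""
    violations = 0
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < MIN_SOCIAL_DIST:
                violations += 1
    return violations

cap = cv2.VideoCapture("../resources/<name_of_video_file>.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    count = count_violations(detect_people(frame))
    # Store the per-frame violation count in InfluxDB for the Grafana dashboard.
    client.write_points([{"measurement": "violations", "fields": {"count": count}}])
cap.release()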

Get Started
Step 1: Install
The Multi-Camera Detection of Social Distancing component is installed with the Edge Insights for Vision package and is available on the target system.
Go to the Multi-Camera Detection of Social Distancing component directory from the terminal by running the command:
cd $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/
where <version> is the Edge Insights for Vision version selected while downloading.
Step 2: Download the Input Video
The application works best with input feeds from cameras placed at an eye-level angle.
Download the sample video at 1280x720 resolution and place it in the $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/resources directory,
where <version> is the Edge Insights for Vision version selected while downloading.
(Data set subject to this license. The terms and conditions of the dataset license apply. Intel® does not grant any rights to the data files.)
To use any other video, specify its path in the INPUT1 variable in the run.sh file inside the application directory. The application also supports multiple videos as input; the appropriate code, with comments, is available in the run.sh file.
INPUT1="${PWD}/../resources/<name_of_video_file>.mp4"
MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
(Optional) Test with USB Camera
To test with a USB camera, specify the camera index in the run.sh file.
On Ubuntu, to list all available video devices, run the following command:
ls /dev/video*
For example, if the output of the command is /dev/video0, then set the INPUT1 and MIN_SOCIAL_DIST1 variables in the run.sh file inside the application folder accordingly.
INPUT1=/dev/video0
MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
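Optionally, you can first confirm that the chosen index actually delivers frames. A quick check with OpenCV in Python (index 0 below corresponds to /dev/video0):

# Sanity check that the USB camera at index 0 (/dev/video0) delivers frames.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
print("frame received:", ok, "resolution:", frame.shape[1::-1] if ok else None)
cap.release()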
Step 3: Initialize Environment Variables
Run the following command to initialize the OpenVINO™ environment variables:
source /opt/intel/openvino_2021/bin/setupvars.sh
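To confirm the environment is initialized and to see which inference devices are visible (for example CPU, GPU, or HDDL), a quick check with the OpenVINO™ Python API can be run in the same terminal:

# Prints the inference devices visible to OpenVINO, e.g. ['CPU', 'GPU', 'HDDL'].
from openvino.inference_engine import IECore
print(IECore().available_devices)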
Run the Application
Instructions in this tutorial are provided for three hardware configurations (CPU, GPU, and Intel® Vision Accelerator). Configure the application by modifying the DEVICE1 parameter.
The Intel® Vision Accelerator is not supported in RHEL* 8.2.
- Change to the application directory: cd application
- Inside the run.sh file, change the following parameters (if required):
  PERSON_DETECTOR="${PWD}/../intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml"
  DEVICE1="<device>"
  where <device> can be CPU, GPU, or HDDL (Intel® Vision Accelerator).
- Change the permissions for the run.sh file and run the script:
  chmod +x run.sh
  ./run.sh
- Application parameters can be changed as per the requirements in the run.sh file.
- The Intel® Vision Accelerator is not supported for RHEL* 8.2 through OpenVINO™.
- Initialization of the GPU and the Intel® Vision Accelerator might take some time before inference starts.

Data Visualization on Grafana
The application must be running in parallel to view the results in Grafana.
- Navigate to localhost:3000 in your browser. If the browser shows "Unable to connect", make sure the Grafana service is active using the command sudo service grafana-server status. If the service is not active, start it by running sudo service grafana-server start in the terminal.
- Log in with user admin and password admin.
- Go to Configuration and select Data Sources.
- Click + Add data source, select InfluxDB, and provide the following details:
  Name: Mcss Covid
  URL: http://localhost:8086
  Auth: choose Skip TLS Verify
  InfluxDB Details: Database: McssCovid, HTTP Method: GET
- Click Save and Test.
- Click the + icon on the left side of the window, then select Import.
- Choose Upload .json File and import the mcss-covid19/resources/multi_cam.json file.
- Click Import.
- Click the Multi Camera Covid-19 Solution dashboard to view real-time violation data.
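If the dashboard stays empty, you can verify directly that the application is writing to the McssCovid database. A small check using the influxdb Python client; the exact measurement names depend on the application, so list them first:

# Verify that the application is writing social distancing data to InfluxDB.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="McssCovid")
print(client.get_list_measurements())  # measurement names written by the application
for point in client.query("SELECT * FROM /.*/ LIMIT 5").get_points():
    print(point)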
Summary and Next Steps
This application leverages Intel® Distribution of OpenVINO™ toolkit plugins to detect people, measure the distance between them, and store the data in InfluxDB. It can be extended further to support feeds from a network stream (RTSP camera), and the algorithm can be optimized for better performance.