Documentation

  • N/A
  • 2021.4
  • 07/22/2021
  • Public Content

Multi-Camera Detection of Social Distancing

Overview

Social distancing is one of the most effective non-pharmaceutical ways to prevent the spread of disease. This tutorial presents a solution that uses computer vision inference in the Intel® Distribution of OpenVINO™ toolkit to measure the distance between people and store the data in InfluxDB*. The data can then be visualized on a Grafana* dashboard.

How It Works

This multi-camera surveillance solution demonstrates an end-to-end analytics pipeline that detects people and calculates the social distance between them across multiple input feeds. Frames are transformed, scaled, and normalized into BGR images that can be fed to the inference engine in the Intel® Distribution of OpenVINO™ toolkit. The inference performs the following steps:
  • Apply Intel's person detection model (person-detection-retail-0013) to detect people in all the video streams.
  • Compute the Euclidean distance between every pair of people detected in the step above.
  • Based on these measurements, check whether any two people are closer than N pixels apart.
  • Store the total count of social distancing violations in InfluxDB.
  • Visualize the data stored in InfluxDB on a Grafana dashboard.
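The distance and violation steps above can be sketched in a few lines of Python. This is an illustrative sketch, not the application's actual code: the function name, the use of detection centroids, and the per-pair counting are assumptions.

```python
import math
from itertools import combinations

def count_violations(centroids, min_dist_px):
    """Count pairs of people closer than min_dist_px pixels apart.

    centroids: list of (x, y) pixel coordinates, one per detected person
               (e.g., the center of each person-detection bounding box).
    """
    violations = 0
    for (x1, y1), (x2, y2) in combinations(centroids, 2):
        # Euclidean distance between the two people, in pixels.
        if math.hypot(x2 - x1, y2 - y1) < min_dist_px:
            violations += 1
    return violations

# Example: three people; only the first two are within 100 pixels.
people = [(100, 200), (150, 220), (600, 400)]
print(count_violations(people, 100))  # → 1
```

In the real pipeline this count would be computed per frame (or per stream) and written to InfluxDB, with the minimum distance taken from the MIN_SOCIAL_DIST1 setting described below.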

Get Started

Step 1: Install
The Multi-Camera Detection of Social Distancing component will be installed with the package and will be available in the target system.
Go to the Multi-Camera Detection of Social Distancing component directory from the terminal by running the command:
cd $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/
Where <version> is the Edge Insights for Vision version selected while downloading.
Step 2: Download the Input Video
The application works best with input feeds from cameras placed at eye level.
Download a sample video at 1280x720 resolution and place it in the $HOME/edge_insights_vision/Edge_Insights_for_Vision_<version>/RI_MultiCamera_Social_Distancing/mcss-covid19/resources directory.
Where <version> is the Edge Insights for Vision version selected while downloading.
(Data set subject to this license. The terms and conditions of the dataset license apply. Intel® does not grant any rights to the data files.)
To use any other video, specify its path in the INPUT1 variable in the run.sh file inside the application directory.
The application also supports multiple videos as input. The appropriate code, with comments, is available in the run.sh file inside the application directory.
INPUT1="${PWD}/../resources/<name_of_video_file>.mp4"
MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
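Since run.sh exports one INPUT/MIN_SOCIAL_DIST pair per stream, an application can collect however many streams were configured by scanning the environment. The helper below is a hypothetical sketch of that pattern, not the reference implementation's actual parsing logic.

```python
import os

def collect_inputs(env=None):
    """Return the video sources set via INPUT1, INPUT2, ... in order.

    Scanning stops at the first missing index, so the variables must be
    numbered consecutively starting at 1.
    """
    if env is None:
        env = os.environ
    inputs = []
    index = 1
    while f"INPUT{index}" in env:
        inputs.append(env[f"INPUT{index}"])
        index += 1
    return inputs

# Example with an explicit dict standing in for the environment:
print(collect_inputs({"INPUT1": "cam1.mp4", "INPUT2": "/dev/video0"}))
# → ['cam1.mp4', '/dev/video0']
```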
(Optional) Test with USB Camera
To test with a USB camera, specify the camera index in the run.sh file.
On Ubuntu, to list all available video devices, run the following command:
ls /dev/video*
For example, if the output of the command is /dev/video0, then set the INPUT1 and MIN_SOCIAL_DIST1 variables in the run.sh file inside the application folder accordingly.
INPUT1=/dev/video0
MIN_SOCIAL_DIST1=<appropriate_minimum_social_distance_for_input1>
Step 3: Initialize Environment Variables
Run the following command to initialize the OpenVINO™ environment variables:
source /opt/intel/openvino_2021/bin/setupvars.sh

Run the Application

Instructions in this tutorial are provided for three hardware configurations (CPU, GPU, and Intel® Vision Accelerator). Configure the application by modifying the DEVICE1 parameter.
The Intel® Vision Accelerator is not supported on RHEL* 8.2.
  1. Change to the application directory:
    cd application
  2. Inside the run.sh file, change the following parameters (if required):
    PERSON_DETECTOR="${PWD}/../intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml"
    DEVICE1="<device>"
    where <device> can be CPU, GPU, or HDDL (Intel® Vision Accelerator).
  3. Change the permissions for the run.sh file and run the script:
    chmod +x run.sh
    ./run.sh
Note:
  • Application parameters can be changed in the run.sh file as required.
  • The Intel® Vision Accelerator is not supported on RHEL* 8.2 through OpenVINO™.
  • Initialization of the GPU and Intel® Vision Accelerator might take some time before inference starts.

Data Visualization on Grafana

The application must be running in parallel to view the results in Grafana.
  1. Navigate to localhost:3000 in your browser.
    If the browser shows Unable to connect, make sure the Grafana service is active by running the command sudo service grafana-server status. If the service is not active, start it by running sudo service grafana-server start in the terminal.
  2. Log in with user admin and password admin.
  3. Go to Configuration and select Data Sources.
  4. Click + Add data source, select InfluxDB, and provide the following details:
    Name: Mcss Covid
    URL: http://localhost:8086
    Auth: Choose Skip TLS Verify
    InfluxDB details:
      Database: McssCovid
      HTTP Method: GET
  5. Click Save and Test.
  6. Click the + icon on the left side of the window, then select Import.
  7. Choose Upload .json File and import the mcss-covid19/resources/multi_cam.json file.
  8. Click Import.
  9. Click the Multi Camera Covid-19 Solution dashboard to view real-time violation data.
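Behind the dashboard, the application writes violation counts to the McssCovid database named in the data-source settings. InfluxDB 1.x accepts writes in its line protocol over HTTP; the sketch below builds such a payload. The measurement name total_violations and the camera tag are assumptions for illustration, not the application's actual schema.

```python
def violation_point(camera, count, timestamp_ns=None):
    """Build one InfluxDB 1.x line-protocol record.

    Format: measurement,tag_set field_set [timestamp].
    The trailing 'i' marks the value as an integer field.
    """
    line = f"total_violations,camera={camera} value={count}i"
    if timestamp_ns is not None:
        # InfluxDB line-protocol timestamps are in nanoseconds by default.
        line += f" {timestamp_ns}"
    return line

# The resulting payload would be POSTed to
# http://localhost:8086/write?db=McssCovid (the URL and database
# configured in the Grafana data source above).
print(violation_point("cam1", 3))  # → total_violations,camera=cam1 value=3i
```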

Summary and Next Steps

This application successfully leverages Intel® Distribution of OpenVINO™ toolkit plugins to detect people, measure the distance between them, and store the data in InfluxDB. It can be extended to support feeds from network streams (RTSP cameras), and the algorithm can be optimized for better performance.

Product and Performance Information

1

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.