Run Inference of a Face Detection Model Using OpenCV* API with an Added Time/Date Stamp in the Filename

Published: 11/05/2019  

Last Updated: 11/05/2019

Run Inference of a Face Detection Model Using OpenCV* API

The Install OpenVINO™ toolkit for Raspbian* OS article includes a face detection sample that writes its result to an output file. This can be very useful to:

  • Run inference on a target machine from a host, using ssh
  • Validate that an OpenCV* installation is working

Run the OpenCV deep learning module with the Inference Engine back-end with this Python* sample, which works with the pre-trained Face Detection model.

Requirements

  • Intel® Distribution of OpenVINO™ toolkit R3 installation
  • OpenCV* 4.x.x installation
  • Face detection model
  • An image with a face (face-1.jpg)
  • Optional: Intel® Neural Compute Stick 2 (Intel® NCS 2)

Note: The original openvinotoolkit sample "openvino_fd_myriad.py" uses the MYRIAD target, which requires an Intel® Neural Compute Stick. To run on the CPU instead, replace MYRIAD with CPU at the # Specify target device section; that is, change:

net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)

to:

net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
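If the device needs to change often, the target can also be selected at run time. The following is a minimal sketch only; the command-line argument handling and the select_target helper are assumptions, not part of the original sample:

import sys
import cv2 as cv

# A sketch: choose the DNN target from an optional command-line argument
# so the same script can run on the CPU or on the Intel NCS 2 (MYRIAD).
def select_target(net, device='MYRIAD'):
    targets = {
        'CPU': cv.dnn.DNN_TARGET_CPU,
        'MYRIAD': cv.dnn.DNN_TARGET_MYRIAD,
    }
    net.setPreferableTarget(targets.get(device.upper(), cv.dnn.DNN_TARGET_CPU))

net = cv.dnn.readNet('face-detection-adas-0001.xml',
                     'face-detection-adas-0001.bin')
select_target(net, sys.argv[1] if len(sys.argv) > 1 else 'MYRIAD')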

Download the pre-trained Face Detection model or copy it from a development environment (host) machine.

To download the .bin file with weights:

wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R3/20190905_163000_models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.bin

To download the .xml file with the network topology:

wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R3/20190905_163000_models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
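If wget is not available, the same two files can be fetched with a short Python sketch using urllib.request (the URLs are the ones from the wget commands above; note that, unlike wget --no-check-certificate, urllib verifies the TLS certificate by default):

import urllib.request

# Download the pre-trained face detection model weights (.bin) and topology (.xml).
base = ('https://download.01.org/opencv/2019/open_model_zoo/R3/'
        '20190905_163000_models_bin/face-detection-adas-0001/FP16/')
for name in ('face-detection-adas-0001.bin', 'face-detection-adas-0001.xml'):
    urllib.request.urlretrieve(base + name, name)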

This sample writes its output to a static filename, so each time the sample is run the file is overwritten.

Developers who want to record or document every attempt may find adding the date/time beneficial. Adding a time/date stamp to the output filename makes it possible to keep multiple attempts or a series of frames.

For more information about reading and writing images and video, visit the OpenCV documentation.

Edits to the Face Detection Sample "openvino_fd_myriad.py"

  1. Added import datetime and import time
  2. Added a date_string variable
  3. Added the date_string variable to the output filename
  4. Wrote the output file with cv.imwrite()

Revised Code

Below is the revised code. For the purposes of this example, change the filename to face.py and place the input image and models in the same directory. The directory structure would look like:

|--- face-detection-walking
     |--- face.py
     |--- face-1.jpg
     |--- face-detection-adas-0001.bin
     |--- face-detection-adas-0001.xml
     |--- results-output-here.png

The open source input image (face-1.jpg) was extracted from the list of open source sample videos for inference. Learn how to extract images from the Open Source Computer Vision Sample Videos.


Figure 1. Image face-1.jpg

If you use a different image for testing, remember to change the input in the code to the name of that image file:

frame = cv.imread('face-1.jpg')
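Alternatively, the image path can be passed on the command line so the script does not need editing for each test image. A minimal sketch (the argument handling is an assumption, not part of the original sample):

import sys
import cv2 as cv

# Read the input image from an optional command-line argument,
# falling back to face-1.jpg.
image_path = sys.argv[1] if len(sys.argv) > 1 else 'face-1.jpg'
frame = cv.imread(image_path)
if frame is None:
    raise Exception('Image not found: ' + image_path)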

The result is an output frame with time/date information in the filename. In this sample, the time stamp ends at seconds.

import datetime
import time
import cv2 as cv
# Load the model.
net = cv.dnn.readNet('face-detection-adas-0001.xml',
                     'face-detection-adas-0001.bin')
# Specify target device
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
# Read an image - this example uses an image extracted from the open source face-detection video
frame = cv.imread('face-1.jpg')
if frame is None:
    raise Exception('Image not found!')
# Prepare input blob and perform an inference.
blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()
# Draw detected faces on the frame.
for detection in out.reshape(-1, 7):
    confidence = float(detection[2])
    xmin = int(detection[3] * frame.shape[1])
    ymin = int(detection[4] * frame.shape[0])
    xmax = int(detection[5] * frame.shape[1])
    ymax = int(detection[6] * frame.shape[0])
    if confidence > 0.5:
        cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
# Save the frame to an image file:
# Create a date_string variable with the current date and time
date_string = time.strftime("%Y-%m-%d-%H:%M:%S")
# Create an imageName variable that adds the date_string to the output file name
imageName = 'test' + date_string + '.png'
# Write the frame to the file with that name
cv.imwrite(imageName, frame)

Pro Tip: If the OpenVINO environment is not initialized with . /opt/intel/openvino/bin/setupvars.sh, an error may display:

Traceback (most recent call last):
  File "face.py", line 6, in <module>
    'face-detection-adas-0001.bin')
cv2.error: OpenCV(4.1.0) /io/opencv/modules/dnn/src/dnn.cpp:2670: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer'

In a terminal, with OpenVINO environment initialized, run the script:

python3 face.py

The format for the filename is 'test', followed by the date/time string produced by "%Y-%m-%d-%H:%M:%S", followed by the '.png' extension.

Example: Date/Time output file.

test2019-10-09-09:17:20.png
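To preview the filename before running the full sample, the naming step can be tried on its own; this is a short sketch of the same strftime call used in face.py:

import time

# Build the output filename the same way face.py does.
date_string = time.strftime("%Y-%m-%d-%H:%M:%S")
imageName = 'test' + date_string + '.png'
print(imageName)  # for example, test2019-10-09-09:17:20.png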

Figure 2.  Image face-1.jpg (formerly named 1.jpg) Original and Output with date/time stamp

The output files are not overwritten, and each output has its own date stamp.

Example: Date/Time output file with time zone. The time zone can be included by adding %Z to the format string.

date_string = time.strftime("%Y-%m-%d-%H:%M:%S:%Z")

Results with the filename:

test2019-10-09-09:17:46:MST.png
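The same naming step with %Z added looks like this as a small sketch (the abbreviation %Z returns, MST in the example above, depends on the system's time zone setting):

import time

# Build the output filename with the time zone appended via %Z.
date_string = time.strftime("%Y-%m-%d-%H:%M:%S:%Z")
imageName = 'test' + date_string + '.png'
print(imageName)  # for example, test2019-10-09-09:17:46:MST.png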

Figure 3.  Results with time/date stamp including zone in filename

Additional Resources

Visit the following resources for more information about open source free-culture videos for inference, pre-trained models, and the OpenCV* project.

Open Source Videos for Inference

Open Model Zoo

OpenCV* project

Extract Images from Video Article
