Single and Multi-object Detection with Hardware Acceleration
This tutorial uses the sample application called "Object Detection YOLO* V3 Python* Demo." Object Detection YOLO V3 Python Demo uses OpenCV, a component of the Intel® Distribution of OpenVINO™ toolkit, to display frames with detections rendered as bounding boxes and, if provided, labels. By default, this sample application displays:
- OpenCV time: Time spent decoding the frame and rendering the bounding boxes, labels, and results.
- Detection time: Inference time for the object detection network. Detection time is reported in Sync mode only.
- Wall clock time: Combined application-level performance.
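As a rough illustration of how these overlays are produced, the sketch below uses plain OpenCV calls to draw one bounding box and label and to time the rendering. It is a minimal sketch, not code from the demo: the image path, box coordinates, and label are hypothetical placeholders.

import time
import cv2

frame = cv2.imread("frame.jpg")              # placeholder test image
xmin, ymin, xmax, ymax = 100, 80, 220, 300   # hypothetical detection box
label = "person 0.87"                        # hypothetical class and confidence

start = time.perf_counter()
cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
cv2.putText(frame, label, (xmin, ymin - 7),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
render_ms = (time.perf_counter() - start) * 1000
cv2.putText(frame, "OpenCV time: %.1f ms" % render_ms, (10, 20),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 1)
cv2.imshow("Detections", frame)
cv2.waitKey(0)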
Instructions in this tutorial are provided for three hardware configurations so you can choose the ones that fit your system, whether it uses:
- A CPU
- A GPU
- An Intel® Vision Accelerator
For each configuration, the sample demonstrates two detection types:
- Single detection uses a basic data set to perform one-by-one person detection.
- Multi-detection uses an advanced data set to perform multi-object detection, such as a person and a car.
While running the sample applications, you will gain familiarity with the Intel® Distribution of OpenVINO™ toolkit.
Single and Multi-Object Detection with Hardware Acceleration on a CPU
Run these steps on the target system.
Step 1: Initialize the Intel® Distribution of OpenVINO™ toolkit Environment
- Open a terminal window.
- Go to the sample application directory in which the Object Detection YOLO V3 Python demo is located:
cd $HOME/Downloads/YOLOv3
- Initialize the OpenVINO™ environment:
source /opt/intel/openvino_2021/bin/setupvars.sh
Leave the terminal window open for the next step.
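Before moving on, you can optionally confirm that the environment initialized correctly by asking the Inference Engine which devices it can see. This short check is not part of the demo; it uses the OpenVINO™ 2021 Python API in the same terminal:

# Sanity check: list the inference devices OpenVINO can reach.
# Run python3 in the terminal where you sourced setupvars.sh.
from openvino.inference_engine import IECore

ie = IECore()
print("Available devices:", ie.available_devices)  # e.g. ['CPU', 'GPU', 'HDDL']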
Step 2: Run the Single Detection Application on the CPU
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/one-by-one-person-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -t 0.1 -at yolo
Success is indicated by an image that shows a single individual in a bounding box. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
Leave the terminal window open for the next step.
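Under the hood, the command in Step 2 follows the usual Inference Engine pattern: read the IR model, load it onto a device (CPU by default), and feed preprocessed video frames to it. Below is a minimal sketch of that pattern, with the YOLO-specific output decoding omitted; paths are relative to the YOLOv3 directory, and this illustrates the pattern rather than the demo's actual code.

import cv2
import numpy as np
from openvino.inference_engine import IECore

MODEL = "tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml"
VIDEO = "Sample_videos/one-by-one-person-detection.mp4"

ie = IECore()
net = ie.read_network(model=MODEL, weights=MODEL.replace(".xml", ".bin"))
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

cap = cv2.VideoCapture(VIDEO)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the network input size and reorder HWC -> NCHW.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]
    results = exec_net.infer({input_name: blob})
    # 'results' holds the raw YOLO region outputs; the demo decodes them
    # into boxes and keeps those above the -t confidence threshold.
cap.release()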
Step 3: Run the Multi-Detection Application on the CPU
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/person-bicycle-car-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -t 0.1 -at yolo
Success is indicated by an image that shows one or more objects and/or people. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
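The asynchronous modes that the Tab key cycles through correspond to the Inference Engine's async API: inference on one frame is started, and the application prepares the next frame while it runs instead of blocking. The sketch below shows the bare pattern under the same assumptions as the earlier sketch (placeholder frame, model files in the current directory); it is not the demo's actual scheduling logic.

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov3_model.xml",
                      weights="frozen_darknet_yolov3_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

frame = cv2.imread("frame.jpg")  # placeholder frame
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

# Start request 0, then overlap other work (e.g. preparing the next frame)
# with inference instead of blocking on it.
exec_net.start_async(request_id=0, inputs={input_name: blob})
# ... decode and preprocess the next frame here while request 0 runs ...
if exec_net.requests[0].wait(-1) == 0:  # 0 == StatusCode.OK
    outputs = exec_net.requests[0].output_blobs  # name -> Blob; data in .buffer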
If you want to run the sample application on a GPU or the Intel® Vision Accelerator, leave the terminal window open and begin with Step 2 of the GPU or Intel® Vision Accelerator instructions.
Single and Multi-Object Detection with Hardware Acceleration on a GPU
If you used the CPU instructions and left your terminal window open, skip ahead to Step 2.
Run these steps on the target system.
Step 1: Initialize the Intel® Distribution of OpenVINO™ toolkit Environment
- Open a terminal window.
- Go to the sample application directory in which the Object Detection YOLO V3 Python demo is located:
cd $HOME/Downloads/YOLOv3
- Initialize the OpenVINO™ environment:
source /opt/intel/openvino_2021/bin/setupvars.sh
Leave the terminal window open for the next step.
Step 2: Run the Single Detection Application on the GPU
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/one-by-one-person-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -d GPU -t 0.1 -at yolo
Success is indicated by an image that shows a single individual in a bounding box. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
Leave the terminal window open for the next step.
Step 3: Run the Multi-Detection Application on the GPU
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/person-bicycle-car-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -d GPU -t 0.1 -at yolo
Success is indicated by an image that shows one or more objects and/or people. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
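Note that the only change from the CPU runs is the -d GPU flag. Inside the application, that flag becomes the device name handed to the Inference Engine when the network is loaded, as in this minimal illustration (model files assumed to be in the current directory):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov3_model.xml",
                      weights="frozen_darknet_yolov3_model.bin")
# "-d GPU" on the demo command line ends up here as device_name;
# "CPU" and "HDDL" select the other device plugins the same way.
exec_net = ie.load_network(network=net, device_name="GPU")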
Single and Multi-Object Detection with Hardware Acceleration on an Intel® Vision Accelerator
By running the application on the Intel® Vision Accelerator, you are offloading processing of inference to the Intel® Vision Accelerator and freeing up your CPU for other applications.
If you used the CPU instructions and left your terminal window open, skip ahead to Step 2.
Run these steps on the target system.
Step 1: Initialize the Intel® Distribution of OpenVINO™ toolkit Environment
- Open a terminal window.
- Go to the sample application directory in which the Object Detection YOLO V3 Python demo is located:
cd $HOME/Downloads/YOLOv3
- Initialize the OpenVINO™ environment:
source /opt/intel/openvino_2021/bin/setupvars.sh
Leave the terminal window open for the next step.
Step 2: Run the Single Detection Application on an Intel® Vision Accelerator
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/one-by-one-person-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -d HDDL -t 0.1 -at yolo
Success is indicated by an image that shows a single individual in a bounding box. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
Leave the terminal window open for the next step.
Step 3: Run the Multi-Detection Application on an Intel® Vision Accelerator
- Run the Object Detection YOLO V3 Python Demo sample application:
python3 object_detection_demo.py -i $HOME/Downloads/YOLOv3/Sample_videos/person-bicycle-car-detection.mp4 -m $HOME/Downloads/YOLOv3/tensorflow-yolo-v3/FP32/frozen_darknet_yolov3_model.xml -d HDDL -t 0.1 -at yolo
Success is indicated by an image that shows one or more objects and/or people. At the left side of the image you see the inference time. You might not clearly see some bounding boxes and detections if scene components are the same color as the bounding box or text.
- Press the Tab key on your keyboard to change asynchronous mode options.
- Press the Esc key to exit the demo.
Summary and Next Steps
In this tutorial, you learned to run inference applications on different processing units using the sample application "Object Detection YOLO V3 Python Demo." In the process, you gained familiarity with the Intel® Distribution of OpenVINO™ toolkit, which was installed with Edge Insights for Vision.
Go to the Intel® Edge Software Manager documentation to learn how to use the tool to manage Edge Software Packages and to create and manage Containers and Virtual Machines.
As a next step, see the Multi-Camera Detection of Social Distancing Reference Implementation.