The Intel® Distribution of OpenVINO™ toolkit uses deep learning, computer vision, and hardware acceleration and comes with a variety of pre-trained models. Often, the input is a camera; however, videos and still images are also useful input data when trying samples, training, or testing.
Open Source Free Culture Sample Videos
Open Source Free Culture videos are available to test existing models. For more information about these videos, see the Overview of Sample Videos Created for Inference article.
Models include flaw detection, human detection, vehicle detection, and more. Using the pre-trained models available with the Intel® Distribution of OpenVINO™ toolkit, reference implementations, and these sample videos, IoT developers can fast-track their time to production.
In some cases, an image can be used as input instead of a video. For example, a simple face detection sample that tests whether OpenCV* is working may not require a video, only an image. Splitting the open source videos for inference into frames provides these images.
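As a minimal illustration of working from a single image, the Python sketch below loads one extracted frame with OpenCV* and reports its dimensions. The filename frame0001.jpg is a placeholder for any image produced by the steps that follow.
# Minimal sketch: confirm OpenCV* can read a single extracted frame.
# frame0001.jpg is a placeholder; substitute any image extracted below.
import cv2

img = cv2.imread("frame0001.jpg")
if img is None:
    raise SystemExit("Could not read frame0001.jpg - check the path")
height, width, channels = img.shape
print(f"Loaded image: {width}x{height}, {channels} channels")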
Requirements
- A Debian*-based GNU/Linux* distribution of your choice
- FFmpeg
Use FFmpeg to Extract Images from Video
There are different ways to extract frames from a video. This article uses FFmpeg; an OpenCV*-based alternative is sketched after the figures below.
The directory/file structure is as follows:
|--- face-detection-walking
|--- face-demographics-walking-and-pause.mp4
|--- output-files.jpg
Run the ffmpeg command from within the directory that contains the video.
Extract one frame each second from the video:
ffmpeg -i face-demographics-walking-and-pause.mp4 -vf fps=1 fdw%04d.jpg -hide_banner
Parameters:
Parameter | Description
-i | input flag
face-demographics-walking-and-pause.mp4 | video input filename
-vf | create a filtergraph to filter the stream
fps | frames per second filter
1 | frame rate: 1 (or 1/1) extracts one frame per second; 1/10 extracts one frame every 10 seconds
fdw%04d.jpg | naming convention of the output images; in this case fdw%04d.jpg
-hide_banner | hide the ffmpeg compile information
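If you have several videos to process, the same command can be scripted. The Python sketch below is one way to do it, assuming ffmpeg is installed and on the PATH; it loops over every .mp4 in the current directory and applies the fps=1 filter from above.
# Sketch: run the ffmpeg extraction above over every .mp4 in the current directory.
# Assumes ffmpeg is on the PATH; output images are prefixed with the video name.
import subprocess
from pathlib import Path

for video in Path(".").glob("*.mp4"):
    subprocess.run(
        [
            "ffmpeg", "-i", str(video),
            "-vf", "fps=1",                  # one frame per second
            f"{video.stem}_%04d.jpg",        # numbered output images
            "-hide_banner",
        ],
        check=True,
    )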
Sample Output
When running the ffmpeg command for one frame per second, the results will be similar to the output images below.
Figure 1. CLI list of extracted images - every second
Below is a visual of the images extracted from the video in the file manager view.
Figure 2. File Manager list of extracted images - every second
Extract one frame every 10 seconds:
ffmpeg -i zone-worker.mp4 -vf fps=1/10 zw%04d.jpg -hide_banner
Figure 3. CLI list of extracted images - every 10 seconds
Below is a visual of the images extracted from the video in the file manager view.
Figure 4. File Manager list of extracted images - every 10 seconds
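FFmpeg is not the only way to split a video into frames. If OpenCV* is already installed, a short Python script can save a frame every N seconds directly; the sketch below is an alternative approach (not part of the original samples) and assumes the zone-worker.mp4 video used in the command above.
# Sketch: extract one frame every 10 seconds with OpenCV* instead of FFmpeg.
import cv2

cap = cv2.VideoCapture("zone-worker.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30    # fall back to 30 if the rate is unknown
step = int(fps * 10)                     # frames between saved images
index = saved = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        saved += 1
        cv2.imwrite(f"zw{saved:04d}.jpg", frame)
    index += 1

cap.release()
print(f"Saved {saved} images")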
Next Steps: Use an Extracted Image for Input to Run Inference of a Face Detection Model Using OpenCV* API
The Install OpenVINO™ toolkit for Raspbian* OS article includes a face detection sample. This Python sample writes the result to an output file. The article Run Inference of a Face Detection Model Using OpenCV* API with Added Time Date "Stamp" in the Filename offers guidance on automating a date/time stamp for output image filenames.
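For orientation, the Python sketch below shows the general shape of such a sample: it loads an extracted frame, runs a face detection network through the OpenCV* DNN API, and writes the annotated result to a file. The model file names (face-detection-adas-0001.xml/.bin), the 672x384 input size, and the image name fdw0001.jpg are assumptions for illustration; follow the linked articles for the exact, supported sample code.
# Sketch: face detection on one extracted frame via the OpenCV* DNN API.
# Requires OpenCV* built with the OpenVINO™ (Inference Engine) backend.
# Model file names, input size, and image name are assumptions for illustration.
import cv2

net = cv2.dnn.readNet("face-detection-adas-0001.xml",
                      "face-detection-adas-0001.bin")
# Uncomment to target an Intel® Neural Compute Stick 2:
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

frame = cv2.imread("fdw0001.jpg")          # one of the images extracted above
if frame is None:
    raise SystemExit("Image not found")

blob = cv2.dnn.blobFromImage(frame, size=(672, 384))
net.setInput(blob)
detections = net.forward()

h, w = frame.shape[:2]
for detection in detections.reshape(-1, 7):
    confidence = float(detection[2])
    if confidence > 0.5:
        xmin = int(detection[3] * w)
        ymin = int(detection[4] * h)
        xmax = int(detection[5] * w)
        ymax = int(detection[6] * h)
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

cv2.imwrite("out.jpg", frame)              # result written to a file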
Additional Resources
To use the extracted image as input to run inference of a face detection model using OpenCV* API, try one of the following:
- Intel® Neural Compute Stick 2 with Raspbian* OS Python sample without date stamp
- Desktop and CPU Python sample with date stamp
Visit the following websites for more information about open source videos for inference, FFmpeg, and pre-trained models.