FPGA AI Suite: PCIe-based Design Example User Guide

ID 768977
Date 3/29/2024

5.7.1. Example Running the Object Detection Demonstration Application

You must download the following items:

  • yolo-v3-tf from the OpenVINO™ Model Downloader. The download command should look similar to the following:
    python3 <path_to_installation>/open_model_zoo/omz_downloader \
       --name yolo-v3-tf \
       --output_dir <download_dir>
    From the downloaded model, generate the .bin/.xml files:
    python3 <path_to_installation>/open_model_zoo/omz_converter \
       --name yolo-v3-tf \
       --download_dir <download_dir> \
       --output_dir <output_dir> \
       --mo <path_to_installation>/model_optimizer/mo.py

    Model Optimizer generates an FP32 version and an FP16 version. Use the FP32 version.
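    Before moving on, it can help to confirm that the converter produced the FP32 IR pair. The helper below is a sketch, not part of the FPGA AI Suite; it assumes the typical omz_converter layout of <output_dir>/public/<model_name>/FP32 and demonstrates the check against a throwaway directory standing in for <output_dir>:

    ```shell
    #!/bin/sh
    # Sketch: verify that an FP32 IR pair (.xml/.bin) exists for a model.
    # The <output_dir>/public/<model>/FP32 layout is an assumption based on
    # the usual omz_converter output; adjust it to your installation.
    check_ir() {
        dir="$1/public/$2/FP32"
        if [ -f "$dir/$2.xml" ] && [ -f "$dir/$2.bin" ]; then
            echo "FP32 IR found: $dir/$2.xml"
            return 0
        fi
        echo "FP32 IR missing in $dir" >&2
        return 1
    }

    # Demonstration against a temporary directory standing in for <output_dir>.
    tmp=$(mktemp -d)
    mkdir -p "$tmp/public/yolo-v3-tf/FP32"
    touch "$tmp/public/yolo-v3-tf/FP32/yolo-v3-tf.xml" \
          "$tmp/public/yolo-v3-tf/FP32/yolo-v3-tf.bin"
    check_ir "$tmp" "yolo-v3-tf"
    rm -rf "$tmp"
    ```

    In a real setup, replace the temporary directory with your actual <output_dir> and drop the mkdir/touch lines.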

To run the object detection demonstration application, follow these steps:
  1. Ensure that the demonstration applications have been built with the following command:
    build_runtime.sh -build-demo
  2. Ensure that the FPGA has been configured with the Generic bitstream.
  3. Run the following command:
    ./runtime/build_Release/object_detection_demo/object_detection_demo \
       -d HETERO:FPGA,CPU \
       -i <path_to_video>/input_video.mp4 \
       -m <path_to_model>/yolo_v3.xml \
       -arch_file=$COREDLA_ROOT/example_architectures/A10_Generic.arch \
       -plugins_xml_file $COREDLA_ROOT/runtime/plugins.xml \
       -t 0.65 \
       -at yolo
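
    The invocation above can be wrapped in a small script so the file paths are supplied once per machine. This is an illustrative sketch, not part of the design example; VIDEO and MODEL are hypothetical variables, and the default values below are placeholders you must replace with your own video, model, and $COREDLA_ROOT paths:

    ```shell
    #!/bin/sh
    # Sketch: assemble the demo command line from variables. The flag values
    # mirror the documented invocation; only the paths are parameterized.
    demo_cmd() {
        echo "./runtime/build_Release/object_detection_demo/object_detection_demo" \
             "-d HETERO:FPGA,CPU" \
             "-i $VIDEO" \
             "-m $MODEL" \
             "-arch_file=$COREDLA_ROOT/example_architectures/A10_Generic.arch" \
             "-plugins_xml_file $COREDLA_ROOT/runtime/plugins.xml" \
             "-t 0.65" \
             "-at yolo"
    }

    # Placeholder defaults; override them in your environment.
    VIDEO=${VIDEO:-input_video.mp4}
    MODEL=${MODEL:-yolo-v3-tf.xml}
    COREDLA_ROOT=${COREDLA_ROOT:-/opt/coredla}

    echo "Would run: $(demo_cmd)"
    # Uncomment once the FPGA is programmed with the Generic bitstream:
    # eval "$(demo_cmd)"
    ```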