FPGA AI Suite Handbook

ID 863373
Date 11/21/2025
Public

5.2.1.7.1. Example: Running the Object Detection Demonstration Application in the PCIe* Design Example

You must download the following item:

  • yolo-v3-tf from the OpenVINO™ Model Downloader. The command should look similar to the following:
    python3 <path_to_installation>/open_model_zoo/omz_downloader \
       --name yolo-v3-tf \
       --output_dir <download_dir>
    From the downloaded model, generate the .bin/.xml files:
    python3 <path_to_installation>/open_model_zoo/omz_converter \
       --name yolo-v3-tf \
       --download_dir <download_dir> \
       --output_dir <output_dir> \
       --mo <path_to_installation>/model_optimizer/mo.py

    Model Converter generates an FP32 version and an FP16 version. Use the FP32 version.
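The download and convert steps above can also be scripted. The following Python sketch assembles the same two command lines for use with subprocess; the installation and directory paths (OPEN_MODEL_ZOO, DOWNLOAD_DIR, OUTPUT_DIR, and the Model Optimizer path) are placeholder assumptions that you must replace with the paths from your own installation:

```python
# Sketch: build the Model Downloader / Model Converter command lines.
# All paths below are assumptions -- substitute your actual installation paths.
OPEN_MODEL_ZOO = "/opt/intel/open_model_zoo"          # assumed install location
MODEL_OPTIMIZER = "/opt/intel/model_optimizer/mo.py"  # assumed mo.py location
DOWNLOAD_DIR = "./models"
OUTPUT_DIR = "./ir"
MODEL = "yolo-v3-tf"

def downloader_cmd():
    """Command line for the Model Downloader step."""
    return ["python3", f"{OPEN_MODEL_ZOO}/omz_downloader",
            "--name", MODEL,
            "--output_dir", DOWNLOAD_DIR]

def converter_cmd():
    """Command line for the Model Converter step.

    The converter generates both an FP32 and an FP16 IR; the demonstration
    flow in this section uses the FP32 version.
    """
    return ["python3", f"{OPEN_MODEL_ZOO}/omz_converter",
            "--name", MODEL,
            "--download_dir", DOWNLOAD_DIR,
            "--output_dir", OUTPUT_DIR,
            "--mo", MODEL_OPTIMIZER]

# To execute either step, pass the list to subprocess.run(cmd, check=True).
```

Building the argument lists first (rather than a single shell string) avoids quoting problems when paths contain spaces.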

To run the object detection demonstration application, complete the following steps:
  1. Ensure that the demonstration applications have been built with the following command:
    build_runtime.sh -target_de10_agilex -build-demo
  2. Ensure that the FPGA has been configured with the Generic bitstream.
  3. Run the following command:
    ./runtime/build_Release/object_detection_demo/object_detection_demo \
       -d HETERO:FPGA,CPU \
       -i <path_to_video>/input_video.mp4 \
       -m <path_to_model>/yolo_v3.xml \
       -arch_file=$COREDLA_ROOT/example_architectures/AGX7_Generic.arch \
       -plugins $COREDLA_ROOT/runtime/plugins.xml \
       -t 0.65 \
       -at yolo
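The invocation in step 3 can be wrapped in a small helper when you run it repeatedly with different videos or models. The Python sketch below rebuilds the same argument list; the fallback COREDLA_ROOT default is a placeholder assumption, and the binary path mirrors the relative path shown above:

```python
import os

def demo_cmd(video, model_xml, threshold=0.65):
    """Assemble the object_detection_demo invocation from step 3.

    video      -- path to the input video file
    model_xml  -- path to the FP32 yolo-v3 IR (.xml)
    threshold  -- detection confidence threshold (the -t option)
    """
    # "/opt/coredla" is an assumed fallback; normally COREDLA_ROOT is set
    # by the FPGA AI Suite environment script.
    coredla = os.environ.get("COREDLA_ROOT", "/opt/coredla")
    return ["./runtime/build_Release/object_detection_demo/object_detection_demo",
            "-d", "HETERO:FPGA,CPU",
            "-i", video,
            "-m", model_xml,
            f"-arch_file={coredla}/example_architectures/AGX7_Generic.arch",
            "-plugins", f"{coredla}/runtime/plugins.xml",
            "-t", str(threshold),
            "-at", "yolo"]

# To execute: subprocess.run(demo_cmd("input_video.mp4", "yolo_v3.xml"), check=True)
```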
Tip: High-resolution video input, such as input from an HD camera, imposes considerable decoding overhead on the inference engine and can reduce system throughput. Use the -input_resolution=<width>x<height> option of the demonstration application to lower the input resolution to a level that balances video quality with system performance.
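When choosing a value for -input_resolution, it is usually convenient to downscale to a target width while preserving the source aspect ratio. The helper below is a hypothetical sketch of that calculation; it is not part of the demonstration application:

```python
def scaled_resolution(src_w, src_h, max_w):
    """Return a "WIDTHxHEIGHT" string no wider than max_w, preserving
    the source aspect ratio, for use with -input_resolution."""
    if src_w <= max_w:
        return f"{src_w}x{src_h}"   # already small enough; keep as-is
    scaled_h = round(src_h * max_w / src_w)
    return f"{max_w}x{scaled_h}"

# Example: downscale 1080p input to a 1280-pixel-wide frame.
print(scaled_resolution(1920, 1080, 1280))  # → 1280x720
```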