Intel® FPGA AI Suite: Getting Started Guide

ID 768970
Date 4/05/2023
Public



6.12.2. Preparing a COCO Validation Dataset and Annotations

Use the publicly available COCO 2017 validation images as input to the model and the COCO 2017 annotations as the ground-truth.

You can download the images from the following URL: http://images.cocodataset.org/zips/val2017.zip.

You can download the annotations from the following URL: http://images.cocodataset.org/annotations/annotations_trainval2017.zip.

  1. Build the runtime with the following commands (add the -de10_agilex argument to build_runtime.sh if targeting the Terasic* DE10-Agilex Development Board):
    cd $COREDLA_WORK/runtime
    rm -rf build_Release
    ./build_runtime.sh
  2. Download and extract both .zip files into the coco-images directory:
    cd $COREDLA_WORK/runtime
    mkdir coco-images
    cd coco-images
    wget http://images.cocodataset.org/zips/val2017.zip
    wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    unzip annotations_trainval2017.zip
    unzip val2017.zip
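Before moving on, you can confirm that both archives extracted into the expected layout. The following sketch (not part of the FPGA AI Suite tooling; the helper name and the expected counts are illustrative) checks that the val2017 image directory and the instances annotation file are both present and reports how many images each contains:

```python
import json
from pathlib import Path

def check_coco_layout(root):
    """Return (jpg_count, annotated_image_count) for an extracted COCO
    2017 validation layout under `root`.

    Expects root/val2017/*.jpg and root/annotations/instances_val2017.json,
    which is the layout produced by unzipping both archives in step 2.
    """
    # Count the extracted validation images.
    images = list(Path(root, "val2017").glob("*.jpg"))
    # Count the images registered in the instances annotation file.
    with open(Path(root, "annotations", "instances_val2017.json")) as f:
        meta = json.load(f)
    return len(images), len(meta["images"])
```

For a complete COCO 2017 validation download, both counts should match (the set contains 5000 images); a mismatch usually indicates an incomplete download or extraction.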
  3. The dla_benchmark application accepts only plain-text ground-truth files, so use the convert_annotations.py script to set up the groundtruth directory as follows:
    cd $COREDLA_WORK/runtime
    mkdir groundtruth
    python3 ../dla_benchmark/convert_annotations.py annotations/instances_val2017.json \
      groundtruth
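The conversion performed by convert_annotations.py follows a common pattern: read the COCO instances JSON and emit one plain-text ground-truth file per image. The sketch below illustrates that pattern only; the function name and the "category x y w h" line format are assumptions for illustration, and the actual output format is defined by the convert_annotations.py script shipped with the runtime:

```python
import json
from pathlib import Path

def coco_json_to_text(annotation_file, out_dir):
    """Illustrative converter: one text file per image, one
    '<category_id> <x> <y> <w> <h>' line per annotation.
    The real convert_annotations.py output format may differ."""
    ann = json.loads(Path(annotation_file).read_text())
    # Map COCO image ids to filename stems (e.g. '000000000139').
    names = {img["id"]: Path(img["file_name"]).stem for img in ann["images"]}
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Group annotation lines by image.
    lines = {}
    for a in ann["annotations"]:
        x, y, w, h = a["bbox"]
        lines.setdefault(names[a["image_id"]], []).append(
            f'{a["category_id"]} {x} {y} {w} {h}')
    # Write one ground-truth text file per annotated image.
    for stem, rows in lines.items():
        (out / f"{stem}.txt").write_text("\n".join(rows) + "\n")
```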