Optimize an R-FCN FP32 Inference Model Package with TensorFlow*

Published: 10/23/2020  

Last Updated: 06/15/2022

Download Command

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/rfcn-fp32-inference.tar.gz

Description

This document has instructions for running R-FCN FP32 inference using Intel® Optimization for TensorFlow*.

The COCO validation dataset is used in these R-FCN quick start scripts. The inference quick start script uses raw images, while the accuracy quick start script requires the dataset to be converted into the TensorFlow* records format. See the COCO dataset documentation for instructions on downloading and preprocessing the COCO validation dataset.
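
For reference, the download and conversion typically look like the sketch below. The image and annotation URLs are the standard COCO 2017 ones; the empty_dir and empty.json placeholders (which the conversion script expects for the unused train/test splits) and the tf_records output directory are illustrative names, and the conversion script comes from the Model Garden clone described in the Bare Metal section below.

# Download the COCO 2017 validation images and annotations
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip

# Placeholders for the unused train/test arguments (illustrative names)
mkdir empty_dir
echo '{ "images": {}, "categories": {} }' > annotations/empty.json

# Convert the validation split to TF records (run after the Model Garden
# setup in the Bare Metal section; tf_records is an assumed output path)
python tensorflow-models/research/object_detection/dataset_tools/create_coco_tf_record.py \
    --logtostderr \
    --train_image_dir=empty_dir \
    --val_image_dir=val2017 \
    --test_image_dir=empty_dir \
    --train_annotations_file=annotations/empty.json \
    --val_annotations_file=annotations/instances_val2017.json \
    --testdev_annotations_file=annotations/empty.json \
    --output_dir=tf_records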

Quick Start Scripts

Script name      Description
fp32_inference   Runs inference on a directory of raw images for 500 steps and outputs performance metrics.
fp32_accuracy    Processes the TensorFlow* records to run inference and check accuracy on the results.

Bare Metal

To run on bare metal, the general model zoo prerequisites must be installed in your environment. For more information, see the documentation on prerequisites in the TensorFlow models repository.

Download and untar the R-FCN FP32 inference model package:

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/rfcn-fp32-inference.tar.gz
tar -xvf rfcn-fp32-inference.tar.gz
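
The Model Garden setup below changes directories, so it can help to record where the package was extracted. The variable name here is just an illustrative convention, not something the quick start scripts read:

# Remember the package location so you can return to it later (illustrative)
MODEL_PACKAGE_DIR=$(pwd)/rfcn-fp32-inference

You can then run cd $MODEL_PACKAGE_DIR to get back to the package before launching a quick start script.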

In addition to the general model zoo requirements, R-FCN uses the object detection code from the TensorFlow Model Garden. Clone this repository, check out the SHA specified below, and apply the patch from the R-FCN FP32 inference model package so that it runs with TF2.

git clone https://github.com/tensorflow/models.git tensorflow-models
cd tensorflow-models
git checkout 6c21084503b27a9ab118e1db25f79957d5ef540b
git apply ../rfcn-fp32-inference/models/object_detection/tensorflow/rfcn/inference/tf-2.0.patch

You must also install the dependencies and run the protobuf compilation described in the object detection installation instructions from the TensorFlow Model Garden repository.
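
As a rough sketch, the protobuf compilation and path setup usually look like the following when run from the Model Garden clone. The dependency list below is a representative subset only; see the object detection installation instructions for the authoritative list.

cd tensorflow-models/research

# Compile the object detection protocol buffers
protoc object_detection/protos/*.proto --python_out=.

# Install common object detection dependencies (representative subset)
pip install Cython contextlib2 pillow lxml matplotlib pycocotools

# Make the object detection code importable
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/slim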

Once your environment is set up, navigate back to the directory that contains the R-FCN FP32 inference model package, set environment variables pointing to your dataset and output directories, and then run a quick start script.

To run inference with performance metrics:

export DATASET_DIR=<path to the coco val2017 directory>
export OUTPUT_DIR=<directory where log files will be written>

quickstart/fp32_inference.sh

To get accuracy metrics:

export DATASET_DIR=<path to the COCO validation TF record directory>
export OUTPUT_DIR=<directory where log files will be written>

quickstart/fp32_accuracy.sh

Documentation and Sources

Get Started
Main GitHub*
Readme
Release Notes
Get Started Guide

Code Sources
Report Issue


License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.


Related Containers and Solutions

R-FCN FP32 Inference TensorFlow* Container


Product and Performance Information

1. Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.