This document has instructions for running ResNet34 SSD FP32 inference using Intel® Optimization for TensorFlow*.
The ResNet34 SSD accuracy scripts (fp32_accuracy.sh and fp32_accuracy_1200.sh) use the COCO validation dataset in the TF records format. See the COCO dataset document for instructions on downloading and preprocessing the COCO validation dataset.
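The COCO dataset document is the authoritative reference; as a rough sketch, the preparation amounts to downloading the COCO 2017 validation images and annotations and converting them to TF records with the object detection tooling in the TensorFlow models repo. The script path and flags below come from that upstream tooling and are assumptions here, as are the empty placeholder files for the unused train/test splits:

```
# Download the COCO 2017 validation images and annotations
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip val2017.zip
unzip annotations_trainval2017.zip

# The upstream conversion script (requires pycocotools) expects arguments for
# the train/val/test splits, so pass empty placeholders for the unused ones.
mkdir -p empty_dir tf_records
echo '{"images": [], "annotations": [], "categories": []}' > annotations/empty.json
python3 $TF_MODELS_DIR/research/object_detection/dataset_tools/create_coco_tf_record.py \
  --logtostderr \
  --train_image_dir=empty_dir \
  --val_image_dir=val2017 \
  --test_image_dir=empty_dir \
  --train_annotations_file=annotations/empty.json \
  --val_annotations_file=annotations/instances_val2017.json \
  --testdev_annotations_file=annotations/empty.json \
  --output_dir=tf_records
```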
Quick Start Scripts
| Script name | Description |
|-------------|-------------|
| fp32_accuracy.sh | Runs an accuracy test using data in the TF records format with an input size of 300x300. |
| fp32_accuracy_1200.sh | Runs an accuracy test using data in the TF records format with an input size of 1200x1200. |
| fp32_inference.sh | Runs inference with a batch size of 1 using synthetic data with an input size of 300x300. Prints out the time spent per batch and total samples/second. |
| fp32_inference_1200.sh | Runs inference with a batch size of 1 using synthetic data with an input size of 1200x1200. Prints out the time spent per batch and total samples/second. |
| multi_instance_batch_inference_1200.sh | Uses numactl to run inference (batch_size=1) with one instance per socket. Uses synthetic data with an input size of 1200x1200. Waits for all instances to complete, then prints a summarized throughput value (see the launch sketch after this table). |
| multi_instance_online_inference_1200.sh | Uses numactl to run inference (batch_size=1) with 4 cores per instance. Uses synthetic data with an input size of 1200x1200. Waits for all instances to complete, then prints a summarized throughput value. |
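The two multi-instance scripts follow the usual numactl launch pattern: start several pinned copies of the same workload in the background, wait for all of them, then summarize the per-instance logs. Below is a hand-rolled sketch of the per-socket variant; `infer.py` is a hypothetical stand-in for the packaged inference command, and the online variant pins 4-core groups with `--physcpubind` instead of whole sockets:

```
# Sketch only: the packaged scripts do this launch/wait/summarize for you.
sockets=$(lscpu | awk '/^Socket\(s\):/ {print $2}')
for ((node = 0; node < sockets; node++)); do
  # Pin each instance's CPUs and memory allocations to its own NUMA node
  numactl --cpunodebind="$node" --membind="$node" \
    python3 infer.py --batch-size 1 \
    > "$OUTPUT_DIR/instance_${node}.log" 2>&1 &   # infer.py is hypothetical
done
wait   # all instances must finish before throughput can be summarized
```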
To run on bare metal, the following prerequisites must be installed in your environment:
- Python* 3
- GNU Wget
- Intel® Optimization for TensorFlow*
- Horovod* 0.19.1
- NumPy 1.17.4
- Pillow 7.1.0
- TensorFlow Addons 0.11.0
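One plausible way to install the Python prerequisites is with pip, as sketched below. The exact PyPI package names (intel-tensorflow, tensorflow-addons) are an assumption here, and Horovod builds from source, so it also needs a working compiler toolchain:

```
python3 -m pip install intel-tensorflow \
    horovod==0.19.1 \
    numpy==1.17.4 \
    Pillow==7.1.0 \
    tensorflow-addons==0.11.0
```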
The TensorFlow* models and benchmarks repos are used by ResNet34 SSD FP32 inference. Clone both repos at the Git SHAs specified below, and set the TF_MODELS_DIR environment variable to point to the directory where the models repo was cloned.
```
git clone --single-branch https://github.com/tensorflow/models.git tf_models
git clone --single-branch https://github.com/tensorflow/benchmarks.git ssd-resnet-benchmarks

cd tf_models
export TF_MODELS_DIR=$(pwd)
git checkout f505cecde2d8ebf6fe15f40fb8bc350b2b1ed5dc

cd ../ssd-resnet-benchmarks
git checkout 509b9d288937216ca7069f31cfb22aaa7db6a4a7
cd ..
```
After installing the prerequisites and cloning the models and benchmarks repos, download and untar the model package. Set environment variables for the path to your
DATASET_DIR (for accuracy testing only -- inference benchmarking uses synthetic data) and an
OUTPUT_DIR where log files will be written, then run a quick start script.
```
DATASET_DIR=<path to the dataset (for accuracy testing only)>
OUTPUT_DIR=<directory where log files will be written>

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/ssd-resnet34-fp32-inference.tar.gz
tar -xzf ssd-resnet34-fp32-inference.tar.gz
cd ssd-resnet34-fp32-inference
quickstart/<script name>.sh
```
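For example, a synthetic-data benchmark run, which needs no DATASET_DIR, might look like this (the paths are illustrative):

```
export TF_MODELS_DIR=$HOME/tf_models
export OUTPUT_DIR=$HOME/ssd-resnet34-logs
mkdir -p "$OUTPUT_DIR"
cd ssd-resnet34-fp32-inference
quickstart/fp32_inference.sh
```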
Documentation and Sources
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.