# Optimize a ResNet101* Int8 Inference Model Package with TensorFlow*

Published: 12/09/2020

Last Updated: 06/15/2022

Download the model package:

```
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/resnet101-int8-inference.tar.gz
```

## Description

This document has instructions for running ResNet101* int8 inference using Intel® Optimization for TensorFlow*.

Download and preprocess the ImageNet dataset using the instructions here. After running the conversion script, you should have a directory with the ImageNet dataset in the TF records format.

Set the DATASET_DIR environment variable to point to this directory when running ResNet101*.
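
As a quick sanity check, you can confirm the directory contains the converted records before running the model. This is a sketch only; the example path is an assumption, and your shard file names may differ depending on the conversion settings:

```
# Hypothetical path; point this at your converted ImageNet directory.
export DATASET_DIR=/home/user/datasets/imagenet_tfrecords

# A converted ImageNet dataset typically contains sharded TF records files,
# e.g. validation-00000-of-00128. List a few to confirm the conversion ran.
ls "$DATASET_DIR" | head
```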

#### Quick Start Scripts

| Script name | Description |
| --- | --- |
| `int8_online_inference` | Runs online inference (batch_size=1). |
| `int8_batch_inference` | Runs batch inference (batch_size=128). |
| `int8_accuracy` | Measures the model accuracy (batch_size=100). |
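
The scripts read their configuration from the DATASET_DIR and OUTPUT_DIR environment variables described in the Bare Metal section below. As a minimal sketch, assuming those variables are already exported and the package has been extracted, the three scripts can be run back to back:

```
# Sketch: run all three quick start scripts in sequence from inside the
# extracted resnet101-int8-inference directory. Assumes DATASET_DIR and
# OUTPUT_DIR are already exported (see the Bare Metal section below).
for script in int8_online_inference int8_batch_inference int8_accuracy; do
    ./quickstart/${script}.sh
done
```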

#### Bare Metal

To run on bare metal, the following prerequisites must be installed in your environment: Python 3, Intel® Optimization for TensorFlow* (intel-tensorflow), and numactl.

Download and untar the model package. Set environment variables for the path to your DATASET_DIR and an OUTPUT_DIR where log files will be written, then run a quick start script.

```
export DATASET_DIR=<path to the dataset>
export OUTPUT_DIR=<directory where log files will be written>

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/resnet101-int8-inference.tar.gz
tar -xzf resnet101-int8-inference.tar.gz
cd resnet101-int8-inference

./quickstart/<script name>.sh
```
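
For example, to measure the model's int8 accuracy (batch_size=100, per the table above) end to end. The paths below are placeholders you would replace with your own locations:

```
# Hypothetical paths; substitute your own dataset and log locations.
export DATASET_DIR=/home/user/datasets/imagenet_tfrecords
export OUTPUT_DIR=/home/user/resnet101_int8_logs
mkdir -p "$OUTPUT_DIR"

# Runs the accuracy quick start script from the extracted package directory.
./quickstart/int8_accuracy.sh
```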

## Documentation and Sources

- Code Sources
- Report Issue

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.

## Related Containers and Solutions

ResNet101* Int8 Inference TensorFlow* Container


#### Product and Performance Information

¹ Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.