Use Jupyter* Notebooks to Compare TensorFlow* Performance

ID 679256
Updated 11/18/2020
Version Latest
Public

Pull Command

docker pull intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.2.0-jupyter-performance

Description

This container provides Jupyter* Notebooks and pre-installed environments for analyzing the performance benefit of Intel® Optimization for TensorFlow* with oneAPI Deep Neural Network Library (oneDNN). There are two analysis types:

  • The "Stock vs. Intel® Optimizations for TensorFlow*" analysis shows the performance benefit of Intel Optimization for TensorFlow over stock TensorFlow
  • The "FP32 vs. BFloat16 vs. Int8" analysis shows the performance differences among data types on Intel Optimization for TensorFlow

Stock vs. Intel® Optimizations for TensorFlow*:
  1. benchmark_perf_comparison: Compare performance between stock and Intel Optimization for TensorFlow among different models
  2. benchmark_perf_timeline_analysis: Analyze the performance benefit from oneDNN among different layers by using the TensorFlow* Timeline

FP32 vs. BFloat16 vs. Int8:
  1. benchmark_data_types_perf_comparison: Compare Model Zoo for Intel® Architecture benchmark performance among different data types on Intel Optimization for TensorFlow
  2. benchmark_data_types_perf_timeline_analysis: Analyze the BFloat16/Int8 data type performance benefit from oneDNN among different layers by using the TensorFlow* Timeline

How to Run the Jupyter* Notebooks

  1. Launch the container with:

    docker run \
        -d \
        -p 8888:8888 \
        --env LISTEN_IP=0.0.0.0 \
        --privileged \
        intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.2.0-jupyter-performance

    Most of the notebook functionality works without a real dataset (by using synthetic data), but if you want to mount a dataset, use an option like:

    -v <host path to dataset>:<container path to dataset>
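
    For example, here is a sketch of the full run command with a dataset mounted. Both paths are hypothetical placeholders; substitute your own host and container paths:

    # Hypothetical paths: replace /home/user/datasets and /tmp/datasets with your own
    docker run \
        -d \
        -p 8888:8888 \
        --env LISTEN_IP=0.0.0.0 \
        --privileged \
        -v /home/user/datasets:/tmp/datasets \
        intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.2.0-jupyter-performance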

    If your machine is behind a proxy, you will need to pass proxy arguments to the run command. For example:

    --env http_proxy="http://proxy.url:proxy_port" --env https_proxy="https://proxy.url:proxy_port"
  2. Display the container logs with docker logs, copy the Jupyter service URL, and then paste it into a browser window.
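
    For example (a sketch; <container ID> is whatever docker ps reports):

    # Look up the running container's ID, then print its logs
    docker ps
    docker logs <container ID>

    The log output typically includes a tokenized URL of the form http://127.0.0.1:8888/?token=..., which is the address to open in the browser.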

  3. Click the first notebook file (benchmark_perf_comparison.ipynb or benchmark_data_types_perf_comparison.ipynb) for your chosen analysis type.

    Note: For the "Stock vs. Intel Optimizations for TensorFlow" analysis type, change the Jupyter* Notebook kernel to either "stock-tensorflow" or "intel-tensorflow".

    Note: For the "FP32 vs. BFloat16 vs. Int8" analysis type, select "intel-tensorflow" as the Jupyter Notebook kernel.

  4. Run the notebook cells in order.

    Note: For the "Stock vs. Intel Optimizations for TensorFlow" analysis type, to compare stock and Intel Optimization for TensorFlow results you must run all cells before the comparison section twice: once with the stock-tensorflow kernel and once with the intel-tensorflow kernel.

  5. Click the second notebook file (benchmark_perf_timeline_analysis.ipynb or benchmark_data_types_perf_timeline_analysis.ipynb) for your chosen analysis type.

  6. Run the notebook cells in order to get the analysis result.

    Note: The second notebook runs on either Jupyter kernel; no specific kernel is required for this detailed analysis.
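
When you are finished, you can stop the detached container with standard Docker commands (a sketch; the container ID comes from docker ps):

docker ps                   # look up the container ID
docker stop <container ID>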

Documentation and Sources

Get Started
Docker* Repository
Main GitHub*
Readme
Release Notes
Get Started Guide

Code Sources
Dockerfile
Report Issue


License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.


Related Containers and Solutions

BERT Large FP32 Inference TensorFlow* Container
ResNet50 FP32 Inference TensorFlow* Container
ResNet50 Int8 Inference TensorFlow* Container
ResNet50v1.5 FP32 Inference TensorFlow* Container
ResNet50v1.5 Int8 Inference TensorFlow* Container
ResNet50v1.5 BFloat16 Inference TensorFlow* Container
ResNet50v1.5 FP32 Training TensorFlow* Container