Description
This document links to step-by-step instructions for using the latest reference model Docker* containers to run optimized, open-source deep learning training and inference workloads with the PyTorch* and TensorFlow* frameworks on Intel® Xeon® Scalable processors.
Note: The containers below are tuned to demonstrate best performance with Intel® Extension for PyTorch* and Intel® Optimized TensorFlow* and are not intended for production use.
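For example, once inside one of the PyTorch containers you can confirm the optimized stack is present. A minimal sketch, assuming a container with Python on the path (the image and tag below are illustrative; use the ones from the model's documentation):

```bash
# Start an interactive shell in a PyTorch reference-model container.
# The image and tag are illustrative; use the ones from the model's docs.
docker run -it --rm intel/intel-extension-for-pytorch:latest bash

# Inside the container, verify that PyTorch and the Intel extension import:
python -c "import torch, intel_extension_for_pytorch as ipex; print(torch.__version__, ipex.__version__)"
```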
Use cases
The tables below link to documentation on how to run each use case in a Docker container. The containers were validated on a Linux host.
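Most of the linked instructions follow a common launch pattern: mount the dataset and an output directory into the container, select a precision through an environment variable, and invoke the model's run script. A minimal sketch of that pattern, assuming illustrative image, script, and variable names (the real ones are defined per container in the linked documentation):

```bash
# Common launch pattern (names are illustrative; see each model's docs).
export DATASET_DIR=/path/to/dataset   # host path to the prepared dataset
export OUTPUT_DIR=/path/to/output     # host path for logs and results

docker run --rm \
  --env PRECISION=bf16 \
  --env DATASET_DIR=/workspace/dataset \
  --env OUTPUT_DIR=/workspace/output \
  --volume ${DATASET_DIR}:/workspace/dataset \
  --volume ${OUTPUT_DIR}:/workspace/output \
  <reference-model-image>:<tag> \
  /bin/bash run_model.sh
```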
Generative AI
| Framework | Model | Precisions | Mode | Dataset |
| --- | --- | --- | --- | --- |
| PyTorch | GPT-J | FP32, BF32, BF16, FP16, INT8-FP32 | Inference | LAMBADA |
| PyTorch | Llama 2 7B, 13B | FP32, BF32, BF16, FP16, INT8-FP32 | Inference | LAMBADA |
| PyTorch | ChatGLM | FP32, BF32, BF16, FP16, INT8-FP32 | Inference | LAMBADA |
| PyTorch | LCM | FP32, BF32, BF16, FP16, INT8-FP32, INT8-BF16 | Inference | COCO 2017 |
| PyTorch | Stable Diffusion | FP32, BF32, BF16, FP16, INT8-FP32, INT8-BF16 | Inference | COCO 2017 |
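The generative models above download their weights on first run, so a host directory mounted as the Hugging Face cache keeps those weights across container restarts. A sketch under the same illustrative-naming assumptions as above (`HF_HOME` is the standard Hugging Face cache variable; the image name and dataset layout are placeholders):

```bash
# Reuse downloaded model weights across runs by mounting a host cache.
# Image name, script, and DATASET_DIR layout are illustrative placeholders.
docker run --rm \
  --env PRECISION=bf16 \
  --env HF_HOME=/workspace/hf_cache \
  --env DATASET_DIR=/workspace/dataset \
  --volume /data/hf_cache:/workspace/hf_cache \
  --volume /data/lambada:/workspace/dataset \
  <llama2-7b-inference-image>:<tag> \
  /bin/bash run_model.sh
```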
Image Recognition
| Framework | Model | Precisions | Mode | Dataset |
| --- | --- | --- | --- | --- |
| PyTorch | ResNet 50 | FP32, BF32, BF16 | Training | ImageNet 2012 |
| PyTorch | ResNet 50 | FP32, BF32, BF16, INT8 | Inference | ImageNet 2012 |
| PyTorch | Vision Transformer | FP32, BF32, BF16, INT8-FP32, INT8-BF16 | Inference | ImageNet 2012 |
Object Detection
| Framework | Model | Precisions | Mode | Dataset |
| --- | --- | --- | --- | --- |
| PyTorch | Mask R-CNN | FP32, BF32, BF16 | Training | COCO 2017 |
| PyTorch | Mask R-CNN | FP32, BF32, BF16 | Inference | COCO 2017 |
| PyTorch | SSD-ResNet34 | FP32, BF32, BF16 | Training | COCO 2017 |
| PyTorch | SSD-ResNet34 | FP32, BF32, BF16, INT8 | Inference | COCO 2017 |
| PyTorch | YOLO v7 | FP32, BF32, BF16, FP16, INT8 | Inference | COCO 2017 |
Language Modeling
| Framework | Model | Precisions | Mode | Dataset |
| --- | --- | --- | --- | --- |
| PyTorch | BERT large | FP32, BF32, BF16, INT8 | Inference | SQuAD1.0 |
| PyTorch | RNN-T | FP32, BF32, BF16, INT8 | Inference | LibriSpeech |
| PyTorch | RNN-T | FP32, BF32, FP16 | Training | LibriSpeech |
| PyTorch | DistilBERT base | FP32, BF32, BF16, INT8-BF16, INT8-BF32 | Inference | SST-2 |
| TensorFlow | BERT large | FP32, BF16 | Training | SQuAD and MRPC |
| TensorFlow | BERT large | FP32, BF32, BF16, INT8 | Inference | SQuAD |
Recommendation
| Framework | Model | Precisions | Mode | Dataset |
| --- | --- | --- | --- | --- |
| PyTorch | DLRM | FP32, BF32, BF16 | Training | Criteo Terabyte |
| PyTorch | DLRM | FP32, BF32, BF16, INT8 | Inference | Criteo Terabyte |
| PyTorch | DLRM v2 | FP32, BF16, FP16, INT8 | Inference | Criteo Terabyte |
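Because precision is selected through a single variable in this launch pattern, sweeping the precisions listed in a table is a short loop. For example, for DLRM v2 inference on Criteo Terabyte (the image, script, and variable names are again illustrative placeholders):

```bash
# Illustrative sweep over the DLRM v2 inference precisions from the table.
# Match the PRECISION spellings and image tag to the model's documentation.
for precision in fp32 bf16 fp16 int8; do
  docker run --rm \
    --env PRECISION=${precision} \
    --env DATASET_DIR=/workspace/criteo \
    --volume /data/criteo_terabyte:/workspace/criteo \
    <dlrm-v2-inference-image>:<tag> \
    /bin/bash run_model.sh
done
```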
Documentation and Sources
| Get Started | Code Sources |
| --- | --- |
| Main GitHub* | PyTorch Dockerfiles |
| Release Notes | TensorFlow Dockerfiles |
License Agreement
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.