Overview
Develop a solution using an end-to-end computer vision workflow: training on Habana® Gaudi®, post-training quantization with the Post-Training Optimization Tool (POT), and inference with the OpenVINO™ toolkit.
Select Configure & Download to download the reference implementation and the software listed below.
- Time to Complete: 25-30 minutes
- Programming Language: Python*
- Available Software: OpenVINO™ toolkit, Docker*, Helm*, Kubernetes*
Target System Requirements
Training on Amazon Web Services* (AWS) EC2 DL1 Instance
- Ubuntu* 20.04
- Intel® Xeon® Platinum 8275CL processor @ 3.00 GHz (96 vCPUs)
- HPUs: 8
- Usable RAM: 784 GB
- Disk Size: 500 GB
Inference
Device configuration for the cluster. A single-node Kubernetes* cluster should have a configuration comparable to the following:
- Ubuntu 20.04
- Intel® Xeon® Platinum 8375C processor @ 2.90 GHz (8 vCPUs)
- Usable RAM: 32 GB
- Disk Size: 50 GB
How It Works
The repository contains the model scripts and a recipe for training a U-Net 2D model to state-of-the-art accuracy on image segmentation with the Medical Decathlon dataset, followed by inference with the OpenVINO™ toolkit on Intel® hardware.
This AI workflow demonstrates the following:
- U-Net 2D model training on Amazon EC2 DL1 instances, which use the Gaudi® processor from Habana® Labs (an Intel® company).
- U-Net 2D model optimization and inference using the OpenVINO™ toolkit on Amazon M6i Intel® CPU instances powered by 3rd Generation Intel® Xeon® Scalable processors (code-named Ice Lake).
This reference implementation provides a generic AWS* cloud-based AI workflow that showcases U-Net 2D model-based image segmentation with the Medical Decathlon dataset.
The reference implementation can be run with Docker* containers or with a Helm* chart on Kubernetes*.
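The bridge between the two stages is model conversion: the trained model is exported to OpenVINO™ Intermediate Representation (IR) before quantization and inference. Below is a minimal sketch of that step, assuming the training run produced a TensorFlow SavedModel; the directory name "unet2d_saved_model" and the 128x128, 4-channel input shape are illustrative assumptions, not values from the reference implementation.

```python
# Hedged sketch: paths and the input shape below are assumptions.
from openvino.tools.mo import convert_model  # Model Optimizer Python API
from openvino.runtime import serialize

# Convert the TensorFlow SavedModel produced by training into an
# in-memory OpenVINO model (shape: batch, height, width, channels).
ov_model = convert_model(saved_model_dir="unet2d_saved_model",
                         input_shape=[1, 128, 128, 4])

# Write the IR pair (unet2d.xml + unet2d.bin) consumed by POT and inference.
serialize(ov_model, "unet2d.xml")
```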
Get Started
Prerequisites
Follow the steps on GitHub* to install the prerequisites.
Install the Reference Implementation
Select Configure & Download to download the reference implementation.
Train a Model
Follow the steps on GitHub to train a model using the Habana® Gaudi® processor on AWS*.
You can run training with Docker* containers or with a Helm* chart on Kubernetes*.
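For orientation, the Gaudi®-specific portion of the training code is small. The sketch below is an illustration, not the repository's training script: loading the Habana® TensorFlow module registers the HPU device, after which ordinary Keras code trains on Gaudi®. The toy model and random tensors are placeholders for the real U-Net 2D model and Medical Decathlon data.

```python
# Hedged illustration: the model, shapes, and data are placeholders;
# load_habana_module() is the only Gaudi-specific step.
import tensorflow as tf
from habana_frameworks.tensorflow import load_habana_module

load_habana_module()  # registers the HPU device with TensorFlow

# Toy stand-in for the U-Net 2D model built by the recipe.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(128, 128, 4)),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Random tensors stand in for Medical Decathlon images and masks.
images = tf.random.uniform((8, 128, 128, 4))
masks = tf.random.uniform((8, 128, 128, 1))
model.fit(images, masks, epochs=1)  # ops are placed on the HPU automatically
```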
Run Optimization and Inference on a Model
Follow the steps on GitHub to perform optimization and inference using the OpenVINO™ toolkit.
You can run inference with Docker* containers or with a Helm* chart on Kubernetes*.
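The two steps the GitHub instructions cover are post-training quantization with POT and execution with the OpenVINO™ runtime. The sketch below shows both using the POT Python API (openvino-dev 2022.x), assuming the IR pair produced earlier; the file names, input shape, calibration data, and subset size are illustrative assumptions.

```python
# Hedged sketch: paths, shapes, and stat_subset_size are assumptions.
import numpy as np
from openvino.runtime import Core
from openvino.tools.pot import (DataLoader, IEEngine, create_pipeline,
                                load_model, save_model)

class CalibrationLoader(DataLoader):
    """Feeds unlabeled images to POT; DefaultQuantization needs no labels."""
    def __init__(self, images):
        self.images = images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        return self.images[index], None  # (data, annotation)

# Random stand-in for calibration images; the shape must match the model input.
calib = np.random.rand(32, 1, 128, 128, 4).astype(np.float32)

model = load_model({"model_name": "unet2d",
                    "model": "unet2d.xml", "weights": "unet2d.bin"})
engine = IEEngine(config={"device": "CPU"},
                  data_loader=CalibrationLoader(calib))
algorithms = [{"name": "DefaultQuantization",
               "params": {"target_device": "CPU", "stat_subset_size": 32}}]
quantized = create_pipeline(algorithms, engine).run(model)
save_model(quantized, save_path="optimized")  # writes the INT8 IR pair

# Run the quantized model on the Xeon CPU with the OpenVINO runtime.
core = Core()
compiled = core.compile_model("optimized/unet2d.xml", "CPU")
result = compiled([calib[0]])  # dict-like map of output tensors
```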
Learn More
To continue learning, see the following guides and software resources:
- HabanaAI/Model-References repository
- Model used: NVIDIA* nnU-Net repository
- IntelAI/unet repository
- Habana Model Performance Data page
- Habana Developer Resources
- OpenVINO™ documentation
- OpenVINO™ toolkit download