Developer Tools and Software for Intel® Data Center GPU Max Series
Drive breakthrough acceleration for HPC and AI workloads with the combined power of Intel® Data Center GPU Max Series and Intel® Xeon® Scalable processors—powered by oneAPI and Intel® AI developer tools.
Unleash the Power of Intel Data Center GPU Max Series through Software
Convenient Software Suites for AI and HPC
Accelerate AI and HPC innovation with Intel's portfolio of compilers, libraries, and tools. Intel provides the software you need to solve the world's most demanding technical challenges.
The Intel® oneAPI Base Toolkit is a starting point for heterogeneous development across CPUs, GPUs, and FPGAs. It is open source, based on open standards, and features an industry-leading C++ compiler that implements SYCL*, an evolution of C++ for heterogeneous computing. A range of performance libraries provide portable acceleration. Enhanced profiling, design assistance, and debug tools are also included.
AI Tools add components for data scientists and AI developers, including optimizations that let popular AI frameworks run training and inference on Intel Data Center GPU Max Series.
The Intel® HPC Toolkit delivers what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. Use it to build code with Intel C++ and Fortran compilers, scale with Intel® MPI library, and analyze MPI application behavior.
Get Started
GPU drivers must be installed before the toolkits can be used on Intel Data Center GPU Max Series:
- Linux
- Windows: not supported
Intel® oneAPI Base Toolkit
Use this core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.
Accelerate HPC with Intel® HPC Toolkit
- Optimize code and tune performance with Intel® Fortran, C++, and SYCL compilers, plus oneAPI libraries and analysis and porting tools.
- oneAPI compilers activate Intel® Xe Matrix Extensions (Intel® XMX) for acceleration.
- Intel® MPI Library activates Intel® Xe Link for faster direct GPU-to-GPU communications.
Boost Deep Learning Training and Inference with AI Tools
- Intel® oneAPI Deep Neural Network Library (oneDNN) in the Intel oneAPI Base Toolkit uses Intel XMX to accelerate AI training and inference.
- Streamline AI visual inferencing and deploy quickly using the Intel® Distribution of OpenVINO™ toolkit.
- Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* accelerate use of popular deep learning frameworks for Intel CPUs and GPUs.
Create Multiarchitecture Code Efficiently with Code Migration Tools
Migrate CUDA* code to C++ with SYCL for easy portability across multiple vendors’ architectures, including Intel® Data Center GPUs. The Intel® DPC++ Compatibility Tool, based on open source SYCLomatic, automates most of the process.
More Resources
AI Inference and Training Workflows
Intel Data Center GPU Max Series is ideal for AI inference and training workflows. AI Tools provide optimized extensions for AI frameworks such as TensorFlow* and PyTorch*. Optimize and deploy AI inference with the Intel® Distribution of OpenVINO™ toolkit.
Get Started
The following Linux containers are part of the Intel® AI Reference Models project. Each one quickly replicates the complete software environment that demonstrates the best-known performance for its target model and dataset combination.
PyTorch Model Containers
- ResNet* 50 v1.5 int8 Inference (ImageNet 2012 dataset)
- ResNet 50 v1.5 Bfloat16 Training (ImageNet 2012 dataset)
- (Stanford Question Answering [SQuAD] dataset)
- (MLCommons dataset)
TensorFlow Model Containers
- ResNet 50 v1.5 int8, FP16, and FP32 Inference (ImageNet 2012 dataset)
- ResNet 50 v1.5 Bfloat16 Training (ImageNet 2012 dataset)
- BERT Large FP16 and FP32 Inference (SQuAD dataset)
- (Synthetic dataset)
Additional Video and Coding Tutorials (Not Containerized)
Introduction to Intel Extension for PyTorch*
Large Language Models (LLMs)
Intel® Extension for PyTorch* Large Language Model (LLM) Feature Get Started
Hugging Face Transformers
A broad set of more than 85 Hugging Face transformer training and inference models
Hugging Face Transformer Models
Install and Build Intel XPU Back End for NVIDIA Triton* Inference Server
High-Performance Computing
Intel Data Center GPU Max Series is built for high-performance computing. Intel® HPC Toolkit is an add-on to the Intel® oneAPI Base Toolkit. Both work together to deliver what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization.
Get Started
A wide variety of HPC applications and open source projects are tested on Intel Data Center GPU Max Series. Many are already optimized, and more optimizations are becoming available. Intel's combination of compilers, optimized libraries, porting tools, and contributions to open source projects helps you to quickly start your scientific discoveries.
The following recipes are a subset of HPC workloads enabled for Intel® Data Center GPU Max Series.
Additional Video and Coding Tutorials
- Quickly Migrate Existing CUDA Code to SYCL
- Migrating the MonteCarloMultiGPU Sample from CUDA to SYCL
- Port Thermal Solver Code
- Offload Fortran Workloads
- Offload Fortran Workloads to Intel® GPUs Using OpenMP*
- Accelerating Lower-Upper (LU) Factorization on Intel GPUs Using Fortran, Intel® oneAPI Math Kernel Library, and OpenMP
Success Stories
Intel® oneAPI Tools Help Prepare Code for Aurora
The Aurora supercomputer at Argonne National Laboratory, built on Intel® architecture and HPE Cray supercomputer technology, will be one of the first exascale systems in the US.
Convergence of HPC, AI & Big Data Analytics in the Exascale Era
"We're seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators – applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL and Python AI frameworks such as PyTorch accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."
— Timothy Williams, deputy director, Argonne Computational Science Division
Zuse Institute Berlin (ZIB) Ported easyWAVE Tsunami Simulation Application
Learn how porting from CUDA to oneAPI delivered performance on CPUs, GPUs, and FPGAs.
Chasing Exascale: TACC’s Frontera Uses oneAPI to Accelerate Scientific Insights
Dr. Dan Stanzione of Texas Advanced Computing Center (TACC) discusses advancing HPC to exascale with oneAPI and Intel multiarchitecture to scale workloads on the Frontera supercomputer.
Note: All information provided is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
Intel® Developer Cloud
Intel® Data Center GPU Max Series is Available Now
Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel oneAPI and AI Tools, and test your workloads across Intel CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.