PyTorch Prerequisites for Intel GPUs
These prerequisites let you compile and build PyTorch* 2.5 with optimizations for Intel Data Center or Client GPUs.
Developers compiling and building PyTorch 2.4 (for Intel Data Center GPUs only) should instead follow the prerequisite instructions for PyTorch 2.4.
Important: Read the Release Notes for the latest information and known issues about system requirements and support packages.
The following sections provide prerequisite instructions for using:
- Intel Data Center GPUs such as Intel® Data Center GPU Max Series platforms, when building PyTorch 2.4 or 2.5, or
- Intel Client GPUs or CPUs with integrated GPUs such as Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, when building PyTorch 2.5.
Build for Intel Data Center GPUs
These Intel Data Center GPUs are supported:
- Intel® Data Center GPU Max Series platforms (formerly codename Ponte Vecchio or PVC)
These Linux releases are supported:
- Red Hat* Enterprise Linux* 9.2
- SUSE Linux Enterprise Server* 15 SP5
- Ubuntu* Server 22.04 (>= 5.15 LTS kernel)
The following instructions show how to install:
- Intel Data Center GPU Drivers along with compute and media runtimes and development packages
- Intel GPU dependencies for PyTorch development package, which collects a subset of oneAPI components needed for building and running PyTorch.
- Profiling Tools Interfaces for Intel GPU (PTI for GPU) package, which provides Kineto (part of the PyTorch profiler) with the interfaces it needs to collect timing data from Intel GPUs for performance metrics during training and inference.
Step 1: Install Intel Data Center GPU Drivers
The Data Center GPU Installation Instructions describe software installation for Intel® Data Center GPU Max Series systems, along with compute and media runtimes and development packages.
These general installation instructions install the Long Term Support (LTS) version of the Intel GPU drivers. However, you'll need to use the Intel GPU driver's rolling (also referred to as "rolling stable") release stream since this is where new hardware enablement first appears for early adopters who want to evaluate new features.
Important: Follow the instructions to configure the GPU driver's installation repository to the rolling release stream (and not the LTS stream) as described in the GPU driver installation instructions.
- Install the Intel GPU Drivers
Use the instructions in the Linux OS-specific tabs within the Data Center GPU installation instructions to install the Intel GPU drivers, based on the Linux distribution you're using. Be sure to follow all the instructions, including selecting the right release stream and adding your user to the render node group (a minimal example follows this list).
- Optional GPU Hardware Verification
Optionally, follow these instructions to verify that the expected Intel GPU hardware is working.
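As a quick reference, here is a minimal sketch of both steps (group names and tools can vary by distribution and driver version; xpu-smi and clinfo are separate utilities that may need to be installed from your configured repositories):
sudo gpasswd -a ${USER} render   # add the current user to the render node group
newgrp render                    # or log out and back in for the group change to take effect
xpu-smi discovery                # list detected Intel Data Center GPUs (if xpu-smi is installed)
clinfo | grep "Device Name"      # alternative check via OpenCL device enumeration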
Step 2: Install Intel Support Packages
After the Intel GPU drivers are installed, choose one of these ways to install the two Intel support packages: either a Linux package manager (APT, YUM, or Zypper) or the offline installation scripts. Note that installing the development support package assumes you don't already have oneAPI components installed; if you do, uninstall them first.
For RPM-based distributions such as Red Hat Enterprise Linux Server, YUM is the usual choice. You’ll need to configure YUM to install software packages that aren’t available in the default repositories. These instructions show how to add access to the appropriate Intel repository, along with the public key used to authenticate the downloaded packages.
- Create an Intel YUM repository information file and move it to the YUM configuration directory:
tee > /tmp/intel-for-pytorch-gpu-dev.repo << EOF
[intel-for-pytorch-gpu-dev]
name=Intel for Pytorch GPU dev repository
baseurl=https://yum.repos.intel.com/intel-for-pytorch-gpu-dev
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
EOF
sudo mv /tmp/intel-for-pytorch-gpu-dev.repo /etc/yum.repos.d
- Use YUM to install the two Intel support packages:
sudo yum install intel-for-pytorch-gpu-dev-0.5 intel-pti-dev
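- Optionally, confirm that both packages were installed (a minimal check; exact package names and versions may vary):
rpm -qa | grep -E 'intel-for-pytorch-gpu-dev|intel-pti-dev'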
For SUSE Linux Enterprise Server distributions, Zypper is the usual choice. You’ll need to configure Zypper to install software packages that aren’t available in the default repositories. These instructions show how to add access to the appropriate Intel repository, along with the public key used to authenticate the downloaded packages.
- The Zypper package manager uses the same rpm packages used by YUM, so add the Intel YUM repository:
sudo zypper addrepo https://yum.repos.intel.com/intel-for-pytorch-gpu-dev intel-for-pytorch-gpu-dev
- If Zypper was unable to automatically import the Intel repository's public key, use RPM to manually import the key:
sudo rpm --import https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
- Use Zypper to install the two Intel support packages:
sudo zypper install intel-for-pytorch-gpu-dev-0.5 intel-pti-dev
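- Optionally, confirm the installation (a minimal check; package names may differ slightly):
zypper search --installed-only intel-for-pytorch-gpu-dev intel-pti-dev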
For Debian-based Linux distributions such as Ubuntu Server, APT is the usual choice. You’ll need to configure APT to install software packages that aren’t available in the default repositories. These instructions show how to add access to the appropriate Intel repository, along with the public key used to authenticate the downloaded packages.
- Make sure the necessary tools to add repository access are available:
sudo apt update
sudo apt install -y gpg-agent wget
- Download the Intel APT repository’s public key and put it into the /usr/share/keyrings directory:
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
  | gpg --dearmor > /tmp/intel-for-pytorch-gpu-dev-keyring.gpg
sudo mv /tmp/intel-for-pytorch-gpu-dev-keyring.gpg /usr/share/keyrings
- Configure the APT client to add the Intel repository and its key:
echo "deb [signed-by=/usr/share/keyrings/intel-for-pytorch-gpu-dev-keyring.gpg] https://apt.repos.intel.com/intel-for-pytorch-gpu-dev all main" > /tmp/intel-for-pytorch-gpu-dev.list sudo mv /tmp/intel-for-pytorch-gpu-dev.list /etc/apt/sources.list.d
- Update the APT client package list and repository index:
sudo apt update
- Use APT to install the two Intel support packages:
sudo apt install intel-for-pytorch-gpu-dev-0.5 intel-pti-dev
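- Optionally, confirm that both packages are installed (a minimal check; names may differ slightly):
dpkg -l | grep -E 'intel-for-pytorch-gpu-dev|intel-pti'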
Instead of using a package manager, you can install the two Intel development packages using offline installation scripts. Each offline installer is a single self-contained file that bundles everything needed, along with a script that extracts and installs the development package.
IMPORTANT: Use sudo to install the files in system directories so they're available globally. Without sudo, the files are installed in the current user's home directory.
- Make sure the necessary tools are available:
sudo apt update
sudo apt install -y wget
- Download and install the Intel GPU dependencies for PyTorch development installation script:
cd /tmp
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/884eaa22-d56f-45dc-9a65-901f1c625f9e/l_intel-for-pytorch-gpu-dev_p_0.5.3.36_offline.sh
sh ./l_intel-for-pytorch-gpu-dev_p_0.5.3.36_offline.sh
- Download and install the Profiling Tools Interfaces for GPU installation script:
cd /tmp
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/884eaa22-d56f-45dc-9a65-901f1c625f9e/l_intel-pti-dev_p_0.9.0.38_offline.sh
sh ./l_intel-pti-dev_p_0.9.0.38_offline.sh
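If the scripts completed successfully, the components should appear under /opt/intel/oneapi (system-wide install with sudo) or under ~/intel/oneapi (per-user install). A quick sanity check, assuming the default install locations used in the next step:
ls /opt/intel/oneapi/pytorch-gpu-dev-0.5 /opt/intel/oneapi/pti   # system-wide install
ls ~/intel/oneapi/pytorch-gpu-dev-0.5 ~/intel/oneapi/pti         # per-user install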
Step 3: Set Up oneAPI Environment Variables
Before you use any oneAPI component installed by the PyTorch development bundle, use this command to configure environment variables, important folders, and command settings.
source /opt/intel/oneapi/pytorch-gpu-dev-0.5/oneapi-vars.sh
If that command fails, you may have installed the oneAPI components in your home directory, so try using this command instead:
source ~/intel/oneapi/pytorch-gpu-dev-0.5/oneapi-vars.sh
Similarly, activate the PTI component using this command:
source /opt/intel/oneapi/pti/latest/env/vars.sh
Consider adding these commands to your ~/.bashrc file so they run every time you log in or create a new shell session.
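To confirm the environment is active, you can check a variable set by the environment scripts and, assuming the development bundle includes the standard SYCL tooling, list the devices it can see (this is a sketch; the exact variables and utilities available depend on the bundle contents):
echo $ONEAPI_ROOT   # should point to the oneAPI installation root after sourcing the scripts
sycl-ls             # should list your Intel GPU as a Level-Zero and/or OpenCL device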
Build for Intel Client GPUs
These Intel Client GPUs and CPUs with integrated GPUs are supported for PyTorch 2.5 (and are not supported for PyTorch 2.4):
- Intel® Core™ Ultra processor family with Intel® Graphics (Codename Meteor Lake)
- Intel® Arc™ Graphics family (Codename DG2)
These OS releases are supported:
- Windows 11 Family, Windows 10 (22H2)
- Windows Subsystem for Linux (WSL2) running Ubuntu 24.04 (WSL2 support is experimental)
- Note: Linux (e.g., Ubuntu) is not supported.
The following instructions show how to install prerequisite components depending on your OS:
- Intel Client GPU Drivers along with compute and media runtimes and development packages
- Intel GPU dependencies for PyTorch development package, which collects a subset of oneAPI components needed for building and running PyTorch.
- Profiling Tools Interfaces for Intel GPU (PTI for GPU) package, which provides Kineto (part of the PyTorch profiler) with the interfaces it needs for collecting timing data from Intel GPUs for performance metrics during training and inference.
Install for Windows 10/11
Step 1: Install Intel Client GPU Drivers
Follow the instructions in the Intel® Arc™ & Iris® Xe Graphics - Windows documentation to download and run the installer to update your WHQL Certified graphics driver to version 31.0.101.5522 or higher. That document also has links to version-specific release notes about this driver.
Step 2: Install Intel Support Package
Click the following two links to download the PyTorch dev support package and PTI support package installers. (The PTI support package enables PyTorch profiling on Intel GPUs.) Then double-click each downloaded exe file to run it, and follow the instructions to install:
Step 3: Set Up oneAPI Environment
Before you use any oneAPI component installed by the PyTorch development bundle, use these two commands to configure environment variables, important folders, and command settings for the dev bundle and PTI support packages:
"C:\Program Files (x86)\Intel\oneAPI\pytorch-gpu-dev-0.5\oneapi-vars.bat"
"C:\Program Files (x86)\Intel\oneAPI\pti\latest\env\vars.bat"
These commands must be run every time you log in or create a new shell session.
Install for WSL2
Support for Windows Subsystem for Linux (WSL2) is experimental.
Step 1: Install Intel Client GPU Drivers
When using WSL2 (running Ubuntu 24.04), the GPU drivers are installed in the Windows OS, and runtime components such as Level-Zero are installed within the WSL2 Linux environment.
- Follow the instructions in the Intel® Arc™ & Iris® Xe Graphics - Windows documentation to download and run the Windows installer to update your WHQL Certified graphics driver to version 31.0.101.5522 or higher. That document also has links to version-specific release notes about this driver.
- Install the necessary runtime packages from Canonical (in WSL2):
sudo apt-get update
sudo apt-get install -y intel-ocloc intel-opencl-icd libze1 libze-intel-gpu1
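In WSL2, the GPU is exposed to the Linux environment through the /dev/dxg device, and the packages above provide the OpenCL and Level-Zero runtimes. A quick sanity check (a sketch; clinfo is a separate utility and output details vary by driver version):
ls -l /dev/dxg                                                  # the WSL GPU paravirtualization device should exist
sudo apt-get install -y clinfo && clinfo | grep "Device Name"   # the Intel GPU should appear as an OpenCL device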
Step 2: Install Intel Support Packages
- Download the Intel APT repository’s public key and put it into the /usr/share/keyrings directory:
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
  | gpg --dearmor > /tmp/intel-for-pytorch-gpu-dev-keyring.gpg
sudo mv /tmp/intel-for-pytorch-gpu-dev-keyring.gpg /usr/share/keyrings
- Configure the APT client to add the Intel repository and its key:
echo "deb [signed-by=/usr/share/keyrings/intel-for-pytorch-gpu-dev-keyring.gpg] https://apt.repos.intel.com/intel-for-pytorch-gpu-dev all main" > /tmp/intel-for-pytorch-gpu-dev.list sudo mv /tmp/intel-for-pytorch-gpu-dev.list /etc/apt/sources.list.d
- Update the APT client package list and repository index:
sudo apt update
- Use APT to install the two Intel support packages:
sudo apt install intel-for-pytorch-gpu-dev-0.5 intel-pti-dev
Step 3: Set Up oneAPI Environment Variables
Before you use any oneAPI component installed by the PyTorch development bundle, use this command to configure environment variables, important folders, and command settings:
source /opt/intel/oneapi/pytorch-gpu-dev-0.5/oneapi-vars.sh
Activate the PTI component using this command:
source /opt/intel/oneapi/pti/latest/env/vars.sh
Consider adding these commands to your ~/.bashrc file so they run every time you log in or create a new shell session.
Where to go next?
After installing the Intel GPU drivers and the two support packages, as shown above, you're ready to return to the upstream PyTorch instructions and continue with the PyTorch Building from Source: Install Dependencies section.
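Once PyTorch is built and installed, a quick way to confirm that the XPU backend can see your Intel GPU is to query it from Python (assuming a PyTorch 2.4/2.5 build with XPU support and an activated oneAPI environment):
python -c "import torch; print(torch.__version__, torch.xpu.is_available(), torch.xpu.device_count())"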
Support
For support questions, look through or post a question to this oneAPI developer support forum.