Intel® MPI Library Release Notes for Linux* OS

ID 914349
Updated 4/28/2026

Overview

Intel® MPI Library for Linux* OS is a high-performance, interconnect-independent, multi-fabric library implementation of the industry-standard Message Passing Interface, v4.1 (MPI-4.1).

To receive technical support and updates, you need to register your product copy. See Technical Support below.

Key Features

This release of the Intel® MPI Library supports the following major features:

  • MPI-1, MPI-2.2, MPI-3.1, MPI-4.0, MPI-4.1, and MPI-5.0 (technical preview)
  • Interconnect independence
  • C, C++, Fortran 77, Fortran 90, and Fortran 2008 language bindings
  • Amazon* AWS/EFA, Google* GCP support
  • Intel® GPU pinning support
  • Intel® and Nvidia* GPU buffers support
  • PMIx Support

Product Contents

  • The Intel® MPI Library Runtime Environment (RTE) contains the tools you need to run programs, including the scalable process management system (Hydra), supporting utilities, and shared (.so) libraries.
  • The Intel® MPI Library Development Kit (SDK) includes all of the Runtime Environment components and compilation tools: compiler wrapper scripts, include files and modules, static (.a) libraries, debug libraries, and test codes.
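
For reference, a minimal C program of the kind the SDK is intended to build might look like the sketch below. The file name and the exact wrapper invocation (mpiicx for the Intel compilers, mpicc for the GNU compilers) are illustrative assumptions, not requirements of this release.

    /* hello_mpi.c - minimal sketch of an MPI program built with the SDK.
     * Illustrative build and run commands (assumed, not prescriptive):
     *   mpiicx hello_mpi.c -o hello_mpi      (Intel compiler wrapper)
     *   mpirun -n 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }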

You can redistribute the library under conditions specified in the License.

Intel® MPI Library 2021 Update 18.0 - oneAPI 2026.0

What's New

  • Expanded MPI 5.0 standard technical preview with ABI compatibility for Fortran
  • Introduced technical preview of Intel® Arc™ Pro B-Series GPU support (single node)
  • Added full support for Clearwater Forest processors and optimized single node performance
  • Added technical preview capability for non-mpiexec based job launching to support AI frameworks and applications
  • Enhanced multithreaded communication scalability with expanded thread split mode support
  • Reduced job startup latency for single node workloads by defaulting to SHM fabric
  • Added new tunable Alltoall algorithms accessible via I_MPI_ADJUST_ALLTOALL (values 14–18); see the usage sketch after this list
  • Addressed a CPU affinity issue by updating the default I_MPI_HYDRA_BRANCH_COUNT value for Slurm*
  • Improved compatibility with Intel’s latest compiler toolchains while maintaining backward compatibility 
  • Integrated libfabric v2.4.0 for more reliable communication
  • Updated IMB to version 2021.11, with multiple stability and compatibility fixes
  • Clarified versioning with explicit patch numbers (e.g., 2021.18.0)
  • Bug fixes
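
As a usage illustration for the new Alltoall tuning values, the sketch below shows an ordinary MPI_Alltoall call. The program is hypothetical; the algorithm is selected only at launch time (for example, via mpirun -genv I_MPI_ADJUST_ALLTOALL 14), not in the source code.

    /* alltoall_example.c - hypothetical sketch; each rank exchanges one
     * integer with every other rank. Select one of the new algorithms at
     * launch, e.g.:
     *   mpirun -n 4 -genv I_MPI_ADJUST_ALLTOALL 14 ./alltoall_example
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = rank * 100 + i;   /* distinct payload per destination */

        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        printf("Rank %d received element 0 = %d\n", rank, recvbuf[0]);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }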

Known Issues and Limitations

  • On systems with Intel® integrated GPUs, the integrated GPUs must be hidden via ZE_AFFINITY_MASK if I_MPI_OFFLOAD=1 is set.
  • With the mlx provider, if hangs are observed in the inject function (mlx_tagged_inject), update UCX to v1.20.0, which contains the fix for this bug.
  • When you run an MPI program on a single node without enabling process spawning (I_MPI_SPAWN is not set to 1), the runtime defaults to shared-memory communication (I_MPI_FABRICS=shm). Shared-memory mode is incompatible with the MPI spawn APIs, so an application that calls MPI_Comm_spawn (or a related spawn function) in this configuration terminates with an error. To avoid this, explicitly enable spawning by setting I_MPI_SPAWN=1 before running your application (see the spawn sketch after this list).
  • When using thread split functionality with the new implicit mode, ensure that the application follows the semantics of the MPI_THREAD_SPLIT communication model, and set I_MPI_THREAD_MAX=<N> to the number of threads per rank, along with I_MPI_THREAD_SPLIT=1 and I_MPI_THREAD_SPLIT_MODE=implicit (see the thread split sketch after this list).
  • If vars.sh is sourced from another script with no explicit parameters, it inherits the parent script's options and may process any that match its own.
  • stdout and stderr redirection may cause problems with LSF's blaunch.
    • The -verbose option may cause a crash with LSF's blaunch. Do not use the -verbose option, or set -bootstrap=ssh.
  • An application may hang during finalization with the LSF job manager if the number of nodes is greater than 16. As a workaround, set -bootstrap=ssh or -branch-count=-1.
  • Process pinning may be incorrect with I_MPI_PIN_ORDER=spread; some of the domains may share common sockets.
  • Nonblocking MPI-IO operations on NFS file systems may work incorrectly for files larger than 2 GB. Some MPI-IO features may not work on NFS v3 mounted without the "lock" flag.
  • HBW memory policies applied to window segments for RMA operations are not yet supported.
  • To use the cxi provider, set FI_PROVIDER=cxi, and set FI_PROVIDER_PATH and I_MPI_OFI_LIBRARY to point to a cxi-enabled libfabric. On machines with CXI older than 2.0, also set FI_UNIVERSE_SIZE=1024 to bypass a CXI bug that otherwise causes a crash.
    • If you experience hangs when running with the cxi provider, or see messages about Cassini Event Queue overflow, try increasing FI_CXI_DEFAULT_CQ_SIZE to a value between 16384 and 131072. This is a known issue with the cxi provider.
    • When using 4th Generation Intel® Xeon® Scalable processor nodes in SNC4 mode, the default CPU pinning (and, in turn, the NIC assignment) is not correct for multiples of 6 ranks, and the default GPU pinning is not correct for multiples of 8 ranks. In such cases, explicitly specify CPU, GPU, and NIC pinning using the corresponding control variables (cvars).
  • Starting with Slurm* version 23.11, users may encounter an error if the I_MPI_HYDRA_BOOTSTRAP=ssh environment variable is set after node allocation. To address this issue, consider the following workarounds:
    • Set I_MPI_HYDRA_BOOTSTRAP=ssh before node allocation. For instance, you can set it in your .bashrc file.
    • Set I_MPI_HYDRA_BOOTSTRAP=ssh after node allocation, then unset I_MPI_HYDRA_BOOTSTRAP_EXTRA_ARGS.
    • You may also refer to the Slurm documentation: https://slurm.schedmd.com/mpi_guide.html
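
Spawn sketch (for the I_MPI_SPAWN item above): the minimal parent program below is hypothetical, as is the worker executable name; on a single node it must be launched with spawning explicitly enabled, for example I_MPI_SPAWN=1 mpirun -n 1 ./spawn_parent.

    /* spawn_parent.c - hypothetical sketch of an application that uses
     * MPI_Comm_spawn. Without I_MPI_SPAWN=1, a single-node run defaults to
     * I_MPI_FABRICS=shm, which is incompatible with the spawn APIs.
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm intercomm;
        int errcodes[2];

        /* "./spawn_worker" is a placeholder for any MPI executable. */
        MPI_Comm_spawn("./spawn_worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, errcodes);

        printf("Parent: workers spawned\n");

        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }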
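
Thread split sketch (for the implicit-mode item above): the program below is only a hedged illustration of the general MPI_THREAD_SPLIT pattern, in which each thread communicates on its own communicator. The launch settings shown (4 threads per rank) are assumptions; the exact requirements of the implicit mode are described in the Developer Guide.

    /* thread_split_sketch.c - hedged illustration; assumed launch:
     *   I_MPI_THREAD_SPLIT=1 I_MPI_THREAD_SPLIT_MODE=implicit \
     *   I_MPI_THREAD_MAX=4 OMP_NUM_THREADS=4 mpirun -n 2 ./thread_split_sketch
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define NTHREADS 4

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One communicator per thread, so that thread i on this rank only
           communicates with thread i on the other ranks. */
        MPI_Comm comm[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            MPI_Comm_dup(MPI_COMM_WORLD, &comm[i]);

        #pragma omp parallel num_threads(NTHREADS)
        {
            int tid = omp_get_thread_num();
            int val = rank + tid, sum = 0;
            /* Each thread runs its own collective on its own communicator. */
            MPI_Allreduce(&val, &sum, 1, MPI_INT, MPI_SUM, comm[tid]);
            printf("rank %d thread %d sum %d\n", rank, tid, sum);
        }

        for (int i = 0; i < NTHREADS; i++)
            MPI_Comm_free(&comm[i]);
        MPI_Finalize();
        return 0;
    }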

Removals

Starting with Intel® MPI Library 2019, the deprecated symbolic links and obsolete directory structure have been removed. If your application still depends on the old directory structure and file names, you can restore them using the provided script.

 

Intel® MPI Library 2021 Update 18

  • The OFI PSM2 provider, used for Omni-Path 100-series HFIs, will be removed from future Intel MPI Library packages. It is included in 2021.18.0. Future versions will instead include the OFI OPX provider, which provides similar functionality and is compatible with OPA100, CN5000, and future Cornelis network products. Customers who still require PSM2 functionality on OPA100 should follow the instructions outlined in the Cornelis online CN5000 Performance Tuning Guide. Please contact Cornelis support directly if you require additional support.
  • The OFI PSM3 provider will no longer be included in future Intel MPI Library packages. It is included in 2021.18.0. Should customers need this provider in the future, it will remain accessible through the open-source libfabric project.

System Requirements

Introduction

This document provides details about hardware, operating system, and software prerequisites for the Intel® MPI Library.

Hardware Requirements

  • Systems based on the Intel® 64 architecture, in particular:
    • Intel® Core™ processor family or higher
    • Intel® Xeon® Scalable processor family
    • Intel® Xeon® 6 processor family
  • 1 GB of RAM per rank (2 GB recommended)
  • 1 GB of free hard disk space

Supported Accelerators 

  • Intel® Data Center GPU Max Series
  • Intel® Arc™ Pro B-Series GPU
  • NVIDIA* GPUs (minimum tested version: NVIDIA Tesla P100)

Drivers and Libraries

  • oneAPI Level Zero API 1.0
  • CUDA* version 11.3 or higher

Software Requirements

Linux* OS

  • Red Hat* Enterprise Linux* 8, 9, 10
  • SUSE* Linux Enterprise Server* 15 SP4, 15 SP5, 15 SP6, 15 SP7
  • Fedora* 41, 42
  • Rocky Linux* 9
  • Amazon* Linux 2025, 2023, 2022

Compilers

  • GNU*: C, C++, Fortran 77 (3.3 or newer), Fortran 95 (4.8.0 or newer)
  • Intel® C++/Fortran Compiler 17.0 or newer

Debuggers

  • Rogue Wave* Software TotalView* 6.8 or newer
  • Allinea* DDT* 1.9.2 or newer
  • GNU* Debuggers 7.4 or newer

Batch Systems

  • Platform* LSF* 6.1 or newer
  • Altair* PBS Pro* 7.1 or newer
  • Torque* 1.2.0 or newer
  • Parallelnavi* NQS* V2.0L10 or newer
  • NetBatch* v6.x or newer
  • SLURM* 1.2.21 or newer
  • Univa* Grid Engine* 6.1 or newer
  • IBM* LoadLeveler* 4.1.1.5 or newer
  • Platform* Lava* 1.0

Fabric Software

Supported Languages

  • For GNU* compilers: C, C++, Fortran 
  • For Intel® compilers: C, C++, Fortran 

Clustered File Systems

  • IBM Spectrum Scale* (GPFS*)
  • LustreFS*
  • PanFS*
  • NFS* v3 or newer

Linux General Purpose Intel® GPUs (GPGPU) Driver:

For all Intel® GPUs, see https://dgpu-docs.intel.com/ and follow the directions for your device.

Legal Information

Intel® technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

Technical Support

Every purchase of an Intel® Software Development Product includes a year of support services, which provide Priority Support at our Online Service Center website.

To get support, you need to register your product in the Intel® Registration Center. If your product is not registered, you will not receive Priority Support.

Additional Resources

Intel® MPI Library

 
