Intel® MPI Library
Deliver flexible, efficient, and scalable cluster messaging.
One Library with Multiple Fabric Support
Intel® MPI Library is a multifabric message-passing library based on the open source MPICH implementation of the MPI specification. Use the library to create, maintain, and test advanced, complex applications that perform better on HPC clusters based on Intel® and compatible processors.
- Develop applications that run on multiple cluster interconnects, chosen at run time.
- Quickly deliver maximum end-user performance without having to change the software or operating environment.
- Achieve the best latency, bandwidth, and scalability through automatic tuning.
- Reduce the time to market by linking to one library and deploying on the latest optimized fabrics.
Download as Part of the Toolkit
Intel MPI Library is included in the Intel® oneAPI HPC Toolkit. Get the toolkit to analyze, optimize, and deliver applications that scale.
Download the Stand-Alone Version
A stand-alone download of Intel MPI Library is available. You can download binaries from Intel or choose your preferred repository.
Develop in the Cloud
Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel® oneAPI and AI tools, and test your workloads across Intel® CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.
OpenFabrics Interface* (OFI) Support
This optimized framework exposes and exports communication services to HPC applications. Key components include APIs, provider libraries, kernel services, daemons, and test applications.
Intel MPI Library uses OFI to handle all communications.
- Enables a more streamlined path that starts at the application code and ends with data communications
- Allows tuning for the underlying fabric to happen at runtime through simple environment settings, including network-level features like multirail for increased bandwidth
- Helps you deliver optimal performance on extreme scale solutions based on Mellanox InfiniBand* and Cornelis Networks*
As a result, you gain increased communication throughput, reduced latency, simplified program design, and a common communication infrastructure.
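The runtime tuning described above happens through environment variables, with no rebuild of the application. A minimal sketch (the application name `my_app` and the rank count are placeholders; the variables are standard Intel MPI and libfabric controls):

```shell
# Select the shared-memory + OFI fabric stack (the usual default)
export I_MPI_FABRICS=shm:ofi

# Pin the libfabric provider explicitly, e.g. "mlx" for Mellanox InfiniBand,
# "psm3" for Cornelis/Ethernet NICs, or "tcp" as a portable fallback
export FI_PROVIDER=mlx

# I_MPI_DEBUG=5 prints the selected provider and rank pinning at startup,
# an easy way to confirm which fabric was actually chosen
export I_MPI_DEBUG=5

mpirun -n 64 ./my_app
```

Leaving `FI_PROVIDER` unset lets the library pick the fastest provider it detects, which is the recommended starting point.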
This library implements the high-performance MPI 3.1 standard on multiple fabrics. This lets you quickly deliver maximum application performance (even if you change or upgrade to new interconnects) without requiring major modifications to the software or operating systems.
- Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multicore and manycore Intel® architectures.
- Improved startup scalability comes from the mpiexec.hydra process manager, which is:
- a process management system for starting parallel jobs
- designed to work natively with multiple launchers and resource managers, such as ssh, rsh, PBS, Slurm, and SGE
- Built-in cloud support for Amazon Web Services*, Microsoft Azure*, and Google* Cloud Platform
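Typical hydra launches look like the following sketch (host names and the binary `my_app` are placeholders):

```shell
# Launch 8 ranks, 4 per node, over ssh (the default bootstrap)
mpiexec.hydra -n 8 -ppn 4 -hosts node1,node2 ./my_app

# Inside a Slurm allocation, let hydra take the node list from the scheduler
mpiexec.hydra -bootstrap slurm -n 8 ./my_app
```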
Performance and Tuning Utilities
Two additional capabilities help you achieve top performance from your applications.
The library provides an accelerated, universal, multifabric layer for fast interconnects via OFI, including for these configurations:
- Transmission Control Protocol (TCP) sockets
- Shared memory
- Interconnects based on Remote Direct Memory Access (RDMA), including Ethernet and InfiniBand
It accomplishes this by establishing connections dynamically, only when they are needed, which reduces the memory footprint. It also automatically chooses the fastest available transport.
- Develop MPI code independent of the fabric, knowing it will run efficiently on whatever network you choose at run time.
- Use a two-phase communication buffer-enlargement capability to allocate only the memory space required.
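Each of the configurations listed above maps to a libfabric provider that you can inspect and, for testing, force by hand. A sketch, assuming the libfabric `fi_info` utility that ships with the library is on your path (`my_app` is a placeholder):

```shell
# List the libfabric providers visible on this node; the library picks the
# fastest match automatically, but fi_info shows what it can choose from
fi_info -l

# Force the portable TCP provider, e.g. for debugging on a laptop or VM
FI_PROVIDER=tcp mpirun -n 4 ./my_app

# Restrict to shared memory only, for single-node runs
I_MPI_FABRICS=shm mpirun -n 4 ./my_app
```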
Application Binary Interface Compatibility
An application binary interface (ABI) is the low-level nexus between two program modules. It determines how functions are called, as well as the size, layout, and alignment of data types. With ABI compatibility, applications conform to the same set of runtime naming conventions.
Intel MPI Library offers ABI compatibility with existing MPI-1.x and MPI-2.x applications. So even if you are not ready to move to the MPI 3.1 standard, you can take advantage of the library’s performance improvements by using its runtimes, without recompiling.
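In practice, ABI compatibility means a binary built against another MPICH-ABI-compatible MPI can run unmodified on Intel MPI runtimes. A sketch, assuming a default oneAPI install location (`my_app.c` is a placeholder source file):

```shell
# Build against stock MPICH (or any MPICH-ABI-compatible MPI)
mpicc -o my_app my_app.c

# Run the same unmodified binary with the Intel MPI runtime: sourcing the
# oneAPI environment puts Intel's libmpi.so first on the loader path
source /opt/intel/oneapi/setvars.sh
mpirun -n 4 ./my_app
```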
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.