Get Started Guide

  • 2021.4
  • 09/27/2021
  • Public Content

Get Started with Intel® MPI Library for Intel® oneAPI on Linux* OS

The Intel® MPI Library enables you to create, maintain, and test advanced applications that have performance advantages on high-performance computing (HPC) clusters based on Intel® processors.
The Intel MPI Library is available as a standalone product and as part of the Intel® oneAPI HPC Toolkit. The Intel MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. Use the library to develop applications that can run on multiple cluster interconnects.
The Intel MPI Library has the following features:
  • Scalability up to 340k processes
  • Low overhead enables analysis of large amounts of data
  • MPI tuning utility for accelerating your applications
  • Interconnect independence and flexible runtime fabric selection
The product consists of the following main components:
  • Compilation tools, including compiler drivers such as mpiicc and mpifort
  • Include files and modules
  • Shared (.so) and static (.a) libraries, debug libraries, and interface libraries
  • Process Manager and tools to run programs
  • Test code
  • Documentation provided as a separate package or available from the Intel Developer Zone
Intel MPI Library also includes Intel® MPI Benchmarks, which enable you to measure MPI operations on various cluster architectures and MPI implementations. For details, see the Intel® MPI Benchmarks User Guide. Source code is available in the GitHub repository.

Key Features

The Intel MPI Library has the following major features:
  • MPI-1, MPI-2.2 and MPI-3.1 specification conformance
  • Interconnect independence
  • C, C++, Fortran* 77, Fortran 90, and Fortran 2008 language bindings

Prerequisites

Before you start using Intel MPI Library, complete the following steps:
1. Source the setvars.sh script to set the environment variables for the Intel MPI Library. The script is located in the installation directory (by default, /opt/intel/oneapi).
2. Create a hostfile text file that lists the nodes in the cluster, one host name per line. For example:
clusternode1
clusternode2
3. Make sure a passwordless SSH connection is established among all nodes of the cluster. This ensures proper communication of MPI processes across the nodes.
After completing these steps, you are ready to use the Intel MPI Library.
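The steps above can be sketched as a shell session. This is a minimal sketch that assumes the default oneAPI install path and the example node names from this guide; the environment setup is guarded so the sketch is safe to run anywhere, and the SSH checks are shown as comments because they depend on your cluster.

```shell
# 1. Source the Intel oneAPI environment script if the default install exists
#    (guarded: skipped harmlessly on machines without oneAPI):
[ -f /opt/intel/oneapi/setvars.sh ] && . /opt/intel/oneapi/setvars.sh

# 2. Create a hostfile with one node name per line:
printf 'clusternode1\nclusternode2\n' > hostfile
cat hostfile

# 3. Verify passwordless SSH to each node (run these manually on your cluster):
# ssh clusternode1 hostname
# ssh clusternode2 hostname
```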
For detailed system requirements, see the “System Requirements” section in Release Notes.
For modulefile instructions, see Use Modulefiles with Linux*.

Building and Running MPI Programs

Compiling an MPI Program

1. Make sure you have a compiler in your PATH. To check this, run the
which
command on the desired compiler. For example:
$ which icc
/opt/intel/oneapi/compiler/<version>.<update>/linux/bin/icc
2. Compile a test program using the appropriate compiler driver. For example:
$ mpiicc -o myprog <install-dir>/test/test.c

Running an MPI Program

Use the previously created hostfile and run your program with the mpirun command as follows:
$ mpirun -n <# of processes> -ppn <# of processes per node> -f ./hostfile ./myprog
For example:
$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog
The test program above produces output in the following format:
Hello world: rank 0 of 2 running on clusternode1
Hello world: rank 1 of 2 running on clusternode2
This output indicates that you properly configured your environment and Intel MPI Library successfully ran the test MPI program on the cluster.

Troubleshooting

If you encounter problems when using Intel MPI Library, go through the following general procedures to troubleshoot them:
  • Check system requirements, known issues, and limitations in the Release Notes.
  • Check host accessibility. Run a simple non-MPI application (for example, the hostname utility) on the problem hosts with mpirun. This check helps you reveal an environmental problem (for example, SSH is not configured properly) or a connectivity problem (for example, unreachable hosts).
  • Run the MPI application with debug information enabled. To enable debug information, set the environment variable I_MPI_DEBUG=6. You can also set a different debug level to get more detailed information. This helps you identify the problem component.
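As a minimal sketch, the debug level can be set in the environment before launching. The mpirun line is commented out here because it requires a configured cluster; myprog is the test program built earlier in this guide.

```shell
# Enable verbose Intel MPI diagnostics (level 6) for subsequent runs:
export I_MPI_DEBUG=6
echo "I_MPI_DEBUG=$I_MPI_DEBUG"

# Then launch as usual; the output will include library version, fabric,
# and process-pinning details that help isolate the failing component:
# mpirun -n 2 -ppn 1 -f ./hostfile ./myprog
```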
See more details in the “Troubleshooting” section of the Developer Guide.

More Resources

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.