Running an MPI/OpenMP* Program
To run a hybrid MPI/OpenMP* program, follow these steps:
- Make sure the thread-safe (debug or release, as desired) Intel® MPI Library configuration is enabled (release is the default version). To switch to such a configuration, source vars.sh with the appropriate argument. See Selecting Library Configuration for details. For example:
$ source vars.sh release
- Set the I_MPI_PIN_DOMAIN environment variable to specify the desired process pinning scheme. The recommended value is omp:
$ export I_MPI_PIN_DOMAIN=omp
This sets the process pinning domain size to be equal to OMP_NUM_THREADS. For example, if OMP_NUM_THREADS is set to 4, each MPI process can create up to four threads within its domain (a set of logical processors). If OMP_NUM_THREADS is not set, each node is treated as a separate domain, which allows as many threads per MPI process as there are cores.
NOTE: For pinning OpenMP* threads within the domain, use the Intel® compiler KMP_AFFINITY environment variable. See the Intel compiler documentation for more details.
- Run your hybrid program as a regular MPI program (a minimal example program is sketched after these steps). You can set the OMP_NUM_THREADS and I_MPI_PIN_DOMAIN variables directly in the launch command. For example:
$ mpirun -n 4 -genv OMP_NUM_THREADS=4 -genv I_MPI_PIN_DOMAIN=omp ./myprog
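For illustration, the following is a minimal sketch of a hybrid program such as the myprog binary used above; it is not taken from this document. It requests the MPI_THREAD_FUNNELED support level, which is one reasonable choice when only the main thread of each process makes MPI calls, and has every OpenMP* thread report its rank and thread number:
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* MPI_THREAD_FUNNELED: only the main thread will make MPI calls.
       This support level is an assumption for this sketch. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process creates up to OMP_NUM_THREADS OpenMP threads,
       which stay within that process's pinning domain. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
Assuming the Intel® C compiler, the program can be compiled with OpenMP support through the compiler wrapper, for example:
$ mpiicc -qopenmp myprog.c -o myprog
To check the resulting placement, you can raise the library's debug level; process pinning information is typically reported at I_MPI_DEBUG=4 and higher:
$ mpirun -n 4 -genv OMP_NUM_THREADS=4 -genv I_MPI_PIN_DOMAIN=omp -genv I_MPI_DEBUG=4 ./myprog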
See Also
Intel® MPI Library Developer Reference, section Tuning Reference > Process Pinning > Interoperability with OpenMP*.