Developer Guide

Developer Guide for Intel® oneAPI Math Kernel Library Linux*

ID 766690
Date 12/16/2022
Public


Linking with oneMKL Cluster Software

The Intel® oneAPI Math Kernel Library ScaLAPACK, Cluster FFT, and Cluster Sparse Solver support the MPI implementations identified in the Intel® oneAPI Math Kernel Library Release Notes.

To link a program that calls ScaLAPACK, Cluster FFT, or Cluster Sparse Solver, you first need to know how to link a message-passing interface (MPI) application.

Use MPI scripts to do this. For example, mpicc, mpif77, and mpif90 are scripts for C, Fortran 77, and Fortran 90, respectively, that use the correct MPI header files. The location of these scripts and of the MPI library depends on your MPI implementation. For example, for the default installation of MPICH3, /opt/mpich/bin/mpicc and /opt/mpich/bin/mpif90 are the compiler scripts and /opt/mpich/lib/libmpi.a is the MPI library.

Check the documentation that comes with your MPI implementation for implementation-specific details of linking.
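As a sketch of what a typical MPI compile-and-link step looks like with the MPICH scripts mentioned above: the source file name hello.c is a hypothetical example, the prefix /opt/mpich matches the default installation cited in the text, and the commands are printed rather than executed so the sketch works even where MPI is not installed.

```shell
# Sketch only: print the typical MPICH compile/link commands instead of
# running them, since an MPI installation may not be present.
# /opt/mpich is the default MPICH3 prefix named in the text;
# hello.c is a hypothetical source file.
MPICH_PREFIX=/opt/mpich

echo "${MPICH_PREFIX}/bin/mpicc -c hello.c -o hello.o"   # compile with MPI headers
echo "${MPICH_PREFIX}/bin/mpicc hello.o -o hello"        # link against the MPI library
```

With a real MPI installation you would run these commands directly (without the echo); your implementation's documentation gives the exact script names and paths.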

To link with ScaLAPACK, Cluster FFT, and/or Cluster Sparse Solver, use the following general form:

<MPI linker script> <files to link>                          \
-L <MKL path> [-Wl,--start-group] [<MKL cluster library>]        \
<BLACS> <MKL core libraries> [-Wl,--end-group]

where the placeholders stand for paths and libraries as explained in the following table:

<MKL cluster library>

One of the libraries for ScaLAPACK or Cluster FFT, matching your architecture and programming interface (LP64 or ILP64). Available libraries are listed in Appendix C: Directory Structure in Detail. For example, for the LP64 interface, it is -lmkl_scalapack_lp64 or -lmkl_cdft_core. Cluster Sparse Solver does not require an additional computation library.

<BLACS>

The BLACS library corresponding to your architecture, programming interface (LP64 or ILP64), and the MPI implementation used. Available BLACS libraries are listed in Appendix C: Directory Structure in Detail. Specifically, choose one of -lmkl_blacs_intelmpi_lp64 or -lmkl_blacs_intelmpi_ilp64.

<MKL core libraries>

Processor optimized kernels, threading library, and system library for threading support, linked as described in Listing Libraries on a Link Line.

<MPI linker script>

A linker script that corresponds to the MPI version.
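Putting the table together, the following sketch composes a dynamic link line from the placeholders. The specific choices shown (Intel MPI with its mpiicc script, ScaLAPACK with the LP64 interface, sequential oneMKL) and the MKLPATH location are assumptions for illustration, and the assembled command is echoed rather than executed so the sketch runs without a compiler installed.

```shell
# Sketch: fill in the general link form with one possible set of
# placeholder values. All values are assumptions (Intel MPI, LP64,
# ScaLAPACK, sequential oneMKL); MKLPATH is a hypothetical location.
MPI_LINKER_SCRIPT=mpiicc                              # <MPI linker script>
MKLPATH=/opt/intel/oneapi/mkl/latest/lib/intel64      # hypothetical path
CLUSTER_LIB="-lmkl_scalapack_lp64"                    # <MKL cluster library>
BLACS="-lmkl_blacs_intelmpi_lp64"                     # <BLACS>
CORE_LIBS="-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"  # <MKL core libraries>

# Echo rather than execute so the sketch runs without compilers installed.
echo "${MPI_LINKER_SCRIPT} myprog.o -L${MKLPATH} ${CLUSTER_LIB} ${BLACS} ${CORE_LIBS}"
```

Because this example links dynamically, the -Wl,--start-group and -Wl,--end-group symbols are not needed; they are required only for static linking, as the static example below shows.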

For example, if you are using Intel MPI, want to statically link with ScaLAPACK using the LP64 interface, and have only one MPI process per core (and thus do not use threading), specify the following linker options:

-L$MKLPATH -I$MKLINCLUDE                                         \
-Wl,--start-group $MKLPATH/libmkl_scalapack_lp64.a               \
$MKLPATH/libmkl_blacs_intelmpi_lp64.a $MKLPATH/libmkl_intel_lp64.a \
$MKLPATH/libmkl_sequential.a $MKLPATH/libmkl_core.a              \
-static_mpi -Wl,--end-group -lpthread -lm

NOTE:

Grouping symbols -Wl,--start-group and -Wl,--end-group are required for static linking.
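For comparison, a dynamic-linking variant of the same configuration might look like the following. This is a sketch rather than text from the guide: the mpiicc wrapper and the MKLPATH location are assumptions, the command is echoed instead of executed, and the grouping symbols are omitted because they are required only for static linking.

```shell
# Sketch: dynamic-linking counterpart of the static ScaLAPACK example
# above. Assumes Intel MPI, the LP64 interface, and no threading
# (sequential oneMKL); echoed rather than executed.
MKLPATH=/opt/intel/oneapi/mkl/latest/lib/intel64   # hypothetical path

echo "mpiicc myprog.o -L${MKLPATH} -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"
```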

TIP:

Use the Link-line Advisor to quickly choose the appropriate set of <MKL cluster library>, <BLACS>, and <MKL core libraries>.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201