Analyze MPI Applications

With Intel® Advisor, you can analyze parallel tasks running on a cluster to examine the performance of your MPI application.
To start MPI jobs, use an MPI launcher such as mpirun, mpiexec, srun, or aprun. You can use Intel Advisor with the Intel® MPI Library or other MPI implementations only through the command line interface, but you can view the results in the standalone GUI as well as in the command line. The examples in this section use mpirun with the advisor command line interface (CLI) to spawn processes across the cluster and collect data about the application.
Use an MPI implementation that passes communication information using environment variables. The implementation must operate with the Intel Advisor process (advisor) placed between the launcher process and the application process. Intel Advisor does not work with an MPI implementation that tries to pass communication information from its immediate parent process.
You can analyze your MPI application in one of the following ways:
  • If you have a directory shared between remote and local systems: Collect data remotely to the shared directory. In this case, you do not need to move a project between the systems.
  • If you do not have a shared directory: Collect data to a directory on a remote system (for example, on a cluster), generate a snapshot (optionally), and copy the snapshot or a project to your local system to view the result. If you generate a snapshot, you do not need to configure the search paths for a project.
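For example, a minimal sketch of generating a snapshot on the remote system after collection (the snapshot name ./advi_snapshot is arbitrary):
    advisor --snapshot --project-dir=./advi_results --pack --cache-sources --cache-binaries -- ./advi_snapshot
Copy the resulting .advixeexpz file to your local system and open it in the Intel Advisor GUI.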

Get Preconfigured Command Lines

You can generate preconfigured command lines for collecting results with the Intel MPI Library launcher or a custom launcher using the Intel Advisor graphical user interface (GUI). In this case, you do not need to manually type each command with all options and paths to a project directory and an application executable.
See Generate Command Lines for details.

Use Intel® MPI Library

With the Intel MPI Library, you can analyze a single MPI rank or several ranks of your MPI application with Intel Advisor. This can help you decrease analysis overhead.
The recommended MPI ranks to analyze are ranks 1 and higher, because rank 0 might include configuration time and might not be representative of general MPI application performance.
MPI Command Syntax
To collect performance data for an MPI application with Intel Advisor using the mpirun launcher of the Intel MPI Library, use the following command syntax:
mpirun -gtool "advisor --collect=<analysis-type> --search-dir src:r=<source-dir> [--no-auto-finalize] --project-dir=<project-dir>:<rank-set>" -n <N> <application-name> [<application-options>]
where:
  • -gtool allows you to run Intel Advisor analyses for the specified MPI ranks only. This option is available for the Intel MPI Library 5.0.2 or higher.
  • <analysis-type> is the Intel Advisor analysis to run: survey, tripcounts, map, dependencies, or projection.
  • <source-dir> is the path to the directory where the application sources are stored. Specify it if you disabled auto-finalization.
  • <project-dir> is the path/name of the project directory where the analysis results are saved. Specify the same project directory when running various Intel Advisor collections for the selected process.
  • <rank-set> is the set of MPI ranks to analyze. Each rank corresponds to an MPI process and is used to identify the result data. Separate ranks with a comma or use a dash "-" to set a range of ranks. Use with the -gtool option only. Do not specify it if you want to analyze all ranks.
  • <N> is the number of MPI processes to launch.
  • --no-auto-finalize disables result finalization on the target system to decrease overhead. The results are finalized when you import them or open them in the Intel Advisor GUI. Do not use this option if you do not have a shared directory and plan to copy results from the cluster with a snapshot. See Temporarily Disable Finalization for details.
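For example, the following sketch (the ./src source directory and ./mpi_sample executable are placeholders) runs the Survey analysis for ranks 1 and 3 of a four-process run and defers finalization:
    mpirun -gtool "advisor --collect=survey --search-dir src:r=./src --no-auto-finalize --project-dir=./advi_results:1,3" -n 4 ./mpi_sample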

Analyze a Single Rank of MPI Application with Intel MPI Library

Prerequisite: Set up environment variables to enable the Intel Advisor CLI.
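For example, on Linux* OS you can source the environment script (the path below assumes a default oneAPI installation; adjust it for your system):
    source /opt/intel/oneapi/setvars.sh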
In the commands below:
  • Data is collected remotely to a shared directory.
  • The analyses are performed for an application running in four processes.
  • The path to the application executable is ./mpi_sample.
    Note: Make sure to replace the application path and name before executing a command. If your application requires additional command line options, add them after the executable name.
  • The path to the Intel Advisor project directory is ./advi_results.
This example shows how to run the Survey, Trip Counts, and Roofline analyses for rank 1 of the MPI application with the -gtool option of the Intel MPI Library.
  1. Collect survey data for rank 1 into the shared ./advi_results project directory on a target system.
    mpirun -gtool "advisor --collect=survey --project-dir=./advi_results:1" -n 4 ./mpi_sample
  2. Run the Trip Counts analysis with FLOP collection for rank 1 on the target system.
    mpirun -gtool "advisor --collect=tripcounts --flop --project-dir=./advi_results:1" -n 4 ./mpi_sample
    After you collect the Survey, Trip Counts, and FLOP data, you also get the Roofline report for your application.
  3. If you did not collect data to a shared location and need to copy the data to the local system to view the results, do it now.
  4. On the local system, view the results with your preferred method. You can view data for only one process at a time.
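For example, to print the Survey result for rank 1 to a terminal (see View Results below for details):
    advisor --report=survey --project-dir=./advi_results --mpi-rank=1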

Analyze Multiple Ranks of MPI Application with Intel MPI Library

Prerequisite: Set up environment variables to enable the Intel Advisor CLI.
In the commands below:
  • Data is collected remotely to a shared directory.
  • The analyses are performed for an application running in four processes.
  • The path to the application executable is ./mpi_sample.
    Note: Make sure to replace the application path and name before executing a command. If your application requires additional command line options, add them after the executable name.
  • The path to the Intel Advisor project directory is ./advi_results.
Analyze a Set of Ranks
This example shows how to run the Survey, Trip Counts, and Roofline analyses for a set of ranks of the MPI application with the -gtool option of the Intel MPI Library.
  1. Collect survey data for ranks 1, 2, and 4 into the shared ./advi_results project directory on a target system.
    mpirun -gtool "advisor --collect=survey --project-dir=./advi_results:1-2,4" -n 4 ./mpi_sample
  2. Run the Trip Counts analysis with FLOP collection for ranks 1, 2, and 4 on a target system.
    mpirun -gtool "advisor --collect=tripcounts --flop --project-dir=./advi_results:1-2,4" -n 4 ./mpi_sample
    After you collect the Survey, Trip Counts, and FLOP data, you also get the Roofline report for your application.
  3. If you did not collect data to a shared location and need to copy the data to the local system to view the results, do it now.
  4. On the local system, view the results with your preferred method. You can view data for only one process at a time.
Analyze All Ranks
This example shows how to run the Survey, Trip Counts, and Roofline analyses for all ranks of the MPI application with the -gtool option of the Intel MPI Library.
  1. Collect survey data for all ranks into the shared ./advi_results project directory on a target system.
    mpirun -gtool "advisor --collect=survey --project-dir=./advi_results" -n 4 ./mpi_sample
  2. Run the Trip Counts analysis with FLOP collection for all ranks on a target system.
    mpirun -gtool "advisor --collect=tripcounts --flop --project-dir=./advi_results" -n 4 ./mpi_sample
    After you collect the Survey, Trip Counts, and FLOP data, you also get the Roofline report for your application.
  3. If you did not collect data to a shared location and need to copy the data to the local system to view the results, do it now.
  4. On the local system, view the results with your preferred method. You can view data for only one process at a time.

Use Non-Intel MPI Library

With a non-Intel MPI library implementation, you can analyze only all ranks of your MPI application at once with Intel Advisor; you cannot select specific ranks. This might increase analysis overhead.
MPI Command Syntax
To collect performance data for an MPI application with Intel Advisor using the mpirun launcher, use the following command syntax:
mpirun -n <N> "advisor --collect=<analysis-type> --search-dir src:r=<source-dir> --trace-mpi [--no-auto-finalize] --project-dir=<project-dir>" <application-name> [<application-options>]
where:
  • <N> is the number of MPI processes to launch.
  • <analysis-type> is the Intel Advisor analysis to run: survey, tripcounts, map, dependencies, or projection.
  • <source-dir> is the path to the directory where the application sources are stored. Specify it if you disabled auto-finalization.
  • <project-dir> is the path/name of the project directory where the analysis results are saved. Specify the same project directory when running various Intel Advisor collections for the selected process.
  • --trace-mpi enables analyzing non-Intel MPI library implementations. This option is required for non-Intel MPI implementations.
  • --no-auto-finalize disables result finalization on the target system to decrease overhead. The results are finalized when you import them or open them in the Intel Advisor GUI. Do not use this option if you do not have a shared directory and plan to copy results from the cluster with a snapshot. See Temporarily Disable Finalization for details.
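For example, a sketch that runs the Survey analysis for all four ranks and defers finalization (the ./src source directory is a placeholder):
    mpirun -n 4 "advisor --collect=survey --search-dir src:r=./src --trace-mpi --no-auto-finalize --project-dir=./advi_results" ./mpi_sample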

Analyze an MPI Application with Non-Intel MPI Library

Prerequisite: Set up environment variables to enable the Intel Advisor CLI.
In the commands below:
  • Data is collected remotely to a shared directory.
  • The analyses are performed for an application running in four processes.
  • The path to the application executable is ./mpi_sample.
    Note: Make sure to replace the application path and name before executing a command. If your application requires additional command line options, add them after the executable name.
  • The path to the Intel Advisor project directory is ./advi_results.
This example shows how to run the Survey, Trip Counts, and Roofline analyses for all four ranks of the MPI application.
  1. Collect survey data for all ranks into the shared ./advi_results project directory on a target system.
    mpirun -n 4 "advisor --collect=survey --trace-mpi --project-dir=./advi_results" ./mpi_sample
  2. Run the Trip Counts analysis with FLOP collection on the target system.
    mpirun -n 4 "advisor --collect=tripcounts --flop --trace-mpi --project-dir=./advi_results" ./mpi_sample
    After you collect the Survey, Trip Counts, and FLOP data, you also get the Roofline report for your application.
  3. If you did not collect data to a shared location and need to copy the data to the local system to view the results, do it now.
  4. On the local system, view the results with your preferred method. You can view data for only one process at a time.
For all analysis types and MPI libraries: When using a shared partition on Windows* OS, specify the network paths to the project and executable locations, or use the MPI options -mapall or -map to specify these locations on the network drive.
For example:
mpiexec -gwdir \\<host1>\mpi -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=\\<host1>\mpi\advi_results -- \\<host1>\mpi\mpi_sample.exe
advisor --import-dir=\\<host1>\mpi\advi_results --project-dir=\\<host1>\mpi\new_advi_results --search-dir src:=\\<host1>\mpi --mpi-rank=1
advisor --report=survey --project-dir=\\<host1>\mpi\new_advi_results
or:
mpiexec -mapall -gwdir z:\ -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=z:\advi_results -- z:\mpi_sample.exe
or:
mpiexec -map z:\\<host1>\mpi -gwdir z:\ -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=z:\advi_results -- z:\mpi_sample.exe

View Results

Intel Advisor saves collection results into subdirectories for each analyzed rank under the project directory specified with --project-dir. The subdirectories are named rank.<n>, where the numeric suffix <n> corresponds to the analyzed MPI rank. You can view results for only one rank at a time.
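For example, after analyzing ranks 1, 2, and 4, the project directory contains the subdirectories rank.1, rank.2, and rank.4 (an illustrative listing; other project files are omitted):
    ls ./advi_results
    rank.1  rank.2  rank.4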
To view the performance results collected for a specific rank, you can do one of the following.
View Results in GUI
From the Intel Advisor GUI, open a result project file *.advixeproj that resides in the <project-dir>/rank.<n> directory.
You can also open the GUI from the command line:
advisor-gui ./advi_results/rank.1
If you used --no-auto-finalize when collecting data, make sure to set paths to the application binaries and sources before viewing the result so that Intel Advisor can finalize it properly.
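For example, a sketch that imports and finalizes the rank 1 result into a new project, assuming sources in ./src:
    advisor --import-dir=./advi_results --project-dir=./new_advi_results --search-dir src:=./src --mpi-rank=1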
View Results in Command Line
Run the Intel Advisor --report action to print the result summary in a terminal:
advisor --report=<analysis-type> --project-dir=<project-dir> --mpi-rank=<n>
where:
  • <analysis-type> is the Intel Advisor analysis you want to print the results for.
  • <project-dir> is the same project directory you used for data collection.
  • <n> is the number of the MPI rank you want to view results for.
View Results in a File
You can save the results for a specified rank to a TXT, CSV, or XML file. For example, to save the results to an advisor_result.csv file, run the following command:
advisor --report=<analysis-type> --format=csv --report-output=advisor_result.csv --project-dir=<project-dir> --mpi-rank=<n>
where:
  • <analysis-type> is the Intel Advisor analysis you want to print the results for.
  • <project-dir> is the same project directory you used for data collection.
  • <n> is the number of the MPI rank you want to view results for.
  • --format specifies the file format to save the results to. In the command above, it is CSV.
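For example, to save the Survey report for rank 1 to a CSV file:
    advisor --report=survey --format=csv --report-output=advisor_result.csv --project-dir=./advi_results --mpi-rank=1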

Additional MPI Resources

For more details on analyzing MPI applications, see the Intel MPI Library documentation and the online MPI documentation on the Intel® Developer Zone at https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html
For hybrid applications, see Intel MPI Library and OpenMP* on the Intel Developer Zone at https://www.intel.com/content/www/us/en/developer/articles/technical/hybrid-applications-mpi-openmp.html
