User Guide


Analyze MPI Applications

With Intel® Advisor, you can analyze parallel tasks running on a cluster to examine the performance of your MPI application. Use the Intel® MPI Library to invoke the advisor command with mpiexec and spawn MPI processes across the cluster.
You can analyze MPI applications only through the command line interface, but you can view the results either in the standalone GUI or on the command line.


Consider the following when running collections for an MPI application:
  • Analysis data can be saved to a shared partition or to local directories on the cluster.
  • Only one process's data can be viewed at a time.
  • Intel® Advisor saves collection results into a subdirectory under the Intel Advisor project directory. If you wish to collect and then view (in a separate session) data for more than one process, specify a new project directory when running each new collection.
  • Specify the same project directory when running various Intel Advisor collections for the selected process.

MPI Implementations Support

You can use Intel Advisor with the Intel® MPI Library and other MPI implementations, but be aware of the following details:
  • You may need to adjust the command examples in this section to work for non-Intel MPI implementations. For example, adjust commands provided for process ranks to limit the number of processes in the job.
  • An MPI implementation needs to operate in cases when there is the Intel Advisor process (advisor) between the launcher process (mpiexec) and the application process. This means that the communication information should be passed using environment variables, as most MPI implementations do. Intel Advisor does not work on an MPI implementation that tries to pass communication information from its immediate parent process.

Get Intel® MPI Library Commands

You can use Intel Advisor to generate the command line for collecting results on multiple MPI ranks. To do that:
  1. In the Intel Advisor user interface, go to the Project Properties > Analysis Target tab and select the analysis you want to generate the command line for. For example, go to Survey Analysis Types > Survey Hotspots Analysis to generate the command line for the Survey analysis.
  2. Set properties to configure the analysis, if required.
  3. Select the Use MPI Launcher option.
  4. Specify the MPI run parameters and the ranks to profile, if required (for the Intel MPI Library only), then copy the command line from the Get command line text box to your clipboard.

Intel® MPI Library Command Syntax

Use the -gtool option of mpiexec with the Intel® MPI Library 5.0.2 and higher:
$ mpiexec -gtool "advisor --collect=<analysis-type> --project-dir=<project-dir>:<ranks-set>" -n <N> <application-name> [myApplication-options]
  • <analysis-type> is one of the following Intel Advisor analysis types:
    • survey runs the target process and collects basic information about the hotspots.
    • tripcounts collects data on the loop trip counts.
    • dependencies collects information about possible dependencies in your application. Requires one of the following:
      • Loop ID(s) as an additional parameter. Find the loop IDs in the Survey report or by using the Command Line link in the Intel® Advisor Workflow tab.
      • Loop source location(s) in the file:line format.
      • Annotations in the source code.
    • map collects information about memory access patterns for the selected loops. Also requires loop source locations for the analysis.
    • suitability checks the suitability of the parallel site that you want to insert into your target application. Requires annotations to be added into the source code of your application, and also requires recompilation in Debug mode.
    • projection models your application performance on an accelerator.
  • <ranks-set> is the set of MPI ranks to run the analysis for. Separate ranks with a comma, or use a dash "-" to set a range of ranks. Use all to analyze all the ranks.
  • <N> is the number of MPI processes to launch.
The -gtool option of mpiexec allows you to select the MPI ranks to run analyses for. This can decrease collection overhead.
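For example, the following commands run the Survey analysis on ranks 0 through 2 of a four-process job and then collect trip counts for the same ranks into the same project directory (the application name and project directory here are hypothetical placeholders):
$ mpiexec -gtool "advisor --collect=survey --project-dir=./advi_results:0-2" -n 4 ./myApplication
$ mpiexec -gtool "advisor --collect=tripcounts --project-dir=./advi_results:0-2" -n 4 ./myApplication
Reusing the project directory for both collections lets the Trip Counts data complement the Survey data for each profiled rank.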

Generic MPI Command Syntax

Use mpiexec with the advisor command to spawn processes across the cluster and collect data about the application.
Each process has a rank associated with it. This rank is used to identify the result data.
To collect performance or dependencies data for an MPI program with Intel Advisor, the general form of the mpiexec command is:
$ mpiexec -n <N> "advisor --collect=<analysis-type> --project-dir=<project-dir> --search-dir src:r=<source-dir>" myApplication [myApplication-options]
  • <N>
    is the number of MPI processes to launch.
  • <project-dir>
    specifies the path/name of the project directory.
  • <analysis-type> is one of the Intel Advisor analysis types described above.
  • <source-dir>
    is the path to the directory where annotated sources are stored.
This command profiles all MPI ranks.
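If you need to profile only a subset of ranks with a generic launcher, most MPI implementations support a colon-separated multiple-program (MPMD) launch syntax. The following sketch (application name, process counts, and project directory are hypothetical) profiles rank 0 only, while ranks 1 through 3 run without Intel Advisor:
$ mpiexec -n 1 advisor --collect=survey --project-dir=./advi_results -- ./myApplication : -n 3 ./myApplication
Check your MPI implementation's documentation to confirm it supports this MPMD syntax before relying on it.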
For details about analyzing the MPI application with the Offload Modeling perspective of the
Intel Advisor
, see Model MPI Application Performance on GPU.

Control Collection with an MPI_Pcontrol Function

By default, Intel Advisor analyzes the performance of a whole application. In some cases, you may want to focus on the most time-consuming section or disable collection for the initialization or finalization phases. Intel Advisor supports MPI region control with the MPI_Pcontrol function. This function allows you to enable and disable collection for specific application regions in the source code.
The region control affects only MPI and OpenMP* metrics, while the other metrics are collected for the entire application.
To use the function, add it to your application source code as follows:
  • To pause data collection, add MPI_Pcontrol(0) before the code region that you want to disable the collection for.
  • To resume data collection, add MPI_Pcontrol(1) where you want the collection to start again.
  • To skip the initialization phase:
    1. Add the MPI_Pcontrol(1) function right after initialization. Build the application.
    2. Run the desired Intel Advisor analyses with the start-paused option.
According to the MPI standard, MPI_Pcontrol accepts other numbers as arguments. For Intel Advisor, only the arguments described in this section are relevant.
You can also use MPI_Pcontrol to mark specific code regions. Use MPI_Pcontrol(<n>) at the beginning of the region and MPI_Pcontrol(-<n>) at the end of the region, where <n> is 5 or higher.
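As an illustration, the initialization-skipping steps above might look as follows on the command line; the compiler wrapper, application name, and project directory are placeholders for your own, and this assumes your Intel Advisor version supports the start-paused option:
$ mpiicc -g myApplication.c -o myApplication
$ mpiexec -n 4 "advisor --collect=survey --start-paused --project-dir=./advi_results" ./myApplication
Here the collection stays paused until the application reaches the MPI_Pcontrol(1) call you added after initialization.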

View Results

As a result of the collection, Intel Advisor creates a number of result directories in the directory specified with --project-dir. The nested result directories are named rank.0, rank.1, ... rank.n, where the numeric suffix n corresponds to the MPI process rank.
To view the performance or dependency results collected for a specific rank, you can either open the result project file that resides in the corresponding rank.n directory via the Intel Advisor GUI, or run the Intel Advisor CLI report:
$ advisor --report=<analysis-type> --project-dir=<project-dir>:<ranks-set>
You can view only one rank's results at a time.
For the Offload Modeling perspective, you do not need to run the advisor --report command. The reports are generated automatically after you run the performance modeling. See Model MPI Application Performance on GPU for details.

Additional MPI Resources

For more details on analyzing MPI applications, see the Intel MPI Library documentation and the online MPI documentation on the Intel® Developer Zone.
Hybrid applications: Intel MPI Library and OpenMP* on the Intel Developer Zone.
