Intel® Advisor User Guide

ID 766448
Date 3/31/2023
Public



Run Offload Modeling Perspective from Command Line

Intel® Advisor provides several methods to run the Offload Modeling perspective from command line. Use one of the following:

  • Method 1. Run Offload Modeling with a command line collection preset. Use this method if you want basic Intel Advisor analysis and modeling functionality, especially for a first-run analysis. This simple method allows you to run multiple analyses with a single command and control the modeling accuracy.
  • Method 2. Run Offload Modeling analyses separately. Use this method if you want to analyze an MPI application or need more advanced analysis customization. This method allows you to select what performance data you want to collect for your application and configure each analysis separately.
  • Method 3. Run Offload Modeling with Python* scripts. Use this method if you need more analysis customization. This method is moderately flexible and allows you to customize data collection and performance modeling steps.
TIP:
See Intel Advisor cheat sheet for quick reference on command line interface.

After you run Offload Modeling with any of the methods above, you can view the results in the Intel Advisor graphical user interface (GUI), the command line interface (CLI), or an interactive HTML report.
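As a sketch, assuming the project directory ./advi_results used in the examples below, the results can be opened roughly as follows (the HTML report location may vary by Intel Advisor version; check the collection log for its exact path):

```shell
# Open the collected results in the Intel Advisor GUI
# (the project directory path is an assumption based on the
# examples later in this section):
advisor-gui ./advi_results

# The interactive HTML report is typically written under the
# project directory after collection completes; open it in any
# browser, for example (path is illustrative, not guaranteed):
# firefox ./advi_results/<report-subdirectory>/report.html
```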

Prerequisites

  1. Set Intel Advisor environment variables with an automated script.

    The script enables the advisor command line interface (CLI), advisor-python command line tool, and the APM environment variable, which points to the directory with Offload Modeling scripts and simplifies their use.

  2. For SYCL, OpenMP* target, and OpenCL™ applications: Set Intel Advisor environment variables to temporarily offload your application to a CPU for the analysis.
    NOTE:
    It is recommended that you run the GPU-to-GPU performance modeling to analyze SYCL, OpenMP target, and OpenCL applications because it provides more accurate estimations.
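The two prerequisite steps can be sketched as follows. The installation path and the device-selector variable shown here are assumptions for a typical oneAPI setup; adjust them to match your installation and compiler version:

```shell
# Step 1: source the automated environment script
# (install path is an assumption; adjust to your oneAPI
# installation directory). This enables the advisor CLI,
# advisor-python, and sets the APM variable:
source /opt/intel/oneapi/setvars.sh

# Step 2 (SYCL/OpenMP target/OpenCL only): temporarily offload
# the application to a CPU for the analysis. The exact variable
# depends on your compiler version; a oneAPI device selector is
# shown here as an example:
export ONEAPI_DEVICE_SELECTOR=opencl:cpu
```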

Optional: Generate Pre-Configured Command Lines

With Intel Advisor, you can generate pre-configured command lines for your application and hardware. Use this feature if you want to:

  • Analyze an MPI application
  • Customize pre-set Offload Modeling commands

The Offload Modeling perspective consists of multiple analysis steps executed for the same application and project. You can configure each step from scratch or use pre-configured command lines that do not require you to manually provide the paths to the project directory and application executable.

Option 1. Generate pre-configured command lines with --collect=offload and the --dry-run option. This option generates:

  • Commands for the Intel Advisor CLI collection workflow
  • Commands that correspond to a specified accuracy level
  • Commands not configured to analyze an MPI application. You need to manually adjust the commands for MPI.

Note: In the commands below, make sure to replace myApplication with your application executable path and name before executing a command. If your application requires additional command line options, add them after the executable name.

The workflow includes the following steps:

  1. Generate the command using the --collect=offload action with the --dry-run option. Specify accuracy level and paths to your project directory and application executable.

    For example, to generate the low-accuracy commands for the myApplication application, run the following command:

    • On Linux* OS:
      advisor --collect=offload --accuracy=low --dry-run --project-dir=./advi_results -- ./myApplication
    • On Windows* OS:
      advisor --collect=offload --accuracy=low --dry-run --project-dir=.\advi_results -- .\myApplication.exe

    You should see a list of commands for each analysis step to get the Offload Modeling result with the specified accuracy level (for the commands above, it is low).
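The generated list is not reproduced in this guide. As an illustration only, the per-step commands printed by --dry-run typically follow the Survey, Trip Counts (characterization), and Performance Modeling flow, along these lines (exact actions and options depend on the accuracy level and your Intel Advisor version):

```shell
# Illustrative sketch of the kind of commands --dry-run prints;
# do not copy these verbatim -- use the output of your own
# --dry-run invocation instead:
advisor --collect=survey --project-dir=./advi_results -- ./myApplication
advisor --collect=tripcounts --flop --project-dir=./advi_results -- ./myApplication
advisor --collect=projection --project-dir=./advi_results
```

Run the printed commands one by one, in the order they appear, to produce the Offload Modeling result.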

  2. If you analyze an MPI