Run Offload Modeling Perspective from Command Line
Intel® Advisor provides several methods to run the Offload Modeling perspective from the command line. Use one of the following:
- Method 1. Run Offload Modeling with command line collection presets. Use this method if you want to use the basic Intel Advisor analysis and modeling functionality, especially for a first-run analysis. This simple method allows you to run multiple analyses with a single command and control the modeling accuracy.
- Method 2. Run Offload Modeling analyses separately. Use this method if you want to analyze an MPI application or need more advanced analysis customization. This method allows you to select what performance data you want to collect for your application and configure each analysis separately.
- Method 3. Run Offload Modeling with Python* scripts. Use this method if you need more analysis customization. This method is moderately flexible and allows you to customize the data collection and performance modeling steps (see the sketch after this overview).
After you run the Offload Modeling perspective with any of the methods above, you can view the results in the Intel Advisor graphical user interface (GUI), the command line interface (CLI), or an interactive HTML report.
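For illustration, a minimal sketch of Method 3 is shown below. It assumes that the APM environment variable is set by the Intel Advisor environment script (see Prerequisites below) and that the all-in-one run_oa.py script is available in that directory; script names and options can differ between Intel Advisor versions, so treat these lines as an example rather than a definitive recipe, and replace myApplication with your application executable path and name.
- On Linux* OS:
advisor-python $APM/run_oa.py ./advi_results -- ./myApplication
- On Windows* OS:
advisor-python %APM%\run_oa.py .\advi_results -- .\myApplication.exe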
Prerequisites
- Set Intel Advisor environment variables with an automated script.
The script enables the advisor command line interface (CLI), advisor-python command line tool, and the APM environment variable, which points to the directory with Offload Modeling scripts and simplifies their use.
- For SYCL, OpenMP* target, and OpenCL™ applications: Set Intel Advisor environment variables to temporarily offload your application to a CPU for the analysis (see the example after this list).
NOTE: It is recommended that you run the GPU-to-GPU performance modeling to analyze SYCL, OpenMP target, and OpenCL applications because it provides more accurate estimations.
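For example, on a system with a default oneAPI installation, the environment setup might look like the following. The installation path and the device-selection variable below are assumptions; adjust them for your system and offload runtime.
- On Linux* OS:
source /opt/intel/oneapi/setvars.sh
export ONEAPI_DEVICE_SELECTOR=opencl:cpu    # for SYCL applications only: run kernels on a CPU during the analysis
- On Windows* OS:
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
set ONEAPI_DEVICE_SELECTOR=opencl:cpu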
Optional: Generate Pre-configured Command Lines
With the Intel Advisor, you can generate pre-configured command lines for your application and hardware. Use this feature if you want to:
- Analyze an MPI application
- Customize pre-set Offload Modeling commands
The Offload Modeling perspective consists of multiple analysis steps executed for the same application and project. You can configure each step from scratch or use pre-configured command lines that do not require you to manually provide the paths to the project directory and the application executable.
Option 1. Generate pre-configured command lines with --collect=offload and the --dry-run option. The option generates:
- Commands for the Intel Advisor CLI collection workflow
- Commands that correspond to a specified accuracy level
- Commands not configured to analyze an MPI application. You need to manually adjust the commands for MPI.
Note: In the commands below, make sure to replace myApplication with your application executable path and name before executing a command. If your application requires additional command line options, add them after the executable name.
The workflow includes the following steps:
- Generate the command using the --collect=offload action with the --dry-run option. Specify the accuracy level and the paths to your project directory and application executable.
For example, to generate the low-accuracy commands for the myApplication application, run the following command:
- On Linux* OS:
advisor --collect=offload --accuracy=low --dry-run --project-dir=./advi_results -- ./myApplication
- On Windows* OS:
advisor --collect=offload --accuracy=low --dry-run --project-dir=.\advi_results -- .\myApplication.exe
You should see a list of commands for each analysis step needed to get the Offload Modeling result with the specified accuracy level (for the commands above, it is low), similar to the sketch after these steps.
- If you analyze an MPI application: Copy the generated commands to your preferred text editor and modify each command to use an MPI tool. For details about the syntax, see Analyze MPI Applications.
- Run the generated commands one by one from a command prompt or a terminal.
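For reference, the generated list is similar to the following sketch (shown for Linux* OS and the low accuracy level). The exact analysis types and options depend on your Intel Advisor version and the accuracy level you specified, so use the commands that --dry-run actually prints rather than these illustrative lines:
advisor --collect=survey --project-dir=./advi_results --static-instruction-mix -- ./myApplication
advisor --collect=tripcounts --project-dir=./advi_results --flop -- ./myApplication
advisor --collect=projection --project-dir=./advi_results --config=xehpg_512xve --no-assume-dependencies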
Option 2. If you have the Intel Advisor graphical user interface (GUI) available on your system and you want to analyze an MPI application from the command line, you can generate the pre-configured command lines from the GUI.
The GUI generates:
- Commands for the Intel Advisor CLI collection workflow
- Commands for a selected accuracy level if you want to run a pre-defined accuracy level or commands for a custom project configuration if you want to enable/disable additional analysis options
- Commands configured for MPI applications with the Intel® MPI Library. You do not need to manually modify the commands for the MPI application syntax.
For detailed instructions, see Generate Pre-configured Command Lines.
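For illustration, an MPI-ready command typically wraps the Intel Advisor collection in the MPI launcher. The sketch below assumes the Intel® MPI Library, four ranks, and profiling of rank 0 only via the -gtool option; the exact launcher options for your environment are described in Analyze MPI Applications:
mpirun -n 4 -gtool "advisor --collect=survey --project-dir=./advi_results:0" ./myApplication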
Method 1. Use Collection Presets
For the Offload Modeling perspective, Intel Advisor has a special collection mode --collect=offload that allows you to run several analyses using only one Intel Advisor CLI command. When you run the collection, it sequentially runs the data collection and performance modeling steps. The specific analyses and options depend on the accuracy level you specify for the collection.
Note: In the commands below, make sure to replace myApplication with your application executable path and name before executing a command. If your application requires additional command line options, add them after the executable name.
For example, to run the Offload Modeling perspective with the default (medium) accuracy level:
- On Linux* OS:
advisor --collect=offload --project-dir=./advi_results -- ./myApplication
- On Windows* OS:
advisor --collect=offload --project-dir=.\advi_results -- .\myApplication.exe
The collection progress and commands for each analysis executed will be printed to a terminal or a command prompt. By default, the performance is modeled for the Intel® Arc™ graphics code-named Alchemist (xehpg_512xve configuration). When the collection is finished, you will see the result summary.
Analysis Details
To change the analyses to run and their options, you can specify a different accuracy level with the --accuracy=<level> option. The default accuracy level is medium.
The following accuracy levels are available:
- low accuracy includes Survey, Characterization with Trip Counts and FLOP collections, and Performance Modeling analyses.
- medium (default) accuracy includes Survey, Characterization with Trip Counts and FLOP collections, cache and data transfer simulation, and Performance Modeling analyses.
- high accuracy includes Survey, Characterization with Trip Counts and FLOP collections, cache, data transfer, and memory object attribution simulation, and Performance Modeling analyses. For CPU applications, also includes the Dependencies analysis.
For CPU applications, this accuracy level adds a high collection overhead because it includes the Dependencies analysis. This analysis is not required if your application does not have loop-carried dependencies.
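For example, to run the Offload Modeling perspective with the high accuracy level (as before, replace myApplication with your application executable path and name):
- On Linux* OS:
advisor --collect=offload --accuracy=high --project-dir=./advi_results -- ./myApplication
- On Windows* OS:
advisor --collect=offload --accuracy=high --project-dir=.\advi_results -- .\myApplication.exe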