Intel® Advisor User Guide

ID 766448
Date 3/31/2023



Explore Offload Modeling Results

Intel® Advisor provides several ways to work with the Offload Modeling results generated from the command line.

View Results in CLI

When you run the Offload Modeling perspective from the command line, the result summary is printed to a terminal or command prompt. The summary report includes:

  • A description of the baseline device on which application performance was measured and the target device for which performance was modeled
  • The executable binary name
  • Top metrics for the measured and estimated (accelerated) application performance
  • Top regions recommended for offloading to the target, with performance metrics per region

For example:

Info: Selected accelerator to analyze: Intel(R) Gen11 Integrated Graphics Accelerator 64EU.
Info: Baseline Host: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, GPU: Intel (R) .
Info: Binary Name: 'CFD'.
Info: An unknown atomic access pattern is specified: partial_sums_16. Possible values are same, sequential. sequential will be used.

Measured CPU Time: 44.858s    Accelerated CPU+GPU Time: 16.265s
Speedup for Accelerated Code: 3.5x    Number of Offloads: 7    Fraction of Accelerated Code: 60%

Top Offloaded Regions
 Location                                               | CPU      | GPU      | Estimated Speedup | Bounded By | Data Transferred
 [loop in compute_flux_ser at euler3d_cpu_ser.cpp:226]  |  36.576s |   9.340s |             3.92x | L3_BW      |         12.091MB
 [loop in compute_step_factor_ser at euler3d_cpu_ser....|   0.844s |   0.101s |             8.37x | LLC_BW     |          4.682MB
 [loop in time_step_ser at euler3d_cpu_ser.cpp:361]     |   0.516s |   0.278s |             1.86x | L3_BW      |         10.506MB
 [loop in time_step_ser at euler3d_cpu_ser.cpp:361]     |   0.456s |   0.278s |             1.64x | L3_BW      |         10.506MB
 [loop in time_step_ser at euler3d_cpu_ser.cpp:361]     |   0.432s |   0.278s |             1.55x | L3_BW      |         10.506MB

See the Accelerator Metrics reference for more information about the reported metrics.
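If you need to post-process this text summary programmatically, you can extract the top-level metrics with a few regular expressions. The following Python sketch assumes the line formats match the example above; they may differ between Intel Advisor versions, so treat the patterns as a starting point:

```python
import re

# Sample lines as printed in the Offload Modeling CLI summary
# (format taken from the example above; it may vary by version).
summary = """\
Measured CPU Time: 44.858s    Accelerated CPU+GPU Time: 16.265s
Speedup for Accelerated Code: 3.5x    Number of Offloads: 7    Fraction of Accelerated Code: 60%
"""

def parse_summary(text):
    """Extract key top-level metrics from the summary text."""
    patterns = {
        "measured_cpu_time_s": r"Measured CPU Time:\s*([\d.]+)s",
        "accelerated_time_s": r"Accelerated CPU\+GPU Time:\s*([\d.]+)s",
        "speedup": r"Speedup for Accelerated Code:\s*([\d.]+)x",
        "num_offloads": r"Number of Offloads:\s*(\d+)",
    }
    metrics = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            metrics[name] = float(match.group(1))
    return metrics

print(parse_summary(summary))
```

A script like this can, for example, fail a CI job when the estimated speedup drops below a threshold.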

View Results in GUI

When you run the Offload Modeling perspective from the command line, an .advixeproj project is automatically created in the directory specified with --project-dir. The project stores all collected results and analysis configurations, and you can open it interactively in the Intel Advisor GUI.

To open the project in the GUI, run the following command from a terminal or command prompt:

advisor-gui <project-dir>

If the report does not open, click Show Result on the Welcome pane.

If you run the Offload Modeling perspective from GUI, the result is opened automatically after the collection finishes.

The Summary report opens first and shows the most important information about measured performance on the baseline device and modeled performance on the target device, including:

  • Main metrics for the modeled performance of your program that indicate whether you should offload your application to the target device.
  • Specific factors that prevent your code from achieving better performance on the target device, shown in the Offload Bounded By column.
  • Top five offloaded loops/functions that provide the highest benefit, and top five non-offloaded loops/functions with the reasons why they were not offloaded.

View an Interactive HTML Report

When you execute the Offload Modeling perspective from the CLI, Intel Advisor automatically saves two types of HTML reports in the <project-dir>/e<NNN>/report directory:

  • Interactive HTML report (advisor-report.html), which presents results in a way similar to the GUI and enables you to view key estimated metrics for your application.
    Collect GPU Roofline data to view results for the Offload Modeling and GPU Roofline Insights perspectives in a single interactive HTML report.
  • Legacy HTML report (report.html), which provides detailed information about functions in a call tree, lets you download a configuration file for a target accelerator, and shows perspective execution logs.

For details about HTML reports and instructions on exporting them if you run the Offload Modeling from GUI, see Work with Standalone HTML Reports.

To explore the interactive HTML report, you can download precollected Offload Modeling reports and examine the results and structure.
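If you script around these reports, you can locate them by searching the project directory. Below is a minimal Python sketch that assumes the <project-dir>/e<NNN>/report layout described above; result-directory names may vary between Intel Advisor versions:

```python
from pathlib import Path

def find_html_reports(project_dir):
    """Locate Offload Modeling HTML reports under an Advisor project.

    Assumes the <project-dir>/e<NNN>/report layout described above
    (an assumption; verify against your Advisor version).
    """
    project = Path(project_dir)
    reports = []
    for report_dir in sorted(project.glob("e*/report")):
        for name in ("advisor-report.html", "report.html"):
            candidate = report_dir / name
            if candidate.exists():
                reports.append(candidate)
    return reports
```

For example, `find_html_reports("./advi_results")` returns the paths of all interactive and legacy reports found, which you can then open in a browser or attach to a build artifact.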

An additional set of reports is generated in the <project-dir>/e<NNN>/pp<NNN>/data0 directory, including:

  • Multiple CSV reports for different metric groups, such as report.csv, whole_app_metrics.csv, bounded_by_times.csv, and latencies.csv.
  • A graphical representation of the call tree showing the offloadable and accelerated regions, named program_tree.pdf, which is generated if a DOT* utility is installed on your system.
  • LOG files, which can be used for debugging and for reporting bugs and issues.

These reports are lightweight and easy to share because they do not require the Intel Advisor GUI to view.
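Because these are plain CSV files, they are easy to load with standard tooling. The short Python sketch below reads a report into row dictionaries; the `region` and `speedup` column names used in the example helper are hypothetical, so inspect the header row of your own files first:

```python
import csv

def read_metrics_csv(path):
    """Read one of the generated CSV reports into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def top_regions(rows, key="speedup"):
    """Sort rows by a numeric column, highest first.

    The 'speedup' column name is an assumption -- check the
    header of your own report.csv before relying on it.
    """
    return sorted(rows, key=lambda r: float(r[key]), reverse=True)
```

With helpers like these you can, for example, aggregate per-region estimates across several runs without opening the GUI.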

Save a Read-only Result Snapshot

A snapshot is a read-only copy of a project result, which you can view at any time in the Intel Advisor GUI. You can save a snapshot for a project using either the Intel Advisor GUI or the CLI.

To save an active project resu