Intel® Advisor User Guide

ID 766448
Date 12/16/2022
Public



Model Offloading to a GPU

Find high-impact opportunities to offload your code to a target graphics processing unit (GPU) and identify potential performance bottlenecks on that target by running the Offload Modeling perspective.

The Offload Modeling perspective can help you to do the following:

  • For code running on a CPU, determine if you should offload it to a target device and estimate the potential speedup before you purchase the hardware.
  • For code running on a GPU, estimate the potential speedup from running it on a different target device before you purchase the hardware.
  • Identify loops that are recommended for offloading from a baseline CPU to a target GPU.
  • Pinpoint potential performance bottlenecks on the target device to decide on optimization directions.
  • Check how effectively data can be transferred between host and target devices.
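For example, the analyses behind this perspective can also be run from the command line. The sketch below is illustrative, not definitive: the project directory, the application name (`./myApp`), and the `gen12_tgl` target configuration are placeholder assumptions, and available options may differ between Advisor versions.

```sh
# Sketch of a CPU-to-GPU Offload Modeling run via the advisor CLI.
# Paths, the application name, and the target config are placeholders;
# check `advisor --help collect` in your installation for exact options.

# 1. Survey the application to collect baseline CPU performance data.
advisor --collect=survey --project-dir=./advi_results -- ./myApp

# 2. Collect trip counts and FLOP data needed for performance modeling.
advisor --collect=tripcounts --flop --project-dir=./advi_results -- ./myApp

# 3. Model performance on a target GPU (here, an assumed Gen12 config).
advisor --collect=projection --config=gen12_tgl --project-dir=./advi_results
```

After the projection step completes, the modeling results, including estimated speedup and data transfer costs, can be reviewed in the Advisor GUI or in the generated report files under the project directory.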

With the Offload Modeling perspective, the following workflows are available:

  • CPU-to-GPU offload modeling:
    • For C, C++, and Fortran applications: Analyze an application and model its performance on a target GPU device. Use this workflow to find offload opportunities and prepare your code for efficient offload to the GPU.