Intel® Advisor Cookbook
What You Will Learn
- Learn how to evaluate the performance of an application running on a graphics processing unit (GPU), pinpoint and prioritize bottlenecks, and identify room for optimization using the GPU Roofline Insights perspective from the command line interface.
- Learn how to check the profitability of porting a native C++ application to a target GPU from the command line, using a single command to run the Offload Modeling perspective.
- Learn how to check the profitability of porting a DPC++ application running on a baseline GPU to a different GPU from the command line with the Offload Modeling perspective.
- Learn how to analyze a native C++ application and estimate the profitability of porting the code to Data Parallel C++ (DPC++) with the Offload Modeling feature of Intel Advisor.
- Learn how to identify loops and functions to offload to a GPU, and find bottlenecks after offloading, using the Offload Modeling and GPU Roofline Insights features.
- Learn how to take advantage of Roofline Analysis to identify and address performance bottlenecks.
- Learn how to improve the performance of your application by optimizing its vectorization efficiency, following step-by-step recommendations suggested by Intel Advisor.
- Learn how to identify memory bottlenecks and improve performance by optimizing memory access patterns.
- Learn how to compare optimization strategies by visualizing multiple analysis results on the same chart.
- Learn how to identify vectorization issues and memory bottlenecks of an MPI application.
- Learn how to analyze your application performance on a remote system using the Intel Advisor command line interface (CLI) and visualize the results on a local system, such as macOS, using the Intel Advisor GUI.
- Learn how to set up Intel Advisor to analyze performance data from applications running in AWS EC2 instances.
- Learn how to set up and use Intel Advisor to interpret performance data for applications running on Cray systems.
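Several of the recipes above drive Intel Advisor entirely from the command line. As a minimal sketch of that workflow (the project directory `./advi_results` and the application `./myApplication` are placeholder names), the Offload Modeling and GPU Roofline collections can be launched like this:

```shell
# Run the Offload Modeling perspective with a single command
# to estimate the profitability of offloading to a target GPU.
advisor --collect=offload --project-dir=./advi_results -- ./myApplication

# Collect GPU Roofline data (Survey plus Trip Counts with GPU profiling)
# for an application already running on a GPU.
advisor --collect=roofline --profile-gpu --project-dir=./advi_results -- ./myApplication

# Export an interactive GPU Roofline chart to an HTML file
# for viewing on a local system without the Advisor GUI.
advisor --report=roofline --gpu --project-dir=./advi_results --report-output=./roofline.html
```

Exact option names can vary between Intel Advisor releases; check `advisor --help collect` for the options available in your installed version.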