Intel® Advisor User Guide

ID 766448
Date 6/24/2024
Public



Parallel Programming Implementations

There are two popular approaches for adding parallelism to programs. You can use either:

  • A high-level parallel framework like Intel® oneAPI Threading Building Blocks (oneTBB) or OpenMP*. Of these frameworks for native code, oneTBB supports C++ programs, and OpenMP supports C, C++, and Fortran programs. For managed code on Windows* OS, such as C#, use the Microsoft Task Parallel Library* (TPL).

    NOTE:
    C# and .NET support is deprecated starting with Intel® Advisor 2021.1.
  • A low-level threading API like Windows* threads or POSIX* threads. In this case, you create and control threads directly at a low level, and programs written this way may be less portable than those that use a high-level framework. The sketch after this list gives an idea of the extra work involved.
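
For contrast, here is a minimal sketch of what the low-level approach can look like with POSIX* threads. The workload (summing an array), the hard-coded thread count, and all names are illustrative assumptions, not part of Intel® Advisor or of any particular application:

    // Low-level threading: the programmer creates the threads, partitions
    // the iteration space by hand, and joins the threads explicitly.
    #include <pthread.h>
    #include <cstdio>

    constexpr int kNumThreads = 4;        // chosen by hand, not by a runtime
    constexpr int kN = 1000000;
    static double data[kN];
    static double partial[kNumThreads];

    struct Range { int begin, end, id; };

    void* sum_range(void* arg) {
        Range* r = static_cast<Range*>(arg);
        double s = 0.0;
        for (int i = r->begin; i < r->end; ++i)
            s += data[i];
        partial[r->id] = s;
        return nullptr;
    }

    int main() {
        for (int i = 0; i < kN; ++i) data[i] = 1.0;

        pthread_t threads[kNumThreads];
        Range ranges[kNumThreads];
        const int chunk = kN / kNumThreads;

        // Decide manually how iterations are split across threads.
        for (int t = 0; t < kNumThreads; ++t) {
            ranges[t] = { t * chunk,
                          (t == kNumThreads - 1) ? kN : (t + 1) * chunk, t };
            pthread_create(&threads[t], nullptr, sum_range, &ranges[t]);
        }

        double total = 0.0;
        for (int t = 0; t < kNumThreads; ++t) {
            pthread_join(threads[t], nullptr);    // wait for each thread
            total += partial[t];
        }
        std::printf("sum = %f\n", total);
        return 0;
    }

With a high-level framework, the thread creation, work partitioning, and joining shown above are handled by the runtime.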

There are several reasons that Intel recommends using a high-level parallel framework:

  • Simplicity: You do not have to code all the detailed operations required by the threading APIs. For example, the OpenMP* #pragma omp parallel for (or Fortran !$OMP PARALLEL DO) and the oneTBB parallel_for() are designed to make it easy to parallelize a loop (see Reinders Ch. 3; a short sketch follows this list). With frameworks, you reason about tasks and the work to be done; with threads, you also have to decide how each thread does its work.

  • Scalability: The frameworks choose an appropriate number of threads for the available cores and assign tasks to those threads efficiently, so the program makes use of all the cores on the current system.

  • Loop Scalability: oneTBB and OpenMP assign contiguous chunks of loop iterations to existing threads, amortizing the threading overhead across multiple iterations (see oneTBB grain size: Reinders Ch. 3).

  • Automatic Load Balancing: oneTBB and OpenMP have features for automatically adjusting the grain size to spread work among the cores. In addition, when loop iterations or parallel tasks do uneven amounts of work, the oneTBB scheduler dynamically reassigns work to avoid idle cores. The grain size and scheduling controls are illustrated in the second sketch after this list.
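
To illustrate the Simplicity point, the sketch below parallelizes the same made-up loop twice, once with the OpenMP pragma and once with oneTBB parallel_for(). The function names and loop body are assumptions for illustration only:

    #include <cstddef>
    #include <vector>
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>

    // OpenMP: a single pragma marks the loop as parallel; the runtime
    // creates the threads and divides the iterations among them.
    void scale_openmp(std::vector<float>& v, float factor) {
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            v[i] *= factor;
    }

    // oneTBB: parallel_for splits the range into tasks and schedules
    // them onto the worker threads that the library manages.
    void scale_tbb(std::vector<float>& v, float factor) {
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    v[i] *= factor;
            });
    }

Neither version manages threads explicitly; each needs only the appropriate compiler option for OpenMP or a link against the oneTBB library.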
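
The grain size and load-balancing behavior described in the last two points can also be influenced explicitly when needed. The sketch below is a minimal illustration; the grain size and chunk values are placeholders to be tuned for a real workload, and the loop bodies are made up:

    #include <cstddef>
    #include <vector>
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>

    // oneTBB: an explicit grain size limits how finely the range is split,
    // so each task has enough work to amortize the scheduling overhead.
    void square_tbb(std::vector<double>& v) {
        const std::size_t grain = 1024;   // placeholder; tune per workload
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, v.size(), grain),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    v[i] = v[i] * v[i];
            });
    }

    // OpenMP: schedule(dynamic, chunk) hands out chunks of iterations at
    // run time, which helps when iterations do uneven amounts of work.
    void square_openmp(std::vector<double>& v) {
        #pragma omp parallel for schedule(dynamic, 64)
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            v[i] = v[i] * v[i];
    }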

To implement parallelism, you can use any parallel framework you are familiar with.

The high-level parallel frameworks available for each programming language include:

Language      Available High-Level Parallel Frameworks
C             OpenMP
C++           Intel® oneAPI Threading Building Blocks (oneTBB), OpenMP
C#            Microsoft Task Parallel Library* (Windows* OS only)
Fortran       OpenMP