• 04/11/2022

Building the OpenMP* Version

To build the OpenMP* version, you will modify the sample application to use OpenMP* parallelization and then compile the modified code. You will then start the application and compare the time with the baseline performance time.
  1. Set the project as the startup project.
  2. For the project, change the compiler to the Intel® C++ Compiler (Project > Intel Compiler > Use Intel C++).
  3. For the project, make sure the compiler option is set (Project > Properties > Configuration Properties > C/C++ > Language [Intel C++] > OpenMP Support = Generate Parallel Code (/Qopenmp)). This option is required to enable the OpenMP* extension in the compiler.
  4. Open the source file in the project.
  5. Change the following in the source file:
    • Move the iteration-independent value out of the loop.
    • Remove the validity check:
      • Exiting from the middle of a parallelized loop is not permitted.
      • The iterations saved by removing this check will be distributed without affecting the result.
    • Add a #pragma omp parallel for directive to the outermost loop to maximize the work done per thread.
    • Check against the complete change.
The makefile automatically runs the sample after it is built.
Compare the time to render the image to the baseline performance time.
If you wish to explicitly set the number of threads, you can set the environment variable OMP_NUM_THREADS=n, where n is the number of threads. Alternatively, you can use the function
void omp_set_num_threads(int nthreads)
which is declared in omp.h. Make sure to call this function before the parallel region is defined.
Options that use OpenMP* are available for both Intel and non-Intel microprocessors, but these options may perform additional optimizations on Intel® microprocessors beyond those they perform on non-Intel microprocessors. The list of major, user-visible OpenMP* constructs and features that may perform differently on Intel versus non-Intel microprocessors includes:
  • Internal and user-visible locks
  • The SINGLE construct
  • Explicit and implicit barriers
  • Parallel loop scheduling
  • Reductions
  • Memory allocation
  • Thread affinity and binding

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.