Intel® oneAPI DPC++/C++ Compiler Developer Guide and Reference

ID 767253
Date 11/07/2023
Public


Intel-Supported Pragma Reference

The Intel® oneAPI DPC++/C++ Compiler supports the following pragmas to ensure compatibility with other compilers.

Pragmas Compatible with the Microsoft* Compiler

The following pragmas are compatible with the Microsoft Compiler. For more information about these pragmas, go to the Microsoft Developer Network (http://msdn.microsoft.com).

Pragma

Description

alloc_text

Names the code section where the specified function definitions are to reside.

bss_seg

Indicates to the compiler the segment where uninitialized variables are stored in the .obj file.

code_seg

Specifies a code section where functions are to be allocated.

comment

Places a comment record into an object file or executable file.

component

Controls collecting of browse information or dependency information from within source files.

const_seg

Specifies the segment where const data is stored in the .obj file.

data_seg

Specifies the default section for initialized data.

fenv_access

Informs an implementation that a program may test status flags or run under a non-default control mode.

float_control

Specifies floating-point behavior for a function.

fp_contract

Allows or disallows the implementation to contract expressions.

init_seg

Specifies the section to contain C++ initialization code for the translation unit.

message

Displays the specified string literal to the standard output device (stdout).

optimize

Specifies optimizations to be performed on the functions that follow the pragma, until the next optimize pragma; implemented to partially support the Microsoft implementation of the same pragma.

pointers_to_members

Specifies whether a pointer to a class member can be declared before its associated class definition and is used to control the pointer size and the code required to interpret the pointer.

pop_macro

Sets the value of the specified macro to the value on the top of the stack.

push_macro

Saves the value of the specified macro on the top of the stack.

region/endregion

Specifies a code segment in the Microsoft Visual Studio* Code Editor that expands and contracts by using the outlining feature.

section

Creates a section in an .obj file. Once a section is defined, it remains valid for the remainder of the compilation.

vtordisp

The on argument enables the generation of hidden vtordisp members; the off argument disables them.

The push argument pushes the current vtordisp setting onto the internal compiler stack; the pop argument removes the top record from that stack and restores the removed value as the current vtordisp setting.

warning

Allows selective modification of the behavior of compiler warning messages.
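
As a brief illustration, the sketch below combines a few of these pragmas in one translation unit. The file name, the VERBOSE macro, and the use of warning C4996 (the Microsoft deprecation warning) are assumptions made for this example; consult the Microsoft documentation for the exact warning numbers and pragma options that apply to your code.

// msvc_pragmas_example.cpp - illustrative sketch only
#include <iostream>

// Emit an informational string at compile time.
#pragma message("Compiling msvc_pragmas_example.cpp")

// Save the current definition of VERBOSE, redefine it locally,
// and restore the saved definition afterward.
#pragma push_macro("VERBOSE")
#undef VERBOSE
#define VERBOSE 1

void trace() {
#if VERBOSE
    std::cout << "verbose tracing enabled\n";
#endif
}

#pragma pop_macro("VERBOSE")  // VERBOSE reverts to its previous state

// Temporarily silence warning C4996 (deprecated API) for one region of code.
#pragma warning(push)
#pragma warning(disable : 4996)
void legacy() {
    // ... a call to a deprecated API would go here ...
}
#pragma warning(pop)

int main() {
    trace();
    legacy();
    return 0;
}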

OpenMP* Standard Pragmas

The Intel oneAPI DPC++/C++ Compiler currently supports OpenMP* 5.0 Version TR4 and some OpenMP* Version 5.1 pragmas. Supported pragmas are listed below. For more information about these pragmas, see the OpenMP* Version 5.1 specification.

Intel-specific clauses are noted in the affected pragma description.

Pragma

Description

omp allocate

Specifies memory allocators to use for object allocation and deallocation.

omp atomic

Specifies a computation that must be executed atomically.

omp barrier

Specifies a point in the code where each thread must wait until all threads in the team arrive.

omp cancel

Requests cancellation of the innermost enclosing region of the type specified, and causes the encountering task to proceed to the end of the cancelled construct.

omp cancellation point

Defines a point at which implicit or explicit tasks check to see if cancellation has been requested for the innermost enclosing region of the type specified. This construct does not implement a synchronization between threads or tasks.

omp critical

Specifies a code block that is restricted to access by only one thread at a time.

omp declare reduction

Declares User-Defined Reduction (UDR) functions (reduction identifiers) that can be used as reduction operators in a reduction clause.

omp declare simd

Creates a version of a function that can process multiple arguments using Single Instruction Multiple Data (SIMD) instructions from a single invocation from a SIMD loop.

omp declare target

Specifies functions and variables that are created or mapped to a device.

omp declare variant

Identifies a variant of a base procedure and specifies the context in which this variant is used.

omp dispatch

Determines if a procedure variant is called for a given procedure.

omp distribute

Specifies that the iterations of one or more loops should be distributed among the initial threads of all thread teams in a league.

omp distribute parallel for

Specifies a loop that can be executed in parallel by multiple threads that are members of multiple teams.

omp distribute parallel for simd

Specifies a loop that will be executed in parallel by multiple threads that are members of multiple teams. It will be executed concurrently using SIMD instructions.

omp distribute simd

Specifies a loop that will be distributed across the primary threads of the teams region. It will be executed concurrently using SIMD instructions.

omp flush

Identifies a point at which a thread's temporary view of memory becomes consistent with the memory.

omp for

Specifies a work-sharing loop. Iterations of the loop are executed in parallel by the threads in the team.

omp for simd

Specifies that the iterations of the loop will be distributed across threads in the team. Iterations executed by each thread can also be executed concurrently using SIMD instructions.

omp interop

Identifies a foreign runtime context and identifies runtime characteristics of that context, enabling interoperability with it.

omp loop

Specifies that the iterations of the associated loops can execute in any order or concurrently.

omp masked

Specifies a structured block that is executed by a subset of the threads of the current team.

omp master (deprecated; see omp masked)

Specifies a code block that must be executed only once by the primary thread of the team.

omp ordered

Specifies a block of code that the threads in a team must execute in the natural order of the loop iterations, or as a stand-alone directive, specifies cross-iteration dependences in a doacross loop-nest.

omp parallel

Specifies that a structured block should be run in parallel by a team of threads.

omp parallel for

Provides an abbreviated way to specify a parallel region containing only a for construct.

omp parallel for simd

Specifies a parallel construct that contains one for simd construct and no other statement.

omp parallel sections

Specifies a parallel construct that contains only a sections construct.

omp requires

Lists the features that an implementation must support so that the program compiles and runs correctly.

omp scan

Specifies a scan computation that updates each list item in each iteration of the loop.

omp scope

Defines a structured block that is executed by all threads in a team but where additional OpenMP* operations can be specified.

omp sections

Defines a set of structured blocks that will be distributed among the threads in the team.

omp simd

Transforms the loop into a loop that will be executed concurrently using SIMD instructions.

omp single

Specifies that a block of code is to be executed by only one thread in the team.

omp target

Creates a device data environment and executes the construct on that device.

omp target data

Specifies that variables are mapped to a device data environment for the extent of the region.

omp target enter data

Specifies that variables are mapped to a device data environment.

omp target exit data

Specifies that variables are unmapped from a device data environment.

omp target parallel loop

Provides an abbreviated way to specify a target region that contains only a parallel loop construct.

omp target teams

Creates a device data environment and executes the construct on that device. It also creates a league of thread teams with the primary thread in each team executing the structured block.

omp target teams distribute

Creates a device data environment and then executes the construct on that device. It also specifies that loop iterations will be distributed among the primary threads of all thread teams in a league created by a teams construct.

omp target teams distribute parallel for

Creates a device data environment and then executes the construct on that device. It also specifies a loop that can be executed in parallel by multiple threads that are members of multiple teams created by a teams construct.

omp target teams distribute parallel for simd

Creates a device data environment and then executes the construct on that device. It also specifies a loop that can be executed in parallel by multiple threads that are members of multiple teams created by a teams construct. The loop will be distributed across the teams, which will be executed concurrently using SIMD instructions.

omp target teams distribute simd

Creates a device data environment and then executes the construct on that device. It also specifies that loop iterations will be distributed among the primary threads of all thread teams in a league created by a teams construct. It will be executed concurrently using SIMD instructions.

omp target teams loop

Provides an abbreviated way to specify a target region that contains only a teams loop construct.

omp target update

Makes the items listed in the device data environment consistent between the device and host, in accordance with the motion clauses on the pragma.

omp task

Specifies a code block whose execution may be deferred.

omp taskgroup

Causes the program to wait until the completion of all enclosed and descendant tasks.

omp taskwait

Specifies a wait on the completion of child tasks generated since the beginning of the current task.

omp taskyield

Specifies that the current task can be suspended at this point in favor of execution of a different task.

omp teams

Creates a league of thread teams inside a target region to execute a structured block in the initial thread of each team.

omp teams distribute

Creates a league of thread teams and specifies that loop iterations will be distributed among the primary threads of all thread teams in the league.

omp teams distribute parallel for

Creates a league of thread teams and specifies that the associated loop can be executed in parallel by multiple threads that are members of multiple teams.

omp teams distribute parallel for simd

Creates a league of thread teams and specifies that the associated loop can be executed concurrently using SIMD instructions in parallel by multiple threads that are members of multiple teams.

omp teams distribute simd

Creates a league of thread teams and specifies that the associated loop will be distributed across the primary threads of the teams and executed concurrently using SIMD instructions.

omp teams loop

Provides an abbreviated way to specify a teams construct that contains only a loop construct.

omp threadprivate

Specifies a list of globally visible variables that will be allocated private to each thread.
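
As a brief illustration, the sketch below uses several of these pragmas in one translation unit, assuming the program is compiled with OpenMP support enabled (for example, with the -fiopenmp option, plus an -fopenmp-targets option when offloading to a device). The array names, sizes, and loop bodies are illustrative only.

// openmp_pragmas_example.cpp - illustrative sketch only
#include <cstdio>

int main() {
    const int n = 1024;
    static float a[1024], b[1024];
    double sum = 0.0;

    // omp parallel for: the loop iterations are shared among the threads of a
    // team, and the reduction clause combines each thread's partial sum.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        a[i] = i * 0.5f;
        sum += a[i];
    }

    // omp simd: the compiler is asked to execute the loop iterations
    // concurrently using SIMD instructions.
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        b[i] = a[i] * 2.0f;

    // omp target teams distribute parallel for: creates a device data
    // environment, maps the arrays, and distributes the loop across the
    // teams and threads created on the target device.
    #pragma omp target teams distribute parallel for map(to : a) map(from : b)
    for (int i = 0; i < n; ++i)
        b[i] = a[i] + 1.0f;

    std::printf("sum = %f, b[0] = %f\n", sum, b[0]);
    return 0;
}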

Pragmas Compatible with Other Compilers

The following pragmas are compatible with other compilers. For more information about these pragmas, see the documentation for the specified compiler.

Pragma

Description

poison

GCC-compatible pragma. It labels the identifiers you want removed from your program; an error results when compiling a "poisoned" identifier. #pragma POISON is also supported.

options

GCC-compatible pragma. It sets the alignment of fields in structures.

weak

GCC-compatible pragma. It declares the specified symbol to be weak.
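
A minimal sketch of the poison and weak pragmas follows. The symbol names are hypothetical, and #pragma weak takes effect only on targets whose object format supports weak symbols (for example, ELF on Linux).

// gcc_compatible_pragmas.cpp - illustrative sketch only
#include <cstdio>

// Any use of the identifier legacy_api after this point is a compile-time error.
#pragma poison legacy_api

// Declare default_handler weak so another translation unit can provide a
// strong definition that overrides this one at link time.
extern "C" void default_handler();
#pragma weak default_handler
extern "C" void default_handler() { std::puts("weak default handler"); }

int main() {
    default_handler();
    // legacy_api();  // uncommenting this call reports a poisoned identifier
    return 0;
}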
