Visible to Intel only — GUID: GUID-6F9DB408-2CF0-4E6C-B979-ACBD12772BBA
Glossary
Amdahl's law: A theoretical formula for predicting the maximum performance benefits of parallelizing application programs. Amdahl's law states that execution-time speedup is limited by the part of the program that is not parallelized (executes serially). To achieve results close to this potential, overhead must be minimized and all cores need to be fully utilized. See also Use Amdahl's Law and Measuring the Program.
annotation: A method of conveying information about proposed parallel execution. In the Intel® Advisor, you create annotations by adding macros or function calls. These annotations are used by Intel Advisor tools to predict parallel execution. For example, the C/C++ ANNOTATE_SITE_BEGIN(sitename) macro identifies where a parallel site begins. Later, to allow this code to execute in parallel, you replace the annotations with code needed to use a parallel framework. See also parallel framework and Annotation Types Summary.
atomic operation: An operation performed by a thread on one or more memory locations that is guaranteed not to be interfered with by other threads. See also synchronization.
chunking: The ability of a parallel framework to aggregate multiple instances of a task into groups for more efficient parallel processing. For tasks that do small amounts of computation and many iterations, task chunking can minimize task overhead. You can also restructure a single loop into an inner and outer loop (strip-mining). See also task and Enable Task Chunking.
code region: A subtree of loops/functions in a call tree. Synonym: loop nest.
critical section: A synchronization construct that allows only one thread to enter its associated code region at a time. Critical sections enforce mutual exclusion on enclosed regions of code. With Intel Advisor, mark critical sections by using ANNOTATE_LOCK_ACQUIRE() and ANNOTATE_LOCK_RELEASE() annotations.
data race: Occurs when multiple threads read and write a shared memory location and the program does not implement controls to manage the sequence of concurrent accesses: one thread can inadvertently overwrite data written by another thread, or read or write stale data. This can produce execution errors that are difficult to detect and reproduce, such as obtaining different calculated results when the same executable is run on different systems. To prevent data races, you can add synchronization constructs that restrict shared memory access to one thread at a time, or you can eliminate the sharing.
data parallelism: Occurs when a single portion of code is paired with multiple portions of data, and each pairing executes as a task. For example, tasks are made by pairing a loop body with each element of an array iterated by the loop, and the tasks execute in parallel. See also Task Patterns. Contrast task parallelism.
data set: A set of data to be used as input, or, with an interactive application, the way you interact with the application to cause