Tips for Auto-Vectorization
Upon kernel compilation, the vectorization module often transforms the kernel memory access pattern from array of structures (AOS) to structure of arrays (SOA), which is SIMD-friendly. This transformation comes at a certain cost, namely the transpose penalty. Organizing the input data as SOA instead of AOS reduces the transpose penalty.
For example, the following code suffers from the transpose penalty:
__kernel void sum(__global float4* input, __global float* output)
{
    int tid = get_global_id(0);
    output[tid] = input[tid].x + input[tid].y + input[tid].z + input[tid].w;
}
The following code, in contrast, does not suffer from the transpose penalty:
__kernel void sum(__global float* inx, __global float* iny,
                  __global float* inz, __global float* inw,
                  __global float* output)
{
    int tid = get_global_id(0);
    output[tid] = inx[tid] + iny[tid] + inz[tid] + inw[tid];
}
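On the host side, the SOA version requires one buffer per component rather than a single interleaved float4 buffer. The following is a minimal host-side sketch of binding and launching the SOA sum kernel; the run_soa_sum wrapper and its arguments are hypothetical, and context, queue, and kernel are assumed to be created elsewhere:

#include <CL/cl.h>

/* Minimal sketch: enqueue the SOA sum kernel with one buffer per
   component. context, queue, and kernel are assumed to exist already;
   the host arrays are hypothetical. */
void run_soa_sum(cl_context context, cl_command_queue queue, cl_kernel kernel,
                 const float* hostX, const float* hostY,
                 const float* hostZ, const float* hostW, size_t n)
{
    cl_int err;
    cl_mem buf[4];
    const float* host[4] = { hostX, hostY, hostZ, hostW };

    /* One contiguous buffer per component (SOA layout). */
    for (int i = 0; i < 4; ++i) {
        buf[i] = clCreateBuffer(context,
                                CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), (void*)host[i], &err);
        clSetKernelArg(kernel, i, sizeof(cl_mem), &buf[i]);
    }
    cl_mem out = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
                                n * sizeof(float), NULL, &err);
    clSetKernelArg(kernel, 4, sizeof(cl_mem), &out);

    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    /* ... read back 'out' and release the buffers when done ... */
}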
Take care when dealing with branches. In particular, avoid data loads and stores within the branch statements:
if (…) { // condition
    x = A[i1];  // reading from A
    …           // calculations
    B[i2] = y;  // storing into B
} else {
    q = A[i1];  // reading from A with the same index as in the first clause
    …           // different calculations
    B[i2] = w;  // storing into B with the same index as in the first clause
}
The following code avoids loading from and storing to memory within branches:
temp1 = A[i1];  // reading from A in advance
if (…) { // condition
    x = temp1;
    …           // some calculations
    temp2 = y;  // storing into a temporary variable
} else {
    q = temp1;
    …           // some calculations
    temp2 = w;  // storing into a temporary variable
}
B[i2] = temp2;  // storing into B once
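Once the loads and stores are hoisted out of the branch, the branch body touches only private variables, which the compiler can often flatten into a branchless select. The following complete kernel is an illustrative sketch of the same pattern written with an explicit select(); the threshold argument and the arithmetic in each path are hypothetical:

__kernel void branchless_sum(__global const float* A,
                             __global float* B,
                             float threshold)
{
    int tid = get_global_id(0);
    float temp1 = A[tid];    // single load, outside any branch
    float y = temp1 * 2.0f;  // "then" path result (hypothetical calculation)
    float w = temp1 + 1.0f;  // "else" path result (hypothetical calculation)
    // select(a, b, c) returns b where c is non-zero, a otherwise
    B[tid] = select(w, y, temp1 > threshold);  // single store, no branch
}

Whether the compiler generates a select or keeps a branch, moving memory accesses outside divergent control flow is what enables the vectorizer to emit contiguous vector loads and stores.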