Comparing OpenCL™ Kernel Performance with Performance of Native Code
When comparing OpenCL™ kernel performance with native code, for example, C or Intel® Streaming SIMD Extensions (Intel® SSE) intrinsics, make sure that both versions are as similar as possible:
- Wrap exactly the same set of operations in both versions.
- Do not include program build time in the kernel execution time. You can amortize this step through program precompilation (refer to clCreateProgramWithBinary); a host-side sketch follows this list.
- Track data transfer costs separately. Also, use data mapping when possible, since this is closer to the way data is passed in native code (by pointers). Refer to the “Mapping Memory Objects” section for more information; a minimal mapping sketch also follows this list.
- Ensure the working set is identical for the native and OpenCL code. Similarly, for a correct performance comparison, access patterns should be the same (for example, do not compare row-wise accesses in one version with column-wise accesses in the other).
- Demand the same accuracy. For example, the OpenCL rsqrt(x) built-in is inherently more accurate than the _mm_rsqrt_ps SSE intrinsic. To use the same accuracy in native code and OpenCL code, do one of the following:
  - Equip _mm_rsqrt_ps in your native code with a couple of additional Newton-Raphson iterations to match the precision of OpenCL rsqrt (see the first accuracy sketch after this list).
  - Use native_rsqrt in your OpenCL kernel, which maps to the rsqrtps instruction in the final assembly code (see the kernel sketch after this list).
  - Use the relaxed-math compilation flag to enable similar accuracy for the whole program. As with rsqrt, relaxed versions exist for rcp, sqrt, and so on. Refer to the User Manual - OpenCL™ Code Builder for the full list.
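
The following host-side sketch illustrates the precompilation point above. It is a minimal example, assuming an already built cl_program (program) for a single cl_device_id (device) in an existing cl_context (context); error checking and the file I/O for persisting the binary are omitted.

```c
#include <stdlib.h>
#include <CL/cl.h>

/* Capture the binary of an already built program so that later timing runs
   can skip the source build entirely. */
static cl_program reload_prebuilt(cl_context context, cl_device_id device,
                                  cl_program program)
{
    size_t binary_size = 0;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     sizeof(binary_size), &binary_size, NULL);

    unsigned char *binary = (unsigned char *)malloc(binary_size);
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     sizeof(binary), &binary, NULL);
    /* ... typically written to disk here and read back on later runs ... */

    /* Recreate the program from the cached binary instead of from source. */
    cl_int binary_status, err;
    cl_program prebuilt = clCreateProgramWithBinary(
        context, 1, &device, &binary_size,
        (const unsigned char **)&binary, &binary_status, &err);

    /* This build only finalizes the binary; it is far cheaper than a source
       compilation and should still be kept out of kernel timing. */
    clBuildProgram(prebuilt, 1, &device, NULL, NULL, NULL);

    free(binary);
    return prebuilt;
}
```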
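For the data-transfer point, a minimal mapping sketch, assuming an existing in-order cl_command_queue (queue) and a cl_mem buffer of size_in_bytes bytes; error checks are again omitted, and the initialization loop stands in for whatever the native version does with its input array.

```c
#include <CL/cl.h>

/* Fill a buffer through a mapped pointer, which mirrors the pass-by-pointer
   model of the native code path and avoids an explicit copy that would
   otherwise distort the comparison. */
static void fill_mapped(cl_command_queue queue, cl_mem buffer,
                        size_t size_in_bytes)
{
    cl_int err;
    float *ptr = (float *)clEnqueueMapBuffer(
        queue, buffer, CL_TRUE /* blocking map */, CL_MAP_WRITE,
        0, size_in_bytes, 0, NULL, NULL, &err);

    for (size_t i = 0; i < size_in_bytes / sizeof(float); ++i)
        ptr[i] = (float)i;   /* same initialization as the native version */

    clEnqueueUnmapMemObject(queue, buffer, ptr, 0, NULL, NULL);
}
```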
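For the first accuracy option, a sketch of refining _mm_rsqrt_ps with one Newton-Raphson step; add a second step if the remaining error still matters for your comparison. The helper name rsqrt_nr_ps is illustrative.

```c
#include <xmmintrin.h>

/* Refine the roughly 12-bit _mm_rsqrt_ps estimate with one Newton-Raphson
   step: y1 = 0.5 * y0 * (3 - x * y0 * y0), bringing it much closer to the
   accuracy of the OpenCL rsqrt built-in. */
static inline __m128 rsqrt_nr_ps(__m128 x)
{
    const __m128 half  = _mm_set1_ps(0.5f);
    const __m128 three = _mm_set1_ps(3.0f);
    __m128 y   = _mm_rsqrt_ps(x);                   /* fast estimate     */
    __m128 xyy = _mm_mul_ps(_mm_mul_ps(x, y), y);   /* x * y * y         */
    return _mm_mul_ps(_mm_mul_ps(half, y),
                      _mm_sub_ps(three, xyy));      /* 0.5*y*(3 - x*y*y) */
}
```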
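For the remaining two options, the OpenCL side can instead be lowered toward the intrinsic's accuracy. The kernel below is a hypothetical example (the kernel name and the normalization use case are illustrative); alternatively, keep rsqrt in the kernel source and pass -cl-fast-relaxed-math in the options string of clBuildProgram to relax the whole program.

```c
/* native_rsqrt trades accuracy for speed and maps to the rsqrtps
   instruction, making it comparable to the raw _mm_rsqrt_ps path. */
__kernel void normalize_vec(__global const float4 *in, __global float4 *out)
{
    size_t i = get_global_id(0);
    float4 v = in[i];
    out[i] = v * native_rsqrt(dot(v, v));
}
```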