Avoiding Needless Synchronization
For best results, avoid explicit command synchronization primitives (such as clEnqueueMarker or clEnqueueBarrier). Explicit synchronization commands and event tracking result in cross-module round trips, which decrease performance. The less you use explicit synchronization commands, the better the performance.
Use the following techniques to reduce explicit synchronization:
- Continue executing kernels until you really need to read the results; this idiom is best expressed with an in-order queue and a blocking call to clEnqueueMapXXX or clEnqueueReadXXX.
- If an in-order queue expresses the dependency chain correctly, exploit the in-order queue rather than defining an event-driven chain of dependent kernels. In the in-order execution model, the commands in a queue are automatically executed back-to-back, in the order of submission. This suits a typical processing pipeline very well. Consider the following recommendations:
- Avoid host intervention in the in-order queue (such as blocking calls) and the additional synchronization costs it incurs.
- When you do have to block, use blocking OpenCL™ API calls, which are more efficient than explicit synchronization schemes based on OS synchronization primitives.
- If you are optimizing a kernel pipeline, first measure the kernels separately to find the most time-consuming one. In the final pipeline version, avoid calling clFinish or clWaitForEvents frequently (for example, after each kernel invocation). Instead, submit the whole sequence to the in-order queue and issue clFinish (or wait on the event) once. This reduces host-device round trips.
- Consider the OpenCL™ 2.0 enqueue_kernel feature, which allows a kernel to independently enqueue work to the same device, without host interaction. Note that this approach is useful not only for recursive kernels, but also for regular, non-recursive chains of lightweight kernels. Refer to the See Also section below.
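The in-order queue recommendations above can be sketched as follows. This is a minimal host-side illustration, not a complete program: the kernel objects (stageA, stageB), buffer, and sizes are hypothetical, and error checking is trimmed for brevity. The key point is that the whole kernel chain is submitted without intermediate clFinish or clWaitForEvents calls, and the single blocking clEnqueueReadBuffer at the end synchronizes the entire pipeline.

```c
#include <CL/cl.h>

/* Sketch: submit a two-stage kernel pipeline to an in-order queue and
 * synchronize only once, via a single blocking read at the end.
 * Assumes ctx, dev, stageA, stageB, and buf were created elsewhere. */
void run_pipeline(cl_context ctx, cl_device_id dev,
                  cl_kernel stageA, cl_kernel stageB,
                  cl_mem buf, size_t n, float *host_result)
{
    /* No CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE property, so this is an
     * in-order queue: commands execute back-to-back in submission order,
     * and the queue itself expresses the stageA -> stageB dependency. */
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    size_t gws = n;
    /* Enqueue the whole chain with no host synchronization in between
     * (no clFinish, no clWaitForEvents, no event wait lists). */
    clEnqueueNDRangeKernel(q, stageA, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(q, stageB, 1, NULL, &gws, NULL, 0, NULL, NULL);

    /* One blocking call at the end of the pipeline: returns only after
     * both kernels have finished and the results are in host memory. */
    clEnqueueReadBuffer(q, buf, CL_TRUE /* blocking */, 0,
                        n * sizeof(float), host_result, 0, NULL, NULL);

    clReleaseCommandQueue(q);
}
```

Compare this with the anti-pattern of calling clFinish after each clEnqueueNDRangeKernel: that version incurs a host-device round trip per stage, while the sketch above incurs only one for the whole chain.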