Similarly to the regular case of multiple queues within the same context, you can wait on event objects from both the CPU and GPU queues (error checking is omitted):
cl_event eventObjects[2];
// notice that the kernel object itself can be the same (shared)
clEnqueueNDRangeKernel(gpu_queue, kernel, … &eventObjects[0]);
// other commands for the GPU queue
// …
// flush the queue to start execution on the Intel® Graphics in parallel
// with populating the CPU queue below; notice it is NOT clFinish or
// clWaitForEvents, to avoid serialization
clFlush(gpu_queue); // assuming NO RESOURCE or other DEPENDENCIES with the CPU device
clEnqueueNDRangeKernel(cpu_queue, kernel, … &eventObjects[1]);
// other commands for the CPU queue
// …
// now flush the second queue
clFlush(cpu_queue);
// now that both queues are flushed, wait for both kernels to complete
clWaitForEvents(2, eventObjects);
In this example the first queue is flushed without blocking to wait for results. Blocking calls such as clWaitForEvents and clFinish would instead serialize execution with respect to the devices: the commands would not enter the second queue until clWaitForEvents or clFinish on the first queue returned (assuming both queues are used from the same thread).
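For contrast, the following sketch shows the serialized anti-pattern described above. It uses the same hypothetical queue and kernel names as the snippet earlier in this section and elides the same arguments; it illustrates why a blocking call between the two enqueues prevents the devices from running concurrently:

cl_event eventObject;
clEnqueueNDRangeKernel(gpu_queue, kernel, … &eventObject);
// ANTI-PATTERN: clFinish blocks the host thread until the GPU kernel
// completes, so the CPU queue below is not even populated until then
clFinish(gpu_queue);
// by the time this command is enqueued, the GPU work has already finished,
// so the two devices execute one after another instead of in parallel
clEnqueueNDRangeKernel(cpu_queue, kernel, … NULL);
clFinish(cpu_queue);

Prefer non-blocking clFlush calls on each queue, followed by a single clWaitForEvents on the collected events, as shown above.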
For an example where proper serialization between the devices is critical, refer to the "Writing to a Shared Resource" section.