Visible to Intel only — GUID: fue1517596292333
Ixiasoft
7.2.1. Contiguous Memory Accesses
Consider the following code example:
__kernel void sum(__global const float *restrict a,
                  __global const float *restrict b,
                  __global float *restrict c)
{
    size_t gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
The load operations from arrays a and b use an index that is a direct function of the work-item global ID. Because the array index is based on the work-item global ID, the offline compiler can create contiguous load operations. These load operations retrieve the data sequentially from the input arrays and send the read data to the pipeline as required. Contiguous store operations then store the elements of the result that exit the computation pipeline in sequential locations within global memory.
The following figure illustrates an example of the contiguous memory access optimization:
Contiguous load and store operations improve memory access efficiency because they increase access speeds and reduce hardware resource needs. The data travels in and out of the computational portion of the pipeline concurrently, allowing computation to overlap with memory accesses. If possible, use work-item IDs that index consecutive memory locations for load and store operations that access global memory. Sequential accesses to global memory increase memory efficiency because they provide an ideal access pattern.