Developer Guide


Partitioning Buffers Across Memory Channels of the Same Memory Type

By default, the Intel® oneAPI DPC++/C++ Compiler configures each global memory type in a burst-interleaved manner. The burst-interleaved configuration usually leads to the best load balancing between the memory channels. However, there might be situations where it is more efficient to partition the memory into non-interleaved regions. For additional information, refer to Global Memory Accesses Optimization.
Figure 1 illustrates the differences between burst-interleaved and non-interleaved memory partitions.
To manually partition some or all of the available global memory types, perform the following tasks:
  1. Create a buffer with the property::buffer::mem_channel property, specifying the channel ID in its property list:
    • Specify mem_channel with value 1 to allocate the buffer in the lowest available memory channel (default).
    • Specify mem_channel with value 2 or greater to allocate the buffer in a higher available memory channel.
    Here is an example buffer definition (where A, B, and C are host data pointers):
    range<1> num_of_items{N};
    buffer<T, 1> bufferA(A, num_of_items, {property::buffer::mem_channel{1}});
    buffer<T, 1> bufferB(B, num_of_items, {property::buffer::mem_channel{2}});
    buffer<T, 1> bufferC(C, num_of_items, {property::buffer::mem_channel{3}});
  2. Compile your design kernel using the -Xsno-interleaving=<global_memory_type> flag to configure the memory channels of the specified memory type as separate addresses. For more information about the use of the -Xsno-interleaving flag, refer to the Disable Burst-Interleaving of Global Memory (-Xsno-interleaving=<global_memory_type>) section.
  • Do not set more than one memory channel property on a buffer.
  • If the memory channel specified is not available on the target board, the buffer is placed in the first memory channel.
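The steps above can be sketched as a complete program. The following example is illustrative only and assumes a oneAPI DPC++ setup targeting an FPGA board with at least two memory channels; the element type (float), buffer size N, and kernel body are placeholder choices, not part of the original guide:

#include <sycl/sycl.hpp>
#include <vector>
using namespace sycl;

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f);

  queue q;
  range<1> num_of_items{N};

  // Place each buffer in its own memory channel. The channel IDs are
  // illustrative; availability depends on the target board, and an
  // unavailable channel falls back to the first memory channel.
  buffer<float, 1> bufA(a.data(), num_of_items,
                        {property::buffer::mem_channel{1}});
  buffer<float, 1> bufB(b.data(), num_of_items,
                        {property::buffer::mem_channel{2}});

  q.submit([&](handler &h) {
    accessor accA(bufA, h, read_only);
    accessor accB(bufB, h, read_write);
    // Placeholder kernel: element-wise accumulate from one channel
    // into the other.
    h.parallel_for(num_of_items, [=](id<1> i) { accB[i] += accA[i]; });
  });
  q.wait();
  return 0;
}

To take effect, the design must also be compiled with burst-interleaving disabled for the relevant memory type, using the -Xsno-interleaving=<global_memory_type> flag described in step 2.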
