Streamline Local Memory Access with SYCL* Atomics
Overview
This hands-on SYCL* workshop covers two topics that are useful for kernel programming when offloading computation to GPU devices: atomic operations and shared local memory.
The workshop first looks at SYCL atomic operations, which facilitate concurrent access to a memory location without introducing a data race. When multiple atomic operations access the same memory location, they are guaranteed not to overlap. This is critical when programming for GPU hardware where multiple operations run concurrently and update the same memory location.
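The guarantee described above can be sketched with `sycl::atomic_ref`, the SYCL 2020 mechanism for atomic access to a memory location. The example below is a minimal illustration (the kernel, buffer name, and work size are chosen for this sketch, not taken from the workshop) and requires a SYCL implementation such as the Intel oneAPI DPC++/C++ compiler to build:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  constexpr size_t N = 1024;
  sycl::queue q;  // default device selection
  int sum = 0;
  {
    sycl::buffer<int, 1> sum_buf(&sum, 1);
    q.submit([&](sycl::handler &h) {
      auto acc = sum_buf.get_access<sycl::access::mode::read_write>(h);
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        // atomic_ref serializes concurrent updates to the same
        // location, so N work-items can increment acc[0] safely.
        sycl::atomic_ref<int, sycl::memory_order::relaxed,
                         sycl::memory_scope::device,
                         sycl::access::address_space::global_space>
            a(acc[0]);
        a.fetch_add(1);
      });
    });
  }  // buffer destruction copies the result back to 'sum'
  std::cout << "sum = " << sum << "\n";  // 1024
}
```

Without the `atomic_ref`, a plain `acc[0] += 1` here would be a data race: concurrent read-modify-write sequences could overlap and lose updates.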
This workshop also looks at how device-specific local memory can be used for high-bandwidth, low-latency memory access. The shared local memory (SLM) on GPUs is designed for this purpose. This session shows how to access SLM in SYCL to optimize code for better performance.
This webinar helps you:
- Understand how to use SYCL atomic operations to avoid data race conditions.
- Use a SYCL atomic operation to perform a reduction.
- Recognize how shared local memory access and usage improve performance.
- Use local memory to avoid the impact of repeated global memory access.
- Understand how group barriers are used to synchronize all work items.
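The last three points above fit together in a common pattern: stage data in SLM, synchronize the work-group with a barrier, reduce within SLM, then touch global memory only once per group. The sketch below is illustrative (the sizes and variable names are assumptions for this example) and requires a SYCL toolchain to build:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024, WG = 256;  // total items, work-group size
  std::vector<int> data(N, 1);
  sycl::queue q;
  int result = 0;
  {
    sycl::buffer<int> dbuf(data.data(), N);
    sycl::buffer<int> rbuf(&result, 1);
    q.submit([&](sycl::handler &h) {
      auto in  = dbuf.get_access<sycl::access::mode::read>(h);
      auto out = rbuf.get_access<sycl::access::mode::read_write>(h);
      // One SLM slot per work-item in the group.
      sycl::local_accessor<int> slm(WG, h);
      h.parallel_for(sycl::nd_range<1>{N, WG}, [=](sycl::nd_item<1> it) {
        size_t lid = it.get_local_id(0);
        slm[lid] = in[it.get_global_id(0)];  // stage in fast local memory
        // Barrier: every work-item waits until the whole group has written.
        it.barrier(sycl::access::fence_space::local_space);
        // Tree reduction entirely within SLM.
        for (size_t s = WG / 2; s > 0; s >>= 1) {
          if (lid < s) slm[lid] += slm[lid + s];
          it.barrier(sycl::access::fence_space::local_space);
        }
        if (lid == 0) {
          // One global atomic per group instead of one per work-item.
          sycl::atomic_ref<int, sycl::memory_order::relaxed,
                           sycl::memory_scope::device,
                           sycl::access::address_space::global_space>
              a(out[0]);
          a.fetch_add(slm[0]);
        }
      });
    });
  }
  std::cout << "result = " << result << "\n";  // 1024
}
```

The design point is the reduction in global-memory traffic: N global atomic updates become N/WG, with the repeated accesses absorbed by SLM.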