Bring SYCL* to Supercomputers with Celerity*


In the face of ever-slowing single-thread performance growth for CPUs, the scientific and engineering communities increasingly turn to accelerator parallelism to tackle growing application workloads. Existing means of targeting distributed-memory accelerator clusters impose severe programmability barriers and maintenance burdens. The Celerity* programming environment seeks to let developers scale C++ applications to accelerator clusters with relative ease, leveraging and extending the SYCL* programming model. From minimal user-provided information about how data is accessed within compute kernels, Celerity automatically distributes both work and data across the cluster.

This video introduces the Celerity C++ API and its supporting distributed runtime system, and demonstrates that existing SYCL code can be brought to distributed-memory clusters with only a small set of changes that follow established idioms. Celerity is shown to achieve performance comparable to more traditional approaches to distributed-memory accelerator programming, such as MPI+OpenCL, at significantly lower implementation complexity.
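As a rough illustration of the programming model described above (this sketch is not taken from the video, and it assumes the Celerity headers and a SYCL implementation are available on the system), a minimal Celerity program looks much like plain SYCL. The main addition is a range mapper, such as `celerity::access::one_to_one`, which supplies the "minimal information about how data is accessed" that the runtime uses to split work and data across nodes:

```cpp
#include <celerity.h>

int main() {
    // The distributed queue transparently spans all nodes in the cluster.
    celerity::distr_queue queue;

    // A buffer of 1024 floats; the runtime decides where its parts live.
    celerity::buffer<float, 1> buf{celerity::range<1>{1024}};

    queue.submit([&](celerity::handler& cgh) {
        // The one_to_one range mapper declares that each work item touches
        // only the buffer element at its own index. This access information
        // is what lets Celerity distribute the kernel automatically.
        celerity::accessor acc{buf, cgh, celerity::access::one_to_one{},
                               celerity::write_only, celerity::no_init};

        cgh.parallel_for(celerity::range<1>{1024}, [=](celerity::item<1> item) {
            acc[item] = static_cast<float>(item.get_linear_id()) * 2.f;
        });
    });

    return 0;
}
```

Compared to an MPI+OpenCL version of the same computation, there is no explicit rank handling, data partitioning, or halo exchange; the range mapper replaces all of that boilerplate.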


Speaker

Biagio Cosenza is an AIM assistant professor at the Department of Computer Science, University of Salerno, Italy. He is also associated with TU Berlin, where he leads the DFG project Celerity. He was a postdoctoral researcher at the University of Innsbruck, Austria, and received his PhD from the University of Salerno in 2011 while visiting HLRS and the University of Stuttgart. He has been the recipient of several grants and scholarships (HPC-Europa2, HPC-Europa++, DAAD, ISCRA) and has authored more than 40 publications. He is currently a member of the Khronos SYCL working group and a unit leader in the EuroHPC project LIGATE.


Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.