Move Towards Performance Portability of DNN Models Using SYCL*

The wide adoption of deep neural networks (DNNs) is an incentive to design and manufacture powerful, specialized hardware that targets systems ranging from edge devices to the cloud and supercomputers. From a software point of view, several AI frameworks (such as TensorFlow*, PyTorch*, ONNX* [Open Neural Network Exchange] Runtime, and Apache MXNet*) and libraries strive to deliver close-to-metal performance on these devices. From a developer perspective, this diversity quickly becomes a burden because of the resulting dependencies between development stacks and deployment hardware.

As a result, cross-platform portability remains challenging.

SYCL* is an open-standard, single-source programming model based on C++. SYCL allows you to write heterogeneous programs in standard C++ from the International Organization for Standardization (ISO). This means you can use all native language features, such as template metaprogramming, lambda expressions, and other modern features.
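As a minimal sketch (not taken from the talk), the following SYCL 2020 vector addition illustrates the single-source style: host and device code live in the same C++ translation unit, and the kernel is expressed as an ordinary lambda.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  sycl::queue q;  // Selects a default device (CPU, GPU, or other accelerator).

  {
    // Buffers manage data movement between host and device.
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor acc_a(buf_a, h, sycl::read_only);
      sycl::accessor acc_b(buf_b, h, sycl::read_only);
      sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);

      // The kernel is a standard C++ lambda compiled for the device.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        acc_c[i] = acc_a[i] + acc_b[i];
      });
    });
  }  // Buffer destruction copies the results back to the host vectors.

  std::cout << "c[0] = " << c[0] << '\n';  // Expected output: 3
  return 0;
}
```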

Speaker

Mehdi Goli is vice president of research and development at Codeplay Software*. He is responsible for leading impactful, influential, and innovative research and development projects, ensuring Codeplay remains a leading independent provider of AI and HPC enablement. Mehdi joined Codeplay in 2017 as a senior software engineer in AI parallelization. He was the team lead for Eigen, SYCL-BLAS, and the NVIDIA* back end for Intel® oneAPI Math Kernel Library (oneMKL) and the Intel® oneAPI Deep Neural Network Library.

Mehdi is also a member of the technical advisory board for oneMKL and the Intel® AI Analytics Toolkit. In 2015, he completed his PhD in parallel computing at Robert Gordon University, Aberdeen, during which he was a research assistant in parallel computing at the IDEAS Research Institute, working on the ParaPhrase project.