Try Multiple SYCL Compilers on the Intel DevCloud

Published: 11/17/2021

By James R Reinders

The Intel DevCloud is a free resource for trying out software on Intel CPUs, GPUs, and FPGAs. While the oneAPI tools, including the DPC++ compiler for SYCL, are preinstalled, we are free to install additional SYCL compilers and use them as well. I've done exactly that, and this blog shares how to use them on DevCloud.

I welcome feedback (please post on my xpublog forum).

Bookmark this page - I expect to update it in the future.

I did not write up in detail how I built ComputeCPP or hipSYCL, because I faithfully followed the instructions provided on their websites. My only issues came when I ignored their step-by-step instructions, thinking I knew better; in each instance, they were right about what to do.

The only "magic" involved was setting the right parameters to use the hardware and software on DevCloud, by providing the right paths to OpenCL and so on.

Below, I provide instructions on how you can simply use the ComputeCPP and hipSYCL that I've installed and made available. Or you can study how I have them working, and use that to enable your own builds in your own account. That could be useful if you want a newer version of ComputeCPP (as I write this, my build uses the latest version), want to enable additional hardware targets, or want to try to tap into other hipSYCL capabilities (I only succeeded in using the OpenMP backend).

Hopefully, my sharing will help you either way. I'd love to hear from anyone who makes more progress, and we can learn together.

Overview

  • DPC++ is pre-installed with many other oneAPI tools on DevCloud, and works well.

  • ComputeCPP is built (by me, to support a CPPCON21 tutorial we taught), is available, and appears to work well.

  • hipSYCL is built (by me, to support a CPPCON21 tutorial we taught) but is currently limited to the OpenMP backend and the CPU device.

  • No special privileges are needed to build the tools; anyone is free to try building any of these SYCL compilers from open source, on DevCloud or any other system. DPC++, ComputeCPP, and hipSYCL all have installation instructions on their respective GitHub sites.

Get a DevCloud account

If you do not have a DevCloud account, sign up at https://tinyurl.com/getdevcloud
It is free. You should get an email with detailed instructions within minutes of signing up.

SYCL 2020 status

All three compilers (DPC++, ComputeCPP, hipSYCL) are in various stages of moving to the new SYCL 2020 standard. None expects to be done in 2021, but each is well along. Each compiler project has documented its status on its project website. In general, SYCL 2020 is supported sufficiently to run all the introductory tutorials of which I am aware.

Selecting a compiler

Issue this command:
    module use /data/oneapi_workshop/xpublog/cppcon/Modules/modulefiles
Better yet, put this in your ~/.bash_profile:
    source /data/oneapi_workshop/xpublog/cppcon/james.source.this
Now you can use DPC++, ComputeCPP, or hipSYCL using the instructions that follow.

dpcpp (DPC++)

DPC++ is ready to use when you log in, assuming your .bash_profile sources (it's the default for new accounts):

/opt/intel/inteloneapi/setvars.sh

I hope to eventually see DevCloud move all tools to modules; for now, the oneAPI tools are simply in your default path.

compute++ (ComputeCPP)

To be ready to use ComputeCPP simply type:

module load computeCPP

syclcc (hipSYCL)

To be ready to use hipSYCL simply type:

module load hipSYCL

Why Modules?

Because they are all SYCL compilers, with different lib, include, and bin directories, the best arrangement I've figured out is to make only one actively usable at a time. I may be being conservative, but I find this safe, effective, and easy to understand.
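
To see why only one can sensibly be "active" at a time, consider what happens when two installs both provide a driver with the same name on your PATH. This is a toy sketch using made-up directories and a made-up driver name, nothing DevCloud-specific; loading a module is essentially doing this prepend for you:

```shell
# Two fake compiler install trees, each with a driver of the same name.
mkdir -p /tmp/sycl_a/bin /tmp/sycl_b/bin
printf '#!/bin/sh\necho compiler-A\n' > /tmp/sycl_a/bin/sycl-cc
printf '#!/bin/sh\necho compiler-B\n' > /tmp/sycl_b/bin/sycl-cc
chmod +x /tmp/sycl_a/bin/sycl-cc /tmp/sycl_b/bin/sycl-cc

# Whichever install is prepended to PATH wins the name lookup.
env PATH=/tmp/sycl_a/bin:$PATH sycl-cc   # prints: compiler-A
env PATH=/tmp/sycl_b/bin:$PATH sycl-cc   # prints: compiler-B
```

With real compilers the same shadowing happens to headers and libraries too, which is why keeping exactly one set of paths active avoids confusing mix-ups.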

The module files are easy to read to see what is being set, in case you are curious, or you want to duplicate/modify for your own needs.  Just look at the two files found a little deeper in the hierarchy under /data/oneapi_workshop/xpublog/cppcon/Modules/modulefiles.
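
For flavor, a modulefile for a SYCL compiler typically looks something like the following. This is a hypothetical sketch, not the actual DevCloud files, and the paths are placeholders:

```tcl
#%Module1.0
## Hypothetical sketch: a modulefile for one SYCL compiler mostly just
## prepends that compiler's own bin/lib/include directories.
conflict        hipSYCL
set             root               /path/to/computecpp/install
prepend-path    PATH               $root/bin
prepend-path    LD_LIBRARY_PATH    $root/lib
prepend-path    CPLUS_INCLUDE_PATH $root/include
```

The conflict line is what gives the "only one at a time" behavior: attempting to load a second SYCL module fails until the first is unloaded.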

If you have a module loaded, use “module unload computeCPP” or “module unload hipSYCL” (you can also use “module purge” since these are the only modules we are using) before you try to use DPC++ or try to load another module.  If you forget, I've set things up to prevent accidental use of the non-current compiler.

A demonstration of all three

Running

/data/oneapi_workshop/xpublog/cppcon/hello.sh

will copy a small hello.cpp into your current directory and proceed to compile and run it with DPC++, then ComputeCPP, and then hipSYCL.
Be sure to look in the script to see the command lines needed to use each.
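
For a sense of what such a hello.cpp looks like, here is a hypothetical minimal sketch (the script's actual file may differ) written against the SYCL 1.2.1 interfaces that all three compilers accept:

```cpp
// Hypothetical minimal SYCL "hello": create a queue, report the device,
// and run one trivial kernel that writes a value back to the host.
#include <CL/sycl.hpp>
#include <iostream>

int main() {
  cl::sycl::queue q{cl::sycl::default_selector{}};
  std::cout << "Device: "
            << q.get_device().get_info<cl::sycl::info::device::name>() << "\n";

  int result = 0;
  {
    cl::sycl::buffer<int, 1> buf{&result, cl::sycl::range<1>{1}};
    q.submit([&](cl::sycl::handler &h) {
      auto acc = buf.get_access<cl::sycl::access::mode::write>(h);
      h.single_task<class hello>([=] { acc[0] = 42; });
    });
  }  // buffer goes out of scope here: result is copied back to the host
  std::cout << "Hello from SYCL! result = " << result << "\n";
  return 0;
}
```

The device reported will change depending on which compiler and selector you use; with my hipSYCL build it will always be the CPU.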

Segmentation faults - how to fix

If you try compiling on the login nodes, you will see segmentation faults, because the time quotas for processes (including compiles) are tight there. Make sure you are compiling on a non-login node before worrying about segmentation faults.
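
One way to avoid the trap is a quick check before compiling. This sketch assumes, as in my sessions, that DevCloud login nodes have "login" in their hostname (compute nodes look like s001-n054):

```shell
# Warn before compiling on a login node. The hostname pattern is an
# assumption based on my DevCloud sessions (login nodes like "login-2").
where_am_i() {
  case "$1" in
    *login*) echo "login node - do not compile here" ;;
    *)       echo "compute node - OK to compile" ;;
  esac
}
where_am_i "$(hostname)"
```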

SYCL Academy - Tips for tutorial codes

Here are my tips to make the tutorial codes we used at CPPCON21 easier to work with.
If you didn't already, put this in your ~/.bash_profile:
        source /data/oneapi_workshop/xpublog/cppcon/james.source.this

You can follow the instructions on https://github.com/codeplaysoftware/syclacademy (README.md).
I've already put the commands into scripts:

DPC++ shortcut (run it, but I recommend you peek inside first!):
/data/oneapi_workshop/xpublog/cppcon/fetch-syclacademy-for-dpcpp.sh
ComputeCPP shortcut (run it, but I recommend you peek inside first!):
/data/oneapi_workshop/xpublog/cppcon/fetch-syclacademy-for-computeCPP.sh
hipSYCL shortcut (run it, but I recommend you peek inside first!):
/data/oneapi_workshop/xpublog/cppcon/fetch-syclacademy-for-hipSYCL.sh

I'm old school: I just go to the source code directory and start playing.
The commands below work for each exercise (except 15, which doesn't have a 'solution.cpp').

For DPC++:

## I actually do this (requires my prior tips to use james.source.this): 
cd ~/syclacademy/Code_Exercises/Exercise_01_Compiling_with_SYCL
dpcpp solution.cpp
./a.out

For ComputeCPP:

## I actually do this (requires my prior tips to use james.source.this): 
cd ~/syclacademy/Code_Exercises/Exercise_01_Compiling_with_SYCL
compute++ solution.cpp -lComputeCpp -sycl-driver -std=c++17 \
-DSYCL_LANGUAGE_VERSION=2020 -no-serial-memop
./a.out

NOTE: DevCloud does not host NVIDIA or AMD hardware at this time, so I didn't do anything to enable such code generation. It's on my list of things to do eventually. If you do it first, please drop me a note!

For hipSYCL:

## I actually do this (requires my prior tips to use james.source.this): 
cd ~/syclacademy/Code_Exercises/Exercise_01_Compiling_with_SYCL
syclcc -O2 -std=c++17 solution.cpp
./a.out

NOTE: I've not gotten hipSYCL doing everything on DevCloud (as of October 31, 2021) that it is capable of; the hipSYCL that I built runs only on the CPU device on DevCloud. I'm sure this is my fault, and I hope to learn how to make it work in the future. Aksel Alpay, who helped teach at CPPCON21, told me not to worry so much: the OpenMP backend is a good, stable introduction, and the SPIR-V backend is still in development. I'll check back with Aksel later to see if I can update this. If you figure it out, just drop me a note!

What is not working (that I know of)

ComputeCPP, as well as DPC++, did well in targeting CPU, GPU, and FPGA emulation. Only DPC++ is equipped to target Intel FPGAs.

Due to my limited understanding of setting up hipSYCL, it is limited to the CPU device only, unless someone builds a better version. My build of hipSYCL is configured to use its OpenMP backend; I haven't figured out how to get other backends (specifically SPIR-V) functioning yet. Therefore, right now ONLY the CPU selector works for me (I suspect I didn't build the SPIR-V support correctly, and I think I need that to reach the GPUs on DevCloud through Level Zero).

There is only Intel hardware on the DevCloud, so hipSYCL’s ability to target AMD and NVIDIA, as well as ComputeCPP and DPC++ abilities to target NVIDIA, are not visible on DevCloud.

Interactive vs. batch on DevCloud

DevCloud is a shared resource, so you are encouraged to send jobs to nodes per your DevCloud instructions.
If the system is not too loaded, and you are actively working on code, you can try grabbing a node for yourself alone to use interactively:

qsub -I -l nodes=1:gpu:ppn=2 -d .
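
For reference, here is my reading of what each flag in that command requests (these are standard PBS/Torque qsub options):

```shell
# -I                   : interactive session (gives you a shell on the node)
# -l nodes=1:gpu:ppn=2 : one node with the "gpu" property, two processors per node
# -d .                 : make the current directory the job's working directory
qsub -I -l nodes=1:gpu:ppn=2 -d .
```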


Future topics – make suggestions!

Feedback welcome - Please share!


I welcome feedback (please post on my xpublog forum).
