OpenMP* has been around since October 1997—an eternity for any software—and has long been the industry-standard parallel-programming model for high-performance computing (HPC).
It continues to evolve in lockstep with the ever-expanding hardware landscape. The API now supports GPUs and other accelerators.
In this session, Intel senior principal engineer Xinmin Tian shows how to develop code that exploits GPU resources using the latest OpenMP features, including:
- Introducing its GPU-offload support
- Providing examples of offloaded code to GPUs, including Intel® Xe products
- Taking advantage of the Intel® DevCloud for oneAPI to run code samples on the latest Intel® hardware and oneAPI software
- Sign up for an Intel DevCloud for oneAPI account—a free development sandbox with access to the latest Intel hardware and oneAPI software.
- Explore oneAPI, including developer opportunities and benefits.
- Subscribe to the podcast—Code Together is an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Available wherever you get your podcasts.
Senior principal engineer and compiler architect, Intel Corporation
Xinmin is responsible for driving compiler OpenMP, offloading, vectorization, and parallelization technologies into current and future Intel® architectures. His current focus is on programming languages, compilers, and application performance tuning for Intel® oneAPI toolkits targeting CPUs and Intel® Xe accelerators. Xinmin holds 27 US patents, has authored over 60 technical papers, and has coauthored three books in his areas of expertise. He holds a PhD from the University of San Francisco.