Today’s workloads require diverse architectures—including a variety of powerful accelerators to optimize compute. To reduce complexity for developers, oneAPI is the smart path to freedom for accelerated computing with an open, standards-based, cross-architecture programming model. It unifies and simplifies coding for CPUs, GPUs, and FPGAs. Developers can choose the best hardware for the problem they are solving—without proprietary lock-in.
It includes Data Parallel C++ (or DPC++), which is an implementation of the Khronos Group* SYCL* standard for cross-architecture programming. The programming model also contains oneAPI libraries and a hardware abstraction layer. Alongside the oneAPI specification, Intel released a product implementation of oneAPI with a set of comprehensive developer toolkits to help speed up workloads on Intel® architecture. Intel’s oneAPI Base Toolkit includes a combined Data Parallel C++ and C++ compiler, optimized libraries, a DPC++ compatibility tool to assist in migrating CUDA* code to SYCL code, and advanced analysis and debug tools. Altogether, the oneAPI Base Toolkit provides the foundational tools that enable developers to realize the full value of their hardware and confidently develop performant, cross-architecture applications.
Intel’s oneAPI compiler is designed to deliver parallel programming productivity and uncompromised performance across architectures, building on Intel’s decades of compiler performance leadership. The compiler is compatible with multiple programming environments and based on industry standards. The compatibility tool assists with a one-time migration of CUDA kernels and API calls to new SYCL code, automating much of the porting process.
The toolkit includes performance-optimized libraries across several domains, such as math, data analytics, deep learning, threading, video processing, and cryptography. And with advanced analysis and debug tools, developers can analyze, optimize, and debug applications across architectures. Intel® Advisor identifies which parts of the code can most profitably be offloaded for acceleration. The tool also pinpoints accelerator bottlenecks, such as memory, cache, compute, and data transfer.
Intel® VTune™ Profiler analyzes system performance, providing visibility into the interaction of CPUs and accelerators. It also identifies hot spots, efficiently pinpointing the portions of code that require optimization. The Intel-enhanced GDB* debugger can handle thousands of threads running simultaneously on each device in a system and provides cross-architecture debug support for multiple languages.
The Base Toolkit also includes the optimized Intel® Distribution for Python*. Alongside the Base Toolkit, Intel offers add-on toolkits targeting specialized workloads in HPC, AI, IoT, and rendering, so developers can tune those workloads for high performance.
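A sketch of what “drop-in” means for the Intel Distribution for Python: the code below is ordinary NumPy, and under the Intel distribution the same calls are dispatched to oneMKL-accelerated kernels with no source changes (the workload here is an arbitrary example, not from the original).

```python
import numpy as np

# Standard NumPy code; runs unchanged under the Intel
# Distribution for Python, where BLAS/LAPACK-backed calls
# such as matmul and eigvalsh use oneMKL under the hood.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

c = a @ b                          # BLAS-backed matrix multiply
w = np.linalg.eigvalsh(c + c.T)    # LAPACK-backed symmetric eigensolver

print(c.shape, w.shape)
```

Because the acceleration lives in the distribution rather than the application, switching environments is a deployment choice, not a porting effort.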
Get ready to lead the next evolution of coding. With oneAPI, focus on innovation and optimization, not on rewriting code for each new platform. Test drive oneAPI today in the Intel® DevCloud on your choice of Intel® architectures, free of charge, or download the Intel® toolkits.