Intel® oneAPI Toolkit
C++ and Fortran Tools for Building and Optimizing Software across Architectures
Build and Optimize Applications across CPUs and GPUs
Intel® oneAPI Toolkit is a core set of tools and libraries for developing optimized software that scales across modern systems.
It supports development needs spanning client and edge applications through to large-scale, distributed systems.
The toolkit provides standards-based compilers, domain-specific performance libraries with drop-in acceleration, and profiling, design, and debugging tools. These capabilities help developers build, optimize, and deploy applications efficiently.
It features an industry-leading C++ compiler with support for SYCL*, among other standards-based programming models.
Download the Toolkit
Get started with this core set of tools and libraries for building, optimizing, and scaling software across diverse architectures.
Purchase with Priority Support
Submit questions, problems, and other technical support issues through direct and private interactions with Intel engineers.
"Intel's oneAPI toolkit has demonstrated powerful performance and good compatibility in GeoEast* software applications, and has provided us with important help in the further exploration of heterogeneous computing." – BGP Inc.
What's Included
Compilers
Intel® oneAPI DPC++/C++ Compiler
Compile and optimize C++ and SYCL code for CPU and GPU target architectures.
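As a sense of what this compiler builds, here is a minimal SYCL program that adds two vectors on the default device (CPU or GPU). This is a sketch compiled with the DPC++/C++ compiler itself (e.g. `icpx -fsycl`); it will not build with a non-SYCL toolchain.

```cpp
#include <sycl/sycl.hpp>
#include <cassert>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<int> a(n, 1), b(n, 2), c(n, 0);
    sycl::queue q;  // default-selected device: CPU or GPU
    {
        sycl::buffer<int> ba(a), bb(b), bc(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only);
            // One work-item per element, scheduled by the SYCL runtime.
            h.parallel_for(sycl::range<1>(n),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffer destructors copy results back to the host vectors
    assert(c[0] == 3 && c[n - 1] == 3);
    return 0;
}
```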
Intel® Fortran Compiler
Modern Fortran compiler for performance-critical applications on CPUs and GPUs.
Performance Libraries
Intel® oneAPI Math Kernel Library
Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
Intel® oneAPI Deep Neural Network Library (oneDNN)
Develop fast neural networks on Intel CPUs and GPUs with performance-optimized building blocks.
Intel® MPI Library
A high-performance implementation of the MPI standard for building and scaling distributed applications across nodes and clusters.
Intel® Integrated Performance Primitives
Speed up performance of imaging, signal processing, data compression, cryptography, and more.
Intel® Cryptography Primitives Library
Secure, fast, lightweight building blocks for cryptography optimized for Intel CPUs.
Intel® oneAPI Threading Building Blocks
Simplify parallelism with this advanced threading and memory-management template library.
Intel® oneAPI DPC++ Library
Speed up data parallel workloads with these key productivity algorithms and functions.
Intel® oneAPI Collective Communications Library
Implement optimized communication patterns to distribute deep learning model training across multiple nodes.
Analysis and Debug Tools
Intel® VTune™ Profiler
Find and optimize performance bottlenecks across CPU and GPU systems.
Intel® Distribution for GDB*
Enable deep, system-wide debug of SYCL, C, C++, and Fortran code.
Add-ons
Some oneAPI tools and libraries are available as separate downloads and can be added as needed. These components integrate with the oneAPI toolchain but are not included in the default Intel oneAPI Toolkit installers.
Intel® Advisor
Design code for efficient vectorization, threading, and offloading to accelerators.
Intel® SHMEM
Implement Partitioned Global Address Space (PGAS) programming for host-initiated and device-initiated operations.
Intel® oneAPI Data Analytics Library (oneDAL)
Boost machine learning and data analytics performance.
Intel® Deep Learning Essentials
Advanced developers can access tools to develop, compile, test, and optimize deep learning frameworks and libraries—such as PyTorch* and TensorFlow*—for Intel CPUs and GPUs.
- Intel® oneAPI DPC++/C++ Compiler
- Intel® oneAPI DPC++ Library (oneDPL)
- Intel® oneAPI Math Kernel Library (oneMKL)
- Intel® oneAPI Collective Communications Library (oneCCL)
- Add-on: Intel® oneAPI Deep Neural Network Library (oneDNN)
Note: You can download precompiled frameworks for Intel® architectures from AI Tools.
Get Started
Before You Begin
Get your system ready to install the oneAPI Kit: check that it meets the minimum requirements and that your hardware is supported.
Download the oneAPI Kit
After downloading, follow the Get Started Guide to configure your system and run your first sample.
Get Started Guide: Linux* | Windows* | Containers
Next Steps
Access samples or run your own workloads with the help of tutorials and training.
Documentation & Samples
Code Samples
Learn how to access code samples from the command line or an IDE.
- Vector-Add
- Matrix Multiplication for Intel® Advisor
- Matrix Multiplication for Intel® VTune™ Profiler
- Sepia Filter
- 2D Finite Difference Wave Propagation (ISO2DFD)
- Matrix Multiplication with CPUs and GPUs
- 1D Heat Transfer Finite Difference Stencil Kernel - SYCL
- Particle Diffusion
- ISO3DFD Finite Difference Stencil Kernel - SYCL
Success Stories
Training
Learn the basics of SYCL for heterogeneous computing (CPU and GPU) using sample code, webinars, and more.
Additional oneAPI Kit Training
Migrate from CUDA to C++ with SYCL
C++ and SYCL deliver a unified programming model, performance portability, and C++ alignment for applications using accelerators. Learn how to migrate your code to SYCL and see examples from other developers.
Intel® oneAPI Math Kernel Library (oneMKL) Essentials [self-paced]
Learn how to use oneMKL and its functions to create performant applications and speed up computations with low-level math routines.
OpenMP* Offload Basics [self-paced]
Learn the fundamentals of using OpenMP offload directives to target GPUs, as well as using Intel® C, C++, and Fortran Compilers through hands-on practice in this guided learning path.
Specifications
CPUs:
- Intel® Xeon® Scalable processors
- Intel® Xeon® processors
- Intel® Core™ Ultra processors
- Intel® Core™ processors
- Intel Atom® processors
- Other processors compatible with Intel® 64 architecture
GPUs:
- Intel® UHD Graphics for 11th generation Intel processors or newer
- Intel® Iris® Xe Graphics
- Intel® Arc™ GPUs
- Intel® Server GPU
- Intel® Data Center GPU Flex Series
- Intel® Data Center GPU Max Series
Operating systems:
- Linux*
- Windows*
Languages:
- C, C++, C++ with SYCL
- Fortran
Distributed environments:
- MPI
OpenFabrics Interface* (OFI) framework implementation supporting the following fabrics:
- InfiniBand*
- iWARP, RDMA over Converged Ethernet (RoCE)
- Amazon Web Services Elastic Fabric Adapter (AWS EFA)*
- Cornelis Networks*
- Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA
Development environments:
- Microsoft Visual Studio*, Microsoft Visual Studio Code*
- Eclipse*
Get Help
Your success is our success. Access these support resources when you need assistance.