Intel® oneAPI Components
Analyzers and Debuggers
Analyzers
Intel® Advisor
Design code for efficient vectorization, threading, and offloading to accelerators.
Intel® Cluster Checker
Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
Intel® Inspector
Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly problems later.
Intel® Trace Analyzer and Collector
Understand message passing interface (MPI) application behavior across its full runtime.
Intel® VTune™ Profiler
Find and optimize performance bottlenecks across CPU, GPU, and FPGA systems.
Debuggers
Intel® Distribution for GDB*
Enable deep, system-wide debugging of C, C++, and Fortran code.
Intel® SoC Watch
Analyze system power and thermal behavior on Intel® platforms with this command line tool.
Intel® System Debugger
Speed up system bring-up and validation of system hardware and software using in-depth debug and trace of BIOS/UEFI, firmware, device drivers, operating system kernels, and more.
Code Migration
Intel® DPC++ Compatibility Tool
Migrate legacy CUDA* code to multiplatform DPC++ code with this assistant.
Compilers
Intel® C++ Compiler Classic
Use this standards-based C++ compiler with support for OpenMP* to take advantage of more cores and built-in technologies in platforms based on Intel® Xeon® Scalable processors and Intel® Core™ processors.
Intel® Fortran Compiler & Intel® Fortran Compiler Classic
Use this standards-based Fortran compiler with OpenMP* support for CPU and GPU offload.
Intel® oneAPI DPC++/C++ Compiler
Compile and optimize code for CPU, GPU, and FPGA target architectures.
FPGA Add-on
Intel® FPGA Add-On for oneAPI Base Toolkit
Program these reconfigurable hardware accelerators to speed up specialized, data-centric workloads. This add-on requires installation of the Intel® oneAPI Base Toolkit.
Frameworks
Intel® Distribution for Python*
Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
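A minimal sketch of the "no code changes" claim: standard NumPy code like the following runs unchanged, and under the Intel® Distribution for Python the same calls are serviced by oneMKL-accelerated builds of packages such as NumPy (the acceleration path, not the code, is what differs).

```python
import numpy as np

# Ordinary NumPy code: no Intel-specific APIs are needed.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

c = a @ b                    # dense matrix multiply (BLAS gemm)
spectrum = np.fft.fft(a[0])  # fast Fourier transform
```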
Intel® Extension for PyTorch*
This distribution provides simple front-end Python* APIs and utilities that deliver performance optimizations, such as graph and operator optimization, with minor code changes.
Intel® Extension for Scikit-learn*
Seamlessly speed up your scikit-learn* applications on Intel® CPUs and GPUs, on a single node or across multiple nodes. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, accelerating your machine learning algorithms. The toolkit also includes stock scikit-learn to provide a complete Python environment with all required packages. The extension supports up to the last four versions of scikit-learn and works alongside your existing packages.
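As a sketch of how the patching works: `patch_sklearn()` is the extension's documented entry point and must run before the scikit-learn estimators are imported. The import is guarded here so the same script also runs on stock scikit-learn when the extension is not installed.

```python
import numpy as np

# Guarded import: with the extension installed, subsequent scikit-learn
# estimator imports dispatch to oneDAL; without it, stock scikit-learn
# is used unchanged (identical API, no acceleration).
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass

from sklearn.cluster import KMeans  # imported after patching

X = np.random.default_rng(0).random((1000, 8))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```

Because the patch swaps the solver underneath the standard estimator classes, the rest of the script is ordinary scikit-learn code.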
Intel® Neural Compressor
Provide a unified, low-precision inference interface across multiple deep learning frameworks optimized by Intel with this open source Python library.
Intel® Optimization for PyTorch*
In collaboration with Facebook*, this popular deep learning framework is now directly combined with many optimizations from Intel to provide superior performance on Intel® architecture. This package provides the binary version of the latest PyTorch* release for CPU and adds Intel extensions and bindings with oneAPI Collective Communications Library (oneCCL) for efficient distributed training.
Intel® Optimization for TensorFlow*
In collaboration with Google*, TensorFlow* has been directly optimized for Intel® architecture using the primitives of oneAPI Deep Neural Network Library (oneDNN) to maximize performance. This package provides the latest TensorFlow binary version compiled with CPU enabled settings (--config=mkl).
Intel® Optimization for XGBoost*
This open source machine learning framework includes optimizations contributed by Intel. It runs on Intel® hardware through Intel® software acceleration powered by oneAPI libraries, with no code changes required.
Analytics, AI and ML Libraries
GStreamer Video Analytics Plug-ins
Use the GStreamer framework and build efficient, scalable video analytics applications with optimized plug-ins for video decode, encode, and inference.
Intel® Distribution of Modin*
Scale data preprocessing across multiple nodes using this intelligent, distributed DataFrame library with an identical API to pandas. This library integrates with OmniSci* in the back end for accelerated analytics. Available via Anaconda*.
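Because Modin* exposes a pandas-identical namespace, the documented migration path is a one-line import swap. A small sketch, with a fallback to stock pandas so it stays runnable when Modin is not installed:

```python
# One-line swap: modin.pandas mirrors the pandas API, so the rest of
# the script is unchanged. Fall back to pandas if Modin is absent.
try:
    import modin.pandas as pd
except ImportError:
    import pandas as pd

df = pd.DataFrame({"group": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
totals = df.groupby("group")["value"].sum()
```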
Intel® oneAPI Deep Neural Network Library
Develop fast neural networks on Intel® CPUs and GPUs with performance-optimized building blocks. Included in the oneAPI specification.
Model Zoo for Intel® Architecture
Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
XGBoost Optimized for Intel® Architecture
In collaboration with the XGBoost community, Intel has directly upstreamed many optimizations to provide superior performance on Intel CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel architectures that significantly speeds up model training and improves prediction accuracy.
Performance Libraries
Intel® Integrated Performance Primitives
A secure, fast, and lightweight library of building blocks to speed up performance of imaging, signal processing, data compression, cryptography, and more.
Intel® MPI Library
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
Intel® oneAPI Collective Communications Library
Implement optimized communication patterns to distribute deep learning model training across multiple nodes.
Intel® oneAPI Data Analytics Library
Boost machine learning and data analytics performance.
Intel® oneAPI Deep Neural Network Library
Develop fast neural networks on Intel® CPUs and GPUs with performance-optimized building blocks.
Intel® oneAPI DPC++ Library (oneDPL)
Speed up data parallel workloads with these key productivity algorithms and functions.
Intel® oneAPI Math Kernel Library (oneMKL)
Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
Intel® oneAPI Threading Building Blocks (oneTBB)
Simplify parallelism with this advanced threading and memory-management template library.
Intel® oneAPI Video Processing Library (oneVPL)
Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
Rendering and Ray-Tracing Libraries
Intel® Open Image Denoise
Improve image quality with machine learning algorithms that selectively filter visual noise. This independent component can be used for noise reduction on 3D rendered images, with or without Intel® Embree.
Intel® OpenSWR
Use a software rasterizer that's compatible with OpenGL* to work with datasets when GPU hardware isn’t available or is limiting.
Note: Intel® OpenSWR is available as part of the Mesa OpenGL open source community project at OpenSWR.
Intel® Open Volume Kernel Library (Intel® Open VKL)
Enable rendering and simulation processing of 3D spatial data with low-level volumetric data-processing algorithms.
Intel® OSPRay
Use this ray-tracing engine to develop interactive, high-fidelity visualization applications.
Intel® OSPRay for Hydra*
Connect the Intel® oneAPI Rendering Toolkit libraries in your application to the Universal Scene Description (USD) Hydra* rendering subsystem by using the Intel® OSPRay for Hydra* plug-in. This plug-in enables fast preview exploration for compositing and animation, as well as high-quality, physically based photorealistic rendering of USD content.
Intel® OSPRay Studio
Perform high-fidelity, ray-traced, interactive, and real-time rendering through a graphical user interface with this scene graph application built on Intel® OSPRay.
Other Tools
Eclipse* IDE Plug-Ins
Simplify application development for systems and IoT edge devices with this standards-based IDE and the provided Eclipse* plug-ins.
Requires a separate download.
IoT Connection Tools
Connect sensors to devices and devices to the cloud with this collection of abstraction libraries and tools.
Linux* Kernel Build Tools
Use specialized platform project wizards integrated with Eclipse to quickly create, import, and customize Linux kernels based on the Yocto Project* for edge devices and systems.
Intel Contributions to the Industry Specification
The following components have a variant that has been documented in and contributed to the oneAPI industry specification.