Intel® oneAPI Math Kernel Library
Highly optimized, fast, and complete library of math functions for Intel® CPUs and GPUs. Accelerate math processing routines, increase application performance, and reduce development time.
Intel®-Optimized Math Library for Numerical Computing
- The fastest and most-used math library for Intel®-based systems†
- Enhanced math routines enable developers and data scientists to create high-performance science, engineering, and financial applications
- Core functions include BLAS, LAPACK, sparse solvers, fast Fourier transforms (FFT), random number generator functions (RNG), summary statistics, data fitting, and vector math
- Optimizes applications for current and future generations of Intel CPUs, GPUs, and other accelerators
- Is a seamless upgrade for previous users of the Intel® Math Kernel Library (Intel® MKL)
- Additional optimizations for next-generation CPUs and GPUs, covering matrix multiply (DGEMM, SGEMM, systolic GEMM), LAPACK factorizations (DGETRF, DPOTRF, DGEQRF), single- and double-precision FFTs, and RNG functions.
- Increased CUDA* library function API compatibility coverage for BLAS, LAPACK, sparse BLAS, vector math, summary statistics, splines, and more, easing code migration to oneAPI and Intel GPUs.
- Several optimizations and features for the Intel® Data Center GPU Max Series:
  - FFTW3 Fortran OpenMP* offload APIs for real FFTs, plus optimizations for 1D and 2D FFTs.
  - RNG device API optimizations, including a Gaussian inverse cumulative distribution function (ICDF)-based generation method.
  - Sparse BLAS optimizations: sparse::gemv() and sparse::gemm() across a wide range of matrix sizes, and sparse::matmat() for small, medium, and large problem sizes.
  - Optimized Cholesky inverse, triangular matrix inverse, and batch group LU inverse.
- Support for Intel® Advanced Matrix Extensions (Intel® AMX) bfloat16 data type and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) float16 data type for the 4th generation Intel® Xeon® Scalable processor.
What You Need
- The Intel® oneAPI Math Kernel Library (oneMKL) is available as part of the Intel® oneAPI Base Toolkit.
- Using oneMKL with Intel® MPI library or Intel® Fortran Compilers requires the Intel® oneAPI HPC Toolkit.
Download as Part of the Toolkit
oneMKL is included in the Intel oneAPI Base Toolkit, which is a core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.
Download the Stand-Alone Version
A stand-alone download of oneMKL is available. You can download binaries from Intel or choose your preferred repository.
Develop in the Cloud
Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel® oneAPI and AI tools, and test your workloads across Intel® CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.
Help oneMKL Evolve
oneMKL is part of the oneAPI industry standards initiative. We welcome you to participate.
These benchmarks are offered to help you make informed decisions about which routines to use in your applications. They cover performance for each major oneMKL function domain by processor family. Some charts show absolute performance for specific problem sizes; others compare oneMKL against its previous versions and against popular open-source alternatives.
Linear Algebra and Vector Math
Fast Fourier Transform, Random Number Generation, and Sparse Solver