Intel® oneAPI Deep Neural Network Library
Increase Deep Learning Framework Performance on CPUs and GPUs
Develop Faster Deep Learning Frameworks and Applications
The Intel® oneAPI Deep Neural Network Library (oneDNN) provides highly optimized implementations of deep learning building blocks. With this open source, cross-platform library, deep learning application and framework developers can use the same API for CPUs, GPUs, or both—it abstracts out instruction sets and other complexities of performance optimization.
Using this library, you can:
- Improve performance of frameworks you already use, such as OpenVINO™ toolkit, AI Tools from Intel, PyTorch*, and TensorFlow*.
- Develop faster deep learning applications and frameworks using optimized building blocks.
- Deploy applications optimized for Intel CPUs and GPUs without writing any target-specific code.
Download the Stand-Alone Version
A stand-alone download of oneDNN is available. You can download binaries from Intel or choose your preferred repository.
Help oneDNN Evolve
oneDNN is an implementation of the oneAPI specification, part of the Unified Acceleration (UXL) Foundation. We welcome you to participate.
Download as Part of the Toolkit
oneDNN is included as part of the Intel® oneAPI Base Toolkit, which is a core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.
Features
Automatic Optimization
- Use existing deep learning frameworks
- Develop and deploy platform-independent deep learning applications with automatic detection of instruction set architecture (ISA) and ISA-specific optimization
Network Optimization
- Identify performance bottlenecks using Intel® VTune™ Profiler
- Use automatic memory format selection and propagation based on the hardware and convolution parameters
- Fuse primitives with operations applied to the primitive's result, for instance, Conv+ReLU (see the sketch after this list)
- Quantize primitives from fp32 to fp16, bf16, or int8 using Intel® Neural Compressor
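To illustrate the fusion and memory format items above, here is a minimal sketch, assuming the oneDNN 3.x C++ API; the shapes and variable names are illustrative, not a definitive implementation. A ReLU post-op is attached to a convolution so both run in a single pass, and `format_tag::any` lets the library pick the optimal memory layout for the target hardware.

```cpp
// Sketch: fuse Conv+ReLU with oneDNN post-ops (assumes oneDNN 3.x C++ API).
#include <oneapi/dnnl/dnnl.hpp>

using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // Illustrative shapes: N=1, C=3, 224x224 input; 64 3x3 filters.
    // format_tag::any lets the library choose the optimal layout
    // for this hardware (automatic memory format selection).
    memory::desc src_md({1, 3, 224, 224}, memory::data_type::f32,
            memory::format_tag::any);
    memory::desc wei_md({64, 3, 3, 3}, memory::data_type::f32,
            memory::format_tag::any);
    memory::desc dst_md({1, 64, 224, 224}, memory::data_type::f32,
            memory::format_tag::any);

    // Post-op: apply ReLU to the convolution result in the same pass.
    post_ops ops;
    ops.append_eltwise(algorithm::eltwise_relu, 0.f, 0.f);
    primitive_attr attr;
    attr.set_post_ops(ops);

    auto conv_pd = convolution_forward::primitive_desc(eng,
            prop_kind::forward_inference, algorithm::convolution_direct,
            src_md, wei_md, dst_md,
            {1, 1} /*strides*/, {1, 1} /*padding_l*/, {1, 1} /*padding_r*/,
            attr);
    auto conv = convolution_forward(conv_pd);
    // conv.execute(strm, {{DNNL_ARG_SRC, ...}, {DNNL_ARG_WEIGHTS, ...},
    //                     {DNNL_ARG_DST, ...}});
    return 0;
}
```

Because the fused ReLU is applied while the convolution output is still in registers or cache, the intermediate tensor never makes a round trip through memory, which is where the latency savings come from.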
Optimized Implementations of Key Building Blocks
- Convolution
- Matrix multiplication
- Pooling
- Batch normalization
- Activation functions
- Recurrent neural network (RNN) cells
- Long short-term memory (LSTM) cells
Abstract Programming Model
- Primitive: Any low-level operation from which more complex operations are constructed, such as convolution and data format reorder
- Memory: A handle to memory allocated on a specific engine, described by its tensor dimensions, data type, and memory format
- Engine: A hardware processing unit, such as a CPU or GPU
- Stream: A queue of primitive operations on an engine
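The four abstractions map directly onto the C++ API. Below is a minimal sketch, assuming the oneDNN 3.x C++ API on a CPU engine; the tensor size and names are illustrative.

```cpp
// Sketch: engine, stream, memory, and a ReLU primitive (oneDNN 3.x C++ API).
#include <oneapi/dnnl/dnnl.hpp>
#include <vector>

using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);  // engine: a CPU device
    stream strm(eng);                  // stream: its execution queue

    // memory: a 1x3x8x8 f32 tensor in NCHW layout backed by user data.
    memory::desc md({1, 3, 8, 8}, memory::data_type::f32,
            memory::format_tag::nchw);
    std::vector<float> src_data(1 * 3 * 8 * 8, -1.f);
    std::vector<float> dst_data(src_data.size());
    memory src_mem(md, eng, src_data.data());
    memory dst_mem(md, eng, dst_data.data());

    // primitive: forward-inference ReLU over that memory.
    auto relu_pd = eltwise_forward::primitive_desc(eng,
            prop_kind::forward_inference, algorithm::eltwise_relu,
            md, md, 0.f /*negative slope*/);
    auto relu = eltwise_forward(relu_pd);
    relu.execute(strm, {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, dst_mem}});
    strm.wait();  // dst_data now holds max(x, 0) element-wise
    return 0;
}
```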
Benchmarks
Case Studies
Netflix* Improves Video Encoding Performance up to 2X
Netflix* engineers optimized cloud-based video encoding by using oneDNN to get the most out of the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction set.
IBM* Research Improves Watson* NLP Performance up to 165%
Adding oneDNN optimizations improved text and sentiment classification task performance up to 35% on 3rd gen Intel® Xeon® Scalable processors. Upgrading to 4th gen Intel Xeon processors with oneDNN led to a 165% improvement.
Demonstrations
Accelerate Inference on x86-64 Machines with oneDNN Graph
oneDNN Graph, included in PyTorch* 2.0 and later, can help accelerate inference on x86-64 CPUs with the float32 and bfloat16 data types.
Optimize Transformer Model Inference on Intel Processors
Fusing TensorFlow operations helps take advantage of optimized oneDNN operations, reducing memory use and inference latency.
News
2024 Tools from Intel Now Available with UXL Foundation
The 2024.1 release of oneDNN adds features that streamline storage efficiency, optimize performance on Intel® Xeon® processors, and improve development productivity.
oneDNN 3.4 Adds Performance Improvements for CPUs and GPUs
Performance improvements target new and upcoming device support, including improved matmul (matrix multiplication) performance for large language models and transformer-style models on Intel CPUs and GPUs.
Documentation & Code Samples
Code Samples
Get Started
Learn how to configure and compile oneDNN applications using prebuilt oneDNN binaries, and how to run these applications on different Intel architectures.
oneDNN with SYCL* Interoperability
Use this code sample to learn how to program Intel CPUs and GPUs using the SYCL* interoperability API in oneDNN.
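As context for the sample, here is a minimal interoperability sketch, assuming a oneDNN build with SYCL runtime support: the device and context of an existing SYCL queue are wrapped in a oneDNN engine and stream via `dnnl::sycl_interop`, so oneDNN primitives and your own SYCL kernels share one queue.

```cpp
// Sketch: share a SYCL queue with oneDNN (assumes a SYCL-enabled oneDNN build).
#include <oneapi/dnnl/dnnl_sycl.hpp>
#include <sycl/sycl.hpp>

int main() {
    sycl::queue q{sycl::gpu_selector_v};  // your existing SYCL queue

    // Wrap the queue's device and context in a oneDNN engine and stream.
    dnnl::engine eng = dnnl::sycl_interop::make_engine(
            q.get_device(), q.get_context());
    dnnl::stream strm = dnnl::sycl_interop::make_stream(eng, q);

    // Primitives executed on strm are now enqueued to q, interleaved
    // with any SYCL kernels you submit yourself.
    return 0;
}
```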
Tutorials
oneDNN Library for Convolutional Neural Network (CNN) Inference (FP32)
Learn how oneDNN helps build a neural network topology for forward-pass inference, implementing the topology's layers as numbered primitives.
Use these guided samples on a Jupyter* Notebook to examine oneDNN functionality for developing deep learning applications and neural networks, optimized for Intel CPUs and GPUs.
Training
Understanding oneDNN
AI Model Performance
Specifications
Processors:
- Intel Atom® processors with Intel® Streaming SIMD Extensions
- Intel® Core™ processors
- Intel® Xeon® processors
GPUs:
- Intel® Processor Graphics Gen9 and above
- Iris® graphics
- Intel® Data Center GPUs
- Intel® Arc™ A-series graphics
Host & target operating systems:
- Linux*
- Windows*
- macOS*
Languages:
- SYCL (requires the Intel® oneAPI Base Toolkit)
- C and C++
Compilers:
- Intel® oneAPI DPC++/C++ Compiler
- Clang*
- GNU C++ Compiler*
- Microsoft Visual Studio*
- LLVM* for Apple*
Threading runtimes:
- Intel® oneAPI Threading Building Blocks
- OpenMP*
- SYCL
For more information, see the system requirements.
Get Help
Your success is our success. Access these resources when you need assistance.
For additional help, see oneAPI Support.
Stay Up to Date on AI Workload Optimizations
Sign up to receive hand-curated technical articles, tutorials, developer tools, training opportunities, and more to help you accelerate and optimize your end-to-end AI and data science workflows. Take a chance and subscribe. You can change your mind at any time.