Intel® oneAPI Collective Communications Library
Scalable & Efficient Distributed Training for Deep Neural Networks
Implement Multi-Node Communication Patterns
The Intel® oneAPI Collective Communications Library (oneCCL) enables developers and researchers to train newer and deeper models faster by using optimized communication patterns to distribute model training across multiple nodes.
The library is designed for easy integration into deep learning frameworks, whether you are implementing them from scratch or customizing existing ones.
- Built on top of lower-level communication middleware: Message Passing Interface (MPI) and libfabric transparently support many interconnects, such as Cornelis Networks*, InfiniBand*, and Ethernet.
- Optimized for high performance on Intel® CPUs and GPUs.
- Allows trading compute for communication performance to drive scalability of communication patterns.
- Enables efficient implementations of collectives that are heavily used for neural network training, including all-gather, all-reduce, and reduce-scatter (see the sketch after this list).
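The following is a minimal sketch of a host (CPU) allreduce using the public oneCCL C++ API. The MPI-based exchange of the key-value store (KVS) address follows the pattern used in the library's samples; minor details may vary between releases, and error handling is omitted.

```cpp
#include <vector>
#include <mpi.h>
#include "oneapi/ccl.hpp"

int main(int argc, char* argv[]) {
    ccl::init();

    // MPI is used here only to launch the processes and to exchange the
    // oneCCL key-value store (KVS) address between ranks.
    MPI_Init(&argc, &argv);
    int size = 0, rank = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    ccl::shared_ptr_class<ccl::kvs> kvs;
    ccl::kvs::address_type main_addr;
    if (rank == 0) {
        kvs = ccl::create_main_kvs();
        main_addr = kvs->get_address();
        MPI_Bcast((void*)main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
    }
    else {
        MPI_Bcast((void*)main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
        kvs = ccl::create_kvs(main_addr);
    }

    auto comm = ccl::create_communicator(size, rank, kvs);

    const size_t count = 1024;
    std::vector<float> send_buf(count, rank + 1.0f);
    std::vector<float> recv_buf(count, 0.0f);

    // Sum the local buffers across all ranks; every rank receives the result.
    ccl::allreduce(send_buf.data(), recv_buf.data(), count, ccl::reduction::sum, comm).wait();

    MPI_Finalize();
    return 0;
}
```

A program like this is typically launched with mpirun or a similar launcher, after which the same allreduce call pattern can be reused for gradient averaging during training.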
Download as Part of the Toolkit
oneCCL is included as part of the Intel® oneAPI Base Toolkit, which is a core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.
Download the Stand-Alone Version
A stand-alone download of oneCCL is available. You can download binaries from Intel or choose your preferred repository.
Develop in the Free Intel® DevCloud
Get what you need to build and optimize your oneAPI projects for free. With an Intel® DevCloud account, you get 120 days of access to the latest Intel® hardware—CPUs, GPUs, FPGAs—and Intel® oneAPI tools and frameworks. No software downloads. No configuration steps. No installations.
Help oneCCL Evolve
oneCCL is part of the oneAPI industry standards initiative. We welcome you to participate.
Features
Common APIs to Support Deep Learning Frameworks
oneCCL exposes a collective API that supports:
- Commonly used collective operations found in deep-learning and machine-learning workloads
- Interoperability with SYCL* from the Khronos* Group (see the device-side sketch after this list)
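As a sketch of the SYCL* interoperability path, the snippet below wraps a SYCL queue, device, and context so that a collective operates directly on device (USM) allocations. It assumes that `size`, `rank`, and the `kvs` handle were obtained as in the host example above; API details may vary between releases.

```cpp
#include <sycl/sycl.hpp>
#include "oneapi/ccl.hpp"

// Device-side allreduce on USM buffers (sketch; error handling omitted).
void device_allreduce(int size, int rank, ccl::shared_ptr_class<ccl::kvs> kvs, size_t count) {
    sycl::queue q{sycl::gpu_selector_v};

    // Wrap the SYCL objects so oneCCL can target the same device and queue.
    auto dev    = ccl::create_device(q.get_device());
    auto ctx    = ccl::create_context(q.get_context());
    auto comm   = ccl::create_communicator(size, rank, dev, ctx, kvs);
    auto stream = ccl::create_stream(q);

    float* send_buf = sycl::malloc_device<float>(count, q);
    float* recv_buf = sycl::malloc_device<float>(count, q);
    q.fill(send_buf, 1.0f, count).wait();

    // The reduction runs on device memory; the returned event is waited on
    // for completion.
    ccl::allreduce(send_buf, recv_buf, count, ccl::reduction::sum, comm, stream).wait();

    sycl::free(send_buf, q);
    sycl::free(recv_buf, q);
}
```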
Deep Learning Optimizations
The runtime implementation enables several optimizations, including:
- Asynchronous progress for compute-communication overlap (illustrated in the sketch after this list)
- Dedication of one or more cores to ensure optimal network use
- Message prioritization, persistence, and out-of-order execution
- Collectives in low-precision data types
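The snippet below sketches the compute-communication overlap enabled by the asynchronous API: a collective returns a ccl::event immediately, and the caller blocks only when the reduced result is needed. The compute_next_layer_gradients call is a hypothetical placeholder for application work.

```cpp
#include <vector>
#include "oneapi/ccl.hpp"

// Overlap gradient reduction for one layer with computation for the next
// (sketch; assumes a host communicator `comm` created as in the first example).
void overlapped_allreduce(ccl::communicator& comm,
                          std::vector<float>& grads,
                          std::vector<float>& reduced) {
    // Start reducing this layer's gradients; the call returns immediately
    // while dedicated worker threads progress the communication.
    ccl::event e = ccl::allreduce(grads.data(), reduced.data(), grads.size(),
                                  ccl::reduction::sum, comm);

    // ...independent work (for example, the backward pass of the next layer)
    // runs here.
    // compute_next_layer_gradients();   // hypothetical application work

    // Block only when the averaged gradients are actually needed.
    e.wait();
}
```

Worker-thread dedication and placement are typically tuned through environment variables such as CCL_WORKER_COUNT and CCL_WORKER_AFFINITY; see the documentation for the full set of controls.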
Documentation & Code Samples
Learn how to access oneAPI code samples in a tool command line or IDE.
View All Code Samples (GitHub*)
Specifications
Processors:
- Intel® Xeon® processors
GPUs:
- Intel® Processor Graphics Gen9 and above
- Xe Architecture
Memory:
- Dynamic RAM
- Intel® Optane™ DC persistent memory
Host operating systems:
- Linux*
Target operating systems:
- Linux
Languages:
- SYCL
- C and C++
For more information, see the system requirements.
Compilers:
- GNU Compiler Collection (GCC)*
Distributed environments:
- MPI (MPICH-based)
Get Help
Your success is our success. Access the community forum and GitHub resources when you need assistance.
Stay in the Know with All Things CODE
Sign up to receive the latest trends, tutorials, tools, training, and more to help you write better code optimized for CPUs, GPUs, FPGAs, and other accelerators, stand-alone or in any combination.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.