Intel® oneAPI Deep Neural Network Developer Guide and Reference
Build from Source
Download the Source Code
Download oneDNN source code or clone the repository.
git clone https://github.com/oneapi-src/oneDNN.git
Build the Library
Ensure that all software dependencies are in place and have at least the minimal supported version.
The oneDNN build system is based on CMake. Use
CMAKE_INSTALL_PREFIX to control the library installation location,
CMAKE_BUILD_TYPE to select the build type (Release, Debug, RelWithDebInfo), and
CMAKE_PREFIX_PATH to specify directories to be searched for dependencies in non-standard locations.
See Build Options for a detailed description of build-time configuration options.
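Taken together, a typical out-of-source configuration that sets all three variables might look like the following sketch. This is a configuration fragment, not part of the official instructions, and the paths shown are placeholders rather than defaults:

```shell
# Configuration fragment (assumes a Linux/macOS shell and an
# existing oneDNN source checkout; all paths are illustrative).
mkdir -p build && cd build
cmake .. \
      -DCMAKE_INSTALL_PREFIX=$HOME/opt/onednn \
      -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_PREFIX_PATH=$HOME/opt/deps
```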
Linux/macOS
GCC, Clang, or Intel oneAPI DPC++/C++ Compiler
Set up the environment for the compiler
Configure CMake and generate makefiles
mkdir -p build
cd build

# Uncomment the following lines to build with Clang
# export CC=clang
# export CXX=clang++

# Uncomment the following lines to build with Intel oneAPI DPC++/C++ Compiler
# export CC=icx
# export CXX=icpx

cmake .. <extra build options>
Build the library
make -j
Intel oneAPI DPC++/C++ Compiler with SYCL runtime
Set up the environment for the Intel oneAPI DPC++/C++ Compiler using the setvars.sh script. The command below assumes installation to the default folder. If you used a custom installation folder, setvars.sh is located there instead:
source /opt/intel/oneapi/setvars.sh
Configure CMake and generate makefiles
mkdir -p build
cd build
export CC=icx
export CXX=icpx
cmake .. \
      -DDNNL_CPU_RUNTIME=SYCL \
      -DDNNL_GPU_RUNTIME=SYCL \
      <extra build options>
Build the library
make -j
GCC targeting AArch64 on x64 host
Set up the environment for the compiler
Configure CMake and generate makefiles
export CC=aarch64-linux-gnu-gcc
export CXX=aarch64-linux-gnu-g++
cmake .. \
      -DCMAKE_SYSTEM_NAME=Linux \
      -DCMAKE_SYSTEM_PROCESSOR=AARCH64 \
      -DCMAKE_LIBRARY_PATH=/usr/aarch64-linux-gnu/lib \
      <extra build options>
Build the library
make -j
GCC with Arm Compute Library (ACL) on AArch64 host
Set up the environment for the compiler
Configure CMake and generate makefiles
export ACL_ROOT_DIR=<path/to/Compute Library>
cmake .. \
      -DDNNL_AARCH64_USE_ACL=ON \
      <extra build options>
Build the library
make -j
Windows
Microsoft Visual C++ Compiler
Generate a Microsoft Visual Studio solution
mkdir build
cd build
cmake -G "Visual Studio 16 2019" ..
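Newer Visual Studio releases use a correspondingly newer generator name; for example, with Visual Studio 2022 the configure step would instead be:

```shell
:: Configuration fragment for Visual Studio 2022
:: (run from the build directory, as above).
cmake -G "Visual Studio 17 2022" ..
```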
Build the library
cmake --build . --config Release
Intel oneAPI DPC++/C++ Compiler with SYCL Runtime
Set up the environment for the Intel oneAPI DPC++/C++ Compiler using the setvars.bat script. The command below assumes installation to the default folder. If you used a custom installation folder, setvars.bat is located there instead:
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
Alternatively, open an Intel oneAPI Command Prompt.
Configure CMake and generate Ninja project
mkdir build
cd build

:: Set C and C++ compilers
set CC=icx
set CXX=icx

cmake .. -G Ninja -DDNNL_CPU_RUNTIME=SYCL ^
                  -DDNNL_GPU_RUNTIME=SYCL ^
                  <extra build options>
Build the library
cmake --build .
Validate the Build
If the library is built for the host system, you can run unit tests using:
ctest
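ctest accepts standard options for selecting tests and controlling output; the fragments below (run from the build directory) are illustrative, and the test-name pattern is a placeholder:

```shell
# Run tests in parallel and show output only for failing tests.
ctest -j 8 --output-on-failure

# Run only the tests whose names match a regular expression
# (the pattern "convolution" here is just an example).
ctest -R convolution
```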
Build Documentation
Install the requirements
conda env create -f ../doc/environment.yml
conda activate onednn-doc
Build the documentation
cmake --build . --target doc
Install Library
Install the library, headers, and documentation
cmake --build . --target install
The install directory is specified by the CMAKE_INSTALL_PREFIX CMake variable. When installing to the default directory, the above command must be run with administrative privileges: use sudo on Linux/macOS, or a command prompt run as administrator on Windows.
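As a sketch, installing into a user-writable prefix avoids the need for administrative privileges entirely; the path below is a placeholder, not a default:

```shell
# Configuration fragment: configure with a prefix under $HOME,
# then install; no sudo is needed because the prefix is user-writable.
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/opt/onednn
cmake --build . --target install
```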