Intel® oneAPI Deep Neural Network Developer Guide and Reference
Inspecting JIT Code
oneDNN uses just-in-time compilation (JIT) to generate optimal code for some functions based on input parameters and the instruction set supported by the system. The library provides a mechanism to save the generated code into a file for inspection.
This behavior can be enabled with the ONEDNN_JIT_DUMP environment variable or the dnnl_set_jit_dump function.
| Value | Behavior | 
|---|---|
| 0 | JIT dump is disabled (default) | 
| any other value | JIT dump is enabled | 
The function setting takes precedence over the environment variable.
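Because the function setting takes precedence, an application can control dumping programmatically regardless of the environment. A minimal sketch using the C API is shown below; it assumes oneDNN is installed and the program is linked against it (the header path follows recent releases, older ones use dnnl.h):

```c
#include <stdio.h>
#include "oneapi/dnnl/dnnl.h"

int main(void) {
    // Enable JIT dumping before any primitives are created;
    // kernels generated afterwards are written to dnnl_dump_*.bin files
    // in the current working directory.
    dnnl_status_t st = dnnl_set_jit_dump(1);
    if (st != dnnl_success) {
        fprintf(stderr, "dnnl_set_jit_dump failed\n");
        return 1;
    }

    // ... create engine, primitives, and execute as usual ...

    // Dumping can be switched off again once the kernels of
    // interest have been generated.
    dnnl_set_jit_dump(0);
    return 0;
}
```

Calling dnnl_set_jit_dump(1) here overrides ONEDNN_JIT_DUMP=0 set in the environment, and vice versa.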
Example (CPU)
$ ONEDNN_JIT_DUMP=1 ./cnn-inference-f32-cpp

This will produce the following output files if running on a CPU supporting Intel(R) Advanced Vector Extensions 2 (Intel AVX2):
dnnl_dump_cpu_jit_avx2_conv_fwd_kernel_f32.1.bin
...
dnnl_dump_cpu_jit_avx_gemv_t_f32_kern.30.bin

Use any disassembler to view the code. For example:
- objdump -D -b binary -mi386:x86-64 file.bin
- xed -64 -ir file.bin
XED is a decoder tool available as part of the Intel Software Development Emulator (Intel SDE).
Example (GPU)
$ ONEDNN_JIT_DUMP=1 ./simple-net-cpp gpu

This will produce the following output files if running on Intel Processor Graphics Gen9:
dnnl_dump_gpu_simple_reorder.0.bin
dnnl_dump_gpu_gen9_conv_fwd.1.bin
...

Use the Intel GPU ISA disassembler to disassemble a kernel:
- iga64 -d -p=9 file.bin (where -p=<PLATFORM> selects the target GPU platform; 9 corresponds to Gen9)