Intel® oneAPI Deep Neural Network Library Developer Guide and Reference
enum dnnl_normalization_flags_t
Overview
Flags for normalization primitives.
#include <dnnl_types.h>
enum dnnl_normalization_flags_t
{
    dnnl_normalization_flags_none = 0x0U,
    dnnl_use_global_stats         = 0x1U,
    dnnl_use_scale                = 0x2U,
    dnnl_use_shift                = 0x4U,
    dnnl_fuse_norm_relu           = 0x8U,
    dnnl_fuse_norm_add_relu       = 0x10U,
};
Detailed Documentation
Flags for normalization primitives.
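The flag values are distinct bits, so a configuration is normally built by OR-ing them into the unsigned flags argument taken by the normalization primitive descriptor constructors. A minimal sketch in C; the specific combinations below are illustrative, not taken from this page:
#include <dnnl_types.h>

/* Typical training configuration: statistics are computed by the
 * primitive, learnable scale and shift are supplied by the user. */
unsigned training_flags(void) {
    return dnnl_use_scale | dnnl_use_shift;
}

/* Typical inference configuration: additionally reuse the statistics
 * accumulated during training. */
unsigned inference_flags(void) {
    return dnnl_use_global_stats | dnnl_use_scale | dnnl_use_shift;
}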
Enum Values
dnnl_normalization_flags_none
Use no normalization flags.
If specified:
- on forward training propagation mean and variance are computed and stored as output 
- on backward propagation compute full derivative wrt data 
- on backward propagation prop_kind == dnnl_backward_data has the same behavior as prop_kind == dnnl_backward 
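For example, on forward training with no flags the mean and variance are bound as outputs in the execution argument list. A hedged sketch, assuming src_mem, dst_mem, mean_mem, and var_mem are already created dnnl_memory_t handles of the appropriate shapes; status checks are omitted:
#include <dnnl.h>

/* Forward training with dnnl_normalization_flags_none: the primitive
 * computes mean and variance, so both are bound as outputs here. */
void exec_fwd_training_no_flags(dnnl_primitive_t bnorm, dnnl_stream_t stream,
        dnnl_memory_t src_mem, dnnl_memory_t dst_mem,
        dnnl_memory_t mean_mem, dnnl_memory_t var_mem) {
    dnnl_exec_arg_t args[] = {
            {DNNL_ARG_SRC, src_mem},
            {DNNL_ARG_DST, dst_mem},
            {DNNL_ARG_MEAN, mean_mem}, /* output */
            {DNNL_ARG_VARIANCE, var_mem}, /* output */
    };
    dnnl_primitive_execute(bnorm, stream, 4, args);
}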
dnnl_use_global_stats
Use global statistics.
If specified:
- on forward propagation use mean and variance provided by user (input) 
- on backward propagation the amount of computation is reduced, since mean and variance are treated as constants 
If not specified:
- on forward propagation mean and variance are computed and stored as output 
- on backward propagation compute full derivative wrt data 
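As a sketch, an inference primitive descriptor that consumes user-provided statistics could be created as follows. This assumes the oneDNN v3.x C API (dnnl_memory_desc_create_with_tag and dnnl_batch_normalization_forward_primitive_desc_create); 2.x releases instead go through dnnl_batch_normalization_forward_desc_init() and dnnl_primitive_desc_create(). Tensor sizes are illustrative and status checks are omitted.
#include <dnnl.h>

/* Create an inference batch normalization primitive descriptor that takes
 * user-provided mean and variance as inputs (dnnl_use_global_stats). */
dnnl_primitive_desc_t make_bnorm_inference_pd(dnnl_engine_t engine) {
    dnnl_dim_t dims[] = {32, 64, 56, 56}; /* NCHW, illustrative sizes */
    dnnl_memory_desc_t data_md;
    dnnl_memory_desc_create_with_tag(&data_md, 4, dims, dnnl_f32, dnnl_nchw);

    unsigned flags = dnnl_use_global_stats | dnnl_use_scale | dnnl_use_shift;

    dnnl_primitive_desc_t pd;
    dnnl_batch_normalization_forward_primitive_desc_create(&pd, engine,
            dnnl_forward_inference, data_md, data_md, /*epsilon=*/1e-5f,
            flags, /*attr=*/NULL);

    dnnl_memory_desc_destroy(data_md);
    return pd;
}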
dnnl_use_scale
Use scale parameter.
If specified:
- on forward propagation use scale for the normalization results 
- on backward propagation (for prop_kind == dnnl_backward) compute diff wrt scale (hence one extra output used) 
dnnl_use_shift
Use shift parameter.
If specified:
- on forward propagation use shift (aka bias) for the normalization results 
- on backward propagation (for prop_kind == dnnl_backward) compute diff wrt shift (hence one extra output used) 
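A hedged sketch of the execution arguments for a backward pass created with dnnl_use_scale | dnnl_use_shift and prop_kind == dnnl_backward; the dnnl_memory_t handles are assumed to already exist, and diff_scale / diff_shift are the extra outputs mentioned above:
#include <dnnl.h>

/* Backward batch normalization with scale and shift: scale is an extra
 * input, diff_scale and diff_shift are the extra outputs. */
void exec_bwd_scale_shift(dnnl_primitive_t bnorm_bwd, dnnl_stream_t stream,
        dnnl_memory_t src, dnnl_memory_t mean, dnnl_memory_t var,
        dnnl_memory_t scale, dnnl_memory_t diff_dst, dnnl_memory_t diff_src,
        dnnl_memory_t diff_scale, dnnl_memory_t diff_shift) {
    dnnl_exec_arg_t args[] = {
            {DNNL_ARG_SRC, src},
            {DNNL_ARG_MEAN, mean},
            {DNNL_ARG_VARIANCE, var},
            {DNNL_ARG_SCALE, scale},
            {DNNL_ARG_DIFF_DST, diff_dst},
            {DNNL_ARG_DIFF_SRC, diff_src},
            {DNNL_ARG_DIFF_SCALE, diff_scale}, /* extra output */
            {DNNL_ARG_DIFF_SHIFT, diff_shift}, /* extra output */
    };
    dnnl_primitive_execute(bnorm_bwd, stream, 8, args);
}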
dnnl_fuse_norm_relu
Fuse with ReLU.
The flag implies a negative slope of 0. For training this is the only supported configuration. For inference with a non-zero negative slope, consider using Primitive Attributes: Post-ops instead.
If specified:
- on inference this option behaves the same as if the primitive were fused with ReLU via the post-ops API with a zero negative slope 
- on training the primitive requires a workspace (needed to perform the backward pass) 
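On training, the workspace memory descriptor can be queried from the forward primitive descriptor and the resulting memory bound as DNNL_ARG_WORKSPACE on both passes. A minimal sketch, assuming the oneDNN v3.x C API where dnnl_primitive_desc_query_md() returns a const_dnnl_memory_desc_t handle:
#include <dnnl.h>

/* Allocate the workspace required by a forward training primitive created
 * with dnnl_fuse_norm_relu; returns NULL if no workspace is needed. */
dnnl_memory_t make_workspace(const_dnnl_primitive_desc_t fwd_pd,
        dnnl_engine_t engine) {
    const_dnnl_memory_desc_t ws_md = dnnl_primitive_desc_query_md(
            fwd_pd, dnnl_query_workspace_md, 0);

    dnnl_memory_t ws = NULL;
    if (ws_md) /* present only when a workspace is actually required */
        dnnl_memory_create(&ws, ws_md, engine, DNNL_MEMORY_ALLOCATE);
    return ws;
}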
dnnl_fuse_norm_add_relu
Fuse with Add and then fuse with ReLU.
If specified:
- on forward propagation apply an element-wise binary Add operation between the normalization results and an additional input tensor, then apply ReLU with a negative slope of 0 
- on training the primitive requires a workspace (needed to perform the backward pass) 
- on backward propagation compute the backward ReLU using the additional input tensor and the workspace from the forward pass, store its result in an extra output tensor, and then perform backward normalization
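A hypothetical sketch of the forward training execution arguments with dnnl_fuse_norm_add_relu, dnnl_use_scale, and dnnl_use_shift set. Binding the extra Add input as DNNL_ARG_SRC_1 is an assumption of this sketch, not something stated on this page; all dnnl_memory_t handles are assumed to already exist.
#include <dnnl.h>

/* Forward training with fused Add + ReLU: the additional input tensor is
 * assumed here to be bound as DNNL_ARG_SRC_1, and the workspace is needed
 * so the backward pass can undo the fused ReLU. */
void exec_fwd_training_add_relu(dnnl_primitive_t bnorm, dnnl_stream_t stream,
        dnnl_memory_t src, dnnl_memory_t add_src, dnnl_memory_t dst,
        dnnl_memory_t mean, dnnl_memory_t var, dnnl_memory_t scale,
        dnnl_memory_t shift, dnnl_memory_t workspace) {
    dnnl_exec_arg_t args[] = {
            {DNNL_ARG_SRC, src},
            {DNNL_ARG_SRC_1, add_src}, /* assumed binding for the Add input */
            {DNNL_ARG_SCALE, scale},
            {DNNL_ARG_SHIFT, shift},
            {DNNL_ARG_DST, dst},
            {DNNL_ARG_MEAN, mean},
            {DNNL_ARG_VARIANCE, var},
            {DNNL_ARG_WORKSPACE, workspace},
    };
    dnnl_primitive_execute(bnorm, stream, 8, args);
}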