Intel® FPGA AI Suite: IP Reference Manual

ID 768974
Date 12/01/2023
Public

2.3. Intel® FPGA AI Suite Layer / Primitive Ranges

The following table lists the hyperparameter ranges supported by key primitive layers:

| Layer / Primitive | Hyperparameter | Supported Range |
|---|---|---|
| Fully connected | None | n/a |
| 2D Conv | Filter Size | Width = [1..28], Height = [1..28]. Height does not have to equal width. Default value for each is 14. |
| | Stride | Maximum stride is 15. |
| | Pad | Maximum pad is (2^16) - 1. |
| 3D Conv | Filter Size | Width = [1..28], Height = [1..28], Depth = [1..14]. Filter volume should fit into the filter cache size. |
| | Stride | Maximum stride is 15. |
| | Pad | Maximum pad is (2^16) - 1. |
| Depthwise | Filter Size | Same as 2D Conv filter size. Depth = 1. |
| | Stride | Same as 2D Conv stride. Depth = 1. |
| | Pad | Same as 2D Conv padding. Depth = 1. |
| Scale-Shift | Scale factor | FP16 float range |
| | Bias term | FP16 float range |
| Deconv / Transpose Convolution | Filter Size | Any; same as convolution, and height/width can be different. Depth = 1. |
| | Stride | 1, 2, 4, 8 (stride width == stride height). Depth = 1. |
| | Pad | Restricted to filter_[height, width] - 1. Depth = 1. |
| ReLU | n/a | n/a |
| pReLU | Scaling parameter (a) (1 per filter / conv output channel) | float range. Depth = 1. |
| Leaky ReLU | Scaling parameter (a) (1 per tensor) | float range |
| Clamp | Limit parameters (a, b) (1 per tensor) | float range |
| Round_Clamp | Limit parameters (a, b) (1 per tensor) | float range |
| H-sigmoid | n/a | n/a |
| H-swish | n/a | n/a |
| Max Pool | Window Size | 3×3×3, 4×4×4, 5×5×5, 6×6×6, 7×7×7 |
| | Pad | 1, 2 |
| | Stride | 1, 2, 3, 4 |
| Average Pool | Window Size | Up to 27×27 (one less than the maximum 2D convolution size). Width == Height. Depth = 1 or 2. |
| | Pad | 1, 2 |
| | Stride | 1, 2, 3, 4 |
| Softmax | n/a | n/a |
| Elementwise multiplication of feature * filter and feature * feature tensors (2) | n/a | Tensor sizes are expanded if necessary to support the multiplication. Depth = 1. |
| ChannelToSpace / DepthToSpace / PixelShuffle | block_mode | blocks_first or blocks_last |
| | block_size | 2, 4, 8 |

(2) This is an element-wise multiplication, not a matrix multiply operation.
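As a quick sanity check before compiling a graph, the 2D Conv ranges above can be encoded in a small validator. This is a hypothetical helper, not part of the Intel FPGA AI Suite API; the function name and parameters are illustrative only.

```python
# Hypothetical range check for 2D Conv hyperparameters, based on the table
# above (not part of the Intel FPGA AI Suite API).

MAX_FILTER_WH = 28     # Width, Height in [1..28]
MAX_STRIDE = 15        # maximum stride is 15
MAX_PAD = 2**16 - 1    # maximum pad is (2^16) - 1

def conv2d_params_supported(filter_w, filter_h, stride, pad):
    """Return True if the 2D Conv hyperparameters fall in the supported ranges."""
    return (1 <= filter_w <= MAX_FILTER_WH
            and 1 <= filter_h <= MAX_FILTER_WH
            and 1 <= stride <= MAX_STRIDE
            and 0 <= pad <= MAX_PAD)
```

For example, a 3×5 filter with stride 1 is in range, while a 29-wide filter or a stride of 16 is not.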
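The Deconv / Transpose Convolution row restricts stride to 1, 2, 4, or 8 and pad to at most filter size minus one. Under those constraints, the standard transpose-convolution output-size formula (assumed here; the formula is the textbook one, not taken from this manual) gives the spatial size of the result:

```python
def deconv_output_size(in_size, filter_size, stride, pad):
    """Output spatial size of a transpose convolution (no output padding).

    Uses the standard formula out = (in - 1) * stride - 2 * pad + filter,
    with the table's constraints: stride in {1, 2, 4, 8} and
    pad <= filter_size - 1.
    """
    assert stride in (1, 2, 4, 8), "table restricts stride to 1, 2, 4, 8"
    assert 0 <= pad <= filter_size - 1, "table restricts pad to filter size - 1"
    return (in_size - 1) * stride - 2 * pad + filter_size
```

For instance, a 4-wide input with a 3-wide filter, stride 2, and pad 1 produces a 7-wide output.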
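For the ChannelToSpace / DepthToSpace / PixelShuffle row, the shape effect of `block_size` can be sketched as follows. This is a generic shape calculation, assuming the usual DepthToSpace semantics (channels shrink by block_size squared, height and width grow by block_size); the exact element ordering implied by `blocks_first` versus `blocks_last` is not covered here.

```python
def depth_to_space_shape(c, h, w, block_size):
    """Shape of a (C, H, W) tensor after DepthToSpace with the given block_size.

    Assumes the usual semantics: C must be divisible by block_size**2;
    channels move into the spatial dimensions.
    """
    assert block_size in (2, 4, 8), "table lists block_size 2, 4, 8"
    assert c % (block_size ** 2) == 0, "channels must divide by block_size**2"
    return (c // block_size ** 2, h * block_size, w * block_size)
```

For example, a (16, 10, 12) tensor with block_size 2 becomes (4, 20, 24).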