AN 669: Drive-On-Chip Design Example for Cyclone V Devices

ID 683466
Date 5/15/2022
Public

DSP Builder Model Resource Usage

Intel compared a single-precision floating-point model of the FOC algorithm with an otherwise identical model that uses the folding feature. With folding, the model uses fewer logic elements (LEs) and multipliers, but latency increases. In addition, a fixed-point model uses significantly fewer LEs and multipliers and has lower latency than the floating-point model.

Intel compared floating- and fixed-point versions of the FOC algorithm, each with and without folding. Intel also compared a 26-bit (17-bit mantissa) implementation against the standard single-precision 32-bit (23-bit mantissa) floating-point implementation. The 26-bit format is a standard type within DSP Builder that takes advantage of the FPGA architecture to save FPGA resources when its precision is sufficient.
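As a rough illustration of what trimming the mantissa from 23 to 17 bits costs in accuracy, the following sketch (not part of the reference design) computes the unit roundoff for each format, assuming round-to-nearest:

```python
# Approximate relative rounding error (unit roundoff) of a binary
# floating-point format with the given number of stored mantissa bits,
# assuming round-to-nearest: u = 2**-(mantissa_bits + 1).
def unit_roundoff(mantissa_bits: int) -> float:
    return 2.0 ** -(mantissa_bits + 1)

u26 = unit_roundoff(17)  # DSP Builder 26-bit floating point
u32 = unit_roundoff(23)  # IEEE 754 single precision (32-bit)

print(f"26-bit float unit roundoff: {u26:.3e}")  # ~3.8e-06
print(f"32-bit float unit roundoff: {u32:.3e}")  # ~6.0e-08
```

A relative error on the order of 10^-6 is typically well below the resolution of the current and position feedback in a motor-control loop, which is why the 26-bit format can be sufficient here.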

Cyclone V devices use ALMs instead of LEs (one ALM is approximately two LEs plus two registers) and DSP blocks instead of multipliers (one DSP block can implement two 18-bit multipliers or other functions).
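When comparing the two tables across device families, the approximations above can be applied directly. The following sketch (helper names are illustrative, not from the design) converts the Cyclone V figures into rough LE and multiplier equivalents:

```python
# Rough cross-family conversion, using the approximations stated in the
# text: 1 ALM ~ 2 LEs (plus 2 registers), and 1 DSP block can implement
# 2 18-bit multipliers.
def alms_to_le_equiv(alms: float) -> float:
    return alms * 2

def dsp_to_mult_equiv(dsp_blocks: float) -> float:
    return dsp_blocks * 2

# Example: the 32-bit floating-point model without folding (Table 8)
print(alms_to_le_equiv(11_500))   # ~23,000 LE equivalents
print(dsp_to_mult_equiv(31))      # ~62 18-bit multiplier equivalents
```

These conversions are only first-order estimates; actual mapping depends on how the fitter packs logic and what modes the DSP blocks use.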

Table 8.  Resource Usage Comparison for Cyclone V Devices

Algorithm | Precision (Bits) | Folding | Logic Usage (ALMs) | DSP Usage | Algorithm Latency (µs)
FOC, floating point, including filter, DFf_float_alu_av.slx | 32 | No | 11.5k | 31 | 0.71
FOC, floating point, including filter, DFf_float_alu_av.slx | 32 | Yes | 3.9k | 4 | 2.30
FOC, floating point, including filter, DFf_float_alu_av.slx | 26 | No | 11k | 31 | 0.70
FOC, floating point, including filter, DFf_float_alu_av.slx | 26 | Yes | 3.6k | 4 | 2.34
FOC, fixed point, including filter, DFf_fixp16_alu_av.slx | 16 | No | 1.6k | 36 | 0.22
FOC, fixed point, including filter, DFf_fixp16_alu_av.slx | 16 | Yes | 2.3k | 2 | 3.31
Table 9.  Resource Usage Comparison for MAX 10 Devices

Algorithm | Precision (Bits) | Folding | Logic Usage (LEs) | Multiplier Usage | Algorithm Latency (µs)
FOC, floating point, without filter, DF_float_alu_av.slx | 32 | No | 30k | 53 | 0.52
FOC, floating point, without filter, DF_float_alu_av.slx | 32 | Yes | 6.5k | 10 | 1.75
FOC, floating point, without filter, DF_float_alu_av.slx | 26 | No | 23k | 23 | 0.47
FOC, floating point, without filter, DF_float_alu_av.slx | 26 | Yes | 5.4k | 6 | 1.61
FOC, fixed point, without filter, DF_fixp16_alu_av.slx | 16 | No | 2.2k | 12 | 0.14
FOC, fixed point, without filter, DF_fixp16_alu_av.slx | 16 | Yes | 2.7k | 2 | 2.08

The results show:

  • The models with folding use fewer processing resources (multipliers or DSP blocks) but have higher latency.
  • The floating-point models with folding also use significantly fewer logic resources (LEs or ALMs) than the models without folding.
  • The 26-bit floating-point format saves significant resources compared to 32-bit on MAX 10 devices, but proportionally less on Cyclone V SoCs.
  • The fixed-point algorithms without folding use the fewest logic resources and give the lowest latency.
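The trade-off in the first two bullets can be quantified directly from the tables. The following sketch (names and structure are illustrative, not from the design files) computes the folding savings and latency cost for two of the floating-point configurations:

```python
# Quantify the folding trade-off using figures from Tables 8 and 9:
# how much logic and how many multipliers/DSP blocks folding saves,
# versus how much latency it adds.
rows = {
    # name: (logic no fold, logic fold, mults no fold, mults fold,
    #        latency no fold in us, latency fold in us)
    "Cyclone V, 32-bit float": (11_500, 3_900, 31, 4, 0.71, 2.30),
    "MAX 10, 26-bit float":    (23_000, 5_400, 23, 6, 0.47, 1.61),
}

tradeoff = {}
for name, (l0, l1, m0, m1, t0, t1) in rows.items():
    tradeoff[name] = (l0 / l1, m0 / m1, t1 / t0)
    print(f"{name}: logic /{l0 / l1:.1f}, multipliers /{m0 / m1:.1f}, "
          f"latency x{t1 / t0:.1f}")
```

Folding shrinks logic by roughly 3x to 4x and multiplier usage even more, at about a 3x latency cost, which is acceptable when the control-loop period is long compared to the algorithm latency.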

The reference design implements these FOC configurations:

  • Floating-point model with folding: 26-bit on MAX 10 devices, 32-bit on Cyclone V devices.
  • Fixed-point 16-bit model without folding.
