FFT IP Core: User Guide

ID 683374
Date 11/06/2017

4. Block Floating Point Scaling

Block-floating-point (BFP) scaling is a trade-off between fixed-point and full floating-point FFTs.

In fixed-point FFTs, the data precision needs to be large enough to adequately represent all intermediate values throughout the transform computation. For large transform sizes, a fixed-point FFT implementation that allows for word growth can either make the data width excessive or lead to a loss of precision.

Floating-point FFTs represent each number as a mantissa with an individual exponent. The improved precision is offset by the demand for increased device resources.

In a block-floating-point FFT, all of the values in a data block have an independent mantissa but share a common exponent. Data is input to the FFT function as fixed-point complex numbers (the exponent is effectively 0, so you do not enter an exponent).
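
For illustration, the following Python sketch shows one way to quantize floating-point samples into the fixed-point complex words the core expects. The 16-bit width, the helper name, and the use of NumPy are assumptions made for this example, not requirements of the IP core.

    import numpy as np

    def to_fixed_point(samples, width=16):
        # Quantize complex samples in [-1, 1) to signed fixed-point integers.
        # The block exponent is implicitly 0 on input, so only integer
        # mantissas are passed to the core.
        scale = 2 ** (width - 1) - 1
        re = np.round(samples.real * scale).astype(np.int32)
        im = np.round(samples.imag * scale).astype(np.int32)
        return re, im

    # Example: a unit-amplitude complex tone quantized to 16-bit I/Q words.
    n = np.arange(64)
    re, im = to_fixed_point(np.exp(2j * np.pi * 4 * n / 64))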

The block-floating-point FFT ensures full use of the data width within the FFT function and throughout the transform. After every pass through a radix-4 FFT, the data width may grow up to log2(4√2) = 2.5 bits. The data scales according to a measure of the block dynamic range on the output of the previous pass. The FFT accumulates the number of shifts and then outputs them as an exponent for the entire block. This shifting ensures that the minimum number of least significant bits (LSBs) is discarded prior to the rounding of the post-multiplication output. In effect, the block-floating-point representation acts as a digital automatic gain control. To yield uniform scaling across successive output blocks, you must scale the FFT function output by the final exponent.

When comparing the block-floating-point output of the FFT MegaCore function to the output of a full-precision FFT from a tool like MATLAB, you must scale the output by 2^(–exponent_out) to account for the LSBs discarded during the transform.
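
As a sketch of that comparison (with NumPy standing in for MATLAB, and bfp_out / exponent_out used as illustrative names for the core's output data and reported exponent, not actual port names):

    import numpy as np

    def rescale_bfp_output(bfp_out, exponent_out):
        # Undo the block exponent so the result can be compared with a
        # full-precision reference transform. The factor 2**(-exponent_out)
        # restores the LSBs the core discarded while scaling between passes.
        return np.asarray(bfp_out, dtype=np.complex128) * 2.0 ** (-exponent_out)

    # Usage sketch, comparing against a full-precision reference:
    # reference = np.fft.fft(input_samples)
    # error = np.max(np.abs(rescale_bfp_output(bfp_out, exponent_out) - reference))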

Unlike an FFT block that uses floating-point arithmetic, a block-floating-point FFT block does not provide an input for exponents. Internally, the IP core represents a complex value as an integer pair with a single scale factor that it typically shares with other complex-value integer pairs. After each stage of the FFT, the IP core detects the largest output value and scales the intermediate result to improve the precision. The exponent records the number of left or right shifts needed to perform the scaling. As a result, the output magnitude relative to the input level is:

output * 2^(–exponent)

For example, if exponent = –3, the input samples are shifted right by three bits, and hence the magnitude of the output is output * 2^3.
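
A quick numeric check of that case, using an arbitrary sample value chosen only for illustration:

    # exponent = -3 means three right shifts were applied in total, so
    # multiplying by 2**(-exponent) = 2**3 restores the magnitude.
    sample = 1000                # arbitrary example value
    shifted = sample >> 3        # 125, the value the core carries forward
    restored = shifted * 2 ** 3  # 1000, up to the discarded LSBs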

After every pass through a radix-2 or radix-4 engine in the FFT core, the addition and multiplication operations cause the data bit width to grow. In other words, the total data bit width from the FFT operation grows proportionally to the number of passes. The number of passes of the FFT/IFFT computation depends on the logarithm of the number of points.
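
The relationship between transform size, pass count, and worst-case growth can be sketched as follows; the 3-bit-per-pass rounding and the function name are assumptions for this illustration, not the core's exact internal sizing.

    import math

    def radix4_growth(num_points, bits_per_pass=3):
        # Each radix-4 pass can add up to log2(4*sqrt(2)) = 2.5 bits,
        # rounded up to 3 integer bits here; the pass count is log4(N).
        passes = math.ceil(math.log2(num_points) / 2)
        return passes, passes * bits_per_pass

    # A 4096-point transform takes 6 radix-4 passes, so the data could grow
    # by roughly 15 to 18 bits if no scaling were applied between passes.
    print(radix4_growth(4096))   # (6, 18)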

A fixed-point FFT needs a large multiplier and memory block to accommodate the bit-width growth required to represent the high dynamic range. Floating-point arithmetic is powerful, but its power comes at the cost of greater design complexity, such as a floating-point multiplier and a floating-point adder. BFP arithmetic combines the advantages of floating-point and fixed-point arithmetic. BFP arithmetic offers a better signal-to-noise ratio (SNR) and dynamic range than fixed-point arithmetic with the same number of bits in the hardware implementation.

In a block-floating-point FFT, the radix-2 or radix-4 computation of each pass shares the same hardware: the IP core reads the data from memory, passes it through the engine, and writes it back to memory. Before entering the next pass, the IP core shifts each data sample right (an operation called "scaling") if the addition and multiplication operations produce a carry-out bit. The IP core bases the number of bits that it shifts on the difference in bit growth between the data sample and the maximum data sample it detects in the previous stage. The IP core records the maximum bit growth in the exponent register. Each data sample now shares the same exponent value and data bit width to go to the next core engine. You can reuse the same core engine without incurring the expense of a larger engine to accommodate the bit growth.
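
A minimal software model of that per-pass scaling step might look like the following; the word width, the function name, and the use of a plain integer array are assumptions made for the example, not a description of the core's internal datapath.

    import numpy as np

    def bfp_scale_stage(data, exponent, word_width=16):
        # 'data' holds the integer results of one radix pass (real and
        # imaginary parts as plain integers). If the largest magnitude no
        # longer fits in word_width bits, every sample is shifted right by
        # the same amount and that shift count is added to the shared
        # block exponent.
        limit = 2 ** (word_width - 1) - 1
        shift = 0
        max_mag = int(np.max(np.abs(data)))
        while max_mag > limit:
            max_mag >>= 1
            shift += 1
        return data >> shift, exponent + shift

The shifts accumulated across all passes in this way form the exponent that accompanies the output block.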

The output SNR depends on how many bits of right shift occur and at what stages of the radix core computation they occur. Therefore, the signal-to-noise ratio is data dependent, and you need to know the input signal to compute the SNR.
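
For example, a data-dependent SNR measurement against a full-precision reference could be sketched as below, with NumPy as the reference transform and illustrative argument names:

    import numpy as np

    def bfp_output_snr_db(reference, bfp_out, exponent_out):
        # Rescale the block-floating-point output, then measure the error
        # power against the full-precision reference transform.
        ref = np.asarray(reference, dtype=np.complex128)
        restored = np.asarray(bfp_out, dtype=np.complex128) * 2.0 ** (-exponent_out)
        noise = restored - ref
        return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(noise) ** 2))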