The following factors affect benchmarking results:
Benchmark design selection
Software tool settings and user constraints
Timing analysis techniques
Benchmark result reporting
The FPGA Performance Benchmark Methodology white paper (PDF) discusses these factors in detail.
Benchmark Design Selection
Altera uses real customer designs for benchmarking and does not generate artificial circuits or use intellectual property (IP) cores in isolation. By using a broad suite of customer designs from various market segments (such as telecommunications, networking, storage, wireless, and medical applications), Altera ensures that the logic and connectivity in the benchmark suite are real and therefore represent the complex interaction between large circuits and FPGA CAD tools.
The customer designs Altera uses were originally targeted at technologies such as Altera® and Xilinx FPGAs, ASICs, and gate arrays. Taking HDL code optimized for one technology and blindly benchmarking it on another produces very misleading results. Therefore, a dedicated team of Altera engineers converts each design and optimizes its performance for each architecture targeted in these benchmarks. For each design, this conversion and optimization process takes weeks to complete, so that the design can take full advantage of the dedicated features of each architecture.
Software Settings and Constraints
All CAD tools make trade-offs between design performance and the amount of logic used, compile time, and memory usage. Benchmarking results therefore depend on the many settings of the tools used to compile an FPGA design, and the outcome varies significantly with the software settings and constraints applied.
The following must be considered when benchmarking:
Least-Effort Results. Least-effort benchmarking involves very little user intervention to optimize performance with FPGA CAD tools such as Altera’s Quartus® II software or Xilinx’s ISE software. Least-effort results give an initial estimate of performance and resource usage; a minimal least-effort run is sketched after this list.
Best-Effort Results. Best-effort results reflect significant user intervention to achieve the best possible outcome. They typically require appropriate software settings and timing constraints for each design, and they can also involve multiple compilations and iterations; the second sketch after this list illustrates one such iteration loop.
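As a rough illustration, a least-effort run can be as simple as a single push-button compilation with every setting left at its default. The sketch below assumes the Quartus II command-line tools are on the PATH; the project name is hypothetical, and this is a minimal sketch, not Altera's actual benchmarking harness.

import subprocess

PROJECT = "benchmark_design"  # hypothetical project name

# One push-button compile: all tool settings at their defaults and no user
# timing constraints applied. The resulting performance and resource counts
# are the least-effort data points.
subprocess.run(["quartus_sh", "--flow", "compile", PROJECT], check=True)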
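A best-effort flow, by contrast, assumes the engineer has already added design-specific settings and timing constraints to the project files and then explores multiple compilations. The sketch below uses fitter seed sweeping as one illustrative form of that iteration; the project name and seed range are assumptions, not part of the methodology described here.

import subprocess

PROJECT = "benchmark_design"  # hypothetical project name

# Design-specific settings (.qsf) and timing constraints (.sdc) are assumed
# to be in place before the sweep starts.
for seed in range(1, 6):
    # Re-run synthesis, fitting, and timing analysis with a different fitter
    # seed each time; seed sweeping is one common way to perform the
    # "multiple compilations and iterations" mentioned above.
    subprocess.run(["quartus_map", PROJECT], check=True)
    subprocess.run(["quartus_fit", PROJECT, f"--seed={seed}"], check=True)
    subprocess.run(["quartus_sta", PROJECT], check=True)
    # A real driver would parse the achieved fmax from the timing report
    # after each pass and keep the best result.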
Figures 1 and 2 show how much the average results swing depending on whether least-effort or best-effort settings were used in the benchmarking activity.