Intel® FPGA SDK for OpenCL™ Standard Edition: Best Practices Guide
ID: 683176
Date: 9/24/2018
Public
5.1.1. Removing Loop-Carried Dependency
Based on the feedback from the optimization report, you can remove a loop-carried dependency by implementing a simpler memory access pattern.
Consider the following kernel:
 1 #define N 128
 2
 3 __kernel void unoptimized (__global int * restrict A,
 4                            __global int * restrict B,
 5                            __global int * restrict result)
 6 {
 7     int sum = 0;
 8
 9     for (unsigned i = 0; i < N; i++) {
10         for (unsigned j = 0; j < N; j++) {
11             sum += A[i*N+j];
12         }
13         sum += B[i];
14     }
15
16     *result = sum;
17 }
The optimization report for kernel unoptimized resembles the following:
==================================================================================
Kernel: unoptimized
==================================================================================
The kernel is compiled for single work-item execution.

Loop Report:

+ Loop "Block1" (file k.cl line 9)
| Pipelined with successive iterations launched every 2 cycles due to:
|
|   Pipeline structure: every terminating loop with subloops has iterations
|   launched at least 2 cycles apart.
|   Having successive iterations launched every two cycles should still lead to
|   good performance if the inner loop is pipelined well and has sufficiently
|   high number of iterations.
|
|   Iterations executed serially across the region listed below.
|   Only a single loop iteration will execute inside the listed region.
|   This will cause performance degradation unless the region is pipelined well
|   (can process an iteration every cycle).
|
|     Loop "Block2" (file k.cl line 10)
|     due to:
|       Data dependency on variable sum (file k.cl line 7)
|
|
|-+ Loop "Block2" (file k.cl line 10)
    Pipelined well. Successive iterations are launched every cycle.
==================================================================================
- The first row of the report indicates that the offline compiler successfully infers pipelined execution for the outer loop, and that a new loop iteration launches every other cycle.
- The message "due to Pipeline structure" indicates that the offline compiler creates a pipeline structure that causes an outer loop iteration to launch every two cycles. This behavior is not a result of how you structure your kernel code.
Note: For recommendations on how to structure your single work-item kernel, refer to the Good Design Practices for Single Work-Item Kernel section.
- The remaining messages in the first row of the report indicate that the loop executes a single iteration at a time across the subloop because of the data dependency on the variable sum. This data dependency exists because each outer loop iteration requires the value of sum from the previous iteration to return before the inner loop can start executing.
- The second row of the report notifies you that the inner loop executes in a pipelined fashion with no performance-limiting loop-carried dependencies.
To optimize the performance of this kernel, remove the data dependency on variable sum so that the outer loop iterations do not execute serially across the subloop. Perform the following tasks to decouple the computations involving sum in the two loops:
1. Define a local variable (for example, sum2) for use in the inner loop only.
2. Use the local variable from Step 1 to store the cumulative values of A[i*N + j] as the inner loop iterates.
3. In the outer loop, use the variable sum to store the cumulative values of B[i] and the value stored in the local variable.
Below is the restructured kernel, optimized:
 1 #define N 128
 2
 3 __kernel void optimized (__global int * restrict A,
 4                          __global int * restrict B,
 5                          __global int * restrict result)
 6 {
 7     int sum = 0;
 8
 9     for (unsigned i = 0; i < N; i++) {
10         // Step 1: Definition
11         int sum2 = 0;
12
13         // Step 2: Accumulation of array A values for one outer loop iteration
14         for (unsigned j = 0; j < N; j++) {
15             sum2 += A[i*N+j];
16         }
17
18         // Step 3: Addition of array B value for an outer loop iteration
19         sum += sum2;
20         sum += B[i];
21     }
22
23     *result = sum;
24 }
An optimization report similar to the one below indicates the successful removal of the loop-carried dependency on the variable sum:
==================================================================================
Kernel: optimized
==================================================================================
The kernel is compiled for single work-item execution.

Loop Report:

+ Loop "Block1" (file optimized.cl line 9)
| Pipelined with successive iterations launched every 2 cycles due to:
|
|   Pipeline structure: every terminating loop with subloops has iterations
|   launched at least 2 cycles apart.
|   Having successive iterations launched every two cycles should still lead to
|   good performance if the inner loop is pipelined well and has sufficiently
|   high number of iterations.
|
|
|-+ Loop "Block2" (file optimized.cl line 14)
    Pipelined well. Successive iterations are launched every cycle.
==================================================================================
You have successfully addressed all of the loop-carried dependency issues when you see only the following messages in the optimization report:
- Pipelined execution inferred for innermost loops.
- Pipelined execution inferred. Successive iterations launched every 2 cycles due to: Pipeline structure, for all other loops.
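The restructuring changes only the order in which partial sums are folded into the variable sum; because integer addition is associative, the kernel produces the same result as before. If you want to confirm the equivalence on the host before recompiling for hardware, a minimal plain-C sketch along the following lines can be used. The helper names and test data here are illustrative only and are not part of the SDK.

#include <assert.h>
#include <stdio.h>

#define N 128

/* Reference model of the original (unoptimized) accumulation order. */
static int sum_unoptimized(const int *A, const int *B) {
    int sum = 0;
    for (unsigned i = 0; i < N; i++) {
        for (unsigned j = 0; j < N; j++)
            sum += A[i * N + j];
        sum += B[i];
    }
    return sum;
}

/* Reference model of the restructured (optimized) accumulation order. */
static int sum_optimized(const int *A, const int *B) {
    int sum = 0;
    for (unsigned i = 0; i < N; i++) {
        int sum2 = 0;                 /* private accumulator for the inner loop */
        for (unsigned j = 0; j < N; j++)
            sum2 += A[i * N + j];
        sum += sum2;                  /* fold the partial sum back in once per i */
        sum += B[i];
    }
    return sum;
}

int main(void) {
    static int A[N * N], B[N];
    for (int i = 0; i < N * N; i++) A[i] = i % 7;   /* arbitrary test data */
    for (int i = 0; i < N; i++)     B[i] = i % 5;

    assert(sum_unoptimized(A, B) == sum_optimized(A, B));
    printf("Both accumulation orders produce %d\n", sum_optimized(A, B));
    return 0;
}

Keeping sum2 private to each outer-loop iteration is what allows the inner loop to pipeline independently of sum, which is exactly what the second report shows.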
Related Information
Good Design Practices for Single Work-Item Kernel