Intel FPGA SDK for OpenCL Pro Edition: Programming Guide
Version Information
Updated for: Intel® Quartus® Prime Design Suite 20.4
1. Intel FPGA SDK for OpenCL Overview
1.1. Intel FPGA SDK for OpenCL Pro Edition Programming Guide Prerequisites
Before using the Intel® FPGA SDK for OpenCL™ or the Intel® FPGA Runtime Environment (RTE) for OpenCL to program your device, familiarize yourself with the respective getting started guides. This document assumes that you have performed the following tasks:
- For developing and deploying OpenCL kernels, download the tar file and run the installers to install the SDK, the Intel® Quartus® Prime Pro Edition software, and device support.
- For deployment of OpenCL kernels, download and install the RTE.
- If you want to use the SDK or the RTE to program an Intel® SoC FPGA, you also have to download and install the Intel® SoC FPGA Embedded Development Suite (EDS) Pro Edition.
- Install and set up your FPGA board.
- Verify that board installation is successful, and the board functions correctly.
If you have not performed the tasks described above, refer to the SDK's getting started guides for more information.
1.2. Intel FPGA SDK for OpenCL FPGA Programming Flow
The following SDK components work together to program an Intel® FPGA:
- The host application and the host compiler
- The OpenCL kernel(s) and the offline compiler
- The Custom Platform
The Custom Platform provides the board support package. Typically, the board manufacturer develops the Custom Platform that supports a specific OpenCL board. The offline compiler targets the Custom Platform when compiling an OpenCL kernel to generate a hardware programming image. The host then runs the host application, which typically programs the hardware image onto the FPGA and executes it.
In a sequential implementation of a program (for example, on a conventional processor), the program counter controls which instructions execute on the hardware, and in what order, across time. In a spatial implementation of a program, such as a program implemented within the Intel® FPGA SDK for OpenCL™, instructions execute as soon as their prerequisite data is available. Programs are interpreted as a series of connections representing the data dependencies.
2. Intel FPGA SDK for OpenCL Offline Compiler Kernel Compilation Flows
An OpenCL kernel source file (.cl) contains your OpenCL kernel source code that runs on the FPGA. The offline compiler groups one or more kernels into a temporary file and then compiles this file to generate the following files and folders:
- A .aoco file is an intermediate object file that contains information for later stages of the compilation. It is not saved unless the aoc option -save-temps is specified.
- A .aocx image file is the hardware configuration file and contains information necessary to program the FPGA at runtime.
- The work folder or subdirectory, which contains data necessary to create the .aocx file. By default, the name of the work directory is the name of your .cl file. If you compile multiple kernel source files, the name of the work directory is the name of the last .cl file you list in the aoc command line.
The .aocx file contains data that the host application uses to create program objects, a concept within the OpenCL runtime API, for the target FPGA. The host application first loads these program objects into memory. Then the host runtime uses these program objects to program the target FPGA, as required for kernel launch operations by the host program.
2.1. One-Step Compilation for Simple Kernels
The following figure illustrates the OpenCL kernel design flow that has a single compilation step.
A successful compilation results in the following files and reports:
- A .aocr file
- A .aocx file
- In the <your_kernel_filename>/reports/report.html file, the estimated resource usage summary provides a preliminary assessment of area usage. If you have a single work-item kernel, the optimization report identifies performance bottlenecks.
2.2. Multistep Intel FPGA SDK for OpenCL Pro Edition Design Flow
The following figure outlines the stages in the SDK's design flow. The steps in the design flow serve as checkpoints for identifying functional errors and performance bottlenecks. They allow you to modify your OpenCL kernel code without performing a full compilation on each iteration. You have the option to perform some or all of the compilation steps.
The SDK's design flow includes the following steps:
- Emulation
Assess the functionality of your OpenCL kernel by executing it on one or multiple emulation devices on an x86-64 host system. For Linux systems, the Emulator offers symbolic debug support. Symbolic debug allows you to locate the origins of functional errors in your kernel code.
- Intermediate Compilation
There are two available intermediate compilation steps. You have the option to include one or both of these compilation steps in your design flow.
- Compile one or more .cl kernel source files using the -c flag. Doing so instructs the offline compiler to generate .aoco object files that contain the output from the OpenCL parser.
- Compile one or more .cl kernel source files or .aoco files, but not both, using the -rtl flag. Doing so instructs the offline compiler to perform the following tasks:
- If the input files are .cl files, the offline compiler generates an intermediate .aoco file for each kernel source file and then links them to generate a .aocr file.
- If the input files are .aoco files, the offline compiler links them to generate a .aocr file.
- The offline compiler also creates a <your_kernel_filename> directory.
The offline compiler uses the .aocr file to generate the final .aocx hardware configuration file.
Note: If you compile your kernel(s) using the -c flag in an environment where the default board is X, and then you compile your .aoco files using the -rtl flag in an environment where the default board is Y, the offline compiler reads board X from the .aoco files and passes it on to the subsequent compilation stages.
- Review HTML Report
Review the <your_kernel_filename>/reports/report.html file of your OpenCL application to determine whether the estimated kernel performance data is acceptable. The HTML report also provides suggestions on how you can modify your kernel to increase performance.
- Simulation (Preview)
Assess the functionality of your OpenCL kernel by running it through simulation. Simulation lets you assess the functional correctness and dynamic performance of your kernel without a long compilation time. You can capture and view waveforms for your kernel to help you debug it.
- Fast Compilation
Assess the functionality of your OpenCL kernel in hardware. The fast compilation step generates a .aocx file in a fraction of the time required to complete a full compilation. The Intel® FPGA SDK for OpenCL™ Offline Compiler reduces compilation time by performing only light optimizations.
- Incremental Compilation
Assess the functionality of your OpenCL kernel in hardware. The incremental compilation step generates a .aocx file by compiling only the kernels you have modified. The Intel® FPGA SDK for OpenCL™ Offline Compiler improves your productivity by scaling compilation times with the size of your design changes rather than the size of your overall design.
- Profiling
Instruct the Intel® FPGA SDK for OpenCL™ Offline Compiler to insert performance counters in the FPGA programming image. During execution, the counters collect performance information that you can then review in the Intel® FPGA Dynamic Profiler for OpenCL™ GUI.
- Full deployment
When you are satisfied with the performance of your OpenCL kernel throughout the design flow, perform a full compilation. The resulting .aocx file is suitable for deployment.
For more information about the HTML report and kernel profiling, refer to the Intel® FPGA SDK for OpenCL™ Pro Edition Best Practices Guide.
3. Obtaining General Information on Software, Compiler, and Custom Platform
- Displaying the Software Version (version)
To display the version of the Intel® FPGA SDK for OpenCL™, invoke the version utility command.
- Displaying the Compiler Version (-version)
To display the version of the Intel® FPGA SDK for OpenCL™ Offline Compiler, invoke the -version compiler command.
- Listing the Intel FPGA SDK for OpenCL Utility Command Options (help)
To display information on the Intel® FPGA SDK for OpenCL™ utility command options, invoke the help utility command.
- Listing the Intel FPGA SDK for OpenCL Offline Compiler Command Options (no argument, -help, or -h)
To display information on the Intel® FPGA SDK for OpenCL™ Offline Compiler command options, invoke the compiler command without an argument, or invoke the compiler command with the -help or -h command option.
- Listing the Available FPGA Boards and Custom Platforms (-list-boards and -list-board-packages)
To list the FPGA boards available in your Custom Platform, include the -list-boards option in the aoc command.
- Displaying the Compilation Environment of an OpenCL Binary (env)
To display the Intel® FPGA SDK for OpenCL™ Offline Compiler's input arguments and the environment for a compiled OpenCL design, invoke the env utility command.
3.1. Displaying the Software Version (version)
aocl <version>.<build> (Intel(R) FPGA SDK for OpenCL(TM), Version <version> Build <build>, Copyright (C) <year> Intel Corporation)
3.2. Displaying the Compiler Version (-version)
Intel(R) FPGA SDK for OpenCL(TM), 64-Bit Offline Compiler Version <version> Build <build> Copyright (C) <year> Intel Corporation
3.3. Listing the Intel FPGA SDK for OpenCL Utility Command Options (help)
3.3.1. Displaying Information on an Intel FPGA SDK for OpenCL Utility Command Option (help <command_option>)
aocl install - Installs a board onto your host system.
Usage: aocl install
Description: This command installs a board's drivers and other necessary software for the host operating system to communicate with the board. For example, this might install PCIe drivers.
3.4. Listing the Intel FPGA SDK for OpenCL Offline Compiler Command Options (no argument, -help, or -h)
- aoc
- aoc -help
- aoc -h
3.5. Listing the Available FPGA Boards and Custom Platforms (-list-boards and -list-board-packages)
- At a command prompt, invoke the aoc -list-boards command.
The Intel® FPGA SDK for OpenCL™ Offline Compiler generates an output that resembles the following:
Board list: <board_name_1> <board_name_2> ...
Where <board_name_N> is the board name you use in your aoc command to target a specific FPGA board.
- When multiple Custom Platforms are installed, to list the FPGA boards available in a specific Custom Platform, include the -board-package=<custom_platform_path> option in the aoc command. At the command prompt, invoke the command as:
aoc -board-package=<custom_platform_path> -list-boards
The Intel® FPGA SDK for OpenCL™ Offline Compiler lists the available boards within the Custom Platform.
- To list the Custom Platforms available in the system, include the -list-board-packages option in the aoc command. At a command prompt, invoke the aoc -list-board-packages command.
Note: Starting from the 20.3 release, support for Windows and Linux BSPs is removed. Use version 20.2 or older BSPs available at Download Center for FPGAs as a reference. If you want to migrate your BSP to a newer version, follow the recommended steps provided in the Reference Platform Porting Guides available under Intel FPGA SDK for OpenCL documentation.
The Intel® FPGA SDK for OpenCL™ Offline Compiler generates an output that resembles the following:
Installed board packages: <board_package_1> <board_package_2> ...
Where <board_package_N> is the board package of the Custom Platform installed in your system or shipped within the Intel® FPGA SDK for OpenCL™ .
3.6. Displaying the Compilation Environment of an OpenCL Binary (env)
4. Managing an FPGA Board
You can install multiple Custom Platforms simultaneously on the same system with the aocl install utility. The Custom Platform subdirectory contains the board_env.xml file.
In a system with multiple Custom Platforms, ensure that the host program uses the FPGA Client Driver (FCD), formerly Altera Client Driver (ACD), to discover the boards rather than linking to the Custom Platform memory-mapped device (MMD) libraries directly.
FCD is set up for you when you run the aocl install utility. The installed BSP is registered on the system so the runtime and SDK utilities can find the necessary BSP files.
Do not move a BSP to a different directory after you install it. To move a BSP:
- Uninstall the BSP from its current location with the aocl uninstall utility.
- Change the BSP directory.
- Reinstall the BSP in the new location with the aocl install utility.
- Installing an FPGA Board (install)
- Uninstalling an FPGA Board (uninstall)
To uninstall an FPGA board, invoke the uninstall utility command, uninstall the Custom Platform, and unset the relevant environment variables.
- Querying the Device Name of Your FPGA Board (diagnose)
When you query a list of accelerator boards, the OpenCL software produces a list of installed devices on your machine in the order of their device names.
- Running a Board Diagnostic Test (diagnose <device_name>)
To perform a detailed diagnosis on a specific FPGA board, include <device_name> as an argument of the diagnose utility command.
- Programming the FPGA Offline or without a Host (program <device_name>)
To program an FPGA device offline or without a host, invoke the program utility command.
- Programming the Flash Memory (flash <device_name>)
If supported by a Custom Platform, invoke the flash utility command to initialize the FPGA with a specified startup configuration.
4.1. Installing an FPGA Board (install)
To install your board into the host system, invoke the aocl install <path_to_customplatform> utility command.
The steps below outline the board installation procedure. Some Custom Platforms require additional installation tasks. Consult your board vendor's documentation for further information on board installation.
- If you are installing the Intel® Arria® 10 SoC Development Kit for use with the Intel® Arria® 10 SoC Development Kit Reference Platform (a10soc), refer to the Installing the Intel® Arria® 10 Development Kit section in AN 807: Configuring the Intel® Arria® 10 GX FPGA Development Kit for the Intel FPGA SDK for OpenCL for more information.
- If you want to use Intel® FPGA SDK for OpenCL™ with the Intel® Arria® 10 GX FPGA Development Kit, refer to the Application Note AN 807: Configuring the Intel® Arria® 10 GX FPGA Development Kit for the Intel FPGA SDK for OpenCL for more information.
- Follow your board vendor's instructions to connect the FPGA board to your system.
- Download the Custom Platform for your FPGA board from your board vendor's website.
Note: Starting from the 20.3 release, support for Windows BSPs is removed. Use 20.2 or older BSPs available at Download Center for FPGAs as a reference. If you want to migrate your BSP to a newer version, follow the recommended steps provided in the Reference Platform Porting Guides available under Intel FPGA SDK for OpenCL documentation.
- Install the Custom Platform in a folder that you own (that is, not a system folder).
You can install multiple Custom Platforms simultaneously on the same system using the SDK utilities, such as aocl diagnose with multiple Custom Platforms. The Custom Platform subdirectory contains the board_env.xml file.
In a system with multiple Custom Platforms, ensure that the host program uses the FPGA Client Driver (FCD) to discover the boards rather than linking to the Custom Platforms' memory-mapped device (MMD) libraries directly. If FCD is correctly set up for the Custom Platform, FCD finds all the installed boards at runtime.
- Set the QUARTUS_ROOTDIR_OVERRIDE user environment variable to point to the Intel® Quartus® Prime Pro Edition software installation directory.
- Add the paths to the Custom Platform libraries (for example, the path to the MMD library of the board support package resembles <path_to_customplatform>/windows64/bin) to the PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable setting.
The Intel® FPGA SDK for OpenCL™ Pro Edition Getting Started Guide contains more information on the init_opencl script. For information on setting user environment variables and running the init_opencl script, refer to the Setting the Intel® FPGA SDK for OpenCL™ Pro Edition User Environment Variables section.
- Invoke the command aocl install <path_to_customplatform> at a command prompt.
Invoking aocl install <path_to_customplatform> installs both the FCD and a board driver that allows communication between host applications and hardware kernel programs.
Remember:
- You need administrative rights to install a board. To run a Windows command prompt as an administrator, click Start > All Programs > Accessories. Under Accessories, right-click Command Prompt and, in the right-click menu, click Run as Administrator.
On Windows 8.1 or Windows 10 systems, you might also need to disable signed driver verification. For details, see the following articles:
- Windows 8: https://www.intel.com/content/altera-www/global/en_us/index/support/support-resources/knowledge-base/solutions/fb321729.html
- Windows 10: https://www.intel.com/content/altera-www/global/en_us/index/support/support-resources/knowledge-base/embedded/2017/Why-does-aocl-diagnose-fail-while-using-Windows-10.html
- If the system already has the driver installed and you need to install the FCD without administrative rights, you can invoke the aocl install command with the -fcd-only flag as shown below and follow the prompt for FCD installation:
aocl install <path_to_customplatform> -fcd-only
- Query a list of FPGA devices installed in your machine by invoking the aocl diagnose command.
The software generates an output that includes the <device_name>, which is an acl number that ranges from acl0 to acl127.
Attention: For possible errors after running the aocl diagnose utility, refer to the Possible Errors After Running the diagnose Utility section in the Intel® Arria® 10 GX FPGA Development Kit Reference Platform Porting Guide. For more information on querying the <device_name> of your accelerator board, refer to the Querying the Device Name of Your FPGA Board section.
- Verify the successful installation of the FPGA board by invoking the command aocl diagnose <device_name> to run any board vendor-recommended diagnostic test.
4.2. Uninstalling an FPGA Board (uninstall)
To uninstall your FPGA board, perform the following tasks:
- Disconnect the board from your machine by following the instructions provided by your board vendor.
- Invoke the aocl uninstall <path_to_customplatform> utility command to remove the current host computer drivers (for example, PCIe® drivers). The Intel® FPGA SDK for OpenCL™ uses these drivers to communicate with the FPGA board.
Remember:
- You need root privileges to uninstall the Custom Platform. If you want to keep the driver while removing the installed FCD, you can invoke the aocl uninstall command with the -fcd-only flag as shown below and follow the prompt for FCD uninstallation:
aocl uninstall <path_to_customplatform> -fcd-only
- For Linux systems, if you installed the FCD to a specific directory, then prior to uninstalling, ensure that the ACL_BOARD_VENDOR_PATH environment variable points to that FCD installation directory.
- Uninstall the Custom Platform.
- Unset the LD_LIBRARY_PATH (for Linux) or PATH (for Windows) environment variable.
4.3. Querying the Device Name of Your FPGA Board (diagnose)
aocl diagnose: Running diagnostic from <board_package_path>/<board_name>/<platform>/libexec
Verified that the kernel mode driver is installed on the host machine.
Using board package from vendor: <board_vendor_name>
Querying information for all supported devices that are installed on the host machine ...

device_name Status Information
acl0 Passed <descriptive_board_name>
            PCIe dev_id = <device_ID>, bus:slot.func = 02:00.00, at Gen 2 with 8 lanes.
            FPGA temperature = 43.0 degrees C.
acl1 Passed <descriptive_board_name>
            PCIe dev_id = <device_ID>, bus:slot.func = 03:00.00, at Gen 2 with 8 lanes.
            FPGA temperature = 35.0 degrees C.

Found 2 active device(s) installed on the host machine, to perform a full diagnostic on a specific device, please run aocl diagnose <device_name>

DIAGNOSTIC_PASSED
4.4. Running a Board Diagnostic Test (diagnose <device_name>)
4.5. Programming the FPGA Offline or without a Host (program <device_name>)
<device_name> refers to the acl number (for example, acl0 to acl127) that corresponds to your FPGA device, and <your_kernel_filename>.aocx is the executable file you use to program the hardware.
4.6. Programming the Flash Memory (flash <device_name>)
For example instructions on programming the micro SD flash card on an SoC board such as the Intel® Arria® 10 SoC Development Kit, refer to the Building the SD Card Image section of the Intel® FPGA SDK for OpenCL™ Intel® Arria® 10 SoC Development Kit Reference Platform Porting Guide.
<device_name> refers to the acl number (for example, acl0 to acl127) that corresponds to your FPGA device, and <your_kernel_filename>.aocx is the executable file you use to program the hardware.
5. Structuring Your OpenCL Kernel
- Guidelines for Naming the Kernel
Intel® recommends that you include only alphanumeric characters in your file names.
- Programming Strategies for Optimizing Data Processing Efficiency
Optimize the data processing efficiency of your kernel by implementing strategies such as unrolling loops, setting work-group sizes, and specifying compute units and work-items.
- Programming Strategies for Optimizing Pointer-to-Local Memory Size
This specification allows the offline compiler to build the correctly sized local memory system for the pointer argument. If you do not specify a size, the offline compiler uses the default size.
- Implementing the Intel FPGA SDK for OpenCL Channels Extension
The Intel® FPGA SDK for OpenCL™ channels extension provides a mechanism for passing data between kernels and synchronizing kernels with high efficiency and low latency.
- Implementing OpenCL Pipes
The Intel® FPGA SDK for OpenCL™ provides preliminary support for OpenCL pipe functions.
- Implementing Arbitrary Precision Integers
Use the Intel® FPGA SDK for OpenCL™ arbitrary precision integer extension to define integers with a custom bit-width. You can define integer custom bit-widths up to and including 64 bits.
- Using Predefined Preprocessor Macros in Conditional Compilation
You may take advantage of predefined preprocessor macros that allow you to conditionally compile portions of your kernel code.
- Declaring __constant Address Space Qualifiers
There are several limitations and workarounds you must consider when you include __constant address space qualifiers in your kernel.
- Including Structure Data Types as Arguments in OpenCL Kernels
Pass structure parameters (struct) in OpenCL kernels either by value or as a pointer to a structure.
- Inferring a Register
In general, the offline compiler chooses registers if the access to a variable is fixed and does not require any dynamic indexes.
- Enabling Double Precision Floating-Point Operations
The Intel® FPGA SDK for OpenCL™ offers preliminary support for all double precision floating-point functions.
- Single-Cycle Floating-Point Accumulator for Single Work-Item Kernels
Single work-item kernels that perform accumulation in a loop can take advantage of the single-cycle floating-point accumulator feature of the Intel® FPGA SDK for OpenCL™ Offline Compiler.
- Integer Promotion Rules
The rules of integer promotion applied when you use intX_t data types are different from standard C/C++ rules. Your kernel design should account for these differing rules.
5.1. Guidelines for Naming the Kernel
- Begin a file name with an alphanumeric character.
If the file name of your OpenCL™ application begins with a nonalphanumeric character, compilation fails with the following error message:
Error: Quartus compilation FAILED See quartus_sh_compile.log for the output log.
- Ensure that the kernel file name contains only alphanumeric characters, dashes, underscores, or dots.
The Intel® FPGA SDK for OpenCL™ accepts only file names that contain alphanumeric characters, dashes, underscores, or dots. A file name containing any other characters is treated as invalid and triggers the following compilation error message:
aoc foo\*1.cl
Error: File: foo*1.cl contains invalid characters. Ensure the file name only contains alphanumeric characters, dash, underscore or dot.
- For Windows systems, ensure that the combined length of the kernel file name and its file path does not exceed 260 characters.
64-bit Windows 7 and Windows 8.1 have a 260-character limit on the length of a file path. If the combined length of the kernel file name and its file path exceeds 260 characters, the offline compiler generates the following error message:
The filename or extension is too long. The system cannot find the path specified.
In addition to the compiler error message, the following error message appears in the <your_kernel_filename>/quartus_sh_compile.log file:
Error: Can’t copy <file_type> files: Can’t open <your_kernel_filename> for write: No such file or directory
For Windows 10, you can remove the 260-character limit. For more information, see your Windows 10 documentation.
- Do not name your .cl OpenCL kernel source file "kernel", "Verilog", or "VHDL", as these are reserved keywords.
Naming the source file kernel.cl, Verilog.cl, or VHDL.cl causes the offline compiler to generate intermediate design files that have the same names as certain internal files, which leads to a compilation error.
5.2. Programming Strategies for Optimizing Data Processing Efficiency
- Unrolling a Loop (unroll Pragma)
To direct the offline compiler to unroll a loop, or explicitly not to unroll a loop, insert an unroll kernel pragma in the kernel code preceding a loop you want to unroll.
- Disabling Pipelining of a Loop (disable_loop_pipelining Pragma)
If loop-carried dependencies result in an initiation interval (II) that is equal or close to the latency of a given iteration (effectively inducing serial execution of the pipelined loop), disable pipelining of the loop to generate a simpler datapath and reduce area utilization.
- Coalescing Nested Loops
Use the loop_coalesce pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to coalesce nested loops into a single loop without affecting the loop functionality. Coalescing loops can help reduce your kernel area usage by directing the compiler to reduce the overhead needed for loop control.
- Fusing Adjacent Loops (loop_fuse Pragma)
Use the loop_fuse pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to fuse adjacent loops into a single loop without affecting either loop's functionality. The loop_fuse construct defines a region of code where the compiler always attempts to fuse adjacent loops when it is safe to do so.
- Marking Loops to Prevent Automatic Fusion (nofusion Pragma)
Use the nofusion pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to avoid fusing the annotated loop with any of the adjacent loops.
- Specifying a Loop Initiation Interval (II)
Use the ii pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to attempt to set the II for the loop that follows the pragma declaration. If the offline compiler cannot achieve the specified II for the loop, then the compilation errors out.
- Loop Concurrency (max_concurrency Pragma)
You can use the max_concurrency pragma to limit the concurrency of a loop in your component.
- Loop Speculation (speculated_iterations Pragma)
Use the speculated_iterations pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to improve the performance of pipelined loops.
- Loop Interleaving Control (max_interleaving Pragma)
The Intel® FPGA SDK for OpenCL™ Offline Compiler attempts to maximize the throughput and hardware resource occupancy of pipelined inner loops in a loop nest by issuing new inner loop iterations as frequently as possible (minimizing the loop initiation interval).
- Floating Point Optimizations (fp contract and fp reassoc Pragma)
Use the fp contract and fp reassoc pragmas to influence the intermediate rounding and conversions of floating-point operations and the ordering of arithmetic operations in your kernel at a finer granularity than the Intel® FPGA SDK for OpenCL™ Offline Compiler command options.
- Specifying Work-Group Sizes
Specify a maximum or required work-group size whenever possible.
- Specifying Number of Compute Units
To increase the data-processing efficiency of an OpenCL™ kernel, you can instruct the Intel® FPGA SDK for OpenCL™ Offline Compiler to generate multiple kernel compute units. Each compute unit is capable of executing multiple work-groups simultaneously.
- Specifying Number of SIMD Work-Items
Specify the number of work-items within a work-group that the Intel® FPGA SDK for OpenCL™ Offline Compiler should execute in an SIMD or vectorized manner.
- Specifying the private_copies Memory Attribute
You have the option to apply the private_copies memory attribute to a variable declaration inside an OpenCL kernel.
- Specifying the stall_enable Cluster-control Attribute
You can apply the stall_enable cluster-control attribute to your OpenCL kernel to reduce the area of your kernel while possibly decreasing kernel fMAX and throughput.
5.2.1. Unrolling a Loop (unroll Pragma)
Loop unrolling involves replicating a loop body multiple times, and reducing the trip count of a loop. Unroll loops to reduce or eliminate loop control overhead on the FPGA. In cases where there are no loop-carried dependencies and the offline compiler can perform loop iterations in parallel, unrolling loops can also reduce latency and overhead on the FPGA.
- Provide an unroll factor whenever possible. To specify an unroll factor N, insert the #pragma unroll <N> directive before a loop in your kernel code.
The offline compiler attempts to unroll the loop at most <N> times.
Consider the code fragment below. By assigning a value of 2 as the unroll factor, you direct the offline compiler to unroll the loop twice.
#pragma unroll 2
for (size_t k = 0; k < 4; k++) {
    mac += data_in[(gid * 4) + k] * coeff[k];
}
- To prevent a loop from unrolling, specify an unroll factor of 1 (that is, #pragma unroll 1).
- To unroll a loop fully, you may omit the unroll factor by simply inserting the #pragma unroll directive before a loop in your kernel code.
The offline compiler attempts to unroll the loop fully if it understands the trip count. The offline compiler issues a warning if it cannot execute the unroll request.
Consider the following code fragment where the unroll factor is 2:
float data[N];
#pragma unroll 2
for (int i = 0; i < N; i++) {
    data[i] = function(i, a);
}
The offline compiler partially unrolls the loop as shown in the following code fragment:
float data[N];
for (int i = 0; i < N; i += 2) {
    data[i + 0] = function(i + 0, a);
    if (i + 1 < N) {
        data[i + 1] = function(i + 1, a);
    }
}
5.2.2. Disabling Pipelining of a Loop (disable_loop_pipelining Pragma)
Use the disable_loop_pipelining pragma to direct the Intel® FPGA SDK for OpenCL™ Offline Compiler to disable pipelining of a loop. This pragma applies to single work-item kernels (that is, single-threaded kernels) in which loops are pipelined. Refer to the Single Work-Item Kernel versus NDRange Kernel section of the Intel® FPGA SDK for OpenCL™ Pro Edition Best Practices Guide for information about loop pipelining and kernel properties that drive the offline compiler's decision about whether to treat a kernel as single-threaded.
Unless otherwise specified, the compiler always attempts to generate a pipelined loop datapath where possible. When generating a pipelined circuit, resources of the loop must be duplicated to execute multiple iterations simultaneously, leading to an increased silicon area utilization. In cases where loop pipelining does not result in an improvement in throughput, avoid the area overhead by applying the disable_loop_pipelining pragma to the loop, as shown in the following code snippet. When you apply this pragma, the offline compiler generates a simple sequential loop datapath.
#pragma disable_loop_pipelining
for (int i = 1; i < N; i++) {
    int j = a[i-1]; // Memory dependency induces a high-latency loop feedback path
    a[i] = foo(j);
}
In the above example, the offline compiler fails to schedule this loop with a small II due to memory dependency (as reported in the Details pane of the Loops Analysis section of the HTML report). In such cases, loop pipelining is unlikely to be beneficial.
5.2.3. Coalescing Nested Loops
Coalescing nested loops reduces the latency of the component and can further reduce your kernel area usage. However, in some cases, coalescing loops might lengthen the critical loop initiation interval path, so loop coalescing might not be suitable for all kernels.
For NDRange kernels, the compiler automatically attempts to coalesce loops even if they are not annotated by the loop_coalesce pragma. Coalescing loops in NDRange kernels improves throughput and reduces kernel area usage. You can use the loop_coalesce pragma to prevent the automatic coalescing of loops in NDRange kernels.
#pragma loop_coalesce <loop_nesting_level>
The <loop_nesting_level> parameter is optional and is an integer that specifies how many nested loop levels you want the compiler to attempt to coalesce. If you do not specify the <loop_nesting_level> parameter, the compiler attempts to coalesce all of the nested loops.
for (A)
    for (B)
        for (C)
            for (D)
        for (E)
- Loop (A) has a loop nesting level of 1.
- Loop (B) has a loop nesting level of 2.
- Loop (C) has a loop nesting level of 3.
- Loop (D) has a loop nesting level of 4.
- Loop (E) has a loop nesting level of 3.
- If you specify #pragma loop_coalesce 1 on loop (A), the compiler does not attempt to coalesce any of the nested loops.
- If you specify #pragma loop_coalesce 2 on loop (A), the compiler attempts to coalesce loops (A) and (B).
- If you specify #pragma loop_coalesce 3 on loop (A), the compiler attempts to coalesce loops (A), (B), (C), and (E).
- If you specify #pragma loop_coalesce 4 on loop (A), the compiler attempts to coalesce all of the loops [loop (A) - loop (E)].
Example
The following simple example shows how the compiler coalesces two loops into a single loop.
#pragma loop_coalesce
for (int i = 0; i < N; i++)
    for (int j = 0; j < M; j++)
        sum[i][j] += i + j;
The coalesced loop executes as if it were written as follows:

int i = 0;
int j = 0;
while (i < N) {
    sum[i][j] += i + j;
    j++;
    if (j == M) {
        j = 0;
        i++;
    }
}
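As a host-side sanity check (plain C, illustrative; not generated compiler output), the two forms can be verified to update the array identically:

```c
#define ROWS 3
#define COLS 4

/* The original nested loops. */
void nested(int sum[ROWS][COLS]) {
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum[i][j] += i + j;
}

/* The coalesced single-loop form that the compiler generates. */
void coalesced(int sum[ROWS][COLS]) {
    int i = 0, j = 0;
    while (i < ROWS) {
        sum[i][j] += i + j;
        j++;
        if (j == COLS) { j = 0; i++; }
    }
}
```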
5.2.4. Fusing Adjacent Loops (loop_fuse Pragma)
Fusing adjacent loops can reduce your kernel area use by reducing the loop control overhead, and can increase the performance of your kernel by executing both original loops concurrently as one (fused) loop.
To specify a block of program code within which the compiler attempts to fuse loops, specify the pragma as follows:
#pragma loop_fuse [clause[[,]clause]...] new-line structured_block
where clause is one of the following:
- depth(constant-integer-expression)
- If a depth clause is present, the constant-integer-expression parameter defines the nesting depths at which the fusion of adjacent loops is attempted. The depth clause extends the applicability of the loop_fuse construct to all loops nested in top-level loops contained in the construct at a nesting depth less than or equal to the clause parameter, including loops that become adjacent as a result of the fusion of their containing loops. In the absence of a depth clause, the compiler attempts to fuse only loops at the top level of the loop_fuse construct (that is, loops not contained in other loops defined within the construct). A depth clause with a parameter of 1 is equivalent to the absence of a depth clause.
- independent
- If an independent clause is present, adjacent loops that are fusion candidates within a loop_fuse construct are assumed to have no negative-distance data access dependencies. That is, for two adjacent loops considered for fusion, iterations of the logically-second loop do not access data elements produced in a later iteration of the logically-first loop. The independent clause overrides the offline compiler's static analysis during loop fusion safety analysis.
If a function call is present in a loop_fuse construct at any of the applicable nesting depths and inlining the function call materializes a loop, then the resulting loop is considered to be a candidate for fusion.
Nested Depth Clauses
In programs where loop_fuse constructs are nested and their implied sets of fusion candidates overlap, the overall set of fusion candidates comprises a union of all loops covered by the distinct loop_fuse regions. The loop_fuse attribute clauses apply only to the fusion candidates implied by the directive to which the clauses apply.
#pragma loop_fuse depth(2) independent
{
    L1: for(...) {}
    L2: for(...) {
        #pragma loop_fuse depth(2)
        {
            L3: for(...) {}
            L4: for(...) {
                L5: for(...) {}
                L6: for(...) {}
            }
        }
    }
}

In this example, loops L1 through L6 are all considered for fusion. For loops L1, L2, L3, and L4, the independent clause of the outer construct overrides the compiler's dependence analysis.
5.2.5. Marking Loops to Prevent Automatic Fusion (nofusion Pragma)
To specify a loop to not be eligible for fusion, specify the nofusion pragma as follows:
#pragma nofusion
for (int i = 0; i < N; ++i) {
    ...
}
In the following example, the compiler does not apply the loop fusion transformation to loops L1 and L2.
#pragma nofusion
L1: for (int j = 0; j < N; ++j) {
    data[j] += Q;
}

L2: for (int i = 0; i < N; ++i) {
    output[i] = Q * data[i];
}
5.2.6. Specifying a Loop Initiation Interval (II)
The ii pragma applies to single work-item kernels (that is, single-threaded kernels) in which loops are pipelined. Refer to the Single Work-Item Kernel versus NDRange Kernel section of the Intel® FPGA SDK for OpenCL™ Best Practices Guide for information on loop pipelining, and on kernel properties that drive the offline compiler's decision on whether to treat a kernel as single-threaded.
The higher the II value, the longer the wait before the subsequent loop iteration starts executing. Refer to the Reviewing Your Kernel's report.html File section of the Intel® FPGA SDK for OpenCL™ Best Practices Guide for information on II, and on the compiler reports that provide you with details on the performance implications of II on a specific loop.
For some loops in your kernel, specifying a higher II value with the ii pragma than the value the compiler chooses by default can increase the maximum operating frequency (fMAX) of your kernel without a decrease in throughput. The ii pragma is most helpful when all of the following conditions are true:
- The loop is pipelined because the kernel is single-threaded.
- The loop is not critical to the throughput of your kernel .
- The running time of the loop is small compared to that of other loops in the kernel.
#pragma ii <desired_initiation_interval>

The <desired_initiation_interval> parameter is required and is an integer that specifies the number of clock cycles to wait between the beginning of execution of successive loop iterations.
Example
Consider a case where your kernel has two distinct, pipelineable loops: a short-running initialization loop that has a loop-carried dependence and a long-running loop that does the bulk of your processing. In this case, the compiler does not know that the initialization loop has a much smaller impact on the overall throughput of your design. If possible, the compiler attempts to pipeline both loops with an II of 1.
Because the initialization loop has a loop-carried dependence, it has a feedback path in the generated hardware. To achieve an II with such a feedback path, some clock frequency might be sacrificed. Depending on the feedback path in the main loop, the rest of your design can run at a higher operating frequency.
If you specify #pragma ii 2 on the initialization loop, you tell the compiler that it can be less aggressive in optimizing the II of this loop. Less aggressive optimization allows the compiler to pipeline the fmax-limiting feedback path, which could allow your overall kernel design to achieve a higher fmax.
The initialization loop takes longer to run with its new II. However, the decrease in the running time of the long-running loop due to higher fmax compensates for the increased length in running time of the initialization loop.
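The scenario can be sketched as follows. This is an illustrative single work-item kernel structure written as plain C so that it can be exercised on a host; the lut array and the function names are hypothetical, and #pragma ii is meaningful only to the offline compiler (a host compiler ignores it with a warning).

```c
#define LUT_SIZE 8
#define DATA_SIZE 1024

int lut[LUT_SIZE];  /* hypothetical lookup table filled by the init loop */

void kernel_body(const int *src, int *dst) {
    /* Short-running initialization loop with a loop-carried dependence.
       Relaxing its II to 2 lets the offline compiler pipeline the feedback
       path so that this loop no longer limits the kernel's fmax. */
    #pragma ii 2
    for (int i = 1; i < LUT_SIZE; i++)
        lut[i] = lut[i - 1] + i;

    /* Long-running loop that does the bulk of the processing; it keeps
       the default, aggressively optimized II. */
    for (int i = 0; i < DATA_SIZE; i++)
        dst[i] = src[i] + lut[i % LUT_SIZE];
}
```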
5.2.7. Loop Concurrency (max_concurrency Pragma)
The concurrency of a loop is how many iterations of that loop can be in progress at one time. By default, the Intel® FPGA SDK for OpenCL™ tries to maximize the concurrency of loops so that your component runs at peak throughput.
The max_concurrency pragma applies to single work-item kernels (that is, single-threaded kernels) in which loops are pipelined. Refer to the Single Work-Item Kernel versus NDRange Kernel section of the Intel® FPGA SDK for OpenCL™ Pro Edition Best Practices Guide for information on loop pipelining, and on kernel properties that drive the offline compiler's decision on whether to treat a kernel as single-threaded.
The max_concurrency pragma enables you to control the on-chip memory resources required to pipeline your loop. To achieve simultaneous execution of loop iterations, the offline compiler must create copies of any memory that is private to a single iteration. These copies are called private copies. The greater the permitted concurrency, the more private copies the compiler must create.
The kernel's HTML report (report.html) provides the following information pertaining to loop concurrency:
- Maximum concurrency that the offline compiler has chosen
This information is available in the Loop Analysis report and the Kernel Memory Viewer:
- In the Loop Analysis report, a message in the Details pane reports that the maximum number of simultaneous executions has been limited to N.

  Note: The value of unsigned N can be greater than or equal to zero. A value of N = 0 indicates unlimited concurrency.

- In the Kernel Memory Viewer, the bank view of your local memory graphically shows the number of private copies.
- Impact to memory usage
This information is available in the Area Analysis report. A message in the Details pane reports that the offline compiler has created N independent copies of the memory to enable simultaneous execution of N loop iterations.
If you want to exchange some performance for physical memory savings, apply #pragma max_concurrency <N> to the loop, as shown below. When you apply this pragma, the offline compiler limits the number of simultaneously-executed loop iterations to N. The number of private copies of loop memories is also reduced to N.
#pragma max_concurrency 1
for (int i = 0; i < N; i++) {
    int arr[M];
    // Doing work on arr
}
You can also control the number of private copies (created for a local memory and accessed within a loop) by using __attribute__((private_copies(N))). Refer to Memory Attributes for Configuring Kernel Memory Systems for more details about the attribute. If a local memory with __attribute__((private_copies(N))) is accessed within a loop that has #pragma max_concurrency M, the offline compiler limits the number of simultaneously-executed loop iterations to min(M,N).
5.2.8. Loop Speculation (speculated_iterations Pragma)
The speculated_iterations pragma applies to loops, so it must appear directly before the loop (in the same place as other loop pragmas), as shown in the following example:
#pragma speculated_iterations k // where k >= 0
The Intel® FPGA SDK for OpenCL™ Offline Compiler generates hardware to run k extra iterations of the loop while ensuring that the extra iterations have no visible side effects. Speculation allows either reducing the II of the loop or increasing the fmax. The deciding factor is how quickly the exit condition of the loop can be calculated: if the calculation takes many cycles, a larger speculated_iterations value is better.
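For example (an illustrative loop, not from the SDK documentation): the exit condition below involves two multiplies, which take several cycles to compute in hardware, making the loop a natural candidate for speculation. The function is plain C so it can be exercised on a host; #pragma speculated_iterations is only meaningful to the offline compiler.

```c
/* Returns the largest m with m*m*m <= n (integer cube root).
   The multiply-heavy exit condition is slow to evaluate, so letting the
   compiler speculate a few extra (side-effect-free) iterations keeps the
   loop's II low. */
int icbrt(int n) {
    int m = 0;
    #pragma speculated_iterations 4
    while ((m + 1) * (m + 1) * (m + 1) <= n)
        m += 1;
    return m;
}
```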
5.2.9. Loop Interleaving Control (max_interleaving Pragma)
As an example, consider the loop nest in the following code snippet:
// Loop j is pipelined with ii=1
for (int j = 0; j < M; j++) {
    int a[N];
    // Loop i is pipelined with ii=2
    for (int i = 1; i < N; i++) {
        a[i] = foo(i);
    }
}
In this example, the inner i loop is pipelined with a loop II of 2. Under normal pipelining, this means that the inner loop hardware only achieves 50% utilization since one i iteration is initiated every other cycle. To take advantage of these idle cycles, the compiler interleaves a second invocation of the i loop from the next iteration of the outer j loop. Here, a loop invocation means to start pipelined execution of a loop body. In this example, since the i loop resides inside the j loop, and the j loop has a trip count of M, the i loop is invoked M times. Since the j loop is an outermost loop, it is invoked once. The following table illustrates the difference between normal pipelined execution of the i loop and interleaved execution for this example where N=5:
Cycle | Pipelined | Interleaved |
---|---|---|
0 | (0,0) | (0,0) |
1 | --- | (1,0) |
2 | (0,1) | (0,1) |
3 | --- | (1,1) |
4 | (0,2) | (0,2) |
5 | --- | (1,2) |
6 | (0,3) | (0,3) |
7 | --- | (1,3) |
8 | (0,4) | (0,4) |
9 | --- | (1,4) |
10 | (1,0) | (2,0) |
11 | --- | (3,0) |
12 | (1,1) | (2,1) |
13 | --- | (3,1) |
14 | (1,2) | (2,2) |
15 | --- | (3,2) |
16 | (1,3) | (2,3) |
17 | --- | (3,3) |
18 | (1,4) | (2,4) |
19 | --- | (3,4) |
The table shows the values (j,i) for each inner loop iteration that is initiated at each cycle. At cycle 0, both modes of execution initiate the (0,0)th iteration of the i loop. Under normal pipelined execution, no i loop iteration is initiated at cycle 1. Under interleaved execution, the (1,0)th iteration of the innermost loop, that is, the first iteration of the next (j=1) invocation of the i loop is initiated. By cycle 10, interleaved execution has initiated all of the iterations of both the j=0 invocation of the i loop and the j=1 invocation of the i loop. This represents twice the efficiency of the normal pipelined execution.
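The schedule in the table can be reproduced with a short host-side model (plain C, illustrative only): under interleaving, II invocations of the inner loop share the pipeline, each occupying a different phase of each II-cycle slot.

```c
/* Cycle at which iteration (j, i) of the inner loop is initiated, for an
   inner loop with trip count n and initiation interval ii. */

int pipelined_cycle(int j, int i, int n, int ii) {
    /* One invocation at a time: invocation j starts only after invocation
       j-1 has initiated all n of its iterations. */
    return j * n * ii + i * ii;
}

int interleaved_cycle(int j, int i, int n, int ii) {
    /* ii invocations share the pipeline; invocation j fills phase
       j % ii of each ii-cycle slot. */
    return (j / ii) * n * ii + i * ii + (j % ii);
}
```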
In some cases, you may decide that the performance benefit from interleaving does not justify the associated area cost. In these cases, you can limit or restrict the amount of interleaving to reduce FPGA area utilization. To limit the number of interleaved invocations of an inner loop that can execute simultaneously, annotate the inner loop with the max_interleaving pragma. The annotated loop must be contained inside another pipelined loop. The required parameter (n) specifies an upper bound on the degree of interleaving allowed, that is, how many invocations of the containing loop can execute the annotated loop at a given time.
Specify the max_interleaving pragma in one of the following ways:
- #pragma max_interleaving 1

  The compiler restricts the annotated (inner) loop to be invoked only once per outer loop iteration. That is, all iterations of the inner loop travel the pipeline before the next invocation of the inner loop can occur.
- #pragma max_interleaving 0

  The compiler allows the pipeline to contain a number of simultaneous invocations of the inner loop equal to the loop initiation interval (II) of the inner loop. For example, an inner loop with an II of 2 can have iterations from two invocations in the pipeline at a time. This is the default behavior if you do not specify the max_interleaving pragma.
// Loop j is pipelined with ii=1
for (int j = 0; j < M; j++) {
    int a[N];
    // Loop i is pipelined with ii=2
    #pragma max_interleaving 1
    for (int i = 1; i < N; i++) {
        a[i] = foo(i);
    }
    …
}
5.2.10. Floating Point Optimizations (fp contract and fp reassoc Pragmas)
Use the fp contract and fp reassoc pragmas to influence the intermediate rounding and conversions of floating-point operations, and the ordering of arithmetic operations, in your kernel at a finer granularity than the Intel® FPGA SDK for OpenCL™ Offline Compiler command options.

Unlike other pragmas, fp contract and fp reassoc pragma statements must be placed either outside of all functions (at file scope, like a #define) or at the start of a function, immediately within the curly braces. For example:
{
    #pragma clang fp reassoc(on)
    T = (1.0f - a) * (1.0f - b) * Ti0j0 +
        a * (1.0f - b) * Ti1j0 +
        (1.0f - a) * b * Ti0j1 +
        a * b * Ti1j1;
}
fp contract Pragma
The fp contract pragma controls whether the compiler can skip intermediate rounding and conversions mainly between double precision arithmetic operations.
#pragma clang fp contract(state)
where the state parameter can be one of the following values:

Value | Description |
---|---|
off | Turns off any permission to fuse instructions into FMAs. It suppresses the -ffp-contract=fast aoc command flag for instructions within the scope of the pragma. For information about the -ffp-contract=fast flag, refer to Reducing Floating-Point Rounding Operations (-ffp-contract=fast). |
fast | Allows the fusing of mult and add instructions into an FMA, but might violate the language standard. |
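As an illustration of pragma placement (the function and values are hypothetical, not from the SDK documentation), the following disables FMA fusion for a single computation while the rest of the kernel may still be compiled with -ffp-contract=fast. A host C compiler ignores the pragma with a warning.

```c
/* Each multiply and add in this dot product is rounded separately,
   matching the language standard, even when the enclosing program is
   compiled with -ffp-contract=fast. */
float dot3(float ax, float ay, float az, float bx, float by, float bz) {
    #pragma clang fp contract(off)
    return ax * bx + ay * by + az * bz;
}
```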
fp reassoc Pragma
The fp reassoc pragma controls the relaxing of the order of floating point arithmetic operations within the code block that this pragma is applied to.
This pragma has the following syntax:
#pragma clang fp reassoc(state)
where the state parameter can be one of the following values:

Value | Description |
---|---|
on | Enables the effect of the -ffp-reassoc aoc command flag for instructions within the scope of the pragma. |
off | Suppresses the -ffp-reassoc aoc global flag for instructions within the scope of the pragma if the flag is enabled. For information about the -ffp-reassoc flag, refer to Relaxing the Order of Floating-Point Operations (-ffp-reassoc). |
5.2.11. Specifying Work-Group Sizes
- If your kernel contains a barrier, the offline compiler sets a default maximum scalarized work-group size of 128 work-items.
- If your kernel does not query any OpenCL intrinsics that allow different threads to behave differently (that is, local or global thread IDs, or work-group ID), the offline compiler infers a single-threaded execution mode and sets the maximum work-group size to (1,1,1). In this case, the OpenCL runtime also enforces a global enqueue size of (1,1,1), and loop pipelining optimizations are enabled within the offline compiler.
To specify the work-group size, modify your kernel code in the following manner:
- To specify the maximum number of work-items that the offline compiler provisions for a work-group in a kernel, insert the max_work_group_size(X, Y, Z) attribute in your kernel source code.

  For example:

      __attribute__((max_work_group_size(512,1,1)))
      __kernel void sum (__global const float * restrict a,
                         __global const float * restrict b,
                         __global float * restrict answer)
      {
          size_t gid = get_global_id(0);
          answer[gid] = a[gid] + b[gid];
      }
- To specify the required number of work-items that the offline compiler provisions for a work-group in a kernel, insert the reqd_work_group_size(X, Y, Z) attribute in your kernel source code.

  For example:

      __attribute__((reqd_work_group_size(64,1,1)))
      __kernel void sum (__global const float * restrict a,
                         __global const float * restrict b,
                         __global float * restrict answer)
      {
          size_t gid = get_global_id(0);
          answer[gid] = a[gid] + b[gid];
      }
5.2.12. Specifying Number of Compute Units
__attribute__((num_compute_units(2)))
__kernel void test(__global const float * restrict a,
                   __global const float * restrict b,
                   __global float * restrict answer)
{
    size_t gid = get_global_id(0);
    answer[gid] = a[gid] + b[gid];
}
5.2.13. Specifying Number of SIMD Work-Items
__attribute__((num_simd_work_items(4)))
__attribute__((reqd_work_group_size(64,1,1)))
__kernel void test(__global const float * restrict a,
                   __global const float * restrict b,
                   __global float * restrict answer)
{
    size_t gid = get_global_id(0);
    answer[gid] = a[gid] + b[gid];
}
5.2.14. Specifying the private_copies Memory Attribute
int __attribute__((private_copies(k))) local_A[M];
where k is an unsigned integer. When this attribute is applied to a variable declared or accessed inside a pipelined loop, the Intel® FPGA SDK for OpenCL™ Offline Compiler creates k independent copies of the memory implementing this variable. This allows up to k iterations of the pipelined loop to run in parallel, where each iteration accesses its own copy of the memory. If this attribute is not applied, or if k is set to 0, the compiler chooses an appropriate number of copies, up to a maximum of 16, to maximize throughput.
Consider the following example where the outer loop declares four local arrays:
for (int i = 0; i < N; i++) {
    int local_A[M];
    int local_B[M];
    int local_C[M];
    int local_D[M];

    // Step 1
    for (int j = 0; j < M; j++) {
        local_A[j] = initA();
    }
    // Step 2
    for (int j = 0; j < M; j++) {
        local_B[j] = initB(local_A[j]);
    }
    // Step 3
    for (int j = 0; j < M; j++) {
        local_C[j] = initC(local_B[j]);
    }
    // Step 4
    for (int j = 0; j < M; j++) {
        local_D[j] = initD(local_C[j]);
    }
}
In this example, the outer loop contains four steps, where each step corresponds to an inner loop. In Step 1, the first local array local_A is initialized. In Step 2, local_A is read from, but not written to. This is the last use of local_A in the outer loop. Similarly, local_B is first used in Step 2, where it is initialized. In Step 3, local_B is read from, but not written to, and this is the last use of local_B. Similarly, local_C is used only in Steps 3 and 4. The Intel® FPGA SDK for OpenCL™ Offline Compiler privatizes each array by making 16 copies. These copies are enough to support concurrency of 16 on the outer loop. However, because the live ranges of these local arrays do not span the entire outer loop, all 16 copies are not required to maximize throughput of the outer loop. This means that the amount of area consumed by making these copies is higher than necessary. In this case, applying the private_copies attribute to control the number of copies of these local arrays can reduce the area used while maintaining the throughput of the outer loop.
5.2.15. Specifying the stall_enable Cluster-control Attribute
__kernel void __attribute__((stall_enable))
example(__global int * restrict input,
        __global int * restrict output,
        int size)
{
    for (int i = 0; i < size; ++i) {
        output[i] = input[i];
    }
    ...
}
The Intel® FPGA SDK for OpenCL™ Offline Compiler typically groups related operations into clusters. In several scenarios, the clusters are stall-free clusters. A stall-free cluster executes the operations without any stalls and contains a FIFO at the end of the cluster that holds the results if the cluster is stalled. This FIFO adds area and latency to the kernel, but might allow a higher fMAX and increased throughput.
If you prefer lower FPGA area use and lower latency over higher throughput, use the __attribute__((stall_enable)) attribute to bias the compiler toward producing stall-enabled clusters. Stall-enabled clusters lack the exit FIFO that buffers data when the whole cluster is stalled, which reduces area and latency, but they pass stall signals to the contained operations. Passing stall signals might reduce fMAX.
Not all operations support stall, and these operations cannot be contained in a stall-enabled cluster. The compiler generates a warning if some operations cannot be placed into a stall-enabled cluster.
The compiler automatically uses stall-free clusters for kernels as they are generally more beneficial. This attribute requests the compiler to form stall-enabled clusters if possible.
5.3. Programming Strategies for Optimizing Pointer-to-Local Memory Size
__kernel void myLocalMemoryPointer(
    __local float * A,
    __attribute__((local_mem_size(1024)))  __local float * B,
    __attribute__((local_mem_size(32768))) __local float * C)
{
    //statements
}
In the myLocalMemoryPointer kernel, 16 kB of local memory (default) is allocated to pointer A, 1 kB is allocated to pointer B, and 32 kB is allocated to pointer C.
5.4. Implementing the Intel FPGA SDK for OpenCL Channels Extension
5.4.1. Overview of the Intel FPGA SDK for OpenCL Channels Extension
Implementation of channels decouples data movement between concurrently executing kernels from the host processor.
5.4.2. Channel Data Behavior
Data in channels does not persist between context, program, device, kernel, or platform releases, even if the OpenCL implementation performs optimizations that avoid reprogramming operations on a device. For example, if you run a host program twice using the same .aocx file, or if a host program releases and reacquires a context, the data in the channel might or might not persist across the operation. FPGA device reset operations might happen behind the scenes on object releases, and these resets purge the data in any channels.
Consider the following code example:
channel int c0;

__kernel void producer() {
    for (int i = 0; i < 10; i++) {
        write_channel_intel(c0, i);
    }
}

__kernel void consumer (__global uint * restrict dst) {
    for (int i = 0; i < 5; i++) {
        dst[i] = read_channel_intel(c0);
    }
}
The kernel producer writes ten elements ([0, 9]) to the channel. The kernel consumer does not contain any work-item identifier queries; therefore, it receives an implicit reqd_work_group_size attribute of (1,1,1). The implied reqd_work_group_size(1,1,1) attribute means that consumer must be launched as a single work-item kernel. In the example above, consumer reads five elements from the channel per invocation. During the first invocation, the kernel consumer reads values 0 to 4 from the channel. Because the data persists across NDRange invocations, the second time you execute the kernel consumer, it reads values 5 to 9.
For this example, to avoid a deadlock from occurring, you need to invoke the kernel consumer twice for every invocation of the kernel producer. If you call consumer less than twice, producer stalls because the channel becomes full. If you call consumer more than twice, consumer stalls because there is insufficient data in the channel.
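The producer/consumer interplay can be modeled on the host with a small FIFO (plain C; the channel depth of 16 and the helper names are illustrative, not the SDK implementation). Data left in the FIFO after the first consumer invocation is still there for the second:

```c
/* A tiny host-side model of a channel: data persists in the FIFO between
   kernel invocations. */
#define DEPTH 16
int fifo[DEPTH];
int head = 0, count = 0;

void write_ch(int v) { fifo[(head + count++) % DEPTH] = v; }
int read_ch(void) { count--; int v = fifo[head]; head = (head + 1) % DEPTH; return v; }

/* Mirrors the producer kernel: writes ten elements, 0..9. */
void producer(void) {
    for (int i = 0; i < 10; i++)
        write_ch(i);
}

/* Mirrors one invocation of the consumer kernel: reads five elements. */
void consumer(int *dst) {
    for (int i = 0; i < 5; i++)
        dst[i] = read_ch();
}
```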
5.4.3. Multiple Work-Item Ordering for Channels
Multiple work-item accesses to a channel can be useful in some scenarios. For example, they are useful when data words in the channel are independent, or when the channel is implemented for control logic. The main concern regarding multiple work-item accesses to a channel is the order in which the kernel writes data to and reads data from the channel. If possible, the SDK's channels extension processes work-item read and write operations to the channel in a deterministic order. As such, the read and write operations remain consistent across kernel invocations.
Requirements for Deterministic Multiple Work-Item Ordering
To guarantee deterministic ordering, the SDK checks that a channel access is work-item invariant based on the following characteristics:
- All paths through the kernel must execute the channel access.
- If the first requirement is not satisfied, none of the branch conditions that reach the channel call should execute in a work-item-dependent manner.
- The kernel is not inferred as a single work-item kernel.
If the SDK cannot guarantee deterministic ordering of multiple work-item accesses to a channel, it warns you that the channels might not have well-defined ordering and therefore might exhibit nondeterministic execution. Primarily, the SDK fails to provide deterministic ordering if you have work-item-variant code on loop executions with channel calls, as illustrated below:
__kernel void ordering (__global int * restrict check,
                        __global int * restrict data) {
    int condition = check[get_global_id(0)];
    if (condition) {
        for (int i = 0; i < N; i++) {
            process(data);
            write_channel_intel(req, data[i]);
        }
    } else {
        process(data);
    }
}
5.4.3.1. Work-Item Serial Execution of Channels
When you implement channels in a kernel, the Intel® FPGA SDK for OpenCL™ Offline Compiler enforces that kernel behavior is equivalent to having at most one work-group in flight within the compute unit at a time. The compiler also ensures that the kernel executes channels in work-item serial execution, where the kernel executes work-items with smaller IDs first. A work-item has the identifier (x, y, z, group), where x, y, z are the local 3D identifiers, and group is the work-group identifier.
The work-item ID (x0, y0, z0, group0) is considered to be smaller than the ID (x1, y1, z1, group1) if one of the following conditions is true:
- group0 < group1
- group0 = group1 and z0 < z1
- group0 = group1 and z0 = z1 and y0 < y1
- group0 = group1 and z0 = z1 and y0 = y1 and x0 < x1
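These comparison rules can be written out directly as a lexicographic compare (a host-side C sketch; the struct is illustrative):

```c
typedef struct { int x, y, z, group; } wi_id;  /* illustrative ID struct */

/* Returns 1 if ID a is smaller than ID b under the ordering rules above:
   group dominates, then z, then y, then x. */
int wi_smaller(wi_id a, wi_id b) {
    if (a.group != b.group) return a.group < b.group;
    if (a.z != b.z) return a.z < b.z;
    if (a.y != b.y) return a.y < b.y;
    return a.x < b.x;
}
```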
Work-items with incremental IDs execute in a sequential order. For example, the work-item with an ID (x0, y0, z0, group0) executes the write channel call first. Then, the work-item with an ID (x1, y0, z0, group0) executes the call, and so on. Defining this order ensures that the system is verifiable with external models.
Channel Execution in Loop with Multiple Work-Items
When channels exist in the body of a loop with multiple work-items, as shown below, each loop iteration executes prior to subsequent iterations. This implies that loop iteration 0 of each work-item in a work-group executes before iteration 1 of each work-item in a work-group, and so on.
__kernel void ordering (__global int * data, int X) {
    int n = 0;
    while (n < X) {
        write_channel_intel(req, data[get_global_id(0)]);
        n++;
    }
}
5.4.4. Restrictions in the Implementation of Intel FPGA SDK for OpenCL Channels Extension
Multiple Channel Call Site
__kernel void k1() {
    read_channel_intel(channel1);
    read_channel_intel(channel1);
    read_channel_intel(channel1);
}
__kernel void k1() {
    write_channel_intel(channel1, 1);
}

__kernel void k2() {
    write_channel_intel(channel1, 2);
}
Feedback and Feed-forward Channels
Performance of a kernel that has multiple accesses (reads or writes) to the same channel might be poor.
Static Indexing
The Intel® FPGA SDK for OpenCL™ channels extension supports indexing into arrays of channel IDs, but this usage leads to inefficient hardware.
Consider the following example:
channel int ch[WORKGROUP_SIZE];

__kernel void consumer() {
    int gid = get_global_id(0);
    int value = read_channel_intel(ch[gid]);
    //statements
}
Compilation of this example generates the following warning message:
Compiler Warning: Dynamic access into channel array ch was expanded into predicated static accesses on every channel of the array.
If the access is dynamic and you know that only a subset of the channels in the array can be accessed, you can generate slightly more efficient hardware with a switch statement:
channel int ch[WORKGROUP_SIZE];

__kernel void consumer() {
    int gid = get_global_id(0);
    int value;
    switch (gid) {
        case 0: value = read_channel_intel(ch[0]); break;
        case 2: value = read_channel_intel(ch[2]); break;
        case 3: value = read_channel_intel(ch[3]); break;
        //statements
        case WORKGROUP_SIZE-1: value = read_channel_intel(ch[WORKGROUP_SIZE-1]); break;
    }
    //statements
}
Kernel Vectorization Support
You cannot vectorize kernels that use channels; that is, do not include the num_simd_work_items kernel attribute in your kernel code. Vectorizing a kernel that uses channels creates multiple channel accesses inside the same kernel and requires arbitration, which negates the advantages of vectorization. As a result, the SDK's channel extension does not support kernel vectorization.
Instruction-Level Parallelism on read_channel_intel and write_channel_intel Calls
If no data dependencies exist between read_channel_intel and write_channel_intel calls, the offline compiler attempts to execute these instructions in parallel. As a result, the offline compiler might execute these read_channel_intel and write_channel_intel calls in an order that does not follow the sequence expressed in the OpenCL kernel code.
Consider the following code sequence:
in_data1 = read_channel_intel(channel1);
in_data2 = read_channel_intel(channel2);
in_data3 = read_channel_intel(channel3);
Because there are no data dependencies between the read_channel_intel calls, the offline compiler can execute them in any order.
5.4.5. Enabling the Intel FPGA SDK for OpenCL Channels for OpenCL Kernel
To enable the channel extension, use the following pragma:
#pragma OPENCL EXTENSION cl_intel_channels : enable

Channel declarations are unique within a given OpenCL kernel program. Also, channel instances are unique for every OpenCL kernel program-device pair. If the runtime loads a single OpenCL kernel program onto multiple devices, each device has a single copy of each channel. However, these channel copies are independent and do not share data across the devices.
5.4.5.1. Declaring the Channel Handle
To read from and write to a channel, the kernel must pass the channel variable to each of the corresponding API calls.
- Declare the channel handle as a file scope variable in the kernel source code using the following convention: channel <type> <variable_name>
  For example: channel int c;
- The Intel® FPGA SDK for OpenCL™ channel extension supports simultaneous channel accesses by multiple variables declared in a data structure. Declare a struct data structure for a channel in the following manner:

  typedef struct type_ {
      int a;
      int b;
  } type_t;

  channel type_t foo;
5.4.5.2. Implementing Blocking Channel Writes

The write_channel_intel API call allows you to send data across a channel:

write_channel_intel(channel_id, data);

Where:
channel_id identifies the buffer to which the channel connects, and it must match the channel_id of the corresponding read channel (read_channel_intel).
data is the data that the channel write operation writes to the channel.
<type> defines a channel data width. Follow the OpenCL™ conversion rules to ensure that data the kernel writes to a channel is convertible to <type>.
//Defines chan, a kernel file-scope channel variable.
channel long chan;

/*Defines the kernel, which reads eight bytes (size of long) from
  global memory and passes this data to the channel.*/
__kernel void kernel_write_channel(__global const long * src) {
    for (int i = 0; i < N; i++) {
        //Writes the eight bytes to the channel.
        write_channel_intel(chan, src[i]);
    }
}
Implementing Nonblocking Channel Writes
Consider a scenario where your application has one data producer with two identical workers. Assume the time each worker takes to process a message varies depending on the contents of the data. In this case, there might be situations where one worker is busy while the other is free. A nonblocking write can facilitate work distribution such that both workers are busy.
channel long worker0, worker1;

__kernel void producer(__global const long * src) {
    for (int i = 0; i < N; i++) {
        bool success = false;
        do {
            success = write_channel_nb_intel(worker0, src[i]);
            if (!success) {
                success = write_channel_nb_intel(worker1, src[i]);
            }
        } while (!success);
    }
}
5.4.5.3. Implementing Blocking Channel Reads

The read_channel_intel API call allows you to receive data across a channel:

<type> read_channel_intel(channel_id);

Where:
channel_id identifies the buffer to which the channel connects, and it must match the channel_id of the corresponding write channel (write_channel_intel).
<type> defines a channel data width. Ensure that the variable the kernel assigns to read the channel data is convertible from <type>.
//Defines chan, a kernel file-scope channel variable.
channel long chan;

/*Defines the kernel, which reads eight bytes (size of long) from
  the channel and writes them back to global memory.*/
__kernel void kernel_read_channel(__global long * dst) {
    for (int i = 0; i < N; i++) {
        //Reads the eight bytes from the channel.
        dst[i] = read_channel_intel(chan);
    }
}
Implementing Nonblocking Channel Reads
On a successful read (valid set to true), the value read from the channel is returned by the read_channel_nb_intel function. On a failed read (valid set to false), the return value of the read_channel_nb_intel function is not defined.
channel long chan;

__kernel void kernel_read_channel(__global long * dst) {
    int i = 0;
    while (i < N) {
        bool valid0, valid1;
        long data0 = read_channel_nb_intel(chan, &valid0);
        long data1 = read_channel_nb_intel(chan, &valid1);
        if (valid0) {
            process(data0);
            i++;
        }
        if (valid1) {
            process(data1);
            i++;
        }
    }
}
5.4.5.4. Implementing I/O Channels Using the io Channels Attribute
The io("chan_id") attribute specifies the I/O feature of an accelerator board with which a channel is connected, where chan_id is the name of the I/O interface listed in the board_spec.xml file of your Custom Platform.
Because peripheral interface usage might differ for each device type, consult your board vendor's documentation when you implement I/O channels in your kernel program. Your OpenCL™ kernel code must be compatible with the type of data generated by the peripheral interfaces.
- Implicit data dependencies might exist for channels that connect to the board directly and communicate with peripheral devices via I/O channels. These implicit data dependencies might lead to unexpected behavior because the Intel® FPGA SDK for OpenCL™ Offline Compiler does not have visibility into these dependencies.
- External I/O channels communicating with the same peripherals do not obey any sequential ordering. Ensure that the external device does not require sequential ordering because unexpected behavior might occur.
- Consult the board_spec.xml file in your Custom Platform to identify the input and output features available on your FPGA board. For example, a board_spec.xml file might include the following information on I/O features:

  <channels>
      <interface name="udp_0" port="udp0_out" type="streamsource" width="256" chan_id="eth0_in"/>
      <interface name="udp_0" port="udp0_in" type="streamsink" width="256" chan_id="eth0_out"/>
      <interface name="udp_0" port="udp1_out" type="streamsource" width="256" chan_id="eth1_in"/>
      <interface name="udp_0" port="udp1_in" type="streamsink" width="256" chan_id="eth1_out"/>
  </channels>
The width attribute of an interface element specifies the width, in bits, of the data type used by that channel. In the example above, each interface is 256 bits wide, which matches the ulong4 data type (4 × 64 bits) used in the kernel example that follows. Smaller types such as uint and float are 32 bits wide; any scalar or vectorized data type you use must match the bit width specified in the board_spec.xml file.
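As a quick sanity check, the bit width of a host-side data type is sizeof × 8. The following C sketch (using a stand-in struct for the OpenCL ulong4 vector type, since the CL headers are not assumed here) confirms that a four-lane 64-bit vector matches a 256-bit channel width:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the OpenCL ulong4 vector type: four 64-bit lanes. */
typedef struct { uint64_t s[4]; } ulong4_t;

/* Returns the bit width a kernel-side type occupies on a channel. */
static size_t type_width_bits(size_t type_size_bytes) {
    return type_size_bytes * 8;
}
```

A mismatch between this computed width and the width attribute in board_spec.xml indicates the kernel-side data type must change.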
- Implement the io channel attribute as demonstrated in the following code example. The io channel attribute names must match those of the I/O channels (chan_id) specified in the board_spec.xml file.

  channel QUDPWord udp_in_IO __attribute__((depth(0))) __attribute__((io("eth0_in")));
  channel QUDPWord udp_out_IO __attribute__((depth(0))) __attribute__((io("eth0_out")));

  //udp_in and udp_out are kernel-to-kernel channels declared elsewhere in the program.
  __kernel void io_in_kernel(__global ulong4 *mem_read, uchar read_from, int size) {
      int index = 0;
      ulong4 data;
      int half_size = size >> 1;
      while (index < half_size) {
          if (read_from & 0x01) {
              data = read_channel_intel(udp_in_IO);
          } else {
              data = mem_read[index];
          }
          write_channel_intel(udp_in, data);
          index++;
      }
  }

  __kernel void io_out_kernel(__global ulong2 *mem_write, uchar write_to, int size) {
      int index = 0;
      int half_size = size >> 1;
      while (index < half_size) {
          ulong4 data = read_channel_intel(udp_out);
          if (write_to & 0x01) {
              write_channel_intel(udp_out_IO, data);
          } else {
              //only write data portion
              ulong2 udp_data;
              udp_data.s0 = data.s0;
              udp_data.s1 = data.s1;
              mem_write[index] = udp_data;
          }
          index++;
      }
  }
Attention: Declare a unique io("chan_id") handle for each I/O channel specified in the channels eXtensible Markup Language (XML) element within the board_spec.xml file.
5.4.5.5. Emulating I/O Channels
When you emulate a kernel that has a channel declared with the io attribute, I/O channel input is emulated by reading from a file, and channel output is emulated by writing to a file.
For example, consider a kernel with the following I/O channel declaration:

channel uint chanA __attribute__((io("myIOChannel")));

If a kernel both reads from and writes to the same I/O channel, split the channel into separate read and write declarations that share the same io attribute name:

channel uint readChannel __attribute__((io("myIOChannel")));
channel uint writeChannel __attribute__((io("myIOChannel")));
Emulating Reading from an I/O Channel
- Non-blocking read
- If the file does not exist or contains insufficient data, the read attempt fails.
- Blocking read
- If the file does not exist or there is insufficient data, the read attempt blocks your program until the file is created on the disk, or the file contains sufficient data.
Emulating Writing to an I/O Channel
- Non-blocking write
- If the write attempt fails, an error is returned.
- Blocking write
- If the write attempt fails, further write attempts are made.
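The file-backed behavior described above can be sketched in plain C. This is an illustrative model only; the function name emu_read_channel_nb and the single-packet file layout are assumptions, not the emulator's actual implementation (which also tracks a read position across calls):

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a nonblocking emulated I/O channel read:
   the read succeeds only if the backing file exists and contains
   at least one full packet. */
static bool emu_read_channel_nb(const char *path, uint32_t *value) {
    FILE *f = fopen(path, "rb");
    if (!f) {
        return false;  /* file does not exist: read attempt fails */
    }
    /* insufficient data (short read): read attempt fails */
    bool ok = fread(value, sizeof *value, 1, f) == 1;
    fclose(f);
    return ok;
}
```

A blocking read would simply retry this check in a loop until the file appears and holds enough data.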
5.4.5.6. Use Models of Intel FPGA SDK for OpenCL Channels Implementation
The following use models provide an overview on how to exploit concurrent execution safely and efficiently.
Feed-Forward Design Model
Implement the feed-forward design model to send data from one kernel to the next without creating any cycles between them. Consider the following code example:
//File-scope channel declarations
channel uint c0, c1;

__kernel void producer(__global const uint * src, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        write_channel_intel(c0, src[2*i]);
        write_channel_intel(c1, src[2*i+1]);
    }
}

__kernel void consumer(__global uint * dst, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        dst[2*i] = read_channel_intel(c0);
        dst[2*i+1] = read_channel_intel(c1);
    }
}
The producer kernel writes data to channels c0 and c1. The consumer kernel reads data from c0 and c1. The figure below illustrates the feed-forward data flow between the two kernels:
Buffer Management
In the feed-forward design model, data traverses between the producer and consumer kernels one word at a time. To facilitate the transfer of large data messages consisting of several words, you can implement a ping-pong buffer, which is a common design pattern found in applications for communication. The figure below illustrates the interactions between kernels and a ping-pong buffer:
The manager kernel manages circular buffer allocation and deallocation between the producer and consumer kernels. After the consumer kernel processes data, the manager receives memory regions that the consumer frees up and sends them to the producer for reuse. The manager also sends to the producer kernel the initial set of free locations, or tokens, to which the producer can write data.
The following figure illustrates the sequence of events that take place during buffer management:
- The manager kernel sends a set of tokens to the producer kernel to indicate initially which regions in memory are free for producer to use.
- After manager allocates the memory region, producer writes data to that region of the ping-pong buffer.
- After producer completes the write operation, it sends a synchronization token to the consumer kernel to indicate which memory region contains data for processing. The consumer kernel then reads data from that region of the ping-pong buffer.
Note: When consumer is performing the read operation, producer can write to other free memory locations for processing because of the concurrent execution of the producer, consumer, and manager kernels.
- After consumer completes the read operation, it releases the memory region and sends a token back to the manager kernel. The manager kernel then recycles that region for producer to use.
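The four steps above can be sketched in plain C as a ring buffer of free-region tokens. All names here are illustrative, not SDK API, and real kernels would exchange tokens through channels rather than function calls:

```c
#include <stdint.h>
#include <string.h>

#define NUM_REGIONS   4
#define REGION_WORDS  256

/* Shared memory split into NUM_REGIONS regions of REGION_WORDS words each. */
static uint32_t shared_mem[NUM_REGIONS * REGION_WORDS];

/* Ring buffer of free-region tokens, maintained by the "manager". */
static int free_tokens[NUM_REGIONS];
static int head, tail, count;

static void manager_init(void) {            /* step 1: seed the free tokens */
    head = tail = count = 0;
    for (int r = 0; r < NUM_REGIONS; r++) {
        free_tokens[tail++] = r * REGION_WORDS;
        count++;
    }
    tail %= NUM_REGIONS;
}

static int manager_alloc(void) {            /* producer claims a free region */
    if (count == 0) return -1;              /* no free region available */
    int base = free_tokens[head];
    head = (head + 1) % NUM_REGIONS;
    count--;
    return base;
}

static void manager_free(int base) {        /* step 4: consumer returns a token */
    free_tokens[tail] = base;
    tail = (tail + 1) % NUM_REGIONS;
    count++;
}

/* Step 2: producer writes a block of data into its claimed region. */
static void producer_write(int base, const uint32_t *src) {
    memcpy(&shared_mem[base], src, REGION_WORDS * sizeof *src);
}

/* Step 3: consumer reads the region named by the synchronization token. */
static void consumer_read(int base, uint32_t *dst) {
    memcpy(dst, &shared_mem[base], REGION_WORDS * sizeof *dst);
}
```

Because the token count bounds the number of in-flight regions, the producer naturally stalls when the consumer falls behind.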
Implementation of Buffer Management for OpenCL Kernels
To ensure that the SDK implements buffer management properly, the ordering of channel read and write operations is important. Consider the following kernel example:
__kernel void producer(__global const uint * restrict src,
                       __global volatile uint * restrict shared_mem,
                       const uint iterations) {
    int base_offset;
    for (uint gID = 0; gID < iterations; gID++) {
        // Assume each block of memory is 256 words
        uint lID = 0x0ff & gID;

        if (lID == 0) {
            base_offset = read_channel_intel(req);
        }
        shared_mem[base_offset + lID] = src[gID];

        // Make sure all memory operations are committed before
        // sending token to the consumer
        mem_fence(CLK_GLOBAL_MEM_FENCE | CLK_CHANNEL_MEM_FENCE);

        if (lID == 255) {
            write_channel_intel(c, base_offset);
        }
    }
}
In this kernel, because the following lines of code are independent, the Intel® FPGA SDK for OpenCL™ Offline Compiler can schedule them to execute concurrently:
shared_mem[base_offset + lID] = src[gID];
and
write_channel_intel(c, base_offset);
The channel write of base_offset might complete much sooner than the store of data to global memory. The consumer kernel might then read base_offset from the channel and use it as an index to read from global memory. Without synchronization, consumer might read data from producer before shared_mem[base_offset + lID] = src[gID]; finishes executing. As a result, consumer reads invalid data. To avoid this scenario, the synchronization token must be sent only after the producer kernel commits data to memory. In other words, the consumer kernel cannot consume data from the producer kernel until producer stores its data in global memory successfully.
To preserve this ordering, include an OpenCL mem_fence token in your kernels. The mem_fence construct takes two flags: CLK_GLOBAL_MEM_FENCE and CLK_CHANNEL_MEM_FENCE. The mem_fence effectively creates a control flow dependence between operations that occur before and after the mem_fence call. The CLK_GLOBAL_MEM_FENCE flag indicates that global memory operations must obey the control flow. The CLK_CHANNEL_MEM_FENCE flag indicates that channel operations must obey the control flow. As a result, the write_channel_intel call in the example cannot start until the global memory operation is committed to the shared memory buffer.
5.4.5.7. Implementing Buffered Channels Using the depth Channels Attribute
You may use a buffered channel to control data traffic, such as limiting throughput or synchronizing accesses to shared memory. In an unbuffered channel, a write operation cannot proceed until the read operation reads a data value. In a buffered channel, a write operation cannot proceed until the data value is copied to the buffer. If the buffer is full, the operation cannot proceed until the read operation reads a piece of data and removes it from the channel.
channel int c __attribute__((depth(10)));

__kernel void producer(__global int * in_data) {
    for (int i = 0; i < N; i++) {
        if (in_data[i]) {
            write_channel_intel(c, in_data[i]);
        }
    }
}

__kernel void consumer(__global int * restrict check_data,
                       __global int * restrict out_data) {
    int last_val = 0;
    for (int i = 0; i < N; i++) {
        if (check_data[i]) {
            last_val = read_channel_intel(c);
        }
        out_data[i] = last_val;
    }
}
In this example, the write operation can write ten data values to the channel without blocking. Once the channel is full, the write operation cannot proceed until an associated read operation to the channel occurs.
Because the channel read and write calls are conditional statements, the channel might experience an imbalance between read and write calls. You may add a buffer capacity to the channel to ensure that the producer and consumer kernels are decoupled. This step is particularly important if the producer kernel is writing data to the channel when the consumer kernel is not reading from it.
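The depth-10 behavior described above can be modeled as a small ring-buffer FIFO in plain C. The names chan10_t, chan_write_nb, and chan_read_nb are illustrative stand-ins, not SDK APIs:

```c
#include <stdbool.h>

#define DEPTH 10

/* Illustrative model of a buffered channel of depth 10:
   a write succeeds without blocking while the buffer has room. */
typedef struct {
    int data[DEPTH];
    int head, tail, count;
} chan10_t;

static bool chan_write_nb(chan10_t *c, int value) {
    if (c->count == DEPTH) return false;   /* full: the writer must wait */
    c->data[c->tail] = value;
    c->tail = (c->tail + 1) % DEPTH;
    c->count++;
    return true;
}

static bool chan_read_nb(chan10_t *c, int *value) {
    if (c->count == 0) return false;       /* empty: the reader must wait */
    *value = c->data[c->head];
    c->head = (c->head + 1) % DEPTH;
    c->count--;
    return true;
}
```

This makes the decoupling concrete: the producer can run ten writes ahead of the consumer before it is forced to stall.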
5.4.5.8. Enforcing the Order of Channel Calls
When the Intel® FPGA SDK for OpenCL™ Offline Compiler generates a compute unit, it does not always create instruction-level parallelism on all instructions that are independent of each other. As a result, channel read and write operations might not execute independently of each other even if there is no control or data dependence between them. When channel calls interact with each other, or when channels write data to external devices, deadlocks might occur.
For example, the code snippet below consists of a producer kernel and a consumer kernel. Channels c0 and c1 are unbuffered channels. The schedule of the channel read operations from c0 and c1 might occur in the reversed order as the channel write operations to c0 and c1. That is, the producer kernel writes to c0 but the consumer kernel might read from c1 first. This rescheduling of channel calls might cause a deadlock because the consumer kernel is reading from an empty channel.
__kernel void producer(__global const uint * src, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        write_channel_intel(c0, src[2*i]);
        write_channel_intel(c1, src[2*i+1]);
    }
}

__kernel void consumer(__global uint * dst, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        /*During compilation, the offline compiler might reorder the way the
          consumer kernel writes to memory to optimize memory access.
          Therefore, c1 might be read before c0, which is the reverse of
          what appears in code.*/
        dst[2*i+1] = read_channel_intel(c0);
        dst[2*i] = read_channel_intel(c1);
    }
}
To enforce the order of the channel calls, add a mem_fence(CLK_CHANNEL_MEM_FENCE) call between them:

channel uint c0 __attribute__((depth(0)));
channel uint c1 __attribute__((depth(0)));

__kernel void producer(__global const uint * src, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        write_channel_intel(c0, src[2*i]);
        mem_fence(CLK_CHANNEL_MEM_FENCE);
        write_channel_intel(c1, src[2*i+1]);
    }
}

__kernel void consumer(__global uint * dst, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        dst[2*i+1] = read_channel_intel(c0);
        mem_fence(CLK_CHANNEL_MEM_FENCE);
        dst[2*i] = read_channel_intel(c1);
    }
}
In this example, mem_fence in the producer kernel ensures that the channel write operation to c0 occurs before that to c1. Similarly, mem_fence in the consumer kernel ensures that the channel read operation from c0 occurs before that from c1.
Defining Memory Consistency Across Kernels When Using Channels
__kernel void producer(__global const uint * src, const uint iterations) {
    for (int i = 0; i < iterations; i++) {
        write_channel_intel(c0, src[2*i]);
        mem_fence(CLK_CHANNEL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
        write_channel_intel(c1, src[2*i+1]);
    }
}
In this kernel, the mem_fence function ensures that the write operation to c0 and memory access to src[2*i] occur before the write operation to c1 and memory access to src[2*i+1]. This allows data written to c0 to be visible to the read channel before data is written to c1.
5.5. Implementing OpenCL Pipes
Implement pipes if it is important that your OpenCL kernel is compatible with other SDKs.
Refer to the OpenCL Specification version 2.0 for OpenCL C programming language specification and general information about pipes.
The Intel® FPGA SDK for OpenCL™ implementation of pipes does not encompass the entire pipes specification. As such, it is not fully conformant to the OpenCL Specification version 2.0. The goal of the SDK's pipes implementation is to provide a solution that works seamlessly on a different OpenCL 2.0-conformant device. To enable pipes for Intel® FPGA products, your design must satisfy certain additional requirements.
5.5.1. Overview of the OpenCL Pipe Functions
Implementation of pipes decouples kernel execution from the host processor. The foundation of the Intel® FPGA SDK for OpenCL™ pipes support is the SDK's channels extension. However, the syntax for pipe functions differs from the channels syntax.
For more information about blocking and nonblocking functions, refer to the corresponding documentation on channels.
5.5.2. Pipe Data Behavior
Consider the following code example:
__kernel void producer(write_only pipe uint __attribute__((blocking)) c0) {
    for (uint i = 0; i < 10; i++) {
        write_pipe(c0, &i);
    }
}

__kernel void consumer(__global uint * restrict dst,
                       read_only pipe uint __attribute__((blocking))
                       __attribute__((depth(10))) c0) {
    for (int i = 0; i < 5; i++) {
        read_pipe(c0, &dst[i]);
    }
}
A pipe read operation returns the oldest value written to the pipe first; pipe data maintains FIFO ordering within the pipe.
The kernel producer writes ten elements ([0, 9]) to the pipe. The kernel consumer reads five elements from the pipe per NDRange invocation. During the first invocation, the kernel consumer reads values 0 to 4 from the pipe. Because the data persists across NDRange invocations, the second time you execute the kernel consumer, it reads values 5 to 9.
For this example, to avoid a deadlock from occurring, you need to invoke the kernel consumer twice for every invocation of the kernel producer. If you call consumer less than twice, producer stalls because the pipe becomes full. If you call consumer more than twice, consumer stalls because there is insufficient data in the pipe.
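A minimal sketch of this accounting in plain C (the function names are illustrative; this models only the FIFO bookkeeping across invocations, not actual kernel execution):

```c
#define PIPE_DEPTH 10

/* Illustrative model of the depth-10 pipe: data persists between
   kernel invocations because the FIFO lives outside the kernels. */
static unsigned pipe_buf[PIPE_DEPTH];
static int pipe_head, pipe_count;

static void producer_invoke(void) {          /* writes values 0..9 */
    for (unsigned i = 0; i < 10; i++) {
        pipe_buf[(pipe_head + pipe_count) % PIPE_DEPTH] = i;
        pipe_count++;
    }
}

static void consumer_invoke(unsigned *dst) { /* reads 5 values per call */
    for (int i = 0; i < 5; i++) {
        dst[i] = pipe_buf[pipe_head];
        pipe_head = (pipe_head + 1) % PIPE_DEPTH;
        pipe_count--;
    }
}
```

Running producer_invoke once and consumer_invoke twice drains the pipe exactly, mirroring the 2:1 invocation ratio the text requires.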
5.5.3. Multiple Work-Item Ordering for Pipes
Multiple work-item accesses to a pipe can be useful in some scenarios. For example, they are useful when data words in the pipe are independent, or when the pipe is implemented for control logic. The main concern regarding multiple work-item accesses to a pipe is the order in which the kernel writes data to and reads data from the pipe. Where possible, OpenCL pipes process work-item read and write operations to a pipe in a deterministic order. As such, the read and write operations remain consistent across kernel invocations.
Requirements for Deterministic Multiple Work-Item Ordering
To guarantee deterministic ordering, the SDK checks that the pipe call is work-item invariant based on the following characteristics:
- All paths through the kernel must execute the pipe call.
- If the first requirement is not satisfied, none of the branch conditions that lead to the pipe call may execute in a work-item-dependent manner.
If the SDK cannot guarantee deterministic ordering of multiple work-item accesses to a pipe, it warns you that the pipes might not have well-defined ordering with nondeterministic execution. Primarily, the SDK fails to provide deterministic ordering if you have work-item-variant code on loop executions with pipe calls, as illustrated below:
__kernel void ordering(__global int * check, __global int * data,
                       write_only pipe int __attribute__((blocking)) req) {
    int condition = check[get_global_id(0)];

    if (condition) {
        for (int i = 0; i < N; i++) {
            process(data);
            write_pipe(req, &data[i]);
        }
    } else {
        process(data);
    }
}
5.5.3.1. Work-item Serial Execution of Pipes
When you implement pipes in a kernel, the Intel® FPGA SDK for OpenCL™ Offline Compiler enforces that kernel behavior is equivalent to having at most one work-group in flight. The offline compiler also ensures that the kernel executes pipes in work-item serial execution, where the kernel executes work items with smaller IDs first. A work-item has the identifier (x, y, z, group), where x, y, z are the local 3D identifiers, and group is the work-group identifier.
The work-item ID (x0, y0, z0, group0) is considered to be smaller than the ID (x1, y1, z1, group1) if one of the following conditions is true:
- group0 < group1
- group0 = group1 and z0 < z1
- group0 = group1 and z0 = z1 and y0 < y1
- group0 = group1 and z0 = z1 and y0 = y1 and x0 < x1
Work items with incremental IDs execute in a sequential order. For example, the work-item with an ID (x0, y0, z0, group0) executes the write channel call first. Then, the work-item with an ID (x0+1, y0, z0, group0) executes the call, and so on. Defining this order ensures that the system is verifiable with external models.
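The four ordering conditions above can be encoded as a single comparison function. The following C sketch (names are illustrative) compares group first, then z, y, and x:

```c
#include <stdbool.h>

/* Work-item identifier (x, y, z, group), as defined above. */
typedef struct { int x, y, z, group; } wi_id_t;

/* Returns true if work-item a executes before work-item b:
   compare group, then z, then y, then x. */
static bool wi_less(wi_id_t a, wi_id_t b) {
    if (a.group != b.group) return a.group < b.group;
    if (a.z != b.z)         return a.z < b.z;
    if (a.y != b.y)         return a.y < b.y;
    return a.x < b.x;
}
```

Sorting work items with this comparison reproduces the serial execution order the compiler enforces.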
Pipe Execution in a Loop with Multiple Work Items
In kernels compiled as NDRange kernels only, when pipe calls exist in the body of a loop with multiple work items, each loop iteration executes prior to subsequent iterations. This implies that loop iteration 0 of every work-item in a work-group executes before iteration 1 of every work-item in that work-group, and so on.
__kernel void ordering(__global int * data,
                       write_only pipe int __attribute__((blocking)) req) {
    write_pipe(req, &data[get_global_id(0)]);
}
5.5.4. Restrictions in OpenCL Pipes Implementation
Default Behavior
By default, pipes exhibit nonblocking behavior.
Emulation Support
The Intel® FPGA SDK for OpenCL™ Emulator supports emulation of kernels that contain pipes. The level of Emulator support aligns with the subset of OpenCL pipes support that is implemented for the FPGA hardware.
Pipes API Support
Currently, the SDK's implementation of pipes does not support all the built-in pipe functions in the OpenCL Specification version 2.0. For a list of supported and unsupported pipe APIs, refer to OpenCL 2.0 C Programming Language Restrictions for Pipes.
Single Call Site
Because the pipe read and write operations do not function deterministically, for a given kernel, you can only assign one call site per pipe ID. For example, the Intel® FPGA SDK for OpenCL™ Offline Compiler cannot compile the following code example:
read_pipe(pipe1, &in_data1);
read_pipe(pipe2, &in_data2);
read_pipe(pipe1, &in_data3);
The second read_pipe call to pipe1 causes compilation failure because it creates a second call site to pipe1.
To gather multiple data from a given pipe, divide the pipe into multiple pipes, as shown below:
read_pipe(pipe1, &in_data1);
read_pipe(pipe2, &in_data2);
read_pipe(pipe3, &in_data3);
Because you can only assign a single call site per pipe ID, you cannot unroll loops containing pipes. Consider the following code:
#pragma unroll 4
for (int i = 0; i < 4; i++) {
    read_pipe(pipe1, &in_data1);
}
The offline compiler issues the following warning message during compilation:
Compiler Warning: Unroll is required but the loop cannot be unrolled.
Feedback and Feed-Forward Pipes
Pipes within a kernel can be either read_only or write_only. Performance of a kernel that reads and writes to the same pipe is poor.
Kernel Vectorization Support
You cannot vectorize kernels that use pipes; that is, do not include the num_simd_work_items kernel attribute in your kernel code. Vectorizing a kernel that uses pipes creates multiple pipe masters and requires arbitration, which the OpenCL pipes specification does not support.
Instruction-Level Parallelism on read_pipe and write_pipe Calls
If no data dependencies exist between read_pipe and write_pipe calls, the offline compiler attempts to execute these instructions in parallel. As a result, the offline compiler might execute these read_pipe and write_pipe calls in an order that does not follow the sequence expressed in the OpenCL kernel code.
Consider the following code sequence:
read_pipe(pipe1, &in_data1);
read_pipe(pipe2, &in_data2);
read_pipe(pipe3, &in_data3);
Because there are no data dependencies between the read_pipe calls, the offline compiler can execute them in any order.
5.5.5. Enabling OpenCL Pipes for Kernels
Pipe declarations are unique within a given OpenCL kernel program. Also, pipe instances are unique for every OpenCL kernel program-device pair. If the runtime loads a single OpenCL kernel program onto multiple devices, each device has a single copy of each pipe. However, these pipe copies are independent and do not share data across the devices.
5.5.5.1. Ensuring Compatibility with Other OpenCL SDKs
Original Program Code
Below is an example of an OpenCL application:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "CL/opencl.h"

#define SIZE 1000

const char *kernel_source =
    "__kernel void pipe_writer(__global int *in,"
    "                          write_only pipe int p_in)\n"
    "{\n"
    "    int gid = get_global_id(0);\n"
    "    write_pipe(p_in, &in[gid]);\n"
    "}\n"
    "__kernel void pipe_reader(__global int *out,"
    "                          read_only pipe int p_out)\n"
    "{\n"
    "    int gid = get_global_id(0);\n"
    "    read_pipe(p_out, &out[gid]);\n"
    "}\n";

int main() {
    int *input = (int *)malloc(sizeof(int) * SIZE);
    int *output = (int *)malloc(sizeof(int) * SIZE);
    memset(output, 0, sizeof(int) * SIZE);
    for (int i = 0; i != SIZE; ++i) {
        input[i] = rand();
    }

    cl_int status;

    cl_platform_id platform;
    cl_uint num_platforms;
    status = clGetPlatformIDs(1, &platform, &num_platforms);

    cl_device_id device;
    cl_uint num_devices;
    status = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, &num_devices);

    cl_context context = clCreateContext(0, 1, &device, NULL, NULL, &status);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &status);

    size_t len = strlen(kernel_source);
    cl_program program = clCreateProgramWithSource(context, 1,
        (const char **)&kernel_source, &len, &status);
    status = clBuildProgram(program, num_devices, &device, "", NULL, NULL);

    cl_kernel pipe_writer = clCreateKernel(program, "pipe_writer", &status);
    cl_kernel pipe_reader = clCreateKernel(program, "pipe_reader", &status);

    cl_mem in_buffer = clCreateBuffer(context,
        CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        sizeof(int) * SIZE, input, &status);
    cl_mem out_buffer = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
        sizeof(int) * SIZE, NULL, &status);

    cl_mem pipe = clCreatePipe(context, 0, sizeof(cl_int), SIZE, NULL, &status);

    status = clSetKernelArg(pipe_writer, 0, sizeof(cl_mem), &in_buffer);
    status = clSetKernelArg(pipe_writer, 1, sizeof(cl_mem), &pipe);
    status = clSetKernelArg(pipe_reader, 0, sizeof(cl_mem), &out_buffer);
    status = clSetKernelArg(pipe_reader, 1, sizeof(cl_mem), &pipe);

    size_t size = SIZE;
    cl_event sync;
    status = clEnqueueNDRangeKernel(queue, pipe_writer, 1, NULL,
                                    &size, &size, 0, NULL, &sync);
    status = clEnqueueNDRangeKernel(queue, pipe_reader, 1, NULL,
                                    &size, &size, 1, &sync, NULL);
    status = clFinish(queue);

    status = clEnqueueReadBuffer(queue, out_buffer, CL_TRUE, 0,
                                 sizeof(int) * SIZE, output, 0, NULL, NULL);

    int golden = 0, result = 0;
    for (int i = 0; i != SIZE; ++i) {
        golden += input[i];
        result += output[i];
    }

    int ret = 0;
    if (golden != result) {
        printf("FAILED!");
        ret = 1;
    } else {
        printf("PASSED!");
    }
    printf("\n");

    return ret;
}
Host Code Modification
If the original host code runs on OpenCL SDKs that conform to the OpenCL Specification version 2.0, you must modify it before running it on the Intel® FPGA SDK for OpenCL™ . To modify the host code, perform the following changes:
- Use the clCreateProgramWithBinary function instead of the clCreateProgramWithSource function to create the program.
- Move the contents of the kernel_source string into a separate source file. Refer to Kernel Code Modification for more information.
Kernel Code Modification
If your kernel code runs on OpenCL SDKs that conform to the OpenCL Specification version 2.0, you must modify it before running it on the Intel® FPGA SDK for OpenCL™ . To modify the kernel code, perform the following changes:
- Create a separate source (.cl) file for the kernel code.
- Rename the pipe arguments so that they are the same in both kernels. For example, rename p_in and p_out to p.
- Specify the depth attribute for the pipe arguments. Assign a depth value equal to the maximum number of packets that the pipe created on the host is set to hold.
- Build the kernel program in offline compilation mode, because the Intel® FPGA SDK for OpenCL™ uses an offline compiler.
The modified kernel code appears as follows:
#define SIZE 1000

__kernel void pipe_writer(__global int *in,
                          write_only pipe int __attribute__((depth(SIZE))) p) {
    int gid = get_global_id(0);
    write_pipe(p, &in[gid]);
}

__kernel void pipe_reader(__global int *out,
                          read_only pipe int __attribute__((depth(SIZE))) p) {
    int gid = get_global_id(0);
    read_pipe(p, &out[gid]);
}
5.5.5.2. Declaring the Pipe Handle
To read from and write to a pipe, the kernel must pass the pipe variable to each of the corresponding API calls.

The <type> of the pipe may be any OpenCL™ built-in scalar or vector data type with a scalar size of 1024 bits or less. It may also be any user-defined type that is composed of scalar or vector data types with a scalar size of 1024 bits or less.
Consider the following pipe handle declarations:
__kernel void first (pipe int c)
__kernel void second (write_only pipe int c)
The first example declares a read-only pipe handle of type int in the kernel first. The second example declares a write-only pipe in the kernel second. The kernel first may only read from pipe c, and the kernel second may only write to pipe c.
In an Intel® OpenCL system, only one kernel may read from a pipe. Similarly, only one kernel may write to a pipe. If a non-I/O pipe does not have at least one corresponding read operation and one write operation, the offline compiler issues an error.

For more information about the Intel® FPGA SDK for OpenCL™ I/O pipe implementation, refer to Implementing I/O Pipes Using the io Attribute.
5.5.5.3. Implementing Pipe Writes
Intel® only supports the convenience version of the write_pipe function. By default, write_pipe calls are nonblocking. Pipe write operations are successful only if there is capacity in the pipe to hold the incoming packet.
Syntax:
int write_pipe (write_only pipe <type> pipe_id, const <type> *data);
Where:
pipe_id identifies the buffer to which the pipe connects, and it must match the pipe_id of the corresponding read pipe (read_pipe).
data is the data that the pipe write operation writes to the pipe. It is a pointer to the packet type of the pipe. Note that writing to the pipe might lead to a global or local memory load, depending on the source address space of the data pointer.
<type> defines a pipe data width. The return value indicates whether the pipe write operation is successful. If successful, the return value is 0. If pipe write is unsuccessful, the return value is -1.
/* Declares the writable nonblocking pipe, p, which contains packets of type int */
__kernel void kernel_write_pipe (__global const long *src,
                                 write_only pipe int p)
{
    for (int i = 0; i < N; i++) {
        // Performs the actual writing
        // Emulates blocking behavior via the use of a while loop
        while (write_pipe(p, &src[i]) < 0) { }
    }
}
The while loop is unnecessary if you specify a blocking attribute. To facilitate better hardware implementations, Intel® provides a facility for blocking write_pipe calls: specify the blocking attribute (that is, __attribute__((blocking))) on the pipe argument declaration of the kernel. Blocking write_pipe calls always return success.
5.5.5.4. Implementing Pipe Reads
Intel® only supports the convenience version of the read_pipe function. By default, read_pipe calls are non-blocking.
Syntax:
int read_pipe (read_only pipe <type> pipe_id, <type> *data);
Where:
- pipe_id identifies the buffer to which the pipe connects, and it must match the pipe_id of the corresponding pipe write operation (write_pipe).
- data is the data that the pipe read operation reads from the pipe. It is a pointer to the location of the data. Note: A read_pipe call might lead to a global or local memory store, depending on the destination address space of the data pointer.
- <type> defines the packet size of the data.
/* Declares the read_only pipe that contains packets of type long */
/* Declares that read_pipe calls within the kernel exhibit blocking behavior */
__kernel void kernel_read_pipe (__global long *dst,
                                read_only pipe long __attribute__((blocking)) p)
{
    for (int i = 0; i < N; i++) {
        /* Reads a long from the pipe and stores it into global memory
           at the specified location */
        read_pipe(p, &dst[i]);
    }
}
To facilitate better hardware implementations, Intel® provides a facility for blocking read_pipe calls: specify the blocking attribute (that is, __attribute__((blocking))) on the pipe argument declaration of the kernel. Blocking read_pipe calls always return success.
5.5.5.5. Implementing Buffered Pipes Using the depth Attribute
You may use a buffered pipe to control data traffic, such as limiting throughput or synchronizing accesses to shared memory. In an unbuffered pipe, a write operation can only proceed when the read operation is expecting to read data. Use unbuffered pipes in conjunction with blocking read and write behaviors in kernels that execute concurrently. Unbuffered pipes provide efficient, self-synchronizing data transfers.
In a buffered pipe, a write operation can only proceed if there is capacity in the pipe to hold the incoming packet. A read operation can only proceed if there is at least one packet in the pipe.
Use buffered pipes if pipe calls are predicated differently in the writer and reader kernels, and the kernels do not execute concurrently.
__kernel void producer (__global int *in_data,
                        write_only pipe int __attribute__((blocking))
                                            __attribute__((depth(10))) c)
{
    for (i = 0; i < N; i++) {
        if (in_data[i]) {
            write_pipe(c, &in_data[i]);
        }
    }
}

__kernel void consumer (__global int *check_data,
                        __global int *out_data,
                        read_only pipe int __attribute__((blocking)) c)
{
    int last_val = 0;
    for (i = 0; i < N; i++) {
        if (check_data[i]) {
            read_pipe(c, &last_val);
        }
        out_data[i] = last_val;
    }
}
In this example, the write operation can write ten data values to the pipe successfully. After the pipe is full, subsequent blocking write operations stall until the consumer kernel reads some of the data from the pipe.
Because the pipe read and write calls are conditional statements, the pipe might experience an imbalance between read and write calls. You may add buffer capacity to the pipe to ensure that the producer and consumer kernels are decoupled. This step is particularly important if the producer kernel is writing data to the pipe when the consumer kernel is not reading from it.
5.5.5.6. Implementing I/O Pipes Using the io Attribute
In the Intel® FPGA SDK for OpenCL™ channels extension, the io("chan_id") attribute specifies the I/O feature of an accelerator board with which a channel interfaces. The chan_id argument is the name of the I/O interface listed in the board_spec.xml file of your Custom Platform. The same I/O features can be used to identify I/O pipes.
Because peripheral interface usage might differ for each device type, consult your board vendor's documentation when you implement I/O pipes in your kernel program. Your OpenCL™ kernel code must be compatible with the type of data generated by the peripheral interfaces. If there is a difference in the byte ordering between the external I/O pipes and the kernel, the Intel® FPGA SDK for OpenCL™ Offline Compiler converts the byte ordering seamlessly upon entry and exit.
- Implicit data dependencies might exist for pipes that connect to the board directly and communicate with peripheral devices via I/O pipes. These implicit data dependencies might lead to compilation issues because the offline compiler cannot identify these dependencies.
- External I/O pipes communicating with the same peripherals do not obey any sequential ordering. Ensure that the external device does not require sequential ordering because unexpected behavior might occur.
- Consult the board_spec.xml file in your Custom Platform to identify the input and output features available on your FPGA board.
For example, a board_spec.xml file might include the following information on I/O features:
<channels>
  <interface name="udp_0" port="udp0_out" type="streamsource" width="256" chan_id="eth0_in"/>
  <interface name="udp_0" port="udp0_in" type="streamsink" width="256" chan_id="eth0_out"/>
  <interface name="udp_0" port="udp1_out" type="streamsource" width="256" chan_id="eth1_in"/>
  <interface name="udp_0" port="udp1_in" type="streamsink" width="256" chan_id="eth1_out"/>
</channels>
The width attribute of an interface element specifies the width, in bits, of the data type used by that pipe. The data type of the pipe must match this width: for example, the uint and float data types are 32 bits wide, so they match a 32-bit interface. Bigger or vectorized data types must match the appropriate bit width specified in the board_spec.xml file.
- Implement the io attribute as demonstrated in the following code example. The io attribute names must match those of the I/O channels (chan_id) specified in the board_spec.xml file.
__kernel void test (pipe uint pkt __attribute__((io("enet"))),
                    pipe float data __attribute__((io("pcie"))));
Attention: Declare a unique io("chan_id") handle for each I/O pipe specified in the channels XML element within the board_spec.xml file.
5.5.5.7. Enforcing the Order of Pipe Calls
When the Intel® FPGA SDK for OpenCL™ Offline Compiler generates a compute unit, it does not create instruction-level parallelism on all instructions that are independent of each other. As a result, pipe read and write operations might not execute independently of each other even if there is no control or data dependence between them. When pipe calls interact with each other, or when pipes write data to external devices, deadlocks might occur.
For example, the code snippet below consists of a producer kernel and a consumer kernel. Pipes c0 and c1 are unbuffered pipes. The pipe read operations from c0 and c1 might be scheduled in the reverse order of the pipe write operations to c0 and c1. That is, the producer kernel writes to c0 first, but the consumer kernel might read from c1 first. This reordering of pipe calls might cause a deadlock because the consumer kernel would be reading from an empty pipe.
__kernel void producer (__global const uint * restrict src,
                        const uint iterations,
                        write_only pipe uint __attribute__((blocking)) c0,
                        write_only pipe uint __attribute__((blocking)) c1)
{
    for (int i = 0; i < iterations; i++) {
        write_pipe(c0, &src[2*i  ]);
        write_pipe(c1, &src[2*i+1]);
    }
}

__kernel void consumer (__global uint * restrict dst,
                        const uint iterations,
                        read_only pipe uint __attribute__((blocking)) c0,
                        read_only pipe uint __attribute__((blocking)) c1)
{
    for (int i = 0; i < iterations; i++) {
        read_pipe(c0, &dst[2*i  ]);
        read_pipe(c1, &dst[2*i+1]);
    }
}
To enforce the order of the pipe calls, insert a mem_fence call with the CLK_CHANNEL_MEM_FENCE flag between the two pipe operations:

__kernel void producer (__global const uint * src,
                        const uint iterations,
                        write_only pipe uint __attribute__((blocking)) c0,
                        write_only pipe uint __attribute__((blocking)) c1)
{
    for (int i = 0; i < iterations; i++) {
        write_pipe(c0, &src[2*i  ]);
        mem_fence(CLK_CHANNEL_MEM_FENCE);
        write_pipe(c1, &src[2*i+1]);
    }
}

__kernel void consumer (__global uint * dst,
                        const uint iterations,
                        read_only pipe uint __attribute__((blocking)) c0,
                        read_only pipe uint __attribute__((blocking)) c1)
{
    for (int i = 0; i < iterations; i++) {
        read_pipe(c0, &dst[2*i  ]);
        mem_fence(CLK_CHANNEL_MEM_FENCE);
        read_pipe(c1, &dst[2*i+1]);
    }
}
In this example, mem_fence in the producer kernel ensures that the pipe write operation to c0 occurs before that to c1. Similarly, mem_fence in the consumer kernel ensures that the pipe read operation from c0 occurs before that from c1.
Defining Memory Consistency Across Kernels When Using Pipes
__kernel void producer (__global const uint * restrict src,
                        const uint iterations,
                        write_only pipe uint __attribute__((blocking)) c0,
                        write_only pipe uint __attribute__((blocking)) c1)
{
    for (int i = 0; i < iterations; i++) {
        write_pipe(c0, &src[2*i]);
        mem_fence(CLK_CHANNEL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE);
        write_pipe(c1, &src[2*i+1]);
    }
}
In this kernel, the mem_fence function ensures that the write operation to c0 and memory access to src[2*i] occur before the write operation to c1 and memory access to src[2*i+1]. This allows data written to c0 to be visible to the read pipe before data is written to c1.
5.5.6. Direct Communication with Kernels via Host Pipes
The extension provides two new values in the flags argument of clCreatePipe to make a pipe host accessible, and adds four new API functions (clReadPipeIntelFPGA, clWritePipeIntelFPGA, clMapHostPipeIntelFPGA, and clUnmapHostPipeIntelFPGA) to allow the host to read from and write to a pipe that was created with host access enabled. A new optional kernel argument attribute is added to specify in the kernel language that the opposing end of a pipe kernel argument is the host program, and consequently that the pipe is not connected to another kernel. A pipe kernel argument is specialized in the kernel definition to connect to either a host pipe or another kernel, and cannot dynamically switch between the two at runtime.
When a pipe kernel argument is marked for host accessibility, the kernel language pipe accessors are restricted to a subset of the OpenCL™ 2.x functions (reservations are not supported), and additional memory consistency and visibility guarantees are made beyond OpenCL synchronization points:
- Data written to a host pipe is eventually made visible on the read side without any OpenCL synchronization point.
- A restriction of our implementation of host pipes is that the platform supports only two host pipes: one for reading and one for writing. Furthermore, the compiler accepts only pipes that are 32 bytes wide, hence the use of ulong4 in the Example Use of the cl_intel_fpga_host_pipe Extension section.
- Host programs using the cl_intel_fpga_host_pipe extension must include the CL/cl_ext_intelfpga.h header file and enable OpenCL 2.0 support, as described in Support Statuses of OpenCL 2.0 Features.
5.5.6.1. Optional intel_host_accessible Kernel Argument Attribute
__attribute__((intel_host_accessible))
5.5.6.2. API Functions for Interacting with cl_mem Pipe Objects Bound to Host-Accessible Pipe Kernel Arguments
- clReadPipeIntelFPGA and clWritePipeIntelFPGA functions operate on single words of the pipe’s width.
- clMapHostPipeIntelFPGA function is an advanced mechanism to reduce latency and overhead when performing many word reads or writes on a host pipe.
- clUnmapHostPipeIntelFPGA function allows the host program to signal to the OpenCL runtime that it has written to or read from either a portion of or the entire mapped region that was created through a previous clMapHostPipeIntelFPGA function call.
The following sections describe the API functions for bound cl_mem objects:
clReadPipeIntelFPGA Function
This function reads a data packet from a pipe with the following characteristics:
- Created with the CL_MEM_HOST_READ_ONLY flag.
- Bound to a kernel argument that has the write_only definition and the intel_host_accessible kernel argument attribute.
Each clReadPipeIntelFPGA function call reads one packet from the pipe. The operation is non-blocking; it does not wait for data to become available in the pipe before returning.
Syntax:
cl_int clReadPipeIntelFPGA (cl_mem pipe, gentype *ptr);
Return Values
The clReadPipeIntelFPGA function returns CL_SUCCESS if the read is successful. Otherwise, it returns one of the following errors:
Return Value | Description |
---|---|
CL_INVALID_MEM_OBJECT | Pipe was not created with the clCreatePipe function, or the CL_MEM_HOST_READ_ONLY flag was not used when creating the pipe. |
CL_INVALID_KERNEL | Pipe is not bound to a kernel argument using the clSetKernelArg function. |
CL_INVALID_VALUE | The ptr attribute is NULL. |
CL_PIPE_EMPTY | Pipe is empty with no data, and therefore there were no valid packets to read. |
clWritePipeIntelFPGA Function
This function writes a data packet to a pipe with the following characteristics:
- Created with the CL_MEM_HOST_WRITE_ONLY flag.
- Bound to a kernel argument that has the read_only definition and the intel_host_accessible kernel argument attribute.
Each clWritePipeIntelFPGA function call writes one packet to the pipe. The operation is non-blocking; it does not wait for capacity to become available in the pipe before returning. A return status of CL_SUCCESS does not imply that data is available to the kernel for reading. Data eventually becomes available for reading by the kernel, assuming that any previously mapped buffers on the host pipe are unmapped.
Syntax:
cl_int clWritePipeIntelFPGA (cl_mem pipe, gentype *ptr);
Return Values
The clWritePipeIntelFPGA function returns CL_SUCCESS if the write is successful. Otherwise, it returns one of the following errors:
Return Value | Description |
---|---|
CL_INVALID_MEM_OBJECT | Pipe was not created with the clCreatePipe function, or the CL_MEM_HOST_WRITE_ONLY flag was not used when creating the pipe. |
CL_INVALID_KERNEL | Pipe is not bound to a kernel argument using the clSetKernelArg function. |
CL_INVALID_VALUE | The ptr attribute is NULL. |
CL_PIPE_FULL | Pipe is already full of data and cannot accept further packets without one being read by a kernel. The packet capacity of the pipe is specified as an argument to the clCreatePipe function. |
clMapHostPipeIntelFPGA Function
This function returns a void * in the host address space. The pipe can write data to this address space if it was created with the CL_MEM_HOST_WRITE_ONLY flag. The pipe can read data from this address space if it was created with the CL_MEM_HOST_READ_ONLY flag.
The mapped_size argument returns the maximum number of bytes of the mapped region that the host can access, as determined by the runtime. The value returned in the mapped_size argument is less than or equal to the value of the requested_size argument that the caller specifies.
After writing to or reading from the returned void *, the host must execute one or more clUnmapHostPipeIntelFPGA function calls to signal to the runtime that data is ready for transfer to the device (on a write) and that the runtime can reclaim the memory for reuse (on a read or write). If the clMapHostPipeIntelFPGA function is called before the clUnmapHostPipeIntelFPGA function unmaps all memory mapped by a previous clMapHostPipeIntelFPGA function call, the buffer returned by the second clMapHostPipeIntelFPGA call does not overlap with that returned by the first call.
Syntax:
void * clMapHostPipeIntelFPGA (cl_mem pipe, cl_map_flags map_flags, size_t requested_size, size_t * mapped_size, cl_int * errcode_ret);
Return Values
The clMapHostPipeIntelFPGA function returns a valid non-NULL address, and CL_SUCCESS is returned in errcode_ret, if the host pipe is mapped successfully. Otherwise, it returns NULL and errcode_ret is set to one of the following values:
Description | Error Value |
---|---|
Pipe is not created with the clCreatePipe function, or the CL_MEM_HOST_READ_ONLY or CL_MEM_HOST_WRITE_ONLY flag is not used when creating the pipe. | CL_INVALID_MEM_OBJECT |
Pipe is not bound to a kernel argument using the clSetKernelArg function. | CL_INVALID_KERNEL |
The requested_size attribute is not a multiple of the packet size that is specified to the clCreatePipe function. | CL_INVALID_VALUE |
The mapped_size attribute is NULL. | |
Runtime cannot provide buffer space for the host to read or write. This may occur when a new mapping is requested before data is transferred to or from the device from a previous clUnmapHostPipeIntelFPGA function call, so attempting to map again later may succeed. | CL_OUT_OF_RESOURCES |
Runtime is unable to allocate the host addressable memory. | CL_OUT_OF_HOST_MEMORY |
clUnmapHostPipeIntelFPGA Function
This function signals to the runtime that the host no longer uses size_to_unmap bytes of a host-addressable buffer that the clMapHostPipeIntelFPGA function has returned previously. In the case of a writeable host pipe, calling the clUnmapHostPipeIntelFPGA function allows the unmapped data to become available to the kernel. If the size_to_unmap value is smaller than the mapped_size value returned by the clMapHostPipeIntelFPGA function, then multiple clUnmapHostPipeIntelFPGA function calls are necessary to unmap the full capacity of the buffer. You may include multiple clUnmapHostPipeIntelFPGA function calls to unmap successive bytes in the buffer returned by a clMapHostPipeIntelFPGA function call, up to the mapped_size value defined by the clMapHostPipeIntelFPGA call.
Syntax:
cl_int clUnmapHostPipeIntelFPGA (cl_mem pipe, void * mapped_ptr, size_t size_to_unmap, size_t * unmapped_size);
Return Values
The clUnmapHostPipeIntelFPGA function returns CL_SUCCESS on successful unmapping. Otherwise, it returns the following error:
Return Value | Description |
---|---|
CL_INVALID_VALUE | The mapped_ptr attribute is not a valid pointer returned by a clMapHostPipeIntelFPGA function call associated with the pipe, or the mapped_ptr has already been fully unmapped. |
| The size_to_unmap attribute is not a multiple of the packet size that was specified to the clCreatePipe function. |
| The size_to_unmap attribute is larger than the remaining unmapped bytes in the mapped_ptr buffer. |
| The unmapped_size attribute is NULL. |
Recommendations for Using Host Pipes Efficiently
To use host pipes efficiently, Intel® recommends the following:
- Use clMapHostPipeIntelFPGA and clUnmapHostPipeIntelFPGA calls that transfer multiple packets rather than single-packet clReadPipeIntelFPGA or clWritePipeIntelFPGA calls.
- Use two threads to map and unmap the pipe simultaneously.
- Ensure that the kernel can read or write data to the host pipe in every clock cycle.
Recommendations for Avoiding Hangs, Stalls or Deadlocks
To avoid hang, stall, or deadlock situations, Intel® recommends the following:
- Packets may be lost when you enqueue kernels from different cl_program objects. Use a single cl_program object in the host application.
- Ensure that the same amount of data is written and read on the kernel and host sides.
5.5.6.3. Creating a Host Accessible Pipe
To enable host access (reading or writing) to pipes, the cl_intel_fpga_host_pipe extension legalizes the following two flags values to clCreatePipe:
- CL_MEM_HOST_READ_ONLY
- CL_MEM_HOST_WRITE_ONLY
When one of these flags is passed to the clCreatePipe function, the corresponding cl_mem object can be passed as the first argument to clReadPipeIntelFPGA and clWritePipeIntelFPGA functions. Throughout the remainder of the cl_intel_fpga_host_pipe extension, such a pipe is referred to as a host pipe.
5.5.6.4. Example Use of the cl_intel_fpga_host_pipe Extension
Kernel Code
#pragma OPENCL EXTENSION cl_intel_fpga_host_pipe : enable

kernel void reader(__attribute__((intel_host_accessible))
                   __read_only pipe ulong4 host_in)
{
    ulong4 val;
    if (read_pipe(host_in, &val)) {
        ....
    }
    ....
}

kernel void writer(__attribute__((intel_host_accessible))
                   __write_only pipe ulong4 device_out)
{
    ulong4 val;
    ....
    if (write_pipe(device_out, &val)) {
        ....
    }
}
Host Code
....
cl_kernel read_kern  = clCreateKernel(program, "reader", NULL);
cl_kernel write_kern = clCreateKernel(program, "writer", NULL);

cl_mem read_pipe = clCreatePipe(context, CL_MEM_HOST_READ_ONLY,
                                sizeof( cl_ulong4 ),
                                128, // Number of packets that can be buffered
                                NULL, &error);

cl_mem write_pipe = clCreatePipe(context, CL_MEM_HOST_WRITE_ONLY,
                                 sizeof( cl_ulong4 ),
                                 64, // Number of packets that can be buffered
                                 NULL, &error);

// Bind pipes to kernels
clSetKernelArg(read_kern, 0, sizeof(cl_mem), (void *)&write_pipe);
clSetKernelArg(write_kern, 0, sizeof(cl_mem), (void *)&read_pipe);

// Enqueue kernels
....

cl_ulong4 val;
if (!clReadPipeIntelFPGA (read_pipe, &val)) {
    cl_int result = clWritePipeIntelFPGA (write_pipe, &val);
    // Check write success/failure and handle
    ....
}
....
5.6. Implementing Arbitrary Precision Integers
Use the Intel® FPGA SDK for OpenCL™ arbitrary precision integer extension to define integers with a custom bit-width. You can define integer custom bit-widths up to and including 64 bits.
Include the ihc_apint.h header file in your kernel code:
#include "ihc_apint.h"
When you compile the kernel, include the header file location on the aoc command line:
aoc <other command options> -I $INTELFPGAOCLSDKROOT/include/kernel_headers <my_kernel_file>
The header defines signed and unsigned arbitrary precision integer type definitions of the form intd_t and uintd_t, where d is the bit width of the type.
int10_t x_signed; uint10_t x_unsigned;
You can declare arbitrary precision integers with widths up to 64 bits.
If you do operations where the bit width of the result is larger than the bit widths of the arguments, you must explicitly cast one of the arguments to the resulting bit width.
int10_t a;
int10_t b;
int20_t res;

res = a * b;
In the example, the compiler attempts to instantiate a multiplier that multiplies two 10-bit integers and puts the result into another 10-bit integer. The result is then sign extended or zero extended up to 20 bits. To instead generate a multiplier with a full 20-bit result, explicitly cast one of the arguments:
res = ((int20_t)a) * b;
When you compile a program for x86-64 platforms, the bit widths for arbitrary precision integers are rounded up to either 32 bits or 64 bits. When you compile a kernel for an FPGA platform, the bit widths are not rounded up and the arbitrary precision integers remain at their declared bit width.
As a result, an operation that appears to work correctly in an x86-64 program can overflow and lose precision when you compile that same operation in an FPGA kernel. The additional precision provided by bit-width rounding on x86-64 platforms masks possible overflow and precision-loss problems you might encounter when you compile your FPGA kernel.
5.7. Using Predefined Preprocessor Macros in Conditional Compilation
To introduce Intel® FPGA SDK for OpenCL™ Offline Compiler version-specific code and optimizations, structure your kernel program in the following manner:
#if INTELFPGA_CL >= 191
    // use new features added in 19.1
#else
    // do things the old way
#endif
Where INTELFPGA_CL is the Intel® predefined preprocessor macro set to the Intel® FPGA SDK for OpenCL™ Offline Compiler version. This macro enables you to version your code based on the compiler version.
5.8. Declaring __constant Address Space Qualifiers
Function Scope __constant Variables
The Intel® FPGA SDK for OpenCL™ Offline Compiler does not support function scope __constant variables. Replace function scope __constant variables with file scope constant variables. You can also replace function scope __constant variables with __constant buffers that the host passes to the kernel.
File Scope __constant Variables
If the host always passes the same constant data to your kernel, consider declaring that data as a constant preinitialized file scope array within the kernel file. Declaration of a constant preinitialized file scope array creates a ROM directly in the hardware to store the data. This ROM is available to all work-items in the NDRange.
For example:
__constant int my_array[8] = {0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7};
__kernel void my_kernel (__global int * my_buffer)
{
size_t gid = get_global_id(0);
my_buffer[gid] += my_array[gid % 8];
}
In this case, the offline compiler sets the values for my_array in a ROM because the file scope constant data does not change between kernel invocations.
Pointers to __constant Parameters from the Host
You can replace file scope constant data with a pointer to a __constant parameter in your kernel code if the data is not fixed across kernel invocations. You must then modify your host application in the following manner:
- Create cl_mem memory objects associated with the pointers in global memory.
- Load constant data into cl_mem objects with clEnqueueWriteBuffer prior to kernel execution.
- Pass the cl_mem objects to the kernel as arguments with the clSetKernelArg function.
For simplicity, if a constant variable is of a complex type, use a typedef argument, as shown in the table below:
If your source code is structured as follows: | Rewrite your code to resemble the following syntax: |
---|---|
__constant int Payoff[2][2] = {{ 1, 3}, {5, 3}}; __kernel void original(__global int * A) { *A = Payoff[1][2]; // and so on } | typedef int Payoff_type[2][2]; __kernel void modified(__global int * A, __constant Payoff_type * PayoffPtr ) { *A = (*PayoffPtr)[1][2]; // and so on } |
5.9. Including Structure Data Types as Arguments in OpenCL Kernels
5.9.1. Matching Data Layouts of Host and Kernel Structure Data Types
To match member data types, use the cl_ version of the data type in your host application that corresponds to the data type in the kernel code. The cl_ version of the data type is available in the opencl.h header file. For example, if you have a data member of type float4 in your kernel code, the corresponding data member you declare in the host application is cl_float4.
Align the structures and align the struct data members between the host and kernel applications. Manage the alignments carefully because of the variability among different host compilers.
For example, if you have float4 OpenCL data types in the struct, the alignments of these data items must satisfy the OpenCL specification (that is, 16-byte alignment for float4).
The following rules apply when the Intel® FPGA SDK for OpenCL™ Offline Compiler compiles your OpenCL kernels:
- Alignment of built-in scalar and vector types follows the rules outlined in Section 6.1.5 of the OpenCL Specification version 1.0. The offline compiler usually aligns a data type based on its size. However, the compiler aligns a value of a three-element vector the same way it aligns a four-element vector.
- An array has the same alignment as one of its elements.
- A struct (or a union) has the same alignment as the maximum alignment necessary for any of its data members.
Consider the following example:
struct my_struct
{
    char data[3];
    float4 f4;
    int index;
};
The offline compiler aligns the struct elements above at 16-byte boundaries because of the float4 data type. As a result, both data and index also have 16-byte alignment boundaries.
- The offline compiler does not reorder data members of a struct.
- Normally, the offline compiler inserts a minimum amount of data structure padding between data members of a struct to satisfy the alignment requirements for each data member.
- In your OpenCL kernel code, you may specify data packing (that is, no insertion of data structure padding) by applying the packed attribute to the struct declaration. If you impose data packing, ensure that the alignment of data members satisfies the OpenCL alignment requirements. The Intel® FPGA SDK for OpenCL™ does not enforce these alignment requirements. Ensure that your host compiler respects the kernel attribute and sets the appropriate alignments.
- In your OpenCL kernel code, you may specify the amount of data structure padding by applying the aligned(N) attribute to a data member, where N is the amount of padding. The SDK does not enforce these alignment requirements. Ensure that your host compiler respects the kernel attribute and sets the appropriate alignments.
For Windows systems, some versions of the Microsoft Visual Studio compiler pack structure data types by default. If you do not want to apply data packing, specify an amount of data structure padding as shown below:
struct my_struct
{
    __declspec(align(16)) char data[3];
    /* Note that cl_float4 is the only known float4 definition on the host */
    __declspec(align(16)) cl_float4 f4;
    __declspec(align(16)) int index;
};
Tip: An alternative way of adding data structure padding is to insert dummy struct members of type char or array of char.
5.9.2. Disabling Insertion of Data Structure Padding
struct __attribute__((packed)) Context
{
    float param1;
    float param2;
    int param3;
    uint param4;
};

__kernel void algorithm(__global float * restrict A,
                        __global struct Context * restrict c)
{
    if ( c->param3 ) {
        // Dereference through a pointer and so on
    }
}
5.9.3. Specifying the Alignment of a Struct
struct __attribute__((aligned(2))) Context
{
    float param1;
    float param2;
    int param3;
    uint param4;
};

__kernel void algorithm(__global float * A,
                        __global struct Context * restrict c)
{
    if ( c->param3 ) {
        // Dereference through a pointer and so on
    }
}
5.10. Inferring a Register
The offline compiler infers private arrays as registers either as single values or in a piecewise fashion. Piecewise implementation results in very efficient hardware; however, the offline compiler must be able to determine data accesses statically. To facilitate piecewise implementation, hardcode the access points into the array. You can also facilitate register inference by unrolling loops that access the array.
If array accesses are not inferable statically, the offline compiler might infer the array as registers. However, the offline compiler limits the size of these arrays to 64 bytes in length for single work-item kernels. There is effectively no size limit for kernels with multiple work-items.
Consider the following code example:
int array[SIZE];

for (int j = 0; j < N; ++j) {
    for (int i = 0; i < SIZE - 1; ++i) {
        array[i] = array[i + 1];
    }
}
The indexing into array[i] is not inferable statically because the loop is not unrolled. If the size of array[SIZE] is less than or equal to 64 bytes for single work-item kernels, the offline compiler implements array[SIZE] into registers as a single value. If the size of array[SIZE] is greater than 64 bytes for single work-item kernels, the offline compiler implements the entire array in block RAMs. For multiple work-item kernels, the offline compiler implements array[SIZE] into registers as a single value provided that its size is less than 1 kilobyte (KB).
5.10.1. Inferring a Shift Register
Consider the following code example:
channel int in, out;

#define SIZE 512 // Shift register size must be statically determinable

__kernel void foo()
{
    int shift_reg[SIZE]; // The key is that the array size is a compile-time constant

    // Initialization loop
    #pragma unroll
    for (int i = 0; i < SIZE; i++) {
        // All elements of the array should be initialized to the same value
        shift_reg[i] = 0;
    }

    while (1) {
        // Fully unrolling the shifting loop produces constant accesses
        #pragma unroll
        for (int j = 0; j < SIZE - 1; j++) {
            shift_reg[j] = shift_reg[j + 1];
        }
        shift_reg[SIZE - 1] = read_channel_intel(in);

        // Using fixed access points of the shift register
        int res = (shift_reg[0] + shift_reg[1]) / 2;

        // 'out' channel will have running average of the input channel
        write_channel_intel(out, res);
    }
}
In each clock cycle, the kernel shifts a new value into the array. By placing this shift register into a block RAM, the Intel® FPGA SDK for OpenCL™ Offline Compiler can efficiently handle multiple access points into the array. The shift register design pattern is ideal for implementing filters (for example, image filters like a Sobel filter or time-delay filters like a finite impulse response (FIR) filter).
When implementing a shift register in your kernel code, keep in mind the following key points:
- Unroll the shifting loop so that it can access every element of the array.
- All access points must have constant data accesses. For example, if you write a calculation in nested loops using multiple access points, unroll these loops to establish the constant access points.
- Initialize all elements of the array to the same value. Alternatively, you may leave the elements uninitialized if you do not require a specific initial value.
- If some accesses to a large array are not inferable statically, they force the offline compiler to create inefficient hardware. If these accesses are necessary, use __local memory instead of __private memory.
- Do not shift a large shift register conditionally. The shifting must occur in every iteration of the loop that contains the shifting code to avoid creating inefficient hardware.
5.11. Enabling Double Precision Floating-Point Operations
Before declaring any double precision floating-point data type in your OpenCL kernel, include the following OPENCL EXTENSION pragma in your kernel code:
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
5.12. Single-Cycle Floating-Point Accumulator for Single Work-Item Kernels
The offline compiler supports an accumulator that adds or subtracts a value. To use this feature, describe the accumulation in a way that allows the offline compiler to infer the accumulator.
- The accumulator is only available on Intel® Arria® 10 devices.
- The accumulator must be part of a loop.
- The accumulator must have an initial value of 0.
- The accumulator cannot be conditional.
Below are examples of a description that results in the correct inference of the accumulator by the offline compiler.
channel float4 RANDOM_STREAM;

__kernel void acc_test(__global float *a, int k) {
    // Simplest example of an accumulator.
    // In this loop, the accumulator acc is incremented by 5.
    int i;
    float acc = 0.0f;
    for (i = 0; i < k; i++) {
        acc += 5;
    }
    a[0] = acc;
}

__kernel void acc_test2(__global float *a, int k) {
    // Extended example showing that an accumulator can be
    // conditionally incremented. The key here is to describe the increment
    // as conditional, not the accumulation itself.
    int i;
    float acc = 0.0f;
    for (i = 0; i < k; i++) {
        acc += ((i < 30) ? 5 : 0);
    }
    a[0] = acc;
}

__kernel void acc_test3(__global float *a, int k) {
    // A more complex case where the accumulator is fed
    // by a dot product.
    int i;
    float acc = 0.0f;
    for (i = 0; i < k; i++) {
        float4 v = read_channel_intel(RANDOM_STREAM);
        float x1 = v.x;
        float x2 = v.y;
        float y1 = v.z;
        float y2 = v.w;
        acc += (x1 * y1 + x2 * y2);
    }
    a[0] = acc;
}

__kernel void loader(__global float *a, int k) {
    int i;
    float4 my_val = 0;
    for (i = 0; i < k; i++) {
        if ((i % 4) == 0)
            write_channel_intel(RANDOM_STREAM, my_val);
        if ((i % 4) == 0)
            my_val.x = a[i];
        if ((i % 4) == 1)
            my_val.y = a[i];
        if ((i % 4) == 2)
            my_val.z = a[i];
        if ((i % 4) == 3)
            my_val.w = a[i];
    }
}
5.12.1. Programming Strategies for Inferring the Accumulator
Describing an Accumulator Using Multiple Loops
Consider a case where you want to describe an accumulator using multiple loops, with some of the loops being unrolled:
float acc = 0.0f;
for (i = 0; i < k; i++) {
    #pragma unroll
    for (j = 0; j < 16; j++)
        acc += (x[i + j] * y[i + j]);
}
In this situation, it is important to compile the kernel with the -ffp-reassoc Intel® FPGA SDK for OpenCL™ Offline Compiler command option to enable the offline compiler to rearrange the operations in a way that exposes the accumulation. If you do not compile the kernel with -ffp-reassoc, the resulting accumulator structure will have a high initiation interval (II). II is the number of cycles between launching successive loop iterations. The higher the II value, the longer the accumulator structure must wait before it can process the next loop iteration.
Modifying a Multi-Loop Accumulator Description
In cases where you cannot compile an accumulator description using the -ffp-reassoc offline compiler command option, rewrite the code to expose the accumulation.
For the code example above, rewrite it in the following manner:
float acc = 0.0f;
for (i = 0; i < k; i++) {
    float my_dot = 0.0f;
    #pragma unroll
    for (j = 0; j < 16; j++)
        my_dot += (x[i + j] * y[i + j]);
    acc += my_dot;
}
Modifying an Accumulator Description Containing a Variable or Non-Zero Initial Value
Consider a situation where you might want to apply an offset to a description of an accumulator that begins with a non-zero value:
float acc = array[0];
for (i = 0; i < k; i++) {
    acc += x[i];
}
Because the accumulator hardware does not support variable or non-zero initial values in a description, you must rewrite the description.
float acc = 0.0f;
for (i = 0; i < k; i++) {
    acc += x[i];
}
acc += array[0];
Rewriting the description in this manner enables the kernel to use an accumulator in the loop; the accumulated value is then incremented by array[0] once, after the loop.
5.13. Integer Promotion Rules
- If both operands are of standard integer type (for example char or short), integers are promoted following the C/C++ standard. That is, the operation is carried out in the data type and size of the largest operand, with at least 32 bits. The expression returns the result in that larger data type.
- If both operands are intX_t data types, operations are carried out in the largest intX_t data type even if that data type is smaller than 32 bits. The expression returns the result in that type.
- If the expression has one standard data type and one intX_t data type, the rules for intX_t data type promotion apply. The resulting expression type is always an intX_t data type. For example, if the largest data type is a standard integer type short, the resulting data type is an int16_t.
- In C/C++, literals are by default an int data type, so when you use a literal without any casting, the expression type is always at least 32 bits. For example, if you have code as shown in the following snippet, the comparison is carried out in 32 bits:
int5_t ap;
...
if (ap < 4) {
    ...
- If operands are of different signedness and the unsigned type is at least as large as the other type, the operation is carried out as an unsigned operation. Otherwise, the unsigned operand is converted to a signed value.
For example, if you have code as shown in the following snippet, -1 expands to a 32-bit negative number (0xffffffff) while the uint3_t ends up as the positive 32-bit number 7 (0x00000007), which are not equal.
uint3_t x = 7;
if (x != -1) {
    // FAIL
}
6. Designing Your Host Application
- Host Programming Requirements
  When designing your OpenCL host application for use with the Intel® FPGA SDK for OpenCL™, ensure that the application satisfies the following host programming requirements.
- Allocating OpenCL Buffers for Manual Partitioning of Global Memory
  Manual partitioning of global memory buffers allows you to control memory accesses across buffers to maximize the memory bandwidth. You can partition buffers across interfaces of the same memory type or across interfaces of different memory types.
- Triggering Collection of Profiling Data During Kernel Execution
- Accessing Custom Platform-Specific Functions
  To reference Custom Platform-specific user-accessible functions while linking to the FCD, include the clGetBoardExtensionFunctionAddressIntelFPGA extension in your host application.
- Modifying Host Program for Structure Parameter Conversion
  If you convert any structure parameters to pointers-to-constant structures in your OpenCL™ kernel, you must modify your host application accordingly.
- Managing Host Application
  The Intel® FPGA SDK for OpenCL™ includes utility commands you can invoke to obtain information on flags and libraries necessary for compiling and linking your host application.
- Allocating Shared Memory for OpenCL Kernels Targeting SoCs
  Intel® recommends that OpenCL™ kernels that run on Intel® SoCs access shared memory instead of the FPGA DDR memory.
- Sharing Multiple Devices Across Multiple Host Programs
  In a system with multiple FPGA devices, each device appears as a separate cl_device_id object in the OpenCL host API. You can query various device properties using the clGetDeviceInfo function. Based on the properties, you can select devices you want to use in your program.
6.1. Host Programming Requirements
6.1.1. Host Machine Memory Requirements
The host machine must support the following components:
- The host application and operating system.
- The working set for the host application.
- The maximum amount of OpenCL™ memory buffers that can be allocated at once. Every device-side cl_mem buffer is associated with a corresponding storage area in the host process. Therefore, the amount of host memory necessary might be as large as the amount of external memory supported by the FPGA.
6.1.2. Host Binary Requirement
6.1.3. Multiple Host Threads
All OpenCL APIs are thread safe except the clSetKernelArg function.
It is safe to call clSetKernelArg from any host thread or in a reentrant way as long as concurrent calls to clSetKernelArg operate on different cl_kernel objects.
6.1.4. Out-of-order Command Queues
6.1.5. Requirement for Multiple Command Queues to Execute Kernels Concurrently
A single in-order command queue dispatches only one operation for execution at a time; subsequent operations are not dispatched until the previous operation is fully complete. Thus, to execute kernels within the same OpenCL program object concurrently, instantiate a separate command queue for each kernel you want to run concurrently.
Similarly, multiple in-order command queues are also required to concurrently execute different transfers (clEnqueueReadBuffer or clEnqueueWriteBuffer), including executing transfers concurrently with kernels. For example, to execute kernel A concurrently with kernel B as well as concurrently with a clEnqueueWriteBuffer transfer, you must create three command queues and enqueue each of the operations in a separate queue. Achieving this in a steady state leads to maximum utilization of the FPGA device.
Out-of-order command queues may also be used to launch buffer writes, buffer reads, and kernel executions concurrently. Events enqueued into an out-of-order queue that have no dependencies on one another are scheduled to execute concurrently, provided they are not blocked by other dependencies.
6.2. Allocating OpenCL Buffers for Manual Partitioning of Global Memory
Manual partitioning of global memory buffers allows you to control memory accesses across buffers to maximize the memory bandwidth. You can partition buffers across interfaces of the same memory type or across interfaces of different memory types.
6.2.1. Partitioning Buffers Across Multiple Interfaces of the Same Memory Type
The figure below illustrates the differences between burst-interleaved and non-interleaved memory partitions.
To manually partition some or all of the available global memory types, perform the following tasks:
- Compile your OpenCL kernel using the -no-interleaving=<global_memory_type> flag to configure the memory bank(s) of the specified memory type as separate addresses. For more information about the use of the -no-interleaving=<global_memory_type> flag, refer to the Disabling Burst-Interleaving of Global Memory (-no-interleaving=<global_memory_type>) section.
- Create an OpenCL buffer in your host application, and allocate the buffer to one of the banks using the CL_CHANNEL flags.
- Specify CL_CHANNEL_1_INTELFPGA to allocate the buffer to the lowest available memory region.
- Specify CL_CHANNEL_2_INTELFPGA to allocate the buffer to the second bank (if available).
Attention: Allocate each buffer to a single memory bank only. If the second bank is not available at runtime, the memory is allocated to the first bank. If no global memory is available, the clCreateBuffer call fails with the error message CL_MEM_OBJECT_ALLOCATION_FAILURE.
6.2.2. Partitioning Buffers Across Different Memory Types (Heterogeneous Memory)
To use the heterogeneous memory, modify the code in your .cl file as follows:
- Determine the names of the global memory types available on your FPGA board in one of the following ways:
- Refer to the board vendor's documentation for more information.
- Find the names in the board_spec.xml file of your board Custom Platform. For each global memory type, the name is the unique string assigned to the name attribute of the global_mem element.
- To instruct the host to allocate a buffer to a specific global memory type, insert the buffer_location("<memory_type>") attribute, where <memory_type> is the name of the global memory type provided by your board vendor. For example:
__kernel void foo(__global __attribute__((buffer_location("DDR"))) int *x, __global __attribute__((buffer_location("QDR"))) int *y)
If you do not specify the buffer_location attribute, the host allocates the buffer to the default memory type automatically. To determine the default memory type, consult the documentation provided by your board vendor. Alternatively, in the board_spec.xml file of your Custom Platform, search for the memory type that is defined first or has the attribute default=1 assigned to it.

Intel® recommends that you define the buffer_location attribute in a preprocessor macro for ease of reuse, as follows:

#define QDR __global __attribute__((buffer_location("QDR")))
#define DDR __global __attribute__((buffer_location("DDR")))

__kernel void foo(QDR uint *data, DDR uint *lup) {
    //statements
}
Attention: If you assign a kernel argument to a non-default memory (for example, QDR uint * data and DDR uint * lup from the code above), you cannot declare that argument using the constant keyword. In addition, you cannot perform atomic operations with pointers derived from that argument.
By default, the host allocates buffers into the main memory when you load kernels into the OpenCL runtime via the clCreateProgramWithBinary function. During kernel invocation, the host automatically relocates heterogeneous memory buffers that are bound to kernel arguments to the main memory.
- To avoid the initial allocation of heterogeneous memory buffers in the main memory, include the CL_MEM_HETEROGENEOUS_INTELFPGA flag when you call the clCreateBuffer function. Also, bind the cl_mem buffer to the argument that uses the buffer_location attribute using clSetKernelArg before doing any reads or writes from that buffer, as follows:
mem = clCreateBuffer(context, flags | CL_MEM_HETEROGENEOUS_INTELFPGA,
                     memSize, NULL, &errNum);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &mem);
clEnqueueWriteBuffer(queue, mem, CL_FALSE, 0, N,
                     host_ptr /* pointer to source data in host memory */,
                     0, NULL, &write_event);
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_work_size, NULL,
                       0, NULL, &kernel_event);
For example, the following clCreateBuffer call allocates memory into the lowest available memory region of a nondefault memory bank:
mem = clCreateBuffer(context, (CL_MEM_HETEROGENEOUS_INTELFPGA|CL_CHANNEL_1_INTELFPGA), memSize, NULL, &errNum);
Note: Host programs using CL_MEM_HETEROGENEOUS_INTELFPGA and CL_CHANNEL_*_INTELFPGA flags must include the CL/cl_ext_intelfpga.h header file.

The clCreateBuffer call allocates memory into a certain global memory type based on what you specify in the kernel argument. If a memory (cl_mem) object residing in one memory type is set as a kernel argument that corresponds to a different memory technology, the host moves the memory object automatically when it queues the kernel. Do not pass a buffer as a kernel argument that associates it with multiple memory technologies.
For more information about optimizing heterogeneous global memory accesses, refer to the Heterogeneous Memory Buffers and the Manual Partitioning of Global Memory sections of the Intel® FPGA SDK for OpenCL™ Best Practices Guide.
6.2.3. Creating a Pipe Object in Your Host Application
An SDK-specific pipe object is not a true OpenCL pipe object as described in the OpenCL Specification version 2.0. This implementation allows you to migrate away from Intel® FPGA products with a conformant solution. The SDK-specific pipe object is a memory object (cl_mem); however, the host does not allocate any memory for the pipe itself.
The following clCreatePipe host API creates a pipe object:
cl_mem clCreatePipe(
    cl_context context,
    cl_mem_flags flags,
    cl_uint pipe_packet_size,
    cl_uint pipe_max_packets,
    const cl_pipe_properties *properties,
    cl_int *errcode_ret)
For more information about the clCreatePipe host API function, refer to section 5.4.1 of the OpenCL Specification version 2.0.
Below is an example syntax of the clCreatePipe host API function:
cl_int status;
cl_mem c0_pipe = clCreatePipe(context, 0, sizeof(int), 1, NULL, &status);

status = clSetKernelArg(kernel, 1, sizeof(cl_mem), &c0_pipe);
6.2.4. Enabling All Global Memory
6.3. Triggering Collection of Profiling Data During Kernel Execution
The Intel® FPGA dynamic profiler for OpenCL™ can be used to collect performance information from the hardware when the design is executed. For instructions about how to add the profiler to your hardware design and how to view the collected data, refer to Chapter 5 of Intel® FPGA SDK for OpenCL™ Best Practices Guide.
In cases where kernel execution finishes after the host application completes and temporal profiling is disabled, you can query the FPGA explicitly to collect profile data during kernel execution.
extern CL_API_ENTRY cl_int CL_API_CALL clGetProfileInfoIntelFPGA(cl_event);
where cl_event is the kernel event. The kernel event you pass to this host library call must be the same one you pass to the clEnqueueNDRangeKernel call.
- If kernel execution completes before the invocation of clGetProfileInfoIntelFPGA, the function returns an event error message.
- Host programs that use clGetProfileInfoIntelFPGA and clGetProfileDataDeviceIntelFPGA function calls must include the CL/cl_ext_intelfpga.h header file.
int main() {
    ...
    clEnqueueNDRangeKernel(queue, kernel, ..., NULL);
    ...
    clEnqueueNDRangeKernel(queue, kernel, ..., NULL);
    ...
}
This host application runs on the assumption that a kernel launches twice and then completes. In the profile.mon file, there are two sets of profile data, one for each kernel invocation. To collect profile data while the kernel is running, modify the host code in the following manner:
int main() {
    ...
    clEnqueueNDRangeKernel(queue, kernel, ..., &event);
    // Get the profile data before the kernel completes
    clGetProfileInfoIntelFPGA(event);
    // Wait until the kernel completes
    clFinish(queue);
    ...
    clEnqueueNDRangeKernel(queue, kernel, ..., NULL);
    ...
}
The call to clGetProfileInfoIntelFPGA adds a new entry in the profile.mon file.
6.3.1. Profiling Autorun Kernels
Unlike enqueued kernels that automatically generate profiler data on completion (if the compiler flag is set), autorun kernels never complete. Hence, you must explicitly indicate when to profile kernels by calling the clGetProfileDataDeviceIntelFPGA host library call. All profiler data is output to a profile.mon file. Data collected by the host library call is a snapshot of the autorun profile data.
Following is the code snippet of the clGetProfileDataDeviceIntelFPGA host library call:
cl_int clGetProfileDataDeviceIntelFPGA(
    cl_device_id device_id,
    cl_program program,
    cl_bool read_enqueue_kernels,
    cl_bool read_auto_enqueued,
    cl_bool clear_counters_after_readback,
    size_t param_value_size,
    void *param_value,
    size_t *param_value_size_ret,
    cl_int *errcode_ret);
where:
- The read_enqueue_kernels parameter profiles enqueued kernels. In this release, this parameter has no effect.
- The read_auto_enqueued parameter profiles autorun kernels.
- The following parameters are placeholders for future releases:
- clear_counters_after_readback
- param_value_size
- param_value
- param_value_size_ret
- errcode_ret
The clGetProfileDataDeviceIntelFPGA host library call returns CL_SUCCESS on success. Otherwise, it returns one of the following errors:
- CL_INVALID_DEVICE if the device is not a valid device.
- CL_INVALID_PROGRAM if the program is not a valid program.
| Profiling mode | read_auto_enqueued |
|---|---|
| Profile only enqueued kernels. Note: Automatically outputs profile information once the execution is completed. | False |
| Profile only autorun kernels | True |
| Profile both enqueued and autorun kernels | True |
6.3.1.1. Multiple Autorun Profiling Calls
6.3.2. Profile Data Acquisition
Pausing data acquisition is not synchronized exactly across all kernels. The skew between halting profile data acquisition across kernels is dependent on the communication link with the device, driver overhead, and congestion on communication buses. Exact synchronized snapshotting of profile data between kernels should not be relied upon.
6.4. Accessing Custom Platform-Specific Functions
The clGetBoardExtensionFunctionAddressIntelFPGA extension specifies an API that retrieves a pointer to a user-accessible function from the Custom Platform.
Definitions of the extension interfaces are available in the INTELFPGAOCLSDKROOT/host/include/CL/cl_ext_intelfpga.h file.
void *clGetBoardExtensionFunctionAddressIntelFPGA(
    const char *function_name,
    cl_device_id device);
Where function_name is the name of the user-accessible function that your Custom Platform vendor provides, and device is the device ID returned by the clGetDeviceIDs function.
To access the clGetBoardExtensionFunctionAddressIntelFPGA API via the Installable Client Driver (ICD), ensure that the ICD extension API clGetExtensionFunctionAddressIntelFPGA retrieves the pointer to the clGetBoardExtensionFunctionAddressIntelFPGA API first.
The following code example shows how you can access the Custom Platform-specific function via ICD:
clGetBoardExtensionFunctionAddressIntelFPGA_fn clGetBoardExtensionFunctionAddressIntelFPGA =
    (clGetBoardExtensionFunctionAddressIntelFPGA_fn)
    clGetExtensionFunctionAddressForPlatform(platform,
        "clGetBoardExtensionFunctionAddressIntelFPGA");

if (clGetBoardExtensionFunctionAddressIntelFPGA == NULL) {
    printf("Failed to get clGetBoardExtensionFunctionAddressIntelFPGA\n");
}

void *board_extension_function_ptr =
    clGetBoardExtensionFunctionAddressIntelFPGA("function_name", device_id);
6.5. Modifying Host Program for Structure Parameter Conversion
Perform the following changes to your host application:
- Allocate a cl_mem buffer to store the structure contents.
  Attention: You need a separate cl_mem buffer for every kernel that uses a different structure value.
- Set the structure kernel argument with a pointer to the structure buffer, not with a pointer to the structure contents.
- Populate the structure buffer contents before queuing the kernel. Perform one of the following steps to ensure that the structure buffer is populated before the kernel launches:
- Queue the structure buffer on the same command queue as the kernel queue.
- Synchronize separate kernel queues and structure buffer queues with an event.
- When your application no longer needs to call a kernel that uses the structure buffer, release the cl_mem buffer.
6.6. Managing Host Application
For Linux systems, if you debug your host application using the GNU Project Debugger (GDB), invoke the following command prior to running the host application:
handle SIG44 nostop
Without this command, the GDB debugging process terminates with the following error message:
Program received signal SIG44, Real-time event 44.
6.6.1. Displaying Example Makefile Fragments (example-makefile or makefile)
The following are example Makefile fragments for compiling and linking a host program against the host runtime libraries included with the Intel® FPGA SDK for OpenCL™.
Example GNU makefile on Linux, with GCC toolchain:
AOCL_COMPILE_CONFIG=$(shell aocl compile-config)
AOCL_LINK_CONFIG=$(shell aocl link-config)

host_prog : host_prog.o
	g++ -o host_prog host_prog.o $(AOCL_LINK_CONFIG)

host_prog.o : host_prog.cpp
	g++ -c host_prog.cpp $(AOCL_COMPILE_CONFIG)
Example GNU makefile on Windows, with Microsoft Visual C++ command line compiler:
AOCL_COMPILE_CONFIG=$(shell aocl compile-config)
AOCL_LINK_CONFIG=$(shell aocl link-config)

host_prog.exe : host_prog.obj
	link -nologo /OUT:host_prog.exe host_prog.obj $(AOCL_LINK_CONFIG)

host_prog.obj : host_prog.cpp
	cl /MD /Fohost_prog.obj -c host_prog.cpp $(AOCL_COMPILE_CONFIG)
Example GNU makefile cross-compiling to ARM SoC from Linux or Windows, with Linaro GCC cross-compiler toolchain:
CROSS-COMPILER=arm-linux-gnueabihf-
AOCL_COMPILE_CONFIG=$(shell aocl compile-config --arm)
AOCL_LINK_CONFIG=$(shell aocl link-config --arm)

host_prog : host_prog.o
	$(CROSS-COMPILER)g++ -o host_prog host_prog.o $(AOCL_LINK_CONFIG)

host_prog.o : host_prog.cpp
	$(CROSS-COMPILER)g++ -c host_prog.cpp $(AOCL_COMPILE_CONFIG)
6.6.2. Compiling and Linking Your Host Application
- Linking Your Host Application to the Khronos ICD Loader Library
The Intel® FPGA SDK for OpenCL™ supports the OpenCL ICD extension from the Khronos Group™. - Displaying Flags for Compiling Host Application (compile-config)
To display a list of flags necessary for compiling a host application, invoke the compile-config utility command. - Displaying Paths to OpenCL Host Runtime and MMD Libraries (ldflags)
To display the paths necessary for linking a host application to the OpenCL host runtime and MMD libraries, invoke the ldflags utility command. - Listing OpenCL Host Runtime and MMD Libraries (ldlibs)
To display the names of the OpenCL host runtime and MMD libraries necessary for linking a host application, invoke the ldlibs utility command. - Displaying Information on OpenCL Host Runtime and MMD Libraries (link-config or linkflags)
To display a list of flags necessary for linking a host application with OpenCL host runtime and MMD libraries, invoke the link-config or linkflags utility command.
6.6.2.1. Linking Your Host Application to the Khronos ICD Loader Library
In addition to the SDK's host runtime libraries, Intel® supplies a version of the ICD Loader Library that supports the OpenCL Specification version 1.0 and the implemented APIs from the OpenCL Specification versions 1.1, 1.2, and 2.0. To use an ICD library from another vendor, consult the vendor's documentation on how to link to their ICD library.
Before linking your OpenCL host application to the ICD Loader Library, you must also set up the FCD for loading the board MMD libraries. If you have not set up the FCD yet, refer to Managing an FPGA Board for more information.
Ensure that you have set up both ICD and FCD correctly. You can verify this by using the aocl diagnose -icd-only utility command, which populates the corresponding ICD/FCD entries and verifies that the libraries are registered in the system.
- If the output of the aocl diagnose utility displays ICD diagnostics PASSED, then when you build your host application, the host application automatically gets linked with the ICD Loader Libraries.
- If the aocl diagnose utility fails to detect the ICD, follow these steps to verify the ICD setup:
  - For Windows systems, open regedit with administrator privileges and go to the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors. The value of Name should be alteracl_icd.dll. You can find this dynamic library file in <INTELFPGAOCLSDKROOT>/host/windows64/bin. The Type should be DWORD, and the Data should be 00000000. For example, the registry key should resemble the following:
    [HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors] "alteracl_icd.dll"=dword:00000000
  - For Linux systems, ensure that the file /etc/OpenCL/vendors/Altera.icd exists in the system and contains the text libalteracl.so.
- If the aocl diagnose utility fails to detect the FCD, follow these steps to check the FCD setup:
  - For Windows systems, check the libraries in the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Intel\OpenCL\Boards to determine if you have chosen to install for all users. Otherwise, check the registry key HKEY_CURRENT_USER\SOFTWARE\Intel\OpenCL\Boards. The value of Name should be <path_to_the_mmd_library>, and the Data should be a DWORD that is set to 0. For example, the registry key should resemble the following:
    [HKEY_LOCAL_MACHINE\SOFTWARE\Intel\OpenCL\Boards] "c:\board_vendor a\my_board_mmd.dll"=dword:00000000
  - For Linux systems, ensure that the file /opt/Intel/OpenCL/Boards/my_board.fcd exists in the system and contains the name of the vendor-specific libraries (for example, /data/board_vendor_a/libmy_board_mmd.so).
    Attention:
    - If your board vendor provides multiple libraries, they might have to load in a particular order. Consult your board vendor to determine the correct loading order, and list the libraries in the registry in that order.
    - For Intel® Arria® 10 SoC boards, when you build the SD flash card image for your Custom Platform, create an Altera.icd file containing the text libalteracl.so. Store the Altera.icd file in the /etc/OpenCL/vendors directory of your Custom Platform. Refer to Building the Software and SD Card Image for the Intel® Arria® 10 SoC Development Kit Reference Platform for more information.
6.6.2.2. Displaying Flags for Compiling Host Application (compile-config)
- At a command prompt, invoke the aocl compile-config utility command. The software displays the path to the folder or directory in which the OpenCL™ API header files reside. For example:
- For Windows systems, the path is -I%INTELFPGAOCLSDKROOT%/host/include
- For Linux systems, the path is -I$INTELFPGAOCLSDKROOT/host/include
where INTELFPGAOCLSDKROOT points to the location of the software installation.
- Add this path to your C preprocessor.
6.6.2.3. Displaying Paths to OpenCL Host Runtime and MMD Libraries (ldflags)
6.6.2.4. Listing OpenCL Host Runtime and MMD Libraries (ldlibs)
The software lists the OpenCL host runtime libraries residing in the INTELFPGAOCLSDKROOT/host/<OS_platform>/lib directory. It also lists the Custom Platform-specific MMD libraries residing in the /<board_family_name>/<OS_platform>/lib directory of your Custom Platform.
If you set up FCD correctly, the software does not list the MMD libraries.
- For Windows systems, the output is OpenCL.lib
- For Linux systems, the output is -lOpenCL.
6.6.2.5. Displaying Information on OpenCL Host Runtime and MMD Libraries (link-config or linkflags)
- For Windows systems, the output is /libpath:%INTELFPGAOCLSDKROOT%/host/windows64/lib OpenCL.lib.
- For Linux systems, the output is -L$INTELFPGAOCLSDKROOT/host/[linux64|arm32]/lib -lOpenCL
6.6.3. Using OpenCL ICD Extension APIs
Consider the following example code snippet:
extern CL_API_ENTRY cl_int CL_API_CALL clGetProfileDataDeviceIntelFPGA(
    cl_device_id /* device_id */,
    cl_program /* program */,
    cl_bool /* read_enqueue_kernels */,
    cl_bool /* read_auto_enqueued */,
    cl_bool /* clear_counters_after_readback */,
    size_t /* param_value_size */,
    void * /* param_value */,
    size_t * /* param_value_size_ret */,
    cl_int * /* errcode_ret */);
Replace the following function call:
cl_int status = clGetProfileDataDeviceIntelFPGA (device, program, false, true, false, 0, NULL, NULL, NULL);
with code using the following syntax to define and load the function pointer:
typedef cl_int (*clGetProfileDataDevice_fn)(
    cl_device_id, cl_program, cl_bool, cl_bool, cl_bool,
    size_t, void *, size_t *, cl_int *);

clGetProfileDataDevice_fn get_profile_data_ptr =
    (clGetProfileDataDevice_fn)
    clGetExtensionFunctionAddressForPlatform(platform,
        "clGetProfileDataDeviceIntelFPGA");
and use the function pointer as the function call:
cl_int status = (get_profile_data_ptr) (device, program, false, true, false, 0, NULL, NULL, NULL);
6.6.4. Programming an FPGA via the Host
- Compile your OpenCL kernel with the offline compiler to create the .aocx file.
- Include the clCreateProgramWithBinary function in your host application to create the cl_program OpenCL program objects from the .aocx file.
- Include the clBuildProgram function in your host application to create the program executable for the specified device.
Below is an example host code on using clCreateProgramWithBinary to program an FPGA device:
size_t lengths[1];
unsigned char *binaries[1] = {NULL};
cl_int status[1];
cl_int error;
cl_program program;
const char options[] = "";

FILE *fp = fopen("program.aocx", "rb");
fseek(fp, 0, SEEK_END);
lengths[0] = ftell(fp);
binaries[0] = (unsigned char *)malloc(sizeof(unsigned char) * lengths[0]);
rewind(fp);
fread(binaries[0], lengths[0], 1, fp);
fclose(fp);

program = clCreateProgramWithBinary(context, 1, device_list, lengths,
                                    (const unsigned char **)binaries,
                                    status, &error);
clBuildProgram(program, 1, device_list, options, NULL, NULL);
If the clBuildProgram function executes successfully, it returns CL_SUCCESS.
- Create kernel objects from the program executable using the clCreateKernelsInProgram or clCreateKernel function.
- Include the kernel execution function to instruct the host runtime to execute the scheduled kernel(s) on the FPGA.
- To enqueue a command to execute an NDRange kernel, use clEnqueueNDRangeKernel.
- To enqueue a single work-item kernel, use clEnqueueTask.
Attention: Intel® recommends that you release an event object when it is not in use. The SDK keeps an event object live until you explicitly instruct it to release the event object. Keeping an unused event object live causes unnecessary memory usage.
To release an event object, call the clReleaseEvent function.
You can load multiple FPGA programs into memory, which the host then uses to reprogram the FPGA as required.
6.6.4.1. Programming Multiple FPGA Devices
Linking your host application to FCD allows you to target multiple FPGA devices from different Custom Platforms. However, this feature has limited support for Custom Platforms that are compatible with SDK versions prior to 16.1.
You can present up to 128 FPGA devices to your system in the following manner:
- Multiple FPGA accelerator boards, each consisting of a single FPGA.
- Multiple FPGAs on a single accelerator board that connects to the host system via a PCIe® switch.
- Combinations of the above.
The host runtime can load kernels onto each of the FPGA devices, and the devices can then operate in parallel.
Probing the OpenCL FPGA Devices
- To query a list of FPGA devices installed in your machine, invoke the aocl diagnose command.
- To direct the host to identify the number of OpenCL FPGA devices, add the following lines of code to your host application:
//Get the platform
ciErrNum = clGetPlatformIDs(1, &cpPlatform, NULL);

//Get the devices
ciErrNum = clGetDeviceIDs(cpPlatform, CL_DEVICE_TYPE_ALL, 0, NULL,
                          &ciDeviceCount);
cdDevices = (cl_device_id *)malloc(ciDeviceCount * sizeof(cl_device_id));
ciErrNum = clGetDeviceIDs(cpPlatform, CL_DEVICE_TYPE_ALL, ciDeviceCount,
                          cdDevices, NULL);
Querying Device Information
char buf[1024];
for (unsigned i = 0; i < ciDeviceCount; i++) {
    clGetDeviceInfo(cdDevices[i], CL_DEVICE_NAME, 1023, buf, 0);
    printf("Device %d: '%s'\n", i, buf);
}
Device <N>: <board_name>: <name_of_FPGA_board>
Where:
- <N> is the device number.
- <board_name> is the board designation you use to target your FPGA device when you invoke the aoc command.
- <name_of_FPGA_board> is the advertised name of the FPGA board.
For example, if you have two identical FPGA boards on your system, the host generates an output that resembles the following:
Device 0: board_1: Stratix V FPGA Board
Device 1: board_1: Stratix V FPGA Board
Loading Kernels for Multiple FPGA Devices
The following host code demonstrates the usage of the clCreateProgramWithBinary and createMultiDeviceProgram functions to program multiple FPGA devices:
cl_program createMultiDeviceProgram(cl_context context,
                                    const cl_device_id *device_list,
                                    cl_uint num_devices,
                                    const char *aocx_name);

// Utility function for loading a file into a binary string
unsigned char *load_file(const char *filename, size_t *size_ret) {
    FILE *fp = fopen(filename, "rb");
    fseek(fp, 0, SEEK_END);
    size_t len = ftell(fp);
    unsigned char *result = (unsigned char *)malloc(sizeof(unsigned char) * len);
    rewind(fp);
    fread(result, len, 1, fp);
    fclose(fp);
    *size_ret = len;
    return result;
}

// Create a program that is compiled for the devices in device_list
cl_program createMultiDeviceProgram(cl_context context,
                                    const cl_device_id *device_list,
                                    cl_uint num_devices,
                                    const char *aocx_name) {
    printf("creating multi device program %s for %d devices\n",
           aocx_name, num_devices);
    const unsigned char **binaries =
        (const unsigned char **)malloc(num_devices * sizeof(unsigned char *));
    size_t *lengths = (size_t *)malloc(num_devices * sizeof(size_t));
    cl_int err;
    for (cl_uint i = 0; i < num_devices; i++) {
        binaries[i] = load_file(aocx_name, &lengths[i]);
        if (!binaries[i]) {
            printf("couldn't load %s\n", aocx_name);
            exit(-1);
        }
    }
    cl_program p = clCreateProgramWithBinary(context, num_devices, device_list,
                                             lengths, binaries, NULL, &err);
    free(lengths);
    free(binaries);
    if (err != CL_SUCCESS) {
        printf("Program Create Error\n");
    }
    return p;
}

// main program
int main() {
    // Normal OpenCL setup
    ...
    program = createMultiDeviceProgram(context, device_list, num_devices,
                                       "program.aocx");
    clBuildProgram(program, num_devices, device_list, options, NULL, NULL);
}
6.6.5. Termination of the Runtime Environment and Error Recovery
The runtime environment is a library that is compiled as part of the host application. When the host application terminates, the runtime environment also terminates, along with any tracking activity that it performs. If you restart the host application, a new runtime environment and its associated tracking activities are initialized. The initialization functions reset the kernel's hardware state.
In some cases, unexpected termination of the host application leaves the configuration of certain hardware (for example, PCIe® hard IP) incomplete. To restore the configuration of this hardware, the host must reprogram the FPGA.
If you use a Custom Platform that implements customized hardware blocks, be aware that restarting the host application and resetting these blocks might have design implications:
- When the host application calls the clGetPlatformIDs function, all kernels and channels are reset for all available devices.
- When the host application calls the clGetPlatformIDs function, it resets FIFO buffers and channels as it resets the device.
- The host application initializes memory buffers via the clCreateBuffer and clEnqueueWriteBuffer function calls. You cannot access the contents of buffers from a previous host execution within a new host execution.
6.7. Allocating Shared Memory for OpenCL Kernels Targeting SoCs
- Mark the shared buffers between kernels as volatile to ensure that buffer modification by one kernel is visible to the other kernel.
- To access shared memory, you only need to modify the host code. Modifications to the kernel code are unnecessary.
- You cannot use the library function malloc or the operator new to allocate physically shared memory. Also, the CL_MEM_USE_HOST_PTR flag does not work with shared memory.
In DDR memory, shared memory must be physically contiguous. The FPGA cannot consume virtually contiguous memory without a scatter-gather direct memory access (SG-DMA) controller core. The malloc function and the new operator are for accessing memory that is virtually contiguous.
- CPU caching is disabled for the shared memory.
- When you use shared memory, one copy of the data is used for both the host and the kernel. When this memory is used, OpenCL memory calls are done as zero-copy transfers for buffer reads, buffer writes, maps, and unmaps.
- To allocate and access shared memory, structure your host code in a manner similar to the following example:
cl_mem src = clCreateBuffer(…, CL_MEM_ALLOC_HOST_PTR, size, …);
int *src_ptr = (int *)clEnqueueMapBuffer(…, src, size, …);
*src_ptr = input_value;             // host writes to ptr directly
clSetKernelArg(…, src);
clEnqueueNDRangeKernel(…);
clFinish(…);
printf("Result = %d\n", *dst_ptr);  // result is available immediately
clEnqueueUnmapMemObject(…, src, src_ptr, …);
clReleaseMemObject(src);            // actually frees physical memory
You can include the CONFIG_CMA_SIZE_MBYTES kernel configuration option to control the maximum total amount of shared memory available for allocation. In practice, the total amount of allocated shared memory is smaller than the value of CONFIG_CMA_SIZE_MBYTES.
Important:
- If your target board has multiple DDR memory banks, the clCreateBuffer(..., CL_MEM_READ_WRITE, ...) function allocates memory to the nonshared DDR memory banks. However, if the FPGA has access to a single DDR bank that is shared memory, then clCreateBuffer(..., CL_MEM_READ_WRITE, ...) allocates to shared memory, similar to using the CL_MEM_ALLOC_HOST_PTR flag.
- The shared memory that you request with the clCreateBuffer(..., CL_MEM_ALLOC_HOST_PTR, size, ...) function is allocated in the Linux OpenCL kernel driver, and it relies on the contiguous memory allocator (CMA) feature of the Linux kernel. For detailed information on enabling and configuring the CMA, refer to the Recompiling the Linux Kernel for the Intel® Arria® 10 SoC Development Kit and Compiling and Installing the OpenCL Linux Kernel Driver sections of the Intel® FPGA SDK for OpenCL™ Intel® Arria® 10 SoC Development Kit Reference Platform Porting Guide .
- To transfer data from shared hard processor system (HPS) DDR to FPGA DDR efficiently, include a kernel that performs the memcpy function, as shown below:
__attribute__((num_simd_work_items(8)))
__kernel void mem_stream(__global uint *src, __global uint *dst) {
    size_t gid = get_global_id(0);
    dst[gid] = src[gid];
}
Attention: Allocate the src pointer in the HPS DDR as shared memory using the CL_MEM_ALLOC_HOST_PTR flag.
- If the host allocates constant memory to the shared HPS DDR system and then modifies it after kernel execution, the modifications might not take effect. As a result, subsequent kernel executions might use outdated data. To prevent kernel execution from using outdated constant memory, perform one of the following tasks:
- Do not modify constant memory after its initialization.
- Create multiple constant memory buffers if you require multiple __constant data sets.
- If available, allocate constant memory to the FPGA DDR on your accelerator board.
6.8. Sharing Multiple Devices Across Multiple Host Programs
You can then pass these devices to the clCreateContext function, where they are locked by that OpenCL program until the program either calls the clReleaseContext function or terminates.
Multiple processes or multiple trusted users can arbitrate between devices in a multi-device system using this locking mechanism. If users have decided ahead of time which device (by name) each person uses, they can use the clGetDeviceInfo function to select the cl_device_id with the name assigned to a given user. To arbitrate more dynamically when each of the N users wants Di devices, use the following scheme:
- Each user queries the clGetDeviceIDs function to obtain a list of devices.
- Each user chooses Di devices (ideally randomly to minimize collisions) and passes those to the clCreateContext function.
It is possible that during step 2, another user has already called the clCreateContext function with the same device, in which case the clCreateContext function call fails. The user should then repeat step 2 (optionally changing the device selection) until it succeeds.
Consider the following example code snippet:
do {
    for (i = num_devices - 1; i >= 0; i--) {
        context = clCreateContext(0, 1, &(device_ids[i]), NULL, NULL, &status);
        if (status != CL_SUCCESS) {
            printf("Failed to get context with %d (error: %d), waiting\n",
                   i, status);
            sleep(1);
        } else {
            device_id = device_ids[i];
            break;
        }
    }
} while (status != CL_SUCCESS); // Exit this loop only when we've succeeded in creating a context
7. Compiling Your OpenCL Kernel
Before you compile an OpenCL™ kernel, verify that the QUARTUS_ROOTDIR_OVERRIDE environment variable points to the Intel® Quartus® Prime Pro Edition software.
7.1. Compiling Your Kernel to Create Hardware Configuration File
Intel® recommends that you use this one-step compilation strategy under the following circumstances:
- After you optimize your kernel via the Intel® FPGA SDK for OpenCL™ design flow, and you are now ready to create the .aocx file for deployment onto the FPGA.
- You have one or more simple kernels that do not require any optimization.
To compile the kernel and generate the .aocx file in one step, invoke the aoc <your_kernel_filename1>.cl [<your_kernel_filename2>.cl ...] command.
Where [ <your_kernel_filename2>.cl ...] are the optional space-delimited file names of kernels that you can compile in addition to <your_kernel_filename1>.cl.
The Intel® FPGA SDK for OpenCL™ Offline Compiler groups the .cl files into a temporary file. It then compiles this file to generate the .aocx file.
7.2. Compiling Your Kernel without Building Hardware (-c)
- A .aoco file for each .cl kernel source file. The offline compiler creates the .aoco file(s) in a matter of seconds to minutes.
7.3. Compiling and Linking Your Kernels or Object Files without Building Hardware (-rtl)
- To compile one or more kernel source files, at a command prompt, invoke the aoc -rtl <your_kernel_filename1>.cl [<your_kernel_filename2>.cl ...] command.
Where [<your_kernel_filename2>.cl ...] are the optional space-delimited file names of kernels that you can compile in addition to <your_kernel_filename1>.cl.
When you invoke the aoc command with the -rtl flag, the offline compiler compiles the kernels and creates the following files and directories:
- An intermediate .aoco file for each .cl kernel source file. These files are not preserved unless you specify the -save-temps aoc command option. The offline compiler then links them and generates a .aocr file. It takes the offline compiler a matter of seconds to minutes to create the .aoco files or the .aocr file.
- A <your_kernel_filename> folder or subdirectory. It contains intermediate files that the SDK uses to build the hardware configuration file necessary for FPGA programming.
- To compile one or more .aoco object files, at a command prompt, invoke the aoc -rtl <your_kernel_filename1>.aoco [<your_kernel_filename2>.aoco ...] command.
Where [<your_kernel_filename2>.aoco ...] are the optional space-delimited file names of object files that you can compile in addition to <your_kernel_filename1>.aoco.
When you invoke the aoc command with the -rtl flag, the offline compiler creates the following files and directories:
- The offline compiler links all the .aoco files and generates a .aocr file.
- A <your_kernel_filename> folder or subdirectory. It contains intermediate files that the SDK uses to build the hardware configuration file necessary for FPGA programming.
7.4. Specifying the Location of Header Files (-I=<directory>)
If the header files are in the same directory as your kernel, you do not need to include the -I=<directory> option in your aoc command. The offline compiler automatically searches the current folder or directory for header files.
For Windows systems, ensure that your include path does not contain any trailing slashes. The offline compiler considers a trailing forward slash (/) or backward slash (\) as illegal.
The offline compiler generates an error message if you invoke the aoc command in the following manner:
aoc -I=<drive>\<folder>\<subfolder>\ <your_kernel_filename>.cl
or
aoc -I=<drive>/<folder>/<subfolder>/ <your_kernel_filename>.cl
The correct way to specify the include path is as follows:
aoc -I=<drive>\<folder>\<subfolder> <your_kernel_filename>.cl
or
aoc -I=<drive>/<folder>/<subfolder> <your_kernel_filename>.cl
7.5. Specifying the Name of an Intel FPGA SDK for OpenCL Offline Compiler Output File (-o <filename>)
- If you implement the multistep compilation flow, specify the names of the output files in the following manner:
- To specify the name of the .aocr file that the offline compiler creates during an intermediate compilation step, invoke the aoc -rtl -o <your_object_filename>.aocr <your_kernel_filename>.cl -save-temps command.
- To specify the name of the .aocx file that the offline compiler creates during the final compilation step, invoke the aoc -o <your_executable_filename>.aocx <your_object_filename>.aocr command.
- If you implement the one-step compilation flow, specify the name of the .aocx file by invoking the aoc -o <your_executable_filename>.aocx <your_kernel_filename>.cl command.
7.6. Compiling a Kernel for a Specific FPGA Board and Custom Platform (-board=<board_name>) and (-board-package=<board_package_path>)
When you compile your kernel by including the -board=<board_name> option in the aoc command, the Intel® FPGA SDK for OpenCL™ Offline Compiler defines the preprocessor macro AOCL_BOARD_<board_name> to be 1, which allows you to compile device-optimized code in your kernel.
- To obtain the names of the available FPGA boards in your Custom Platform, invoke the aoc -list-boards command.
For example, the offline compiler generates the following output:
Board List: FPGA_board_1
where FPGA_board_1 is the <board_name>.
You can also list out all the available FPGA boards from a specific Custom Platform. Include the -board-package=<custom_platform_path> option in the aoc command. At the command prompt, invoke the following command:
aoc -board-package=<custom_platform_path> -list-boards
The Intel® FPGA SDK for OpenCL™ Offline Compiler lists the available boards within the specific Custom Platform.
- To compile your OpenCL kernel for FPGA_board_1, invoke the aoc -board=FPGA_board_1 <your_kernel_filename>.cl command.
The offline compiler defines the preprocessor macro AOCL_BOARD_FPGA_board_1 to be 1 and compiles kernel code that targets FPGA_board_1.
- If there are multiple Custom Platforms (board packages) installed, you can compile your kernel with the board variant from a specific Custom Platform by including the -board-package=<custom_platform_path> option with -board=<board_name>. At the command prompt, invoke the following command:
aoc -board-package=<custom_platform_path> -board=<board_name>
The Intel® FPGA SDK for OpenCL™ Offline Compiler compiles the kernel with the board specified in the <custom_platform_path>.
- To list the Custom Platforms available in the system, include the -list-board-packages option in the aoc command. At a command prompt, invoke the aoc -list-board-packages command. The Intel® FPGA SDK for OpenCL™ Offline Compiler generates an output that resembles the following:
Installed board packages: <board_package_1> ...
Where <board_package_N> is the board package of the Custom Platform installed in your system or shipped within the Intel® FPGA SDK for OpenCL™ .
To readily identify compiled kernel files that target a specific FPGA board, Intel® recommends that you rename the kernel binaries by including the -o option in the aoc command.
- To target your kernel to FPGA_board_1 in the one-step compilation flow, invoke the following command:
aoc -board=FPGA_board_1 <your_kernel_filename>.cl -o <your_executable_filename>_FPGA_board_1.aocx
- To target your kernel to FPGA_board_1 in the multistep compilation flow, perform the following tasks:
- Invoke the following command to generate the .aocr file:
aoc -rtl -board=FPGA_board_1 <your_kernel_filename>.cl -o <my_object_filename>_FPGA_board_1.aocr -save-temps
- Invoke the following command to generate the .aocx file:
aoc -board=FPGA_board_1 <your_object_filename>_FPGA_board_1.aocr -o <your_executable_filename>_FPGA_board_1.aocx
- If you have an accelerator board consisting of two FPGAs, each FPGA device has an equivalent "board" name (for example, board_fpga_1 and board_fpga_2). To target kernel_1.cl to board_fpga_1 and kernel_2.cl to board_fpga_2, invoke the following commands:
aoc -board=board_fpga_1 kernel_1.cl
aoc -board=board_fpga_2 kernel_2.cl
7.7. Resolving Hardware Generation Fitting Errors during Kernel Compilation (-high-effort)
When kernel compilation fails because of a fitting constraint problem, the Intel® FPGA SDK for OpenCL™ Offline Compiler displays the following error message:
Error: Kernel fit error, recommend using -high-effort.
Error: Cannot fit kernel(s) on device
After you invoke the aoc command with the -high-effort option, the offline compiler displays the following message:
High-effort hardware generation selected, compile time may increase significantly.
The offline compiler makes three attempts to recompile your kernel and generate hardware. Modify your kernel if compilation still fails after the -high-effort attempt.
7.8. Specifying Schedule Fmax Target for Kernels (-clock=<clock_target>)
You can use one or both of the following options to specify the kernel-specific fmax target:
- By using the __attribute__((scheduler_target_fmax_mhz(__x))) source-level attribute.
- By directing the Intel® FPGA SDK for OpenCL™ Offline Compiler to globally compile all kernels with -clock=<clock target in Hz/KHz/MHz/GHz or s/ms/us/ns/ps> option in the aoc command.
kernel void k1() {
    ...
}

__attribute__((scheduler_target_fmax_mhz(200)))
kernel void k2() {
    ...
}
If you direct the offline compiler to compile the above code with -clock=300MHz in the aoc command, the compiler schedules kernel k1 at 300 MHz and kernel k2 at 200 MHz.
7.9. Defining Preprocessor Macros to Specify Kernel Parameters (-D<macro_name>)
- To pass a preprocessor macro definition to the offline compiler, invoke the aoc -D <macro_name> <kernel_filename>.cl command.
- To override the existing value of a defined preprocessor macro, invoke the aoc -D <macro_name>=<value> <kernel_filename>.cl command.
Consider the following code snippet for the kernel sum:
#ifndef UNROLL_FACTOR
#define UNROLL_FACTOR 1
#endif

__kernel void sum(__global const int * restrict x,
                  __global int * restrict sum) {
    int accum = 0;
    #pragma unroll UNROLL_FACTOR
    for (size_t i = 0; i < 4; i++) {
        accum += x[i + get_global_id(0) * 4];
    }
    sum[get_global_id(0)] = accum;
}
To override the UNROLL_FACTOR of 1 and set it to 4, invoke the aoc -DUNROLL_FACTOR=4 sum.cl command. Invoking this command is equivalent to replacing the line #define UNROLL_FACTOR 1 with #define UNROLL_FACTOR 4 in the sum kernel source code.
- To use preprocessor macros to control how the offline compiler optimizes your kernel without modifying your kernel source code, invoke the aoc -o <hardware_filename>.aocx -D <macro_name>=<value> <kernel_filename>.cl command.
Where:
-o is the offline compiler option you use to specify the name of the .aocx file that the offline compiler generates.
<hardware_filename> is the name of the .aocx file that the offline compiler generates using the preprocessor macro value you specify.
Tip: To preserve the results from both compilations on your file system, compile your kernels as separate binaries by using the -o flag of the aoc command.
For example, if you want to compile the same kernel multiple times with required work-group sizes of 64 and 128, you can define a WORK_GROUP_SIZE preprocessor macro for the kernel attribute reqd_work_group_size, as shown below:
__attribute__((reqd_work_group_size(WORK_GROUP_SIZE,1,1)))
__kernel void myKernel(...) {
    for (size_t i = 0; i < 1024; i++) {
        // statements
    }
}
Compile the kernel multiple times by typing the following commands:
aoc -o myKernel_64.aocx -DWORK_GROUP_SIZE=64 myKernel.cl
aoc -o myKernel_128.aocx -DWORK_GROUP_SIZE=128 myKernel.cl
7.10. Generating Compilation Progress Report (-v)
- To direct the offline compiler to report on the progress of a full compilation, invoke the aoc -v <your_kernel_filename>.cl command.
The offline compiler generates a compilation progress report similar to the following example:
aoc: Environment checks are completed successfully. You are now compiling the full flow!!
aoc: Selected target board a10gx
aoc: Running OpenCL parser....
aoc: OpenCL parser completed successfully.
aoc: Compiling....
aoc: Linking with IP library ...
aoc: First stage compilation completed successfully.
aoc: Setting up project for CvP revision flow....
aoc: Hardware generation completed successfully.
- To direct the offline compiler to report on the progress of an intermediate compilation step that does not build hardware, invoke the aoc -rtl -v <your_kernel_filename>.cl command.
The offline compiler generates a compilation progress report similar to the following example:
aoc: Environment checks are completed successfully.
aoc: Selected target board a10gx
aoc: Running OpenCL parser....
aoc: OpenCL parser completed successfully.
aoc: Compiling....
aoc: Linking with IP library ...
aoc: First stage compilation completed successfully.
aoc: To compile this project, run "aoc <your_kernel_filename>.aoco"
- To direct the offline compiler to report on the progress of a compilation for emulation, invoke the aoc -march=emulator -v <your_kernel_filename>.cl command.
The offline compiler generates a compilation progress report similar to the following example:
aoc: Environment checks are completed successfully. You are now compiling the full flow!!
aoc: Selected target board a10gx
aoc: Running OpenCL parser....
aoc: OpenCL parser completed successfully.
aoc: Compiling for Emulation ....
aoc: Emulator Compilation completed successfully.
Emulator flow is successful.
7.11. Displaying the Estimated Resource Usage Summary On-Screen (-report)
You can review the estimated resource usage summary without performing a full compilation. To review the summary on-screen prior to generating the hardware configuration file, include the -rtl option in your aoc command.
+----------------------------------------+---------------------------+
; Estimated Resource Usage Summary                                   ;
+----------------------------------------+---------------------------+
; Resource                               + Usage                     ;
+----------------------------------------+---------------------------+
; Logic utilization                      ;   35%                     ;
; ALUTs                                  ;   22%                     ;
; Dedicated logic registers              ;   15%                     ;
; Memory blocks                          ;   29%                     ;
; DSP blocks                             ;    0%                     ;
+----------------------------------------+---------------------------+
7.12. Suppressing Warning Messages from the Intel FPGA SDK for OpenCL Offline Compiler (-W)
7.13. Converting Warning Messages from the Intel FPGA SDK for OpenCL Offline Compiler into Error Messages (-Werror)
7.14. Removing Debug Data from Compiler Reports and Source Code from the .aocx File (-g0)
7.15. Disabling Burst-Interleaving of Global Memory (-no-interleaving=<global_memory_type>)
- To direct the offline compiler to disable burst-interleaving for the default global memory, invoke the aoc <your_kernel_filename>.cl -no-interleaving=default command.
Your accelerator board might include multiple global memory types. To identify the default global memory type, refer to the board vendor's documentation for your Custom Platform.
- For a heterogeneous memory system, to direct the offline compiler to disable burst-interleaving of a specific global memory type, perform the following tasks:
- Consult the board_spec.xml file of your Custom Platform for the names of the available global memory types (for example, DDR and quad data rate (QDR)).
- To disable burst-interleaving for one of the memory types (for example, DDR), invoke the aoc <your_kernel_filename>.cl -no-interleaving=DDR command.
The offline compiler enables manual partitioning for the DDR memory bank, and configures the other memory bank in a burst-interleaved fashion.
- To disable burst-interleaving for more than one type of global memory buffer, include a -no-interleaving=<global_memory_type> option for each global memory type.
For example, to disable burst-interleaving for both DDR and QDR, invoke the aoc <your_kernel_filename>.cl -no-interleaving=DDR -no-interleaving=QDR command.
7.16. Forcing Ring Interconnect for Global Memory (-global-ring)
To override the compiler's choice and force a ring topology, use the -global-ring option in your aoc command. This can improve your kernel fmax.
Example: aoc -global-ring <your_kernel_filename>.cl
7.17. Forcing a Single Store Ring to Reduce Area at the Expense of Write Throughput to Global Memory (-force-single-store-ring)
When the Intel® FPGA SDK for OpenCL™ Offline Compiler implements a ring topology for the global memory interconnect (either by automatic choice or by forcing the ring through -global-ring), it widens the interconnect by default to allow more writes to occur in parallel. This allows for the saturation of global memory throughput using write-only traffic. The -force-single-store-ring option allows you to save area if you do not require that much write bandwidth.
Example: aoc -force-single-store-ring <your_kernel_filename>.cl
7.18. Forcing Fewer Read Data Reorder Units to Reduce Area at the Expense of Read Throughput to Global Memory (-num-reorder)
When the Intel® FPGA SDK for OpenCL™ Offline Compiler implements a ring topology for the global memory interconnect (either by automatic choice or by forcing the ring through -global-ring), it widens the interconnect by default to allow more reads to occur in parallel. This allows for the saturation of global memory throughput using read-only traffic. For example, if on a two-bank BSP you require only one bank's worth of read bandwidth, set -num-reorder=1.
Example: aoc -num-reorder=1 <your_kernel_filename>.cl
7.19. Configuring Constant Memory Cache Size (-const-cache-bytes=<N>)
The default constant cache size is 16 kB.
7.20. Relaxing the Order of Floating-Point Operations (-ffp-reassoc)
This flag turns on the same optimizations as the deprecated -fp-relaxed flag for all instructions, unless denoted otherwise by the fp reassoc pragma. For information about the fp reassoc pragma, refer to Floating Point Optimizations (fp contract and fp reassoc Pragma).
To direct the offline compiler to execute a balanced tree hardware implementation, invoke the aoc -ffp-reassoc <your_kernel_filename>.cl command.
7.21. Reducing Floating-Point Rounding Operations (-ffp-contract=fast)
This flag turns on the same optimizations as the deprecated -fpc flag for all instructions, unless denoted otherwise by the fp contract pragma. For information about the fp contract pragma, refer to Floating Point Optimizations (fp contract and fp reassoc Pragma).
7.22. Speeding Up Your OpenCL Compilation (-fast-compile)
The -fast-compile feature achieves significant savings in compilation time by lowering optimization efforts.
At the command prompt, invoke the aoc -rtl <your_kernel_filename1>.cl -fast-compile command.
Enabling the -fast-compile feature might cause some performance issues such as:
- Higher resource use
- Lower fmax and as a result lower application performance
- Lower power efficiency
Intel® recommends that you use the -fast-compile option for internal development only.
- You can only use the -fast-compile compiler option to compile OpenCL designs targeting Intel® Arria® 10 and newer devices.
- After you finalize a design, compile your OpenCL* kernel without the -fast-compile option over multiple seeds to obtain the best performance.
- Regardless of whether the -fast-compile feature is enabled, the initial compilation of any OpenCL system on a new board and with a new version of Intel® FPGA SDK for OpenCL™ Pro Edition takes an additional 45 to 60 minutes to complete. The additional time is used to cache some parts of the compilation for future compilations (this behavior does not affect kernel performance). To create this cache, define the environment variable $AOCL_TMP_DIR to a writable directory that you can share. By default, this cache is stored in /var/tmp/aocl/$USER on Linux and %USERPROFILE%\AppData\Local\aocl on Windows. You can share this writable directory by setting it to a shared network location.
After you create the cache, you do not need to create it again for the current version of the Intel® FPGA SDK for OpenCL™ and the current targeted board.
7.23. Compiling Your Kernel Incrementally (-incremental)
If you have a large, multi-kernel system and you only want to modify a single kernel, the Intel® FPGA SDK for OpenCL™ Offline Compiler can reuse the results from a previous compilation and only synthesize, place, and route the kernel(s) that you have modified. Leveraging this incremental compilation feature allows you to dramatically reduce compilation time.
Example incremental compilation flow:
aoc -incremental <your_kernel_filename>.cl
/*****Update kernels in your OpenCL design*****/
aoc -incremental -fast-compile <your_kernel_filename>.cl
- Perform an initial setup compilation in a clean directory, with the incremental mode enabled, by invoking the aoc -incremental <your_kernel_filename>.cl command.
Note: You must enable the -incremental flag when performing the setup compilation.
This setup compilation does not reuse any results from a previous compilation. When performing a setup compilation, do not include the -fast-compile offline compiler command option in the aoc command because it increases the probability of encountering errors in future incremental compilations.
Tip: Intel® recommends that you perform a fresh setup compilation whenever compilation time is not a concern because it reduces the probability of compilation failures in future incremental compilations. Performing many consecutive incremental compilations increases the probability of compilation failures. It also decreases the hardware performance and efficiency of the generated .aocx file.
- Modify the kernels in your OpenCL design.
Your design may contain multiple .cl files.
- Perform an incremental compilation on your design. For optimal compilation speed, also include the -fast-compile flag in your aoc command:
aoc -incremental -fast-compile <your_kernel_filename>.cl
- Review the Incremental compile section of the report.html file to verify the changes that the offline compiler has detected.
The report.html file is in the <your_kernel_filename>/reports directory.
7.23.1. The Incremental Compile Report
The Incremental compile report provides the following metrics on your OpenCL design:
- The <%> of design not preserved metric at the bottom of the report provides a quick summary of the overall changes to your design. It is the best predictor of compilation time.
Note:The FPGA resources listed in the Incremental compile report are calculated based on the estimated area models that the Intel® FPGA SDK for OpenCL™ Offline Compiler produces. The area numbers represent an estimate of the area usage in a standard (that is, non-incremental) compilation. You can use these numbers to gauge the area your design consumes in a standard compilation.
The FPGA resource information might not fully match the final area in the Intel® Quartus® Prime Pro Edition software compilation reports.
7.23.2. Additional Command Options for Incremental Compilation
Grouping Multiple Kernels into Partitions (-incremental-grouping=<filename>)
By default, the Intel® FPGA SDK for OpenCL™ Offline Compiler places each kernel in your design into a separate partition during incremental compilation. You have the option to group multiple kernels into a single partition by including the -incremental-grouping=<partition_filename> command option in your aoc command. In general, compilation speed is faster if your design contains fewer partitions.
If your grouped kernels perform many load and store operations, you can speed up compilation further by also including the -incremental=aggressive option in your aoc command.
The partition file that you pass to the -incremental-grouping option is a plain text file. Each line in the file specifies a new partition containing a semi-colon (;)-delimited list of kernel names. For example, the following lines in a partition file specify three partitions, each containing four kernels:
reader0;reader1;reader2;reader3
accum0;accum1;accum2;accum3
writer0;writer1;writer2;writer3
Compiling a Design in Aggressive Mode (-incremental=aggressive)
To increase the speed of an incremental compilation at the expense of area usage and throughput, include the -incremental=aggressive command option in your aoc command.
This feature is especially effective when the kernels in your design perform load and store operations to many buffers, or when you have grouped multiple kernels together using the -incremental-grouping command option.
Example: aoc -incremental=aggressive -incremental-grouping=<partition_filename> <your_kernel_filename>.cl
- Enabling the aggressive mode might result in throughput degradations that are larger than what the Fmax degradation indicates.
- For each OpenCL design, avoid changing the compilation mode between incremental compilations. If you compile your design in aggressive mode, enable aggressive mode for all subsequent incremental compilations that you perform on this design. Each time you switch the incremental compilation mode, compilation takes longer to complete.
Specifying a Custom Input Directory (-incremental-input-dir=<path_to_directory>)
During incremental compilation, the offline compiler creates a default <your_kernel_filename> project directory in the current working directory to store intermediate compilation files. To base your incremental compilation on a nondefault project directory, specify the directory by including the -incremental-input-dir=<path_to_directory> command option in your aoc command.
You must include the -incremental-input-dir option if you compile your design in one or both of the following scenarios:
- You run the aoc command from a different working directory than the previous compilation.
- You included the -o <filename> command option in the previous compilation.
Consider the following example where there is a mykernel.cl file in the initial working directory and another revision of the same mykernel.cl file in the new_rev subdirectory:
aoc -incremental mykernel.cl
cd new_rev
aoc -incremental -fast-compile mykernel.cl -incremental-input-dir=../mykernel
In this scenario, the offline compiler reuses the files in the mykernel project directory from the first compilation as the basis for the second compilation. The offline compiler creates a new_rev/mykernel project directory for the second compilation without modifying any file in the original mykernel directory.
The -incremental-input-dir command option is useful if multiple developers share the same incremental setup compilation. Each developer can run subsequent incremental compilations in their own workspace without overwriting other developers' compilation results.
Disabling Automatic Retry (-incremental-flow=no-retry)
By default, the offline compiler automatically retries a failed incremental compilation by performing a second compilation without preserving any partitions. This second compilation takes longer to complete because it recompiles the entire design.
To disable the offline compiler's automatic retry mechanism, include the -incremental-flow=no-retry command option in your aoc command. If you enable this feature, the offline compiler does not perform another incremental compilation after the first attempt fails. In addition, the offline compiler does not generate a .aocx file.
Enabling this feature allows you to implement your own failure mitigation strategies such as:
- Compiling multiple seeds in parallel to increase the probability of at least one compilation succeeding without retrying.
- Executing a non-incremental fast compilation instead of an incremental fast compilation (that is, aoc -fast-compile <your_kernel_filename>.cl).
7.23.3. Limitations of the Incremental Compilation Feature
In addition to device support, the incremental compilation has the following limitations:
- You will experience area, Fmax, and power degradations when you enable the incremental compilation feature (-incremental) or the fast compilation feature (-fast-compile), or both.
- In congested designs, incremental compilations can experience severe (that is, 25% or more) Fmax reductions compared to the initial setup compilation. If the Fmax reduction is unacceptable, perform a non-incremental fast compilation instead to reduce the amount of Fmax degradation while preserving some of the savings in compilation time.
- The offline compiler does not detect changes in RTL libraries that you have included by invoking the -l <library_name>.aoclib offline compiler command option. After you modify an RTL library, you must perform a setup compilation again. The offline compiler prints a warning message as a reminder to rerun the setup compilation.
- When compiling an OpenCL kernel that contains calls to HLS tasks, incremental compilation might trigger recompilation of unaffected kernels. This is not a functional bug; it only results in a more conservative incremental compilation.
7.24. Compiling Your Kernel with Memory Error Correction Coding (-ecc)
The ECC implementation has single error correction and double error detection capabilities for each 32-bit word.
7.25. Disabling Hardware Kernel Invocation Queue (-no-hardware-kernel-invocation-queue)
Example: aoc -no-hardware-kernel-invocation-queue <your_kernel_filename>.cl
Using this option may result in longer kernel execution times because the kernel invocation queue allows the OpenCL runtime environment to queue kernel launches in the accelerator, so that the accelerator can start executing the next invocation as soon as the previous invocation of the same kernel completes.
Refer to the Utilizing Hardware Kernel Invocation Queue topic in the Intel FPGA SDK for OpenCL Pro Edition: Best Practices Guide for more information about how to utilize the kernel invocation queue.
7.26. Modifying the Handshaking Protocol (-hyper-optimized-handshaking)
The -hyper-optimized-handshaking option can be set to one of the following values:
- auto
- The default behavior when the option is not specified. The compiler enables the optimization if it is possible to do so; otherwise, the behavior is the same as off.
- Use this value when you want to achieve a higher fmax. When you enable the optimization, the Intel® FPGA SDK for OpenCL™ Offline Compiler adds pipeline registers to the handshaking paths of the stallable nodes. As a result, you observe a higher fmax at the cost of increased area and latency.
- Example: aoc -hyper-optimized-handshaking <your_kernel_filename>.cl
- off
- The compiler attempts to optimize for lower latency at the potential cost of lower fmax. Disabling hyper-optimized handshaking might also decrease area. This is useful for smaller designs where you are willing to give up fmax for typically lower latency and area.
- Example: aoc -hyper-optimized-handshaking=off <your_kernel_filename>.cl
8. Emulating and Debugging Your OpenCL Kernel
The Intel® FPGA SDK for OpenCL™ Emulator generates a .aocx file that executes on an x86-64 Windows or Linux host. This feature allows you to emulate the functionality of your kernel and iterate on your design without executing it on the actual FPGA each time. On the Linux platform, you can also use the Emulator to perform functional debugging.
- Setting up the Emulator
If you installed the Intel® FPGA SDK for OpenCL™ Pro Edition with administrator privileges, no additional setup is needed. If you did not install the Intel® FPGA SDK for OpenCL™ with administrator privileges, you must perform some additional steps to enable the emulator.
- Modifying Channels Kernel Code for Emulation
To emulate applications with a channel that reads or writes to an I/O channel, modify your kernel to add a read or write channel that replaces the I/O channel, and make the source code that uses it conditional.
- Compiling a Kernel for Emulation (-march=emulator)
To compile an OpenCL™ kernel for emulation, include the -march=emulator option in your aoc command.
- Emulating Your OpenCL Kernel
To emulate your OpenCL™ kernel, run the emulation .aocx file on the platform on which you built your kernel. The OpenCL Emulator uses a different OpenCL platform than when targeting FPGA hardware.
- Debugging Your OpenCL Kernel on Linux
For Linux systems, you can direct the Intel® FPGA SDK for OpenCL™ Emulator to run your OpenCL kernel in the debugger and debug it functionally as part of the host application.
- Limitations of the Intel FPGA SDK for OpenCL Emulator
The Intel® FPGA SDK for OpenCL™ Emulator feature has some limitations.
- Discrepancies in Hardware and Emulator Results
When you emulate a kernel, your OpenCL system might produce results different from that of the kernel compiled for hardware. You can further debug your kernel before you compile for hardware by running your kernel through simulation.
- Emulator Environment Variables
Several environment variables are available to modify the behavior of the emulator.
- Extensions Supported by the Emulator
The emulator offers varying levels of support for different OpenCL extensions.
- Emulator Known Issues
A few known issues might affect your use of the emulator. Review these issues to avoid possible problems when using the emulator.
8.1. Setting up the Emulator
If you installed the Intel® FPGA SDK for OpenCL™ Pro Edition with administrator privileges, no additional setup is needed. If you did not install the Intel® FPGA SDK for OpenCL™ with administrator privileges, you must perform some additional steps to enable the emulator.
- Linux: Ensure that the file /etc/OpenCL/vendors/Intel_FPGA_SSG_Emulator.icd matches the file found in the directory that the INTELFPGAOCLSDKROOT environment variable specifies. The INTELFPGAOCLSDKROOT environment variable points to the location of the SDK installation.
If the files do not match, or if the file is missing from /etc/OpenCL/vendors, copy the Intel_FPGA_SSG_Emulator.icd file from the location specified by the INTELFPGAOCLSDKROOT environment variable to the /etc/OpenCL/vendors directory.
- Windows: Ensure that the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors contains the following value: [HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors] "intelocl64_emu.dll"=dword:00000000
The emulator in Intel® FPGA SDK for OpenCL™ Pro Edition is built with GCC 7.2.0 as part of the offline compiler. When executing the host program for an emulated OpenCL device, the version of libstdc++.so must be at least that of GCC 7.2.0. In other words, the LD_LIBRARY_PATH environment variable must ensure that the correct version of libstdc++.so is found.
If the correct version of libstdc++.so is not found, the call to clGetPlatformIDs function fails to load the FPGA emulator platform and returns CL_PLATFORM_NOT_FOUND_KHR (error code -1001). Depending on which version of libstdc++.so is found, the call to clGetPlatformIDs may succeed, but a later call to the clCreateContext function may fail with CL_DEVICE_NOT_AVAILABLE (error code -2).
If the LD_LIBRARY_PATH does not point to a compatible libstdc++.so, use the following syntax to invoke the host program:
env LD_LIBRARY_PATH=<path to compatible libstdc++.so>:$LD_LIBRARY_PATH <host> [host arguments]
8.2. Modifying Channels Kernel Code for Emulation
channel ulong4 inchannel __attribute__((io("eth0_in")));

__kernel void send (int size) {
  for (unsigned i = 0; i < size; i++) {
    ulong4 data = read_channel_intel(inchannel);
    //statements
  }
}
To enable the Emulator to emulate a kernel with a channel that interfaces with an I/O channel, perform the following tasks:
- Modify the kernel code in one of the following ways:
- Add a matching write_channel_intel call, such as the one shown below.
#ifdef EMULATOR
__kernel void io_in (__global char * restrict arr, int size) {
  for (unsigned i = 0; i < size; i++) {
    ulong4 data = arr[i]; //arr[i] being an alternate data source
    write_channel_intel(inchannel, data);
  }
}
#endif
- Replace the I/O channel access with a memory access, as shown below:
__kernel void send (int size) {
  for (unsigned i = 0; i < size; i++) {
#ifndef EMULATOR
    ulong4 data = read_channel_intel(inchannel);
#else
    ulong4 data = arr[i]; //arr[i] being an alternate data source
#endif
    //statements
  }
}
- Modify the host application to create and start this conditional kernel during emulation.
8.2.1. Emulating a Kernel that Passes Pipes or Channels by Value
You may emulate a kernel that passes a channel or pipe by value, as shown in the following example:
channel uint my_ch;

void my_function (channel uint ch, __global uint * dst, int i) {
  dst[i] = read_channel_intel(ch);
}

__kernel void consumer (__global uint * restrict dst) {
  for (int i = 0; i < 5; i++) {
    my_function(my_ch, dst, i);
  }
}
8.2.2. Emulating Channel Depth
When you compile your OpenCL* kernel for emulation, the default channel depth is different from the default channel depth generated when your kernel is compiled for hardware. You can change this behavior when you compile your kernel for emulation with the CL_CONFIG_CHANNEL_DEPTH_EMULATION_MODE environment variable.
- ignoredepth
- All channels are given a channel depth chosen to provide the fastest execution time for your kernel emulation. Any explicitly set channel depth attribute is ignored.
This value is used by default if the CL_CONFIG_CHANNEL_DEPTH_EMULATION_MODE environment variable is not set.
- default
- Channels with an explicit depth attribute have their specified depth. Channels without a specified depth are given a default channel depth that is chosen to provide the fastest execution time for your kernel emulation.
- strict
- All channels in the emulation are given a depth that matches the depth specified for the FPGA compilation.
8.2.3. Emulating Applications with a Channel That Reads or Writes to an I/O Channel
- Modify your kernel to add a read or write channel that replaces the I/O channel.
- Make the source code that uses the read or write channel conditional.
- For input I/O channels
- Store the input data to be transferred to the channel in a file with a name matching the io attribute in the channel declaration. Consider the following example:
channel ulong4 inchannel __attribute__((io("eth0_in")));
- Create a file named eth0_in.
- Store the test input data in the eth0_in file.
- For output I/O channels
Output data is automatically written into a file with a name equal to the channel io attribute.
8.3. Compiling a Kernel for Emulation (-march=emulator)
- Before you perform kernel emulation, perform the following tasks:
- Verify that the QUARTUS_ROOTDIR_OVERRIDE environment variable points to the Intel® Quartus® Prime Pro Edition software installation folder.
- Verify that the LD_LIBRARY_PATH environment variable setting includes all the paths described in the Setting the Intel® FPGA SDK for OpenCL™ Pro Edition User Environment Variables section in the Intel® FPGA SDK for OpenCL™ Pro Edition Getting Started Guide.
- To create kernel programs that are executable on x86-64 host systems, invoke the following command:
aoc -march=emulator <your_kernel_filename>.cl
- For Linux systems, the Intel® FPGA SDK for OpenCL™ Offline Compiler offers symbolic debug support for the debugger.
The offline compiler debug support allows you to pinpoint the origins of functional errors in your kernel source code.
8.4. Emulating Your OpenCL Kernel
To emulate your kernel, perform the following steps:
- Required: Modify your host program to select the emulator OpenCL platform.
Select the emulation OpenCL platform in your host program by selecting the platform with the following name:
Intel(R) FPGA Emulation Platform for OpenCL(TM)
- Required: Build a host application and link your host application to the Khronos ICD Loader Library. For more information, see Linking Your Host Application to the Khronos ICD Loader Library.
- If necessary, move the <your_kernel_filename>.aocx file to a location where the host can find it easily, preferably the current working directory.
- To run the host application for emulation:
- For Windows:
- Define the number of emulated devices by invoking the set CL_CONFIG_CPU_EMULATE_DEVICES=<number_of_devices> command.
- Run the host application.
- Invoke set CL_CONFIG_CPU_EMULATE_DEVICES= to unset the variable.
- For Linux, invoke the env CL_CONFIG_CPU_EMULATE_DEVICES=<number_of_devices> <host_application_filename> command.
This command specifies the number of identical emulation devices that the Emulator needs to provide.
Remember: The emulation OpenCL platform (Intel(R) FPGA Emulation Platform for OpenCL(TM)) does not provide access to physical boards. Only the emulated devices are available.
Tip: If you want to use only one emulator device, you do not need to set the CL_CONFIG_CPU_EMULATE_DEVICES environment variable.
- If you change your host or kernel program and you want to test it, only recompile the modified host or kernel program and then rerun emulation.
- The emulator in Intel® FPGA SDK for OpenCL™ Pro Edition is built with GCC 7.2.0 as part of the offline compiler. When executing the host program for an emulated OpenCL device, the version of libstdc++.so must be at least that of GCC 7.2.0. In other words, the LD_LIBRARY_PATH environment variable must ensure that the correct version of libstdc++.so is found.
If the correct version of libstdc++.so is not found, the call to clGetPlatformIDs function fails to load the FPGA emulator platform and returns CL_PLATFORM_NOT_FOUND_KHR (error code -1001). Depending on which version of libstdc++.so is found, the call to clGetPlatformIDs may succeed, but a later call to the clCreateContext function may fail with CL_DEVICE_NOT_AVAILABLE (error code -2).
If LD_LIBRARY_PATH does not point to a sufficiently new libstdc++.so, use the following syntax to invoke the host program:
env LD_LIBRARY_PATH=<path to sufficiently new libstdc++.so>:$LD_LIBRARY_PATH <host> [host arguments]
- To enable debugging of kernel code, optimizations are disabled by default for the FPGA emulator. This can lead to suboptimal execution speed when emulating kernel code.
You can pass the -g0 flag to the aoc compile command to disable debugging and enable optimizations. This enables faster emulator execution.
8.5. Debugging Your OpenCL Kernel on Linux
To compile your OpenCL kernel for debugging, perform the following steps:
- To generate a .aocx file for debugging that targets a specific accelerator board, invoke the aoc -march=emulator -g <your_kernel_filename>.cl command.
- Build the host application and link your host application to the Khronos ICD Loader Library. For details, see Linking Your Host Application to the Khronos ICD Loader Library.
- Ensure that the <your_kernel_filename>.aocx file is in a location where the host expects to find it.
- To run the application, invoke the command env CL_CONFIG_CPU_EMULATE_DEVICES=<number_of_devices> gdb --args <your_host_program_name> [<host_program_arguments>], where <number_of_devices> is the number of identical emulation devices that the Emulator needs to provide.
- If you change your host or kernel program and you want to test it, only recompile the modified host or kernel program and then rerun the debugger.
- During program execution, the debugger cannot step from the host code to the kernel code. You must set a breakpoint before the actual kernel invocation by adding these lines:
- break <your_kernel>
This line sets a breakpoint before the kernel.
- continue
If you have not begun debugging your host, then type start instead.
- The debugger does not recognize kernel names until the host actually loads the kernel functions. As a result, the debugger generates the following warning for the breakpoint you set before the execution of the first kernel:
Function "<your_kernel>" not defined.
Make breakpoint pending on future shared library load? (y or [n])
Answer y. After initial program execution, the debugger recognizes the function and variable names, and line number references for the duration of the session.
8.6. Limitations of the Intel FPGA SDK for OpenCL Emulator
- Execution model
The Emulator supports the same compilation modes as the FPGA variant. As a result, you must call the clCreateProgramWithBinary function to create cl_program objects for emulation.
- Concurrent execution
Modeling of concurrent kernel executions has limitations. During execution, the Emulator does not actually run interacting work-items in parallel. Therefore, some concurrent execution behaviors, such as different kernels accessing global memory without a barrier for synchronization, might generate inconsistent emulation results between executions.
- Same address space execution
The Emulator executes the host runtime and the kernels in the same address space. Certain pointer or array usages in your host application might cause the kernel program to fail, and vice versa. Example usages include indexing external allocated memory and writing to random pointers. You may use memory leak detection tools such as Valgrind to analyze your program. However, the host might encounter a fatal error caused by out-of-bounds write operations in your kernel, and vice versa.
- Conditional channel operations
Emulation of channel behavior has limitations, especially for conditional channel operations where the kernel does not call the channel operation in every loop iteration. In these cases, the Emulator might execute channel operations in a different order than on the hardware.
- GCC version
Emulator host programs on Linux* must be run with a version of libstdc++.so from GCC 7.2.0 or later. This can be achieved either by installing GCC 7.2.0 or later on your system, or setting the LD_LIBRARY_PATH environment variable such that a compatible libstdc++.so is identified.
8.7. Discrepancies in Hardware and Emulator Results
When you emulate a kernel, your OpenCL system might produce results different from that of the kernel compiled for hardware. You can further debug your kernel before you compile for hardware by running your kernel through simulation.
The most common reasons for differences in emulator and hardware results are as follows:
- Your OpenCL kernel code is using the #pragma ivdep directive. The Emulator does not model your OpenCL system when a true dependence is broken by a pragma ivdep directive. During a full hardware compilation, you observe this as an incorrect result.
- Your OpenCL kernel code is relying on uninitialized data. Examples of uninitialized data include uninitialized variables and uninitialized or partially initialized global buffers, local arrays, and private arrays.
- Your OpenCL kernel code behavior depends on the precise results of floating point operations. The Emulator uses the floating point computation hardware of the CPU, whereas the hardware run uses floating point cores implemented as FPGA cores. The use of the -ffp-reassoc aoc option in your OpenCL kernel code might change the order of operations, leading to further divergence in the floating point results.
Note: The OpenCL standard allows one or more least significant bits of floating point computations to differ between platforms, while still being considered correct on both such platforms.
- Your OpenCL kernel code behavior depends on the order of channel accesses in different kernels. The emulation of channel behavior has limitations, especially for conditional channel operations where the kernel does not call the channel operation in every loop iteration. In such cases, the Emulator might execute channel operations in an order different from that on the hardware.
- Your OpenCL kernel or host code is accessing global memory buffers out-of-bounds.
Attention:
- Uninitialized memory read and write behaviors are platform-dependent. Verify the sizes of your global memory buffers against all addresses used within kernels, the allocations made by clCreateBuffer function calls, and the transfers made by clEnqueueReadBuffer and clEnqueueWriteBuffer function calls.
- You can use software memory leak detection tools, such as Valgrind, on the emulated version of your OpenCL system to analyze memory related problems. Absence of warnings from such tools does not mean the absence of problems. It only means that the tool could not detect any problem. In such a scenario, Intel recommends manual verification of your OpenCL kernel or host code.
- Your OpenCL kernel code is accessing local or private variables out-of-bounds. For example, accessing a local or private array out-of-bounds, or accessing a private variable after it has gone out of scope.
Attention: In software terms, these issues are referred to as stack corruption issues because accessing variables out-of-bounds usually affects unrelated variables located close to the variable being accessed on a software stack. Emulated OpenCL kernels are implemented as regular CPU functions and have an actual stack that can be corrupted. When targeting hardware, no stack exists, and hence stack corruption issues are guaranteed to manifest differently. You may use memory leak analyzer tools, such as Valgrind, when a stack corruption is suspected. However, stack-related issues are usually difficult to identify. Intel recommends manual verification of your OpenCL kernel code to debug a stack-related issue.
- Your OpenCL kernel code is using shifts that are larger than the type being shifted. For example, shifting a 64-bit integer by 65 bits. According to the OpenCL specification version 1.0, the behavior of such shifts is undefined.
- When you compile your OpenCL kernel for emulation, the default channel depth is different from the default channel depth generated when your kernel is compiled for hardware. This difference in channel depths might lead to scenarios where execution on the hardware hangs while kernel emulation works without any issue. Refer to Emulating Channel Depth for information on how to fix the channel depth difference.
- In terms of ordering the printed lines, the output of the printf function might be ordered differently on the Emulator and hardware. This is because, in the hardware, printf data is stored in a global memory buffer and flushed from the buffer only when the kernel execution is complete, or when the buffer is full. In the Emulator, the printf function uses the x86 stdout.
- If you perform an unaligned load/store through upcasting of types, the FPGA and emulator might produce different results. A load/store of this type is undefined in the C99 specification. For example, the following operation might produce unexpected results:
int tmp = *((int *) (my_ptr + 5));
8.8. Emulator Environment Variables
Several environment variables are available to modify the behavior of the emulator.
CL_CONFIG_CPU_EMULATE_DEVICES
Controls the number of identical emulator devices provided by the emulator platform. If not set, a single emulator device is available. Therefore, set this variable only if you want to emulate multiple devices.
OCL_TBB_NUM_WORKERS
Indicates the maximum number of threads that the emulator can use. The default value is 32, and the maximum value is 255. Each thread can run a single kernel.
If the application requires several kernels to execute simultaneously, set OCL_TBB_NUM_WORKERS appropriately (to the number of kernels used or a higher value).
CL_CONFIG_CPU_FORCE_LOCAL_MEM_SIZE
Sets the amount of available OpenCL local memory, with units. For example: 8MB, 256KB, or 1024B.
CL_CONFIG_CPU_FORCE_PRIVATE_MEM_SIZE
Sets the amount of available OpenCL private memory, with units. For example: 8MB, 256KB, or 1024B.
CL_CONFIG_CHANNEL_DEPTH_EMULATION_MODE
When you compile your OpenCL kernel for emulation, the channel depth is different from the channel depth generated when your kernel is compiled for hardware. You can change this behavior with the CL_CONFIG_CHANNEL_DEPTH_EMULATION_MODE environment variable. For details, see Emulating Channel Depth.
8.9. Extensions Supported by the Emulator
The emulator offers varying levels of support for different OpenCL extensions.
- cl_intel_fpga_host_pipe
- cl_khr_byte_addressable_store
- cl_khr_icd
- cles_khr_int64
- cl_intel_channels
- cl_khr_local_int32_base_atomics
- cl_khr_local_int32_extended_atomics
- cl_khr_global_int32_base_atomics
- cl_khr_global_int32_extended_atomics
- cl_khr_fp64
- cl_khr_fp16
8.10. Emulator Known Issues
A few known issues might affect your use of the emulator. Review these issues to avoid possible problems when using the emulator.
Autorun Kernels
Autorun kernels shut down only after a host program exits, not after a clReleaseProgram() call.
Compiler Diagnostics
Some compiler diagnostics are not yet implemented for the emulator.
CL_OUT_OF_RESOURCES Error Returned From clEnqueueNDRangeKernel()
This error can occur when the kernel uses more __private or __local memory than the emulator supports by default.
Try setting the CL_CONFIG_CPU_FORCE_PRIVATE_MEM_SIZE or the CL_CONFIG_CPU_FORCE_LOCAL_MEM_SIZE environment variable, as described in Emulator Environment Variables.
CL_INVALID_VALUE Error Returned From clCreateKernel()
A call to clBuildProgram() might have been missed. The OpenCL specification requires this call, even if a program is created from a binary. See section 5.4.2 of the OpenCL Specification version 1.0 for details.
9. Developing OpenCL Applications Using Third-party IDEs
9.1. FPGA Workflows in Microsoft Visual Studio
Plugins for Microsoft Visual Studio* 2017 and 2019 versions are automatically installed as part of the Intel® FPGA SDK for OpenCL™ installation. For more information about the system requirements, refer to the Prerequisites for the Intel® FPGA SDK for OpenCL™ Pro Edition.
9.1.1. Preparing the Visual Studio Environment
Before you run Visual Studio, ensure that you have performed these steps:
- Downloaded the Intel® FPGA SDK for OpenCL™ Pro Edition software.
- Installed the Intel® FPGA SDK for OpenCL™ Pro Edition software.
- Set the user environment variables.
- Installed an FPGA Board and set the QUARTUS_ROOTDIR_OVERRIDE user environment variable.
The Visual Studio project displays an error if you have not set the environment variables. If you set the environment variables while Visual Studio is running, reload Visual Studio.
Once you have completed the prerequisites, verify the setup by opening Edit the system environment variables > Environment Variables and searching the System variables list for the variables that you set.
9.1.2. Creating an FPGA OpenCL Template
To create an FPGA OpenCL Template, perform these steps:
- Navigate to the File > New > Project > Visual C++ > OpenCL menu option.
- Choose OpenCL Project for Intel® FPGA. A new project is created. This project supports Intel® FPGA by default.
Figure 17. Creating a New Project in Visual Studio
9.1.3. Configuring the Build Targets
A default project supported by an FPGA has four different configurations, as shown in the following image:

In Figure 18, Emu stands for FPGA Emulator configurations and HW stands for FPGA hardware configurations. These configurations allow you to configure different build targets the compiler supports.
For more information about Emulation, refer to Verifying Host Runtime Functionality via Emulation topic in the Intel FPGA SDK for OpenCL Getting Started Guide and Emulating and Debugging Your OpenCL Kernel.
9.1.4. Configuring Build Options for a Project
To configure FPGA-specific build options, perform these steps:
- Navigate to Project > Properties > Configuration Properties > FPGA Build Options > General. Build options are displayed.
- Modify the build options as you desire. For more information about the build options, refer to Compiling Your OpenCL Kernel.
9.1.5. Generating the High-level Design Report
To generate the high-level design report, perform these steps:
- Locate the kernel.cl file under the OpenCL Kernel Files folder.
- Right-click the kernel.cl file to view the context-sensitive menu.
- In the FPGA Device context-sensitive menu option, select the FPGA board name you have installed from the drop-down list. If you have installed a board but you do not see the board name in the drop-down list, then enter the board name manually.
Note: To obtain a list of the available devices, you can run the aoc.exe -list-boards command.
- In the same context-sensitive menu, select the Create Compiler Report option to generate the high-level design report. After the report is generated, it opens in your default browser.
Note: If you select multiple OpenCL Kernel files, the Create Compiler Report option creates a combined report.
9.1.6. Building and Running the FPGA Template
Building and running the FPGA template with Visual Studio follows the same basic workflow as that of the common Visual Studio projects. For more information, refer to the Build and run a C++ console app project topic in the Visual Studio C++ Tutorials documentation.
9.2. FPGA Workflows in Eclipse
An extension for Eclipse is automatically installed as part of the Intel® FPGA SDK for OpenCL™ installation. For more information about the system requirements, refer to Prerequisites for the Intel® FPGA SDK for OpenCL™ Pro Edition.
9.2.1. Preparing the Eclipse Environment
To apply transient environment variable settings, you must source the INTELFPGAOCLSDKROOT/init_opencl.sh script from the Intel® FPGA SDK for OpenCL™ installation directory.
9.2.2. Creating a Simple FPGA application
To create a simple Hello World application using CDT, perform the following general steps:
9.2.2.1. Creating a Project
Perform the following steps to create a project using Eclipse CDT:
- Select the File > New > Project menu option.
Figure 19. Eclipse CDT File Menu
- Select the type of project you want to create. For this tutorial, expand the C/C++ folder, select C++ Project, and click the Next button.
Figure 20. New Project Wizard
The C++ Project wizard opens. By default, the CDT filters the Project Type and Toolchain based on the language supported for the C++ Project wizard you selected for this tutorial.
- In the C++ Project wizard, perform these steps:
Figure 21. C++ Project Wizard
- In the Project name field, type a name for the project. For example, HelloWorld.
- In the Project type list, expand Executable folder and select OpenCL FPGA Project. This project type provides a simple Hello World application in OpenCL and the makefile is automatically created by the CDT.
- Under Toolchains, select OpenCL FPGA project.
- Click the Next button.
The Select Configurations dialog box displays a list of configurations based on the project type and toolchain selected earlier.
Figure 22. Select Configurations Dialog
Note: The Board debug and Board release configuration options are equivalent to the HW Debug and HW Release build target options in the Microsoft Visual Studio IDE. When you use these options, the project compiles for hardware and executes on a board.
- Optional: If you want to change the default project settings, click the Advanced settings button. The Project Properties dialog launches for your new project, allowing you to change any of the project-specific settings, such as include paths, compiler options, and libraries.
- Click the Finish button.
Note: If the C++ perspective is not currently set as the default, you are prompted to determine if you want this project to be associated with the C/C++ perspective. Click Yes.
A project is created with the default settings and a full set of configurations based on the project type and toolchain you selected. You should now see the new project in the Project Explorer view.

9.2.2.2. Reviewing the Code and Building the Project
Perform these steps to review your host code and build your project:
- From the Project Explorer view, double-click the .c file created for your project (for example, hostcode.c). You can find the .c file within the project src folder. It opens in a default editor and contains C++ template host code for the Hello World example project you selected earlier. In addition, the Outline view is also populated with objects created from your code.
Figure 24. Reviewing Code in the Project Explorer
Tip: You can specify a different editor and add or modify existing code templates by modifying your preferences in Window > Preferences.
- Optional: Type additional code in this file and save the changes either by clicking File > Save or by pressing the CTRL+S key combination.
- Build your project using one of the following options:
- Press the CTRL+B key combination.
- Select the project in the Project Explorer view and navigate to Project > Build Project.
Note: If a build generates any errors or warnings, you can view them in the Problems view. If you encounter difficulty, refer to the Eclipse* documentation.
- Review the build messages in the Console view. The project should build successfully.
You can observe that the Outline view has also been populated with objects created from your code. If you select an item from the Outline view, the corresponding text in the editor is highlighted.
9.2.2.3. Running the Application
To run the application, follow these steps:
- Within the C/C++ Perspective, click Project > Build Configurations > Set Active and select the required configuration. For this tutorial, select the Emulator debug option.
- Build your project either by clicking Project > Build All or by pressing the CTRL+B key combination.
- Run the application either by clicking Run > Run or by pressing the CTRL+F11 key combination.
Now, you should have the Hello World application running in the Console view. The Console's title bar also displays which application is running.
9.2.3. Creating a Makefile Project
To create a makefile project, perform the following steps:
- Select File > New > Project.
When you create a new project, you are required to specify the project type. This project type determines the toolchain, data, and tabs that the CDT uses or displays.
- Select the type of project you want to create. For this tutorial, expand the C/C++ folder and select the C++ Project. The C++ Project wizard launches.
By default, the CDT filters the Project types and Toolchain list based on the language supported for the C++ Project wizard you selected for this tutorial.
- In the Project name field, type HelloWorld.
- Leave the Use Default Location option selected.
- From the Project types list, expand the Makefile project and select OpenCL FPGA Makefile Project. This project allows you to enter the source file and the makefile.
- From the Toolchain list, select the FPGA makefile.
- Click Next.
- Click Finish.
If a message box prompts you to change perspectives, click Yes.
Your new project displays in the Project Explorer view. The project is empty because you have not yet created files for your project. You may see an error since there is nothing to build yet for your project. You can now start writing the code for your HelloWorld program.
9.2.4. Building a Project
To build a project, perform these steps:
- In the Project Explorer view, select your project. For this tutorial, you can select the HelloWorld project you created earlier.
- To build the project, either click Project > Build Project or click the Build icon on the toolbar.
In the Console view, you can view the output and results of the build command.
- Click the Console view's tab to bring the view forward if it is not currently visible. If you are unable to view the Console, open it by selecting Window > Show View > Console.
Figure 25. Eclipse Workspace
- The Console view displays the make output and build progress information.
- The Make Targets view displays the makefile actions.
- The Problems view displays compile warnings or errors.
9.3. Limitations
The following are the limitations when working with third-party IDEs:
- The Intel® FPGA SDK for OpenCL™ plug-in for Visual Studio does not provide all of the build information in the Output window from the compiler by default. To view the full output from the compiler, go to Tools > Options > Projects and Solutions > Build and Run and set the output and log file verbosity to Detailed.
- The Intel® FPGA SDK for OpenCL™ extension for Eclipse requires you to launch an Eclipse instance from the terminal session by sourcing the init_opencl.sh script.
- The C/C++ code editor in Eclipse does not include highlighting of several OpenCL-specific keywords. Additionally, it reports syntax errors for OpenCL functions in the host code.
- The compiler reports are generated for any build configuration in the Intel® FPGA SDK for OpenCL™ extension for Eclipse. These reports are not available in the Project Explorer window by default. You must refresh the file tree manually.
10. Developing OpenCL Applications Using Intel Code Builder for OpenCL
The Intel® Code Builder for OpenCL™ provides a set of Microsoft Visual Studio and Eclipse plug-ins that enable capabilities for creating, building, debugging, and analyzing Windows and Linux applications accelerated with OpenCL.
10.1. Configuring the Intel Code Builder for OpenCL Offline Compiler Plug-in for Microsoft Visual Studio
To enable the Intel® Code Builder for OpenCL™ offline compiler plug-in for Microsoft Visual Studio, perform the following steps:
- In the Visual Studio software, select Project > Properties.
- In the Project > Properties > Code Builder page, change the Device to your desired FPGA device.
- In the C/C++ > General property page, under Additional Include Directories, enter the full path to the directory where the OpenCL code header files are located ($(INTELFPGAOCLSDKROOT)\include).
- In the Linker > General property page, under Additional Library Directories, enter the full path to the directory where the OpenCL code run-time import library file is located. For example, for a 64-bit application, add $(INTELFPGAOCLSDKROOT)\lib\x64.
- In the Linker > Input property page, under Additional Dependencies, enter the name of the OpenCL ICD import library file as OpenCL.lib.
10.2. Configuring the Intel Code Builder for OpenCL Offline Compiler Plug-in for Eclipse
To enable the Intel® Code Builder for OpenCL™ offline compiler plug-in for Eclipse IDE, perform the following steps:
- Copy the CodeBuilder_<version>.jar plug-in file from $INTELFPGAOCLSDKROOT/eclipse-plug-in to <ECLIPSE_ROOT_FOLDER>/dropins.
Attention: On Linux, you must add $INTELFPGAOCLSDKROOT/bin to the LD_LIBRARY_PATH environment variable.
- Run the Eclipse IDE.
- Select Windows > Preferences.
- Switch to the Intel® OpenCL dialog box.
- Set the OpenCL binary directory to $INTELFPGAOCLSDKROOT/bin.
Once the offline compiler is configured, you can use the Code-Builder menu to perform the following basic operations:
- Create a new session
- Open an existing session
- Save a session
- Build a session
- Compile a session
- Configure a session
For more information about the Intel® Code Builder for OpenCL™ , refer to Developer Guide for Intel SDK for OpenCL Applications. For information about how to configure the Intel® Code Builder for OpenCL™ for Microsoft Visual Studio, refer to Intel Code Builder for OpenCL API for Microsoft Visual Studio. For information about how to configure the Intel® Code Builder for OpenCL™ for Eclipse, refer to Intel Code Builder for OpenCL API for Eclipse.
10.3. Creating a Session in the Intel Code Builder for OpenCL
Perform the following steps to create a session in the Intel® Code Builder for OpenCL™ :
- Select Code-Builder > OpenCL Kernel Development > New Session.
- Specify the session name, the path to the folder that stores the session file, and the content of the session (either an empty session or one with predefined OpenCL code).
- Click Done.
Once the session is created, the new session appears in the Code Builder Sessions Explorer view.

10.4. Configuring a Session
You can configure a session by right-clicking the session in the Code Builder Session Explorer and selecting Session Options. Alternatively, you can also open the Session Settings dialog box by selecting Code-Builder > OpenCL Kernel Development > Session Options.
The Session Settings dialog box allows you to configure:
- Device options such as target machine, OpenCL platform, and OpenCL device.
- Build options such as offline compiler flags and build architecture.
- Build artifacts such as .aocx and .aoco files, and static reports.
- General options such as job architecture and network settings.
In the Device Options tab, ensure that you select Intel® FPGA SDK for OpenCL™ in the OpenCL platform drop-down list.
Under the Build Options tab, in the OpenCL Build Options section, enter the Intel® FPGA SDK for OpenCL™ Offline Compiler flags manually.
For more information about configuring a session and variable management, refer to the Developer Guide for Intel SDK for OpenCL Applications.
11. Intel FPGA SDK for OpenCL Advanced Features
11.1. OpenCL Library
You can create an OpenCL library in OpenCL or register transfer level (RTL). You can then include this library file and use the functions inside your OpenCL kernels or in HLS components. For information about HLS libraries, refer to Intel High Level Synthesis Compiler: Reference Manual .
You may use a previously created library or create your own. To use an OpenCL library, you do not require in-depth knowledge of hardware design or of the implementation of library primitives. To create an OpenCL library, you need to create the following files:
File or Component | Description |
---|---|
RTL Components | |
RTL source files | Verilog, System Verilog, or VHDL files that define the RTL component. Additional files such as Intel® Quartus® Prime IP File (.qip), Synopsys Design Constraints File (.sdc), and Tcl Script File (.tcl) are not allowed. |
eXtensible Markup Language File (.xml) | Describes the properties of the RTL component. The Intel® FPGA SDK for OpenCL™ Offline Compiler uses these properties to integrate the RTL component into the OpenCL pipeline. |
Header file (.hcl) | A header file that contains valid OpenCL kernel language and declares the signatures of the function(s) that are implemented by the RTL component. |
OpenCL emulation model file (.cl) | Provides C model for the RTL component that is used only for emulation. Full hardware compilations use the RTL source files. |
OpenCL Functions | |
OpenCL source files (.cl) | Contains definitions of the OpenCL functions. These functions are used during emulation and full hardware compilations. |
Header file (.hcl) | A header file describing the functions to be called from OpenCL in the OpenCL kernel language syntax. |
HLS Functions | |
HLS source files (.cpp) | Contains definitions of the HLS functions. These functions are used during emulation and full hardware compilations. |
Header file (.hcl) | A header file describing the functions to be called from OpenCL in the OpenCL kernel language syntax. |
There is no difference in the header file used for RTL, OpenCL, and HLS library functions. A single header file can have all types of functions declared. A single library can contain any of the supported sources. You can create a library from mixed sources (OpenCL, HLS, or RTL) and target these Intel® high-level design products:
- Intel® FPGA SDK for OpenCL™ Pro Edition
- Intel® High Level Synthesis Compiler Pro Edition
Creating an OpenCL library is a two-step process:
- Each object file is created from input source files using the fpga_crossgen command. The required input source files depend on the type of source code you are creating the object from.
- An object is effectively an intermediate representation of your source code with both a CPU representation and an FPGA representation of your code.
- An object can be targeted for use with only one Intel® high-level design product. If you want to target more than one high-level design product, you must generate a separate object for each target product.
- Object files are combined into a library file using the fpga_libtool command.
Objects created from different types of source code can be combined into a library, provided all objects target the same high-level design product.
A library is assigned a version number, and can be used only with the targeted high-level design product with the same version number (for example, Intel® FPGA SDK for OpenCL™ Pro Edition version 20.4).
- Creating Library Objects From OpenCL Code
You can create a library from object files from your OpenCL source code. An OpenCL-based object file includes code for CPU as well as hardware execution (CPU-capturing testbench and emulation use).
- Understanding RTL Modules and the OpenCL Pipeline
This section provides an overview of how the Intel® FPGA SDK for OpenCL™ Offline Compiler integrates RTL modules into the Intel® FPGA SDK for OpenCL™ pipeline architecture.
- Packaging an OpenCL Helper Function File for an OpenCL Library
Before creating an OpenCL™ library file, package each OpenCL source file with helper functions into a .aoco file.
- Packaging an RTL Component for an OpenCL Library
Before creating an OpenCL™ library file, package each RTL component into a .aoco file.
- Verifying the RTL Modules
The creator of an OpenCL™ library is responsible for verifying the RTL modules within the library, both as stand-alone entities and as part of an OpenCL system.
- Specifying an OpenCL Library when Compiling an OpenCL Kernel
To use an OpenCL™ library in an OpenCL kernel, specify the library file name and directory when you compile the kernel.
- Debugging Your OpenCL Library Through Simulation (Preview)
The Intel® FPGA SDK for OpenCL™ simulator assesses the functionality of your OpenCL™ library.
- Using an OpenCL Library that Works with Simple Functions (Example 1)
Intel® provides an OpenCL™ library design example of a simple kernel that uses a library containing RTL implementations of three double-precision functions: sqrt, rsqrt, and divide.
- Using an OpenCL Library that Works with External Memory (Example 2)
Intel® provides an OpenCL™ library design example of a simple kernel that uses a library containing two RTL modules that communicate with global memory.
- OpenCL Library Command-Line Options
Both the Intel® FPGA SDK for OpenCL™ Offline Compiler's set of commands and the SDK utility include options you can invoke to perform OpenCL library-related tasks.
11.1.1. Creating Library Objects From OpenCL Code
You can create a library from object files from your OpenCL source code. An OpenCL-based object file includes code for CPU as well as hardware execution (CPU-capturing testbench and emulation use).
A library can contain multiple object files. You can create object files for use in different Intel high-level design tools from the same OpenCL source code. Depending on the target high-level design tool, your source code might require adjustments to support tool-specific data types or constructs.
Intel® FPGA SDK for OpenCL™
No additional work is needed in your Intel® FPGA SDK for OpenCL™ source code when you use the code to create objects for the offline compiler libraries.
Intel® HLS Compiler
The Intel® FPGA SDK for OpenCL™ supports language constructs that are not natively supported by C++. Your component might need modifications to support those constructs (it is always preferred to allow OpenCL data types as library function call parameters).
The Intel® HLS Compiler supports a limited set of OpenCL* language constructs through the ocl_types.h header file. For details, review Intel High Level Synthesis Compiler: Reference Manual .
11.1.1.1. Creating an Object File From OpenCL Code
Use the fpga_crossgen command to create library objects from your OpenCL code. An object created from OpenCL code contains information required both for emulating the functions in the object and synthesizing the hardware for the object functions.
The fpga_crossgen command creates one object file from one input source file. The created object can be used only in libraries that target the same Intel high-level design tool. Also, objects are versioned; each object is assigned a compiler version number and can be used only with Intel high-level design tools of the same version number.
Create a library object using the following command:
fpga_crossgen <source_file> --target target_HLD_tool [-o <object_file>]
where target_HLD_tool is the target Intel® high-level design tool for this library. This parameter can have one of the following values:
- aoc: Target this object to be included in libraries for kernels developed with the Intel® FPGA SDK for OpenCL™. Objects built for the Intel® FPGA SDK for OpenCL™ are not operating system-specific. The objects are combined as Intel® FPGA SDK for OpenCL™ object files (.aoco).
- hls: Target this object to be included in libraries for components developed with the Intel® HLS Compiler. Objects built for the Intel® HLS Compiler are combined as operating system-specific object files (.o on Linux). You cannot use objects created on one operating system with the Intel® HLS Compiler running on a different operating system.
If you do not specify an object file name with the -o option, the object file name defaults to be the same name as the source code file name.
11.1.1.2. Packaging Object Files into a Library File
Gather the object files into a library file so that others can incorporate the library into their projects and call the functions that are contained in the objects in the library. To package object files into a library, use the fpga_libtool command.
All objects that you want to package into a library must have the same version number (for example, Intel® FPGA SDK for OpenCL™ Pro Edition version 19.3). The fpga_libtool command creates libraries encapsulated in operating system-specific archive files (.a on Linux and .lib on Windows). You cannot use libraries created on one operating system with an Intel® high-level design product running on a different operating system.
Create the OpenCL library file using the following command:
fpga_libtool --target target_HLD_tool --create library_name object_file_1 [object_file_2 ... object_file_n]
Parameter | Description |
---|---|
target_HLD_tool | The target Intel® high-level design tool for this library. This parameter can have one of the following values: aoc or hls. |
library_name | The name of the library file, including a file extension appropriate for the target high-level design tool (for example, .a on Linux and .lib on Windows). |
You can specify one or more object files to include in the library.
fpga_libtool --create libdemo.a prim1.o prim2.o prim3.o --target aoc
11.1.2. Understanding RTL Modules and the OpenCL Pipeline
Use RTL modules under the following circumstances:
- You want to use optimized and verified RTL modules in OpenCL kernels without rewriting the modules as OpenCL functions.
- You want to implement OpenCL kernel functionality that you cannot express effectively in OpenCL.
11.1.2.1. Overview: Intel FPGA SDK for OpenCL Pipeline Approach
Assume each level of operation is one stage in the pipeline. At each stage, the thread at that stage executes all of the stage's operations in parallel. For example, thread 2 executes Load A and Load B, and copies the current global ID (gid) to the next pipeline stage. Similar to the pipelined execution of instructions in reduced instruction set computing (RISC) processors, the SDK's pipeline stages also execute in parallel. The threads advance to the next pipeline stage only after all the stages have completed execution.
Some operations can stall the Intel FPGA SDK for OpenCL pipeline. Examples include variable-latency operations such as memory load and store operations. To support stalls, ready and valid signals must propagate throughout the pipeline so that the offline compiler can schedule the pipeline stages. However, ready signals are not necessary if all operations have fixed latency. In such cases, the offline compiler optimizes the pipeline to statically schedule the operations, which significantly reduces the logic necessary for pipeline implementation.
11.1.2.2. Integration of an RTL Module into the Intel FPGA SDK for OpenCL Pipeline
The depicted RTL module has a balanced latency, where the number of threads in the RTL module matches the number of pipeline stages. A balanced latency allows the threads of the RTL module to execute without stalling the SDK's pipeline.
Setting the latency of the RTL module in the RTL specification file allows the offline compiler to balance the pipeline latency. RTL modules support Avalon® streaming interfaces; therefore, the latency of an RTL module can be variable (that is, not fixed). However, keep the variability in latency small to maximize performance. In addition, specify the latency in the OpenCL library object manifest file so that the specified value is a good approximation of the actual latency in steady state.
11.1.2.3. Stall-Free RTL
- To instruct the offline compiler to remove stall logic around the RTL module, if appropriate, set the IS_STALL_FREE attribute under the FUNCTION element to "yes". This modification informs the offline compiler that the RTL module produces valid data every EXPECTED_LATENCY cycle(s).
Note: EXPECTED_LATENCY is an attribute you specify in the .xml file under the FUNCTION element.
- Specify a value for EXPECTED_LATENCY such that the latency equals the number of pipeline stages in the module.
CAUTION: An inaccurate EXPECTED_LATENCY value causes the RTL module to be out of sync with the rest of the pipeline.
A stall-free RTL module might receive an invalid input signal (that is, ivalid is low). In this case, the module ignores the input and produces invalid data on the output. For a stall-free RTL module without an internal state, it might be easier to propagate the invalid input through the module. However, for an RTL module with an internal state, you must handle an ivalid=0 input carefully.
11.1.2.4. RTL Module Interfaces
For an RTL module to properly interact with other compiler-generated operations, the RTL module must support a simplified Avalon® streaming interface at both its input and its output.
The following diagram shows the complete interface of the myMod RTL module shown in Figure 30.
In this diagram, myMod interacts with the upstream module through data signals, A and B, and control signals, ivalid (input) and oready (output). The ivalid control signal equals 1 (ivalid = 1) if and only if data signal A and data signal B contain valid data. When the control signal oready equals 1 (oready = 1), it indicates that the myMod RTL module can process the data signals A and B if they are valid (that is, ivalid = 1). When ivalid = 1 and oready = 0, the upstream module is expected to hold the values of ivalid, A, and B in the next clock cycle.
myMod interacts with the downstream module through data signal C and control signals, ovalid (output) and iready (input). The ovalid control signal equals 1 (ovalid = 1) if and only if data signal C contains valid data. When the iready control signal equals 1 (iready = 1), it indicates that the downstream module is able to process data signal C if it is valid. When ovalid = 1 and iready = 0, the myMod RTL module is expected to hold the values of the ovalid and C signals in the next clock cycle.
The myMod module asserts oready for a single clock cycle to indicate that it is ready for an active cycle. Cycles during which the myMod module is ready for data are called ready cycles. During ready cycles, the module upstream of myMod can assert ivalid to send data to myMod.
For a detailed explanation of data transfer under backpressure, refer to "Data Transfer with Backpressure" in the Avalon Interface Specifications. Ignore the information pertaining to the readyLatency option.
11.1.2.5. Avalon Streaming Interface
The offline compiler expects the RTL module to support an Avalon® streaming interface with readyLatency = 0 at both the input and the output:
- ivalid and oready as the input Avalon® streaming interface
- ovalid and iready as the output Avalon® streaming interface

For an RTL module with a fixed latency, the output signals (ovalid and oready) can have constant high values, and the input ready signal (iready) can be ignored.
A stall-free RTL module might receive an invalid input signal (ivalid is low). In this case, the module ignores the input and produces invalid data on the output. For a stall-free RTL module without an internal state, it might be easier to propagate the invalid input through the module. However, for an RTL module with an internal state, you must handle an ivalid = 0 input carefully.
Example Timing Diagram of a Stall-free RTL Component
Consider the following example timing diagram of a stall-free RTL component:

- IS_STALL_FREE value = "yes"
- IS_FIXED_LATENCY value = "yes"
- EXPECTED_LATENCY value = "2"
Example Timing Diagram of a Non-stall-free RTL Component
Consider the following example timing diagram of a stallable RTL component:

- IS_STALL_FREE value = "no"
- IS_FIXED_LATENCY value = "no"
- EXPECTED_LATENCY value = "4"
Performing Advanced Compiler Optimizations
Both ALLOW_MERGING and HAS_SIDE_EFFECTS parameters allow the offline compiler to perform advanced optimizations. Consider the following combinations to understand their impact completely:
Combination | Description |
---|---|
ALLOW_MERGING value = "no", HAS_SIDE_EFFECTS value = "no" | Each call to an RTL library corresponds to one distinct instance in the hardware. Calls might be optimized away by the compiler if deemed redundant or unnecessary. Calls might be vectorized, with multiple instances in the hardware created for a single RTL library call. |
ALLOW_MERGING value = "no", HAS_SIDE_EFFECTS value = "yes" | Each call to an RTL library corresponds to one distinct instance in the hardware. Calls are not optimized away by the compiler. The compiler errors out if the attribute num_simd_work_items is greater than 1 for the kernel calling the RTL library. |
ALLOW_MERGING value = "yes", HAS_SIDE_EFFECTS value = "no" | Multiple calls to an RTL library might be merged into one call, and hence correspond to one instance in the hardware. Calls might be optimized away by the compiler if deemed redundant or unnecessary. Calls might be vectorized, with multiple instances in the hardware created for a single RTL library call. |
ALLOW_MERGING value = "yes", HAS_SIDE_EFFECTS value = "yes" | Multiple calls to an RTL library might be merged into one call, and hence correspond to one instance in the hardware. Calls are not optimized away by the compiler. The compiler errors out if the attribute num_simd_work_items is greater than 1 for the kernel calling the RTL library. |
11.1.2.6. RTL Reset and Clock Signals
Because of the common clock and reset drivers, an RTL module runs in the same clock domain as the OpenCL kernel. The module is reset only when the OpenCL kernel is first loaded onto the FPGA, either via the Intel® FPGA SDK for OpenCL™ program utility or the clCreateProgramWithBinary host function. In particular, if the host restarts a kernel via successive clEnqueueNDRangeKernel or clEnqueueTask invocations, the associated RTL modules are not reset between these restarts.
The following steps outline the process of setting the kernel clock frequency:
- The Intel® Quartus® Prime software's Fitter applies an aggressive constraint on the kernel clock.
- The Intel® Quartus® Prime software's Timing Analyzer performs static timing analysis to determine the frequency that the Fitter actually achieves.
- The phase-locked loop (PLL) that drives the kernel clock sets the frequency determined in Step 2 to be the kernel clock frequency.
Optionally, an RTL module can access a system-wide clock that runs at twice the frequency of the OpenCL™ kernel clock. This system-wide clock can be connected to an input signal of the RTL module by including an AVALON element of type clock2x. The phase relationship between the clock and clock2x signals is such that the rising and falling edges of clock are coincident with rising edges of clock2x.
Timing failures may occur if one or more signals in your design cannot satisfy all of the timing requirements of the device. Even a small timing failure can cause functional problems, so do not use binaries that failed timing for development or production builds.
If your design failed timing, you have the following options:
- Timing failures can depend on how a design is placed on the FPGA, so running a sweep of different seeds (which results in different component placements) might lead to a passing binary.
- Decreasing the size of the design makes the component placement easier and timing failures less likely.
- Timing failures may be indicative of BSP problems, so if you are using a custom BSP, discuss the failures with your BSP vendor. If you want to investigate further, the Intel® Quartus® Prime Timing Analyzer outputs a *.sta.rpt file that contains more details about the timing analysis performed.
11.1.2.6.1. Intel Stratix 10 Design-Specific Reset Requirements for Stall-Free and Stallable RTL Modules
Reset Requirements for Stall-Free RTL Modules
A stall-free RTL module is a fixed-latency module for which the Intel® FPGA SDK for OpenCL™ Offline Compiler can optimize away stall logic.
- When creating a stall-free RTL module for an Intel® Stratix® 10 design, use synchronous clear signals only.
- After deassertion of the reset signal to the stall-free RTL module, the module must be operational within 15 clock cycles. If the reset signal is pipelined within the module, this requirement limits the reset pipelining to no more than 15 stages.
Reset Requirements for Stallable RTL Modules
A stallable RTL module has a variable latency, and it relies on backpressured input and output interfaces to function correctly.
- When creating a stallable RTL module for an Intel® Stratix® 10 design, use synchronous clear signals only.
- After assertion of the reset signal to the stallable RTL module, the module must deassert its oready and ovalid interface signals within 40 clock cycles.
- After deassertion of the reset signal to the stallable RTL module, the module must be fully operational within 40 clock cycles. The module signals its readiness by asserting the oready interface signal.
11.1.2.7. Object Manifest File Syntax of an RTL Module
The following object manifest file is for an RTL module named my_fp_sqrt_double (line 2) that implements an OpenCL™ helper function named my_sqrtfd (line 2).
```
 1: <RTL_SPEC>
 2:   <FUNCTION name="my_sqrtfd" module="my_fp_sqrt_double">
 3:     <ATTRIBUTES>
 4:       <IS_STALL_FREE value="yes"/>
 5:       <IS_FIXED_LATENCY value="yes"/>
 6:       <EXPECTED_LATENCY value="31"/>
 7:       <CAPACITY value="1"/>
 8:       <HAS_SIDE_EFFECTS value="no"/>
 9:       <ALLOW_MERGING value="yes"/>
10:     </ATTRIBUTES>
11:     <INTERFACE>
12:       <AVALON port="clock" type="clock"/>
13:       <AVALON port="resetn" type="resetn"/>
14:       <AVALON port="ivalid" type="ivalid"/>
15:       <AVALON port="iready" type="iready"/>
16:       <AVALON port="ovalid" type="ovalid"/>
17:       <AVALON port="oready" type="oready"/>
18:       <INPUT port="datain" width="64"/>
19:       <OUTPUT port="dataout" width="64"/>
20:     </INTERFACE>
21:     <C_MODEL>
22:       <FILE name="c_model.cl" />
23:     </C_MODEL>
24:     <REQUIREMENTS>
25:       <FILE name="my_fp_sqrt_double_s5.v" />
26:       <FILE name="fp_sqrt_double_s5.vhd" />
27:     </REQUIREMENTS>
28:     <RESOURCES>
29:       <ALUTS value="2057"/>
30:       <FFS value="3098"/>
31:       <RAMS value="15"/>
32:       <MLABS value="43"/>
33:       <DSPS value="1.5"/>
34:     </RESOURCES>
35:   </FUNCTION>
36: </RTL_SPEC>
```
XML Element | Description |
---|---|
RTL_SPEC | Top-level element in the object manifest file. There can be only one such top-level element in the file. In this example, the name RTL_SPEC is historic and carries no file-specific meaning. |
FUNCTION | Element that defines the OpenCL function that the RTL module implements. The name attribute within the FUNCTION element specifies the function's name. You may have multiple FUNCTION elements, each declaring a different function that you can call from the OpenCL kernel. The same RTL module can implement multiple functions by specifying different parameters. |
ATTRIBUTES | Element containing other XML elements that describe various characteristics (for example, latency) of the RTL module. An RTL module can also take PARAMETER settings (for example, a parameter named WIDTH with a value of 32). Refer to Table 14 for more details about the ATTRIBUTES-specific elements. Note: If you create multiple OpenCL helper functions for different modules, or use the same RTL module with different PARAMETER settings, you must create a separate FUNCTION element for each function. |
INTERFACE | Element containing other XML elements that describe the RTL module's interface. The example object manifest file shows the Avalon® streaming interface signals that every RTL module must provide (that is, clock, resetn, ivalid, iready, ovalid, and oready). The resetn signal is active low. Its synchronicity depends on the target device. |
C_MODEL | Element specifying one or more files that implement the OpenCL C model for the function. The model is used only during emulation. However, the C_MODEL element and the associated file(s) must be present when you create the library file. |
REQUIREMENTS | Element specifying one or more RTL resource files (that is, .v, .sv, .vhd, .hex, and .mif). The specified paths to these files are relative to the location of the object manifest file. Each RTL resource file becomes part of the associated Platform Designer component that corresponds to the entire OpenCL system. Note: The Intel® FPGA SDK for OpenCL™ library feature does not support .qip files. An Intel® FPGA SDK for OpenCL™ Offline Compiler error occurs if you compile an OpenCL kernel while using a library that includes an unsupported resource file type. |
RESOURCES | Optional element specifying the FPGA resources that the RTL module uses. If you do not specify this element, the FPGA resources that the RTL module uses default to zero. |
11.1.2.7.1. XML Elements for ATTRIBUTES
XML Element | Description |
---|---|
IS_STALL_FREE | Instructs the Intel® FPGA SDK for OpenCL™ Offline Compiler to remove all stall logic around the RTL module. Set IS_STALL_FREE to "yes" to indicate that the module neither generates stalls internally nor can properly handle incoming stalls; the module simply ignores its stall input. If you set IS_STALL_FREE to "no", the module must properly handle all stall and valid signals. Note: If you set IS_STALL_FREE to "yes", you must also set IS_FIXED_LATENCY to "yes". Also, if the RTL module has an internal state, it must properly handle ivalid=0 inputs. An incorrect IS_STALL_FREE setting leads to incorrect results in hardware. |
IS_FIXED_LATENCY | Indicates whether the RTL module has a fixed latency. Set IS_FIXED_LATENCY to "yes" if the RTL module always takes a known number of clock cycles to compute its output. The value you assign to the EXPECTED_LATENCY element specifies that number of clock cycles. The safe value for IS_FIXED_LATENCY is "no". When you set IS_FIXED_LATENCY="no", the EXPECTED_LATENCY value must be at least 1. Note: For a given module, you may set IS_FIXED_LATENCY to "yes" and IS_STALL_FREE to "no". Such a module produces its output in a fixed number of clock cycles and handles stall signals properly. |
EXPECTED_LATENCY | Specifies the expected latency of the RTL module. If you set IS_FIXED_LATENCY to "yes", the EXPECTED_LATENCY value indicates the number of pipeline stages inside the module. In this case, you must set this value to the exact latency of the module; otherwise, the offline compiler generates incorrect hardware. For a module with variable latency, the offline compiler balances the pipeline around this module to the EXPECTED_LATENCY value that you specify. For modules that can stall and require use of signals such as iready, set EXPECTED_LATENCY to at least 1. The specified value and the actual latency might differ, which might affect the number of stalls inside the pipeline; however, the resulting hardware is correct. |
CAPACITY | Specifies the number of inputs that this module can process simultaneously. You must specify a value for CAPACITY if you set IS_STALL_FREE="no" and IS_FIXED_LATENCY="no"; otherwise, you do not need to specify a value. If CAPACITY is strictly less than EXPECTED_LATENCY, the offline compiler automatically inserts capacity-balancing FIFO buffers after this module when necessary. The safe value for CAPACITY is 1. |
HAS_SIDE_EFFECTS | Indicates whether the RTL module has side effects. Modules that have internal states or communicate with external memories are examples of modules with side effects. Set HAS_SIDE_EFFECTS to "yes" to indicate that the module has side effects; this ensures that optimization efforts do not remove calls to the module. Stall-free modules with side effects (that is, IS_STALL_FREE="yes" and HAS_SIDE_EFFECTS="yes") must properly handle ivalid=0 input cases because the module might occasionally receive invalid data. The safe value for HAS_SIDE_EFFECTS is "yes". |
ALLOW_MERGING | Instructs the offline compiler to merge multiple instances of the RTL module. Set ALLOW_MERGING to "yes" to allow merging of multiple instances of the module. Intel® recommends setting ALLOW_MERGING to "yes". The safe value for ALLOW_MERGING is "no". Note: Marking the module with HAS_SIDE_EFFECTS="yes" does not prevent merging. |
PARAMETER | Specifies the value of an RTL module parameter. PARAMETER attributes: name (the parameter name), and either value or type. Note: The value for an RTL module parameter can be specified using either a value or a type attribute. |
11.1.2.7.2. XML Elements for INTERFACE
XML Element | Description |
---|---|
INPUT | Specifies an input parameter of the RTL module. INPUT attributes: port (the port name) and width (the port width in bits), as in the example object manifest file. The input parameters are concatenated to form the input stream. Aggregate data structures such as structs and arrays are not supported as input parameters. |
OUTPUT | Specifies the output parameter of the RTL module. OUTPUT attributes: port (the port name) and width (the port width in bits). The function's return value is sent out on the output stream via the output parameter. Aggregate data structures such as structs and arrays are not supported as output parameters. |
XML Element | Description |
---|---|
MEM_INPUT | Describes a pointer input to the RTL module. MEM_INPUT attributes include name and access; the access field specifies how the RTL module uses the memory pointer. Because all pointers to external memory must be 64 bits, there is no width attribute associated with MEM_INPUT. |
AVALON_MEM | Declares the Avalon® memory-mapped interface for your RTL module. |
For the AVALON_MEM element defined in the code example above, the corresponding RTL module ports are as follows:
```
output          avm_port0_enable,
input  [511:0]  avm_port0_readdata,
input           avm_port0_readdatavalid,
input           avm_port0_waitrequest,
output [31:0]   avm_port0_address,
output          avm_port0_read,
output          avm_port0_write,
input           avm_port0_writeack,
output [511:0]  avm_port0_writedata,
output [63:0]   avm_port0_byteenable,
output [4:0]    avm_port0_burstcount,
```
There is no assumed correspondence between pointers that you specify with MEM_INPUT and the Avalon® memory-mapped interfaces that you specify with AVALON_MEM. An RTL module can use a single pointer to address zero, one, or multiple Avalon® memory-mapped interfaces.
11.1.2.7.3. XML Elements for RESOURCES
XML Element | Description |
---|---|
ALUTS | Specifies the number of combinational adaptive look-up tables (ALUTs) that the module uses. |
FFS | Specifies the number of dedicated logic registers that the module uses. |
RAMS | Specifies the number of block RAMs that the module uses. |
DSPS | Specifies the number of digital signal processing (DSP) blocks that the module uses. |
MLABS | Specifies the number of memory logic arrays (MLABs) that the module uses. This value is equal to the number of adaptive logic modules (ALMs) used for memory divided by 10, because each MLAB consumes 10 ALMs. |
11.1.2.8. Interaction between RTL Module and External Memory
Allow your RTL module to interact with external memory only if the interaction is necessary and unavoidable.
The following examples demonstrate how to structure code in an RTL module for easy integration into an OpenCL library:
Complex RTL Module | Simplified RTL Module |
---|---|
// my_rtl_fn does: out_ptr[idx] = fn(in_ptr[idx])<br>my_rtl_fn(in_ptr, out_ptr, idx); | int in_value = in_ptr[idx];<br>// my_rtl_fn now does: out = fn(in)<br>int out_value = my_rtl_fn(in_value);<br>out_ptr[idx] = out_value; |
The complex RTL module on the left reads a value from external memory, performs a scalar function on the value, and then writes the value back to global memory. Such an RTL module is difficult to describe when you integrate it into an OpenCL library. In addition, this RTL module is harder to verify and causes very conservative pointer analysis in the Intel® FPGA SDK for OpenCL™ Offline Compiler.
The simplified RTL module on the right provides the same overall functionality as the complex RTL module. However, the simplified RTL module only performs a scalar-to-scalar calculation without connecting to global memory. Integrating this simplified RTL module into the OpenCL library makes it much easier for the offline compiler to analyze the resulting OpenCL kernel.
There are times when an RTL module requires an Avalon® memory-mapped interface port to communicate with external memory. This Avalon® memory-mapped interface port connects to the same arbitration network to which all other global load and store units in the OpenCL kernels connect.
If an RTL module receives a memory pointer as an argument, the offline compiler enforces the following memory model:
- If an RTL module writes to a pointer, nothing else in the OpenCL kernel can read from or write to this pointer.
- If an RTL module reads from a pointer, the rest of the OpenCL kernel and other RTL modules may also read from this pointer.
- You may set the access field of the MEM_INPUT attribute to specify how the RTL module uses the memory pointer. Ensure that you set the value for access correctly because there is no way to verify the value.
11.1.2.9. Order of Threads Entering an RTL Module
11.1.2.10. OpenCL C Model of an RTL Module
Example OpenCL C model file for a square root function:
```
double my_sqrtfd (double a) {
   return sqrt(a);
}
```
Intel® recommends that you emulate your OpenCL system. If you decide not to emulate your OpenCL system, no C model is required.
11.1.2.11. Potential Incompatibility between RTL Modules and Partial Reconfiguration
Consider a situation where you create and verify your library on a device that does not support Partial Reconfiguration (PR). If a library user then uses the library's RTL module inside a PR region, the module might not function correctly after PR. To avoid this problem, ensure that your RTL modules follow these guidelines:
- The RTL modules do not use memory logic array blocks (MLABs) with initialized content.
- The RTL modules do not make any assumptions regarding the power-up values of any logic.
For complete PR coding guidelines, refer to Creating a Partial Reconfiguration Design in the Partial Reconfiguration User Guide.
11.1.3. Packaging an OpenCL Helper Function File for an OpenCL Library
In general, you do not need to create a library to share helper functions written in OpenCL. You can distribute a helper function in source form (for example, <shared_file>.cl) and then insert the line #include "<shared_file>.cl" in the OpenCL kernel source code.
Consider creating a library under the following circumstances:
- The helper functions are in multiple files and you want to simplify distribution.
- You do not want to expose the helper functions' source code.
The helper functions are stored as LLVM IR, an assembly-like language, without comments inside the associated library.
Hardware generation is not necessary for the creation of a .aoco file. Compile the OpenCL source file using the -c offline compiler command option.
11.1.4. Packaging an RTL Component for an OpenCL Library
Hardware generation is not necessary for the creation of a .aoco file. Compile the OpenCL source file using the -c Intel® FPGA SDK for OpenCL™ Offline Compiler command option.
11.1.4.1. Restrictions and Limitations in RTL Support for the Intel FPGA SDK for OpenCL Library Feature
When creating your RTL module, ensure that it operates within the following restrictions:
- An RTL module must use a single input Avalon® streaming interface. That is, a single pair of ready and valid logic must control all the inputs. You have the option to provide the necessary Avalon® streaming interface ports but declare the RTL module as stall-free. In this case, you do not have to implement proper stall behavior because the Intel® FPGA SDK for OpenCL™ Offline Compiler creates a wrapper for your module. Refer to XML Syntax of an RTL Module and Using an OpenCL Library that Works with Simple Functions (Example 1) for syntax and usage information, respectively. Note: You must handle ivalid signals properly if your RTL module has an internal state. Refer to Stall-Free RTL for more information.
- The RTL module must work correctly regardless of the kernel clock frequency.
- RTL modules cannot connect to external I/O signals. All input and output signals must come from an OpenCL kernel.
- An RTL module must have a clock port, a resetn port, and Avalon-ST input and output ports (that is, ivalid, ovalid, iready, oready). Name the ports as specified here.
- RTL modules that communicate with external memory must have Avalon® memory-mapped interface port parameters that match the corresponding Custom Platform parameters. The offline compiler does not perform any width or burst adaptation.
- RTL modules that communicate with external memory must behave as follows:
  - They cannot burst across the burst boundary.
  - They cannot make requests every clock cycle and stall the hardware by monopolizing the arbitration logic. An RTL module must pause its requests regularly to allow other load or store units to execute their operations.
- RTL modules cannot act as stand-alone OpenCL kernels. RTL modules can only be helper functions and be integrated into an OpenCL kernel during kernel compilation.
- Every function call that corresponds to RTL module instantiation is completely independent of other instantiations. There is no hardware sharing.
- Do not incorporate kernel code (that is, functions marked as kernel) into a .aoclib library file. Incorporating kernel code into the library file causes the offline compiler to issue an error message. You may incorporate helper functions into the library file.
- An RTL component must receive all its inputs at the same time. A single ivalid input signifies that all inputs contain valid data.
- You can only set RTL module parameters in the <RTL module description file name>.xml specification file, not the OpenCL kernel source file. To use the same RTL module with multiple parameters, create a separate FUNCTION tag for each parameter combination.
- You can only pass data inputs to an RTL module by value via the OpenCL kernel code. Do not pass data inputs to an RTL module by reference, or via structs or channels. In the case of channel data, extract the data from the channel first and then pass the extracted scalar data to the RTL module. Note: Passing data inputs to an RTL module by reference or via structs causes a fatal error in the offline compiler.
- The debugger (for example, GDB for Linux) cannot step into a library function during emulation if the library is built without the debug information. However, irrespective of whether the library is built with or without the debug data, optimization and area reports are not mapped to the individual code line numbers inside a library.
- Names of RTL module source files cannot conflict with the file names of Intel® FPGA SDK for OpenCL™ Offline Compiler IP. Both the RTL module source files and the offline compiler IP files are stored in the <kernel file name>/system/synthesis/submodules directory. Naming conflicts cause existing offline compiler IP files in the directory to be overwritten by the RTL module source files.
- The SDK does not support .qip files. You must manually parse nested .qip files to create a flat list of RTL files. Tip: It is very difficult to debug an RTL module that works correctly on its own but works incorrectly as part of an OpenCL kernel. Double check all parameters under the ATTRIBUTES element in the <RTL module description file name>.xml file.
- All offline compiler area estimation tools assume that RTL module area is 0. The SDK does not currently support the capability of specifying an area model for RTL modules.
11.1.5. Verifying the RTL Modules
- Verify each RTL module using standard hardware verification methods.
- Modify one of the Intel® FPGA SDK for OpenCL™ library design examples to test your RTL modules inside the overall OpenCL system. This testing step is critical to prevent library users from encountering hardware problems. It is crucial that you set the values for the ATTRIBUTES elements in the object manifest file correctly. Because you cannot simulate the entire OpenCL system, you likely cannot discover problems caused by interface-level errors until a hardware run.
- Invoke the aocl library [<command option>] command. Note: The Intel® FPGA SDK for OpenCL™ library utility performs consistency checks on the object manifest file and source files, with some limitations.
- For a list of supported <command options>, invoke the aocl library command.
- The library utility does not detect errors in values assigned to elements within the ATTRIBUTES, MEM_INPUT, and AVALON_MEM elements in the object manifest file.
- The library utility does not detect RTL syntax errors. You must check the <your_kernel_filename>/quartus_sh_compile.log file for RTL syntax errors. However, parsing the errors might be time consuming.
11.1.6. Specifying an OpenCL Library when Compiling an OpenCL Kernel
You may include multiple instances of -l <library file name> and -L <library directory> in the offline compiler command.
For example, if you create a library that includes the functions my_divfd(), my_sqrtfd(), and my_rsqrtfd(), the OpenCL kernel code might resemble the following:
```
#include "lib_header.hcl"

kernel void test_lib (global double * restrict in,
                      global double * restrict out,
                      int N) {
   int i = get_global_id(0);
   for (int k = 0; k < N; k++) {
      double x = in[i*N + k];
      out[i*N + k] = my_divfd (my_rsqrtfd(x), my_sqrtfd(my_rsqrtfd(x)));
   }
}
```
The corresponding lib_header.hcl file might resemble the following:
```
double my_sqrtfd (double x);
double my_rsqrtfd(double x);
double my_divfd  (double a, double b);
```
11.1.7. Debugging Your OpenCL Library Through Simulation (Preview)
The Intel® FPGA SDK for OpenCL™ simulator generates a .aocx file that runs on an x86-64 Windows or a Linux host. This feature allows you to simulate the functionality of your kernel and iterate on your design without compiling your library to hardware and running it on the FPGA each time.
Use the simulator when you want insight into the dynamic performance of your OpenCL* library and more information about the functional correctness of your OpenCL* library than emulation or the OpenCL reporting tools provide.
The simulator is cycle accurate, has a netlist identical to the generated hardware, and can provide full waveforms for debugging. View the waveforms with Mentor Graphics* ModelSim* software.
11.1.7.1. Compiling a Library for Simulation (-march=simulator)
Before you perform library simulation, perform the following tasks:
- Install a Custom Platform from your board vendor for your FPGA accelerator boards.
- Verify that the environment variable QUARTUS_ROOTDIR_OVERRIDE points to the Intel® Quartus® Prime Pro Edition software installation folder.
- To simulate a library on Windows systems, you need the Microsoft linker and additional compilation time libraries. Verify that the PATH environment variable setting includes all the paths described in the Setting the Intel® FPGA SDK for OpenCL™ Pro Edition User Environment Variables section of the Intel® FPGA SDK for OpenCL™ Pro Edition Getting Started Guide. The PATH environment variable setting must include the path to the LINK.EXE file in Microsoft Visual Studio.
- Ensure that your LIB environment variable setting includes the path to the Microsoft compilation time libraries. The compilation time libraries are available with Microsoft Visual Studio.
- Verify that the LD_LIBRARY_PATH environment variable setting includes all the paths described in the Setting the Intel® FPGA SDK for OpenCL™ Pro Edition User Environment Variables section in the Intel® FPGA SDK for OpenCL™ Pro Edition Getting Started Guide.
- To compile a simulation that targets a specific board, invoke the aoc -march=simulator -ghdl -board=<board_name> <your_kernel_filename>.cl command.
- For Linux systems, the Intel® FPGA SDK for OpenCL™ Offline Compiler offers symbolic debug support for the debugger. The offline compiler debug support allows you to pinpoint the origins of functional errors in your kernel source code.
11.1.7.2. Simulating Your OpenCL Library
If you want to view the waveforms generated during simulation, you must install and configure Mentor Graphics* ModelSim* software.
You can also run the emulator and simulator from separate terminal or command prompt sessions.
To run your OpenCL* library through the simulator:
- Run the utility command aocl linkflags to find out which libraries are necessary for building a host application. The software lists the libraries for both emulation and regular kernel compilation flows.
- Build a host application and link it to the libraries from Step 1. Tip: To emulate multiple devices alongside other OpenCL SDKs, link your host application to the Khronos ICD Loader Library before you link it to the host runtime libraries. Link the host application to the ICD Loader Library by modifying the Makefile for the host application. For more information, see Linking Your Host Application to the Khronos ICD Loader Library.
- If necessary, move the .aocx file to a location where the host can find it easily, preferably the current working directory.
- Set the CL_CONTEXT_MPSIM_DEVICE_INTELFPGA environment variable to enable the simulation device:
  - Windows: set CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=1
  - Linux: env CL_CONTEXT_MPSIM_DEVICE_INTELFPGA=1
  Remember: When the environment variable CL_CONTEXT_MPSIM_DEVICE_INTELFPGA is set, only the simulation devices are available. That is, access to physical boards and the emulation device is disabled. You might need to set CL_CONTEXT_COMPILER_MODE_INTELFPGA=3 if the host program cannot find the simulator device.
- Run your host program. To debug your host code and device, you can run your host code in gdb or Eclipse. Running the host program gives you a waveform file, vsim.wif, that you can view in the ModelSim* software as your host code executes. The vsim.wif file is written to the same directory that you run your host program from.
- If you change your host or kernel program and you want to test it, only recompile the modified host or kernel program and then rerun simulation.
11.1.7.3. Troubleshooting Simulator Issues
Review this section to troubleshoot simulator problems you might have when attempting to run a simulation.
Windows Compilation Fails - Host Program Reports Corrupt .aocx file
During the compilation of the device.cl file, your directory path is likely too long. Use the -o compiler option to output your compilation results to a shorter path.
A socket=-11 Error Is Logged to transcript.log
Message: "src/hls_cosim_ipc_socket.cpp:202: void IPCSocketMaster::connect(): Assertion `sockfd != -1 && "IPCSocketMaster::connect() call to accept() failed"' failed."
This error can occur when you mix resources from different ModelSim* software editions. An example of mixing ModelSim* resources is compiling a device with ModelSim* SE and then running the host program in ModelSim* - Intel® FPGA Edition.
Running the Host Program Generates a Segmentation Fault
If you receive a segmentation fault when you run your host program, you might be running the emulator and the simulator from the same terminal or command prompt session. Remember to unset the emulator environment variables before trying to run the simulator.
Avoid compiling your device and your host program in the same terminal or command prompt session. By using separate sessions, you can avoid possible environment variable conflicts.
Simulator Backward Compatibility
In software releases prior to Intel® Quartus® Prime Pro Edition software version 19.3, the simulator does not work with the Platform Designer.
Compatibility with ModelSim* - Intel® FPGA Starter Edition Software
ModelSim* - Intel® FPGA Starter Edition software has limitations on design size that prevent it from simulating OpenCL™ designs. When trying to launch a simulation using ModelSim* - Intel® FPGA Starter Edition software, you may encounter the following error message:
Error: The simulator's process ended unexpectedly.
Simulate the designs with ModelSim* - Intel® FPGA Edition or ModelSim* SE software.
11.1.8. Using an OpenCL Library that Works with Simple Functions (Example 1)
The library_example1 includes a library, a kernel, and a host system. The example1.cl kernel source file includes two kernels. The kernel test_lib uses library functions; the kernel test_builtin uses built-in functions. The host runs both kernels and then compares their outputs and runtimes. Intel® recommends that you use the same strategy to verify your own library functions.
To compile this design example, perform the following tasks:
- Obtain the library_example1 from the OpenCL design examples in the $INTELFPGAOCLSDKROOT/examples_aoc directory.
- Copy it to a local directory.
- Follow the instructions in the README.html file, which is located in the top level of the example directory.
When you run the compiled host program, it should produce the following output:
Loading example1.aocx ...
Create buffers
Generate random data for conversion...
Enqueuing both library and builtin in kernels 4 times with global size 65536
Kernel computation using library function took 5.35333 seconds
Kernel computation using built-in function took 5.39949 seconds
Reading results to buffers...
Checking results...
Library function throughput is within 5% of builtin throughput.
PASSED
11.1.9. Using an OpenCL Library that Works with External Memory (Example 2)
The library_example2 includes a library, a kernel, and a host system. In this example, the RTL code that communicates with global memory is Custom Platform- or Reference Platform-dependent. Ensure that the compilation targets the board that corresponds to the Stratix® V Network Reference Platform.
Intel® generated the RTL modules copyElement() and sumOfElements() using the Intel® FPGA SDK for OpenCL™ Offline Compiler, which explains the extra inputs in the code.
The example2.cl kernel source file includes two kernels. The kernel test6 is an NDRange kernel that calls the copyElement() RTL function, which copies data from B[] to A[] and then stores global_id+100 in C[]. The kernel test11 is a single work-item kernel that uses an RTL function. The sumOfElements() RTL function determines the sum of the elements of A[] in range [i, N] and then adds the result to C[i].
To compile this design example, perform the following tasks:
- Obtain the library_example2 from the OpenCL design examples in the $INTELFPGAOCLSDKROOT/examples_aoc directory.
- Copy it into a local directory.
- Follow the instructions in the README.html file, which is located in the top level of the example directory.
When you run the compiled host program, it should produce the following output:
Loading example2.aocx ...
Running test6
Launching the kernel test6 with globalsize=128 localSize=16
Loading example2.aocx ...
Running test11
Launching the kernel test11 with globalsize=1 localSize=1
PASSED
11.1.10. OpenCL Library Command-Line Options
Command Option | Description
---|---
-shared | In conjunction with the -rtl command option, compiles an OpenCL source file into an object file (.aoco) that you can then include into a library. Example: aoc -rtl -shared <OpenCL source file name>.cl -o <OpenCL object file name>.aoco
-I=<library_directory> | Adds <library_directory> to the header file search path. Example: aoc -I <library_header_file_directory> -l <library_file_name>.aoclib <kernel_file_name>.cl
-L=<library_directory> | Adds <library_directory> to the OpenCL library search path. The space after -L is optional. Example: aoc -l=<library_file_name>.aoclib [-L=<library_directory>] <kernel_file_name>.cl
-l=<library_file_name>.aoclib | Specifies the OpenCL library file (<library_file_name>.aoclib). The space after -l is optional. Example: aoc -l=<library_file_name>.aoclib [-L=<library_directory>] <kernel_file_name>.cl
-library-debug | Generates debug output that relates to libraries. Part of the additional output appears in stdout; the rest appears in the <kernel_file_name>/<kernel_file_name>.log file. Example: aoc -l=<library_file_name>.aoclib -library-debug <kernel_file_name>.cl
Command Option | Description
---|---
hdl-comp-pkg <XML_specification_file>.xml | Packages a single HDL component into a .aoco file that you then include into a library. Invoking this command option is similar to invoking aoc -rtl <XML_specification_file>.xml; however, the processing time is faster because the aocl utility does not perform any environment checks. Example: aocl library hdl-comp-pkg <XML_specification_file>.xml -o <output_file>.aoco
-rtl <XML_specification_file>.xml | Same function as hdl-comp-pkg <XML_specification_file>.xml. Example: aocl library -rtl <XML_specification_file>.xml
create | Creates a library file from the .aoco files that you created by invoking the hdl-comp-pkg utility option or the aoc -rtl -shared command, and from any other .aoclib libraries. Example: aocl library create [-name <library_name>] [-vendor <library_vendor>] [-version <library_version>] [-o <output_file>.aoclib] [.aoco...] [.aoclib...], where -name, -vendor, and -version are optional information strings that you can specify and add to the library.
list <library_name> | Lists all the RTL components in the library. Currently, this option cannot list OpenCL functions. Example: aocl library list <library_name>
help | Prints the list of Intel® FPGA SDK for OpenCL™ library utility options and their descriptions on screen. Example: aocl library help
11.2. Memory Attributes for Configuring Kernel Memory Systems
Attribute | Description
---|---
register | Specifies that the variable or array must be carried through the pipeline in registers. Registers can be implemented either exclusively in FFs or in a combination of FFs and RAM-based FIFOs.
memory("impl_type") | Specifies that the variable or array must be implemented in a memory system. Including the memory kernel attribute is equivalent to declaring the variable or array with the __local qualifier. You can pass an optional string argument to specify the memory implementation type. Specify impl_type as either BLOCK_RAM or MLAB to implement the memory using memory blocks (such as M20K) or memory logic array blocks (MLABs), respectively.
numbanks(N) | Specifies that the memory system implementing the variable or array must have N banks, where N is a power-of-2 integer value greater than zero.
bankwidth(N) | Specifies that the memory system implementing the variable or array must have banks that are N bytes wide, where N is a power-of-2 integer value greater than zero.
singlepump | Specifies that the memory system implementing the variable or array must be clocked at the same rate as the component accessing it.
doublepump | Specifies that the memory system implementing the variable or array must be clocked at twice the rate of the component accessing it.
merge("label", "direction") | Forces two or more variables or arrays to be implemented in the same memory system. label is an arbitrary string; assign the same label to all variables that you want to merge. Specify direction as either width or depth to identify whether the memories should be merged width-wise or depth-wise, respectively.
bank_bits(b0, b1, ..., bn) | Forces the memory system to split into 2^(n+1) banks, with {b0, b1, ..., bn} forming the bank-select bits. Important: b0, b1, ..., bn must be consecutive, positive integers. Note: If you specify the numbanks(N) attribute without the bank_bits attribute, the compiler automatically infers the bank-select bits based on the memory access pattern.
private_copies(N) | Specifies that the variable or array declared or accessed inside a pipelined loop has a maximum of N private copies to allow N simultaneous iterations of the loop at any given time, where N is an unsigned integer value. Apply this attribute when the scope of a variable (through its declaration or access pattern) is limited to a loop. If the loop also has a #pragma max_concurrency M, the number of private copies created is min(M, N).
max_replicates(N) | Specifies that the memory implementing the variable or array has no more than N replicates, where N is an integer value greater than 0, to enable simultaneous reads from the datapath.
simple_dual_port_memory | Specifies that the memory implementing the variable or array should have no port that services both reads and writes.
force_pow2_depth(N) | Specifies that the memory implementing the variable or array has a power-of-2 depth. This option is enabled if N is 1 and disabled if N is 0. The default value is 1.
Example Use Case | Syntax
---|---
Implements a variable in a register | int __attribute__((register)) a[12];
Implements a memory system with eight banks, each with a width of 8 bytes | int __attribute__((memory, numbanks(8), bankwidth(8))) b[16];
Implements a double-pumped memory system with one 128-byte-wide bank and a maximum of two replicates | int __attribute__((memory, numbanks(1), bankwidth(128), doublepump, max_replicates(2))) c[32];
You can also apply memory attributes to data members of a struct. Specify attributes for struct data members in the struct declaration. If you apply attributes to an object instantiation of a struct, then those attributes override the attributes specified in the declaration for struct data members. For example, consider the following code:
struct State {
    int array[100] __attribute__((__memory__));
    int reg[4] __attribute__((__register__));
};

__kernel void sum(...) {
    struct State S1;
    struct State S2 __attribute__((__memory__));
    // some uses
}
The offline compiler splits S1 into two variables, S1.array[100] (implemented in memory) and S1.reg[4] (implemented in registers). For object S2, however, the compiler ignores the attributes applied in the struct declaration and does not split it, because S2 itself has the memory attribute applied to it.
11.2.1. Restrictions on the Use of Variable-specific Attributes
Unsupported uses of variable-specific attributes that cause compilation errors:
- You use a variable-specific attribute in declarations other than constant, local, or private variable declarations (for example, declarations for function parameters, global variable declarations, or function declarations).
- You use the register attribute in conjunction with any of the other variable-specific attributes.
- You include both the singlepump and doublepump attributes in the same variable declaration.
Incorrect memory configurations that cause the offline compiler to issue warnings during compilation:
- The memory configuration that is defined by the variable-specific attributes exceeds the available storage size (for example, specifying eight banks of local memory for an integer variable).
Incorrect memory configurations that cause compilation errors:
- The bank width is smaller than the data access size (for example, bank width is 2 bytes for an array of 4-byte integers).
11.3. Kernel Attributes for Reducing the Overhead on Hardware Usage
11.3.1. Hardware for Kernel Interface
Hardware around the kernel pipeline is necessary for functions such as the following:
- Dispatching IDs for work-items and work-groups
- Communicating with the host regarding kernel arguments and work-group sizes
Figure 34 illustrates the hardware that the offline compiler generates when it compiles the following kernel:
__kernel void my_kernel(global int* arg) {
    …
    int sum = 0;
    for (unsigned i = 0; i < n; i++) {
        if (sum < m)
            sum += val;
    }
    *arg = sum;
    …
}
11.3.1.1. Omit Hardware that Generates and Dispatches Kernel IDs
Semantically, the max_global_work_dim(0) kernel attribute specifies that the global work dimension of the kernel is zero. Setting this kernel attribute means that the kernel does not use any global, local, or group IDs. The presence of this attribute in the kernel code serves as a guarantee to the offline compiler that the kernel is a single work-item kernel.
When compiling the following kernel, the offline compiler generates interface hardware as illustrated in Figure 35.
channel int chan_in;
channel int chan_out;

__attribute__((max_global_work_dim(0)))
__kernel void plusK (int N, int k) {
    for (int i = 0; i < N; ++i) {
        int data_in = read_channel_intel(chan_in);
        write_channel_intel(chan_out, data_in + k);
    }
}
If your current kernel implementation has multiple work-items but does not use global, local, or group IDs, you can use the max_global_work_dim(0) kernel attribute if you modify the kernel code accordingly:
- Wrap the kernel body in a for loop that iterates as many times as the number of work-items.
- Launch the modified kernel with only one work-item.
11.3.1.2. Omit Communication Hardware between the Host and the Kernel
The autorun kernel attribute notifies the offline compiler that the kernel runs on its own and will not be enqueued by any host.
To leverage the autorun attribute, a kernel must meet all of the following criteria:
- Does not use I/O channels. Note: Kernel-to-kernel channels are supported.
- Does not have any arguments
- Has either the max_global_work_dim(0) attribute or the reqd_work_group_size(X,Y,Z) attribute. Note: The parameters of the reqd_work_group_size(X,Y,Z) attribute must be divisors of 2^32.
As mentioned above, kernels with the autorun attribute cannot have any arguments and start executing without the host launching them explicitly. As a result, the offline compiler does not need to generate the logic for communication between the host and the kernel. Omitting this logic reduces logic utilization and allows the offline compiler to apply additional performance optimizations.
A typical use case for the autorun attribute is a kernel that reads data from one or more kernel-to-kernel channels, processes the data, and then writes the results to one or more channels. When compiling the kernel, the offline compiler generates hardware as illustrated in Figure 36.
channel int chan_in;
channel int chan_out;

__attribute__((max_global_work_dim(0)))
__attribute__((autorun))
__kernel void plusOne () {
    while (1) {
        int data_in = read_channel_intel(chan_in);
        write_channel_intel(chan_out, data_in + 1);
    }
}
11.3.1.3. Omit Hardware to Support the global_work_offset Argument in the clEnqueueNDRangeKernel API
The uses_global_work_offset(0) kernel attribute is recommended for all kernels that are always enqueued with a zero or NULL global_work_offset argument. When this kernel attribute is set, the Intel® FPGA host runtime returns the CL_INVALID_GLOBAL_OFFSET error code if a non-zero or non-NULL global_work_offset argument is used to enqueue the kernel.
11.4. Kernel Replication Using the num_compute_units(X,Y,Z) Attribute
As mentioned in Specifying Number of Compute Units, including the num_compute_units(N) kernel attribute in your kernel instructs the Intel® FPGA SDK for OpenCL™ Offline Compiler to generate multiple compute units to process data. Specifically, the num_compute_units(N) attribute instructs the offline compiler to generate N identical copies of the kernel in hardware.
11.4.1. Customization of Replicated Kernels Using the get_compute_id() Function
Retrieving compute IDs is a convenient alternative to replicating your kernel in source code and then adding specialized code to each kernel copy. When a kernel uses the num_compute_units(X,Y,Z) attribute and calls the get_compute_id() function, the Intel® FPGA SDK for OpenCL™ Offline Compiler assigns a unique compute ID to each compute unit. The get_compute_id() function then retrieves these unique compute IDs. You can use the compute ID to specify how the associated compute unit should behave differently from the other compute units that are derived from the same kernel source code. For example, you can use the return value of get_compute_id() to index into an array of channels to specify which channel each compute unit should read from or write to.
The num_compute_units attribute accepts up to three arguments (that is, num_compute_units(X,Y,Z)). In conjunction with the get_compute_id() function, this attribute allows you to create one-dimensional, two-dimensional, and three-dimensional logical arrays of compute units. An example use case of a 1D array of compute units is a linear pipeline of kernels (also called a daisy chain of kernels). An example use case of a 2D array of compute units is a systolic array of kernels.
__attribute__((max_global_work_dim(0)))
__attribute__((autorun))
__attribute__((num_compute_units(4,4)))
__kernel void PE() {
    row = get_compute_id(0);
    col = get_compute_id(1);
    …
}
For a 3D array of compute units, you can retrieve the X, Y, and Z coordinates of a compute unit in the logical compute unit array using get_compute_id(0), get_compute_id(1), and get_compute_id(2), respectively. In this case, the API is very similar to the API of the work-item's intrinsic functions (that is, get_global_id(), get_local_id(), and get_group_id()).
Global IDs, local IDs, and group IDs can vary at runtime based on how the host invokes the kernel. However, compute IDs are known at compilation time, allowing the offline compiler to generate optimized hardware for each compute unit.
11.4.2. Using Channels with Kernel Copies
The example code below implements channels within multiple compute units.
#define N 4
channel int chain_channels[N+1];

__attribute__((max_global_work_dim(0)))
__kernel void reader(global int *data_in, int size) {
    for (int i = 0; i < size; ++i) {
        write_channel_intel(chain_channels[0], data_in[i]);
    }
}

__attribute__((max_global_work_dim(0)))
__attribute__((autorun))
__attribute__((num_compute_units(N)))
__kernel void plusOne() {
    int compute_id = get_compute_id(0);
    int input = read_channel_intel(chain_channels[compute_id]);
    write_channel_intel(chain_channels[compute_id+1], input + 1);
}

__attribute__((max_global_work_dim(0)))
__kernel void writer(global int *data_out, int size) {
    for (int i = 0; i < size; ++i) {
        data_out[i] = read_channel_intel(chain_channels[N]);
    }
}
11.5. Intra-Kernel Registered Assignment Built-In Function
In general, it is not necessary to include the __fpga_reg() function in your kernel code to achieve desired performance.
Prototype of the __fpga_reg() built-in function:
T __fpga_reg(T op)
where T may be any sized type, such as standard OpenCL device data types, or a user-defined struct containing OpenCL types.
Use the __fpga_reg() function for the following purposes:
- Break the critical paths between spatially distant portions of a data path, such as between processing elements of a large systolic array.
- Reduce the pressure on placement and routing efforts caused by spatially distinct portions of the kernel implementation.
The __fpga_reg() function directs the Intel® FPGA SDK for OpenCL™ Offline Compiler to insert at least one hardware pipelining register on the signal path that assigns the operand to the return value. This built-in function operates as an assignment in the OpenCL programming language, where the operand is assigned to the return value. The assignment has no implicit semantic or functional meaning beyond a standard C assignment. Functionally, you can think of the __fpga_reg() function as always being optimized away by the offline compiler.
You may introduce nested __fpga_reg() function calls in your kernel code to increase the minimum number of registers that the offline compiler inserts on the assignment path. Because each function call guarantees the insertion of at least one register stage, the number of calls provides a lower limit on the number of registers.
Consider the following example:
int out=__fpga_reg(__fpga_reg(in));
This line of code directs the offline compiler to insert at least two registers on the assignment path. The offline compiler may insert more than two registers on the path.
A. Support Statuses of OpenCL Features
A.1. Support Statuses of OpenCL 1.0 Features
The following sections outline the support statuses of the OpenCL™ features described in the OpenCL Specification version 1.0.
A.1.1. OpenCL 1.0 C Programming Language Implementation
Support Status column legend:
Symbol | Description
---|---
● | The feature is supported. There might be a clarification for the supported feature in the Notes column.
○ | The feature is supported with exceptions identified in the Notes column.
X | The feature is not supported.
Section | Feature | Support Status | Notes
---|---|---|---
6.1.1 | Built-in Scalar Data Types | |
 | double precision float | ○ | Preliminary support for the double precision float built-in scalar data type. This feature might not conform with the OpenCL Specification version 1.0. Currently, the following double precision floating-point functions are expected to conform with the OpenCL Specification version 1.0: add / subtract / multiply / divide / ceil / floor / rint / trunc / fabs / fmax / fmin / sqrt / rsqrt / exp / exp2 / exp10 / log / log2 / log10 / sin / cos / asin / acos / sinh / cosh / tanh / asinh / acosh / atanh / pow / pown / powr / tan / atan / atan2 / ldexp / log1p / sincos
 | half precision float | ○ | Support for scalar addition, subtraction, and multiplication. Support for conversions to and from single-precision floating point. This feature might not conform with the OpenCL Specification version 1.0. This feature is supported in the Emulator.
6.1.2 | Built-in Vector Data Types | ○ | Preliminary support for vectors with three elements. Three-element vector support is a supplement to the OpenCL Specification version 1.0.
6.1.3 | Other Built-in Data Types | ○ | The SDK does not support image or sampler types because the SDK does not support images.
6.2.1 | Implicit Conversions | ● | Refer to Section 6.2.6: Usual Arithmetic Conversions in the OpenCL Specification version 1.2 for an important clarification of implicit conversions between scalar and vector types.
6.2.2 | Explicit Casts | ● | The SDK allows scalar data casts to a vector with a different element type.
6.5 | Address Space Qualifiers | ○ | Function scope __constant variables are not supported.
6.6 | Image Access Qualifiers | X | The SDK does not support images.
6.7 | Function Qualifiers | |
6.7.2 | Optional Attribute Qualifiers | ● | Refer to the Intel® FPGA SDK for OpenCL™ Best Practices Guide for tips on using reqd_work_group_size to improve kernel performance. The SDK parses but ignores the vec_type_hint and work_group_size_hint attribute qualifiers.
6.9 | Preprocessor Directives and Macros | |
 | #pragma directive: #pragma unroll | ● | The Intel® FPGA SDK for OpenCL™ Offline Compiler supports only #pragma unroll. You may assign an integer argument to the unroll directive to control the extent of loop unrolling. For example, #pragma unroll 4 unrolls four iterations of a loop. By default, an unroll directive with no unroll factor causes the offline compiler to attempt to unroll the loop fully. Refer to the Intel® FPGA SDK for OpenCL™ Best Practices Guide for tips on using #pragma unroll to improve kernel performance.
 | __ENDIAN_LITTLE__ defined to be value 1 | ● | The target FPGA is little-endian.
 | __IMAGE_SUPPORT__ | X | __IMAGE_SUPPORT__ is undefined; the SDK does not support images.
6.10 | Attribute Qualifiers—The offline compiler parses attribute qualifiers as follows: | |
6.10.3 | Specifying Attributes of Variables—endian | X |
6.10.4 | Specifying Attributes of Blocks and Control-Flow Statements | X |
6.10.5 | Extending Attribute Qualifiers | ● | The offline compiler can parse attributes on various syntactic structures. It reserves some attribute names for its own internal use. Refer to the Intel® FPGA SDK for OpenCL™ Best Practices Guide for tips on how to optimize kernel performance using these kernel attributes.