Intel® Distribution of OpenVX* Implementation Developer Guide
- The Intel® Distribution of OpenVX* Implementation offers CPU kernels that are multi-threaded (with Intel® Threading Building Blocks) and vectorized (with Intel® Integrated Performance Primitives). GPU support is backed by an optimized OpenCL™ implementation.
- The implementation supports automatic data tiling for input, intermediate, and output data, so that most read/write operations act on local (cache-resident) values.
- The SDK also extends the original OpenVX* standard with specific APIs and numerous kernel extensions. Refer to the Intel's Extensions to the OpenVX* Primitives chapter for details.
- The implementation also lets you add performance-efficient (for example, tiled) versions of your own algorithms to processing pipelines. Refer to the Intel's Extensions to the OpenVX* API: Advanced Tiling chapter for the CPU-efficient approach and the Intel's Extensions to the OpenVX* API: OpenCL™ Custom Kernels chapter for GPU (OpenCL) specifics.
- The implementation supports both task and data parallelism to maximize utilization of compute resources such as the CPU and GPU.
- The implementation provides general system-level device affinities, as well as a fine-grained API for orchestrating individual nodes via the notion of targets. Refer to the Heterogeneous Computing with the Intel® Distribution of OpenVX* Implementation chapter for details.
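As a minimal sketch of the per-node targets idea above, the standard OpenVX `vxSetNodeTarget` call can pin individual graph nodes to different devices. The target strings `"intel.cpu"` and `"intel.gpu"` used here are assumptions about this implementation's naming; consult the Heterogeneous Computing chapter for the exact target names it exposes.

```c
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* A two-stage pipeline: Gaussian blur followed by a box filter. */
    vx_image input   = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image output  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    vx_node blur = vxGaussian3x3Node(graph, input, blurred);
    vx_node box  = vxBox3x3Node(graph, blurred, output);

    /* Fine-grained orchestration: keep the blur on the CPU and
     * offload the box filter to the GPU. The target strings are
     * hypothetical; query the implementation for supported targets. */
    vxSetNodeTarget(blur, VX_TARGET_STRING, "intel.cpu");
    vxSetNodeTarget(box,  VX_TARGET_STRING, "intel.gpu");

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseNode(&blur);
    vxReleaseNode(&box);
    vxReleaseImage(&input);
    vxReleaseImage(&blurred);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

Because targets are set per node, a single graph can split its work across devices while the runtime still exploits task and data parallelism between the nodes.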