Intel® Distribution of OpenVX* Implementation Developer Guide
Intel® Distribution of OpenVX* Implementation delivers an OpenVX* library optimized for running on Intel® hardware (CPU, GPU).
Intel's OpenVX* Implementation: Key Features
Performance:
- Intel® Distribution of OpenVX* Implementation offers CPU kernels that are multi-threaded (with Intel® Threading Building Blocks) and vectorized (with Intel® Integrated Performance Primitives). GPU support is backed by an optimized OpenCL™ implementation.
- The implementation supports automatic data tiling for input, intermediate, and output data, so that most read/write operations are performed on local data; a minimal sketch of a standard graph that benefits from these optimizations follows this list.
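The sketch below assembles a plain, standard OpenVX* graph from two stock kernels connected by a virtual intermediate image; the image sizes and kernel choice are arbitrary example values. No Intel-specific calls are needed for the optimizations listed above, since they are applied by the runtime transparently during graph verification and execution.

```c
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* Example 1080p images; sizes are arbitrary for illustration. */
    vx_image input   = vxCreateImage(context, 1920, 1080, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 1920, 1080, VX_DF_IMAGE_U8);
    vx_image output  = vxCreateImage(context, 1920, 1080, VX_DF_IMAGE_U8);

    /* Two standard nodes connected by a virtual image; the runtime is free
       to tile the intermediate data so it stays local between the stages. */
    vx_node gauss = vxGaussian3x3Node(graph, input, blurred);
    vx_node box   = vxBox3x3Node(graph, blurred, output);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("Graph verification failed\n");

    vxReleaseNode(&gauss);
    vxReleaseNode(&box);
    vxReleaseImage(&input);
    vxReleaseImage(&blurred);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```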
Extensibility:
- The SDK also extends the original OpenVX* standard with Intel-specific APIs and numerous kernel extensions. Refer to the Intel's Extensions to the OpenVX* Primitives chapter for details.
- The implementation also allows you to add performance-efficient (for example, tiled) versions of your own algorithms to the processing pipelines. Refer to the Intel's Extensions to the OpenVX* API: Advanced Tiling chapter for the CPU-efficient approach and to the Intel's Extensions to the OpenVX* API: OpenCL™ Custom Kernels chapter for GPU (OpenCL)-specific information; a baseline user-kernel registration sketch follows this list.
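The Intel-specific advanced-tiling and OpenCL™ custom-kernel APIs are described in the chapters referenced above. As a baseline, the sketch below registers a user kernel through the standard OpenVX* user-kernel API only; the kernel name, enum value, and callbacks (my_copy_kernel, my_copy_validate, register_my_copy_kernel) are hypothetical placeholders.

```c
#include <VX/vx.h>

/* Hypothetical identifiers for illustration; real code would pick its own
   vendor/library IDs and kernel name. */
#define MY_COPY_KERNEL_NAME "example.user.copy"
#define MY_COPY_KERNEL_ENUM (VX_KERNEL_BASE(VX_ID_DEFAULT, 0) + 0x0)

/* Kernel body: called per graph execution with the resolved parameters. */
static vx_status VX_CALLBACK my_copy_kernel(vx_node node,
                                            const vx_reference *parameters,
                                            vx_uint32 num)
{
    (void)node; (void)parameters; (void)num;
    /* Actual processing of parameters[0] (input image) into parameters[1]
       (output image) would go here. */
    return VX_SUCCESS;
}

/* Validator: describes the output metadata so the graph can be verified. */
static vx_status VX_CALLBACK my_copy_validate(vx_node node,
                                              const vx_reference parameters[],
                                              vx_uint32 num,
                                              vx_meta_format metas[])
{
    (void)node; (void)num;
    vx_uint32 width = 0, height = 0;
    vx_df_image format = VX_DF_IMAGE_U8;
    vxQueryImage((vx_image)parameters[0], VX_IMAGE_WIDTH, &width, sizeof(width));
    vxQueryImage((vx_image)parameters[0], VX_IMAGE_HEIGHT, &height, sizeof(height));
    vxSetMetaFormatAttribute(metas[1], VX_IMAGE_WIDTH, &width, sizeof(width));
    vxSetMetaFormatAttribute(metas[1], VX_IMAGE_HEIGHT, &height, sizeof(height));
    return vxSetMetaFormatAttribute(metas[1], VX_IMAGE_FORMAT, &format, sizeof(format));
}

/* Registration: after this succeeds, the kernel can be used in any graph. */
vx_status register_my_copy_kernel(vx_context context)
{
    vx_kernel kernel = vxAddUserKernel(context, MY_COPY_KERNEL_NAME,
                                       MY_COPY_KERNEL_ENUM, my_copy_kernel,
                                       2, my_copy_validate, NULL, NULL);
    vx_status status = vxGetStatus((vx_reference)kernel);
    if (status != VX_SUCCESS)
        return status;

    vxAddParameterToKernel(kernel, 0, VX_INPUT,  VX_TYPE_IMAGE,
                           VX_PARAMETER_STATE_REQUIRED);
    vxAddParameterToKernel(kernel, 1, VX_OUTPUT, VX_TYPE_IMAGE,
                           VX_PARAMETER_STATE_REQUIRED);
    return vxFinalizeKernel(kernel);
}
```

Once vxFinalizeKernel() succeeds, the kernel can be retrieved with vxGetKernelByName() and instantiated in a graph with vxCreateGenericNode().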
Heterogeneity:
- Support for both task and data parallelism to maximize utilization of compute resources such as the CPU and GPU.
- General system-level device affinities, as well as a fine-grained API for orchestrating individual nodes via the notion of targets. Refer to the Heterogeneous Computing with the Intel® Distribution of OpenVX* Implementation chapter for details; a node-target sketch follows this list.
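The sketch below steers an individual node to a device through the standard OpenVX* targets API. The target string "intel.gpu" and the helper name assign_node_to_gpu are assumptions used only for illustration; consult the Heterogeneous Computing chapter for the target names the implementation actually exposes.

```c
#include <VX/vx.h>

/* Request that a single node run on the GPU target; fall back to the
   default target if that request cannot be satisfied. */
static void assign_node_to_gpu(vx_node node)
{
    /* "intel.gpu" is an example target string, not a confirmed name. */
    vx_status status = vxSetNodeTarget(node, VX_TARGET_STRING, "intel.gpu");
    if (status != VX_SUCCESS)
    {
        /* VX_TARGET_ANY lets the runtime pick any available target. */
        vxSetNodeTarget(node, VX_TARGET_ANY, NULL);
    }
}
```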
Intel® Distribution of OpenVX* Implementation uses a modular approach with a common runtime and individual device-specific plugins.
