
OpenVINO™ Memory Sharing for NPU on Lunar Lake (LNL) Machine

Content Type: Product Information & Documentation   |   Article ID: 000100965   |   Last Reviewed: 05/22/2025

Description

  • Working on interop between the GPU and the NPU on a Lunar Lake (LNL) machine running Windows.
  • The NPU remote tensor plugin only supports creating a Level Zero context from the OpenVINO™ core or a compiled model; unlike the GPU plugin, it cannot accept an existing context.
  • Unable to find documentation on how to share SYCL memory with the NPU for inference.

Resolution

SYCL is a wrapper over OpenCL*. Both the Level Zero API and the OpenCL* API allow sharing memory through dma-buf (on the Linux* platform) or NT handles (on the Windows platform), which users can import and export. The following details describe how to share such memory with the NPU through the remote tensor feature.
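The import flow above can be sketched as follows. This is a minimal sketch only: the exact class and overload names (such as ZeroContext and its create_tensor overloads), the model path, and the shared-handle variable are assumptions; check the NPU Level Zero remote tensor header shipped with your OpenVINO™ release for the actual API.

```cpp
// Sketch: importing externally allocated memory into an NPU remote tensor.
// Assumed header and class names; verify against your OpenVINO release.
#include <openvino/openvino.hpp>
#include <openvino/runtime/intel_npu/level_zero/level_zero.hpp>

int main() {
    ov::Core core;  // one ov::Core corresponds to one Level Zero context on NPU

    auto model = core.read_model("model.xml");  // hypothetical model path
    auto compiled = core.compile_model(model, "NPU");

    // The NPU plugin creates its own Level Zero context; it can only be
    // retrieved from the core or a compiled model, never wrapped around an
    // existing external context.
    auto npu_context = core.get_default_context("NPU")
                           .as<ov::intel_npu::level_zero::ZeroContext>();

    // "shared_handle" is a placeholder for a handle exported by the producer
    // API (e.g. SYCL/OpenCL): an NT handle on Windows or a dma-buf fd on Linux.
    // void* shared_handle = ...;
    // auto tensor = npu_context.create_tensor(ov::element::f32,
    //                                         ov::Shape{1, 3, 224, 224},
    //                                         shared_handle);

    // auto request = compiled.create_infer_request();
    // request.set_input_tensor(tensor);
    // request.infer();
    return 0;
}
```

The key design point is that ownership of the Level Zero context stays with the NPU plugin; the application only imports memory into it, it does not supply the context.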

If a host Level Zero tensor is created from the NPU, it can be used without an extra memory copy on the NPU only within the same Level Zero context that was used to create it. Please note that the same ov::Core object must be used to stay in the same Level Zero context; creating different ov::Core objects simply creates different Level Zero contexts.
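The single-context rule can be illustrated with a short sketch. The variable names are illustrative, and the assumption (consistent with the paragraph above) is that each ov::Core instance owns a distinct Level Zero context on the NPU.

```cpp
#include <openvino/openvino.hpp>

int main() {
    // Each ov::Core owns its own Level Zero context for the NPU plugin.
    ov::Core core_a;
    ov::Core core_b;

    auto ctx_a = core_a.get_default_context("NPU");
    auto ctx_b = core_b.get_default_context("NPU");

    // ctx_a and ctx_b wrap DIFFERENT Level Zero contexts. A host tensor
    // created through ctx_a is zero-copy only with models compiled by
    // core_a; passing it to a model compiled by core_b forces a copy (or
    // fails), because the memory belongs to a different Level Zero context.
    //
    // Correct usage: keep one ov::Core for the whole pipeline so every
    // compiled model and every remote tensor shares one Level Zero context.
    return 0;
}
```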

Related Products

This article applies to 3 products:

  • Intel® Xeon Phi™ Processor Software
  • OpenVINO™ toolkit
  • Performance Libraries