Accelerated AI Inference for Unreal Engine 5

Introduction

In collaboration with Epic Games, Inc., Intel® has developed an Unreal Engine 5 plugin, NNERuntimeOpenVINO, to accelerate AI inference. By leveraging existing technology, Intel® OpenVINO™, users can expect increased performance, support for additional model formats, and broader hardware support.

Setup

The plugin can be downloaded from GitHub.com. Once the plugin has been downloaded, you can enable it for your project via the Unreal Editor by navigating to Edit > Plugins and searching for “NNERuntimeOpenVINO”.

How It Works

The plugin provides an OpenVINO™ runtime for Unreal Engine's Neural Network Engine (NNE): NNE API calls are mapped to Intel OpenVINO™ API calls. By developing against the NNE API, developers can seamlessly switch the underlying runtime without changing existing inference code.

Documentation for the respective APIs can be found in the following locations:

OpenVINO™: https://docs.openvino.ai/2025/index.html

NNE: https://dev.epicgames.com/documentation/en-us/unreal-engine/neural-network-engine-in-unreal-engine

Importing Models

Models can be imported via the Content Drawer of your project. Note that for formats with multiple files, such as OpenVINO™ IR, all files must share the same base name and differ only in extension (e.g., MyModel.xml + MyModel.bin). Once imported, models are stored as NNEModelData assets. At cook time, models are packaged in their native, pre-compiled format to ensure the widest availability. At runtime, models are compiled, optimized, and cached on demand for the specific device the user requests.

The compile step occurs when a ModelInstance is instantiated. Compiled models are cached and managed by their ModelInstance. You may run as many inference requests against a given ModelInstance as desired.
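As a sketch, the workflow above follows the standard NNE CPU-runtime pattern. The runtime name "NNERuntimeOpenVINO" and the tensor shapes below are assumptions for illustration; consult the NNE documentation linked above for the authoritative API.

```cpp
// Sketch: creating a ModelInstance (which triggers the compile step)
// and running inference against it via the NNE API.
// The runtime name and tensor sizes are illustrative assumptions.
#include "NNE.h"
#include "NNEModelData.h"
#include "NNERuntimeCPU.h"

void RunInference(TObjectPtr<UNNEModelData> ModelData)
{
	// Look up the OpenVINO-backed runtime by name.
	TWeakInterfacePtr<INNERuntimeCPU> Runtime =
		UE::NNE::GetRuntime<INNERuntimeCPU>(TEXT("NNERuntimeOpenVINO"));
	if (!Runtime.IsValid())
	{
		return;
	}

	// Create the model from the imported NNEModelData asset, then
	// instantiate it; instantiation is where the compile step occurs.
	TSharedPtr<UE::NNE::IModelCPU> Model = Runtime->CreateModelCPU(ModelData);
	TSharedPtr<UE::NNE::IModelInstanceCPU> ModelInstance = Model->CreateModelInstanceCPU();

	// Bind input and output buffers (shapes here are illustrative).
	TArray<float> Input;
	Input.SetNumZeroed(1 * 3 * 224 * 224);
	TArray<float> Output;
	Output.SetNumZeroed(1000);

	ModelInstance->SetInputTensorShapes(
		{ UE::NNE::FTensorShape::Make({ 1, 3, 224, 224 }) });

	TArray<UE::NNE::FTensorBindingCPU> Inputs  = { { Input.GetData(),  Input.Num()  * sizeof(float) } };
	TArray<UE::NNE::FTensorBindingCPU> Outputs = { { Output.GetData(), Output.Num() * sizeof(float) } };

	// The same cached ModelInstance can service as many inference
	// requests as desired.
	ModelInstance->RunSync(Inputs, Outputs);
}
```

Because the compiled model is cached by its ModelInstance, creating the instance once and reusing it across frames avoids paying the compile cost repeatedly.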

Plugin Architecture

The plugin consists of three modules:


  1. NNERuntimeOpenVINO

The runtime component responsible for mapping NNE tensors and inference calls to their OpenVINO™ equivalents.


  2. NNERuntimeOpenVINOEditor

The editor component responsible for handling the import and cook of model data assets.


  3. OpenVINO™

OpenVINO™ headers and SDK setup.

Features

Device and Operating System Support

While OpenVINO™ runs on many different devices, the plugin aims to map each device as closely as possible to its NNE equivalent.

Windows and Linux are supported, with prebuilt binaries included.

For a full list of supported devices and operating systems, please refer to the OpenVINO™ documentation: https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html

Model Formats

The following model formats are supported by the plugin:
 

  • ONNX
  • OpenVINO™ IR

While OpenVINO™ supports additional model formats not listed here, those will need to be converted prior to import.

Python support for model conversion is provided with the standalone toolkit: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html
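As a sketch, converting a model to OpenVINO™ IR with the toolkit's Python API might look like the following. The file names are illustrative assumptions; `ov.convert_model` accepts several source formats, including ONNX.

```python
# Sketch: convert a source model to OpenVINO(TM) IR prior to import.
# Assumes the `openvino` package is installed (pip install openvino)
# and that "model.onnx" exists; both names are illustrative.
import openvino as ov

# Read and convert the source model into OpenVINO's in-memory representation.
converted_model = ov.convert_model("model.onnx")

# Serialize to IR. This produces MyModel.xml + MyModel.bin, with the matching
# base names the plugin's importer expects for multi-file formats.
ov.save_model(converted_model, "MyModel.xml")
```

The resulting .xml/.bin pair can then be imported together via the Content Drawer as described above.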

Special Thanks

Nico Ranieri of Epic Games, Inc.
