One major challenge in using computer-generated imagery (CGI) for motion pictures, especially for high-resolution animation or visual effects sequences, is efficiently denoising ray-traced images.
Noise obscures the desired high-fidelity image, introducing specks, granularity, or variations in light, color, and shadows.
Even worse, noise impacts schedules, quality, and budgets: fighting it in renders creates tedious extra work that slows film production, costs time and money, and impedes the creative process.
Intel Principal Engineer Attila Áfra and his colleagues first started working closely with Cinesite* five years ago, assisting with rendering optimizations for The Addams Family 2 and using Intel Open Image Denoise as the key ingredient in their collaboration.
Since then, more studios and VFX and animation solution providers in the film and entertainment industry have taken note of Intel Open Image Denoise and started leveraging it for their projects as well. Earlier this year, the Academy of Motion Picture Arts and Sciences announced that the library would receive a Technical Achievement Award for its contribution to the motion picture industry, and Attila was honored for his contributions on Tuesday, April 29, at the Academy's annual Scientific and Technical Awards ceremony.
Intel Open Image Denoise is an open source library of high-performance, high-quality denoising filters for images rendered with ray tracing. It is released under the permissive Apache 2.0 license and is available in an open source GitHub repository.
It filters out the unwanted Monte Carlo noise inherent to stochastic ray tracing methods like path tracing, reducing the number of necessary samples per pixel by potentially multiple orders of magnitude. This results in accelerated real-time previews during the creative process and reduced final rendering times. Thanks to its simple but flexible C/C++ API, the library can be easily integrated into most existing or new rendering solutions.
It is integrated with many tools, such as Autodesk Arnold*, Blender*, Chaos V-Ray*, Chaos Corona*, Foundry Modo*, and Maxon Cinema 4D*. The library follows an open design philosophy, delivering great performance and optimizations on CPUs and GPUs for Intel architecture and beyond, including rich cross-vendor support.
The Award
The contributions recognized and celebrated at the Academy’s annual Scientific and Technical Awards ceremony are selected for their significant value to the process of making motion pictures.
⇒ The 2025 Technical Achievement Awards Honorees
The Academy's international committee of film industry experts reviews key innovations and their impact. This year, the growing adoption and accelerating technical evolution of more efficient computer-generated imagery (CGI) rendering with the help of deep learning-based denoising filters caught their attention.
CGI is becoming an increasingly essential part of animation, augmentation, and post-processing. This makes fast, customizable, and high-quality rendering, with ray tracing at its center, a core ingredient of film production. Ray tracing is a powerful technique capable of producing very realistic images, but it has very high computational requirements. To reduce rendering times, a sophisticated denoiser like Intel Open Image Denoise can assist, allowing fewer rays to be traced without sacrificing image quality.
Intel Open Image Denoise's strength resides in its combination of industry-leading, easy-to-use C/C++ denoising APIs with deep learning algorithms. Its filters are built on the well-known U-Net convolutional neural network (CNN) architecture, widely used for image segmentation and in diffusion models for iterative image denoising.
Although the library ships with pre-trained filter models, using them is not mandatory. A model can be trained with the included Python*-based training toolkit and user-provided image datasets to optimize a filter for a specific renderer, sample count, content type, or scene. Thus, creators and studios can retrain the included denoising neural networks for their own renderers and styles.
The Academy of Motion Picture Arts and Sciences describes the contribution of Intel Open Image Denoise as follows:
Open Image Denoise is an open-source library that provides an elegant API and runs on diverse hardware, leading to broad industry adoption. Its core technology is provided by the widely adopted U-Net architecture that improves efficiency and preserves detail, raising the quality of CG imagery across the industry.
How It Works
Let us look closely at how it all works and what makes Open Image Denoise so popular. At its heart, it is a collection of efficient deep learning based denoising filters trained to handle a wide range of samples per pixel (spp), from 1 spp to almost fully converged, making it suitable for preview and final-frame rendering. The filters can denoise images using only the noisy color (beauty) buffer, or, to preserve as much detail as possible, can optionally utilize auxiliary feature buffers (e.g., albedo, normal). Most renderers support such buffers as arbitrary output variables (AOVs) or they can be implemented with little effort.
Intel Open Image Denoise exploits modern instruction sets like AVX-512 and NEON on CPUs, Intel® Xe Matrix Extensions (Intel® XMX) on Intel GPUs, and tensor cores on third-party GPUs.
The latest Intel Open Image Denoise sources are available at the Intel Open Image Denoise GitHub repository:
$ git clone --recursive https://github.com/OpenImageDenoise/oidn.git
Note: The Git Large File Storage (LFS) extension is required to clone the repository.
The build environment uses CMake and Python, along with a C++11- or C99-capable, LLVM-compatible compiler. Check out the Intel® Implicit SPMD Program Compiler (ISPC) and, for SYCL* device support, the Intel® oneAPI DPC++/C++ Compiler.
Intel® oneAPI Threading Building Blocks (oneTBB) is used to fully leverage scalable parallelism on your rendering platform.
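A typical out-of-source CMake build then looks like this (a minimal sketch; adjust the generator and options for your platform):
$ cd oidn
$ mkdir build && cd build
$ cmake ..
$ cmake --build .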
⇒ Please consult the Intel Open Image Denoise online documentation (pdf) for details.
API Overview
The API is structured so that it operates on the following objects:
- device objects (OIDNDevice type)
- buffer objects (OIDNBuffer type)
- filter objects (OIDNFilter type)
All objects are reference-counted, and handles can be released by calling the appropriate release function (oidnReleaseDevice) or retained by incrementing the reference count (oidnRetainDevice).
Object parameter updates only take effect once they are explicitly committed to a given object. This allows for batching up multiple small changes and specifying exactly when in the rendering timeline changes to objects will occur.
All API calls are thread-safe, but operations using the same device will be serialized. Thus, it is best to minimize API calls from different threads.
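As a minimal sketch of this object model, the C API flow for creating, committing, and releasing a device looks roughly as follows (filters and buffers follow the same retain/release pattern; the full denoising flow is shown in the next section):
#include <OpenImageDenoise/oidn.h>
...
// Create a device and commit it; parameter changes take effect only at commit time
OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
oidnCommitDevice(device);
// Objects are reference-counted: retain while another subsystem still holds the handle...
oidnRetainDevice(device);
// ...and release when done; the device is destroyed once its reference count reaches zero
oidnReleaseDevice(device);
oidnReleaseDevice(device);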
Basic Denoising (C++11) API Usage Flow
#include <OpenImageDenoise/oidn.hpp>
...
// Create an Open Image Denoise device
oidn::DeviceRef device = oidn::newDevice(); // CPU or GPU if available
// oidn::DeviceRef device = oidn::newDevice(oidn::DeviceType::CPU);
device.commit();
// Create buffers for input/output images accessible by both host (CPU) and device (CPU/GPU)
oidn::BufferRef colorBuf = device.newBuffer(width * height * 3 * sizeof(float));
oidn::BufferRef albedoBuf = ...
// Create a filter for denoising a beauty (color) image using optional auxiliary images too
// This can be an expensive operation, so try not to create a new filter for every image!
oidn::FilterRef filter = device.newFilter("RT"); // generic ray tracing filter
filter.setImage("color", colorBuf, oidn::Format::Float3, width, height); // beauty
filter.setImage("albedo", albedoBuf, oidn::Format::Float3, width, height); // auxiliary
filter.setImage("normal", normalBuf, oidn::Format::Float3, width, height); // auxiliary
filter.setImage("output", colorBuf, oidn::Format::Float3, width, height); // denoised beauty
filter.set("hdr", true); // beauty image is HDR
filter.commit();
// Fill the input image buffers
float* colorPtr = (float*)colorBuf.getData();
...
// Filter the beauty image
filter.execute();
// Check for errors
const char* errorMessage;
if (device.getError(errorMessage) != oidn::Error::None)
std::cout << "Error: " << errorMessage << std::endl;
Performance on CPUs and Intel® GPUs using SYCL
With Open Image Denoise, you can leverage SYCL, which opens up denoising acceleration on a wide range of integrated and dedicated Intel GPUs.
To add SYCL device support, ensure the following CMake option is set in addition to OIDN_DEVICE_CPU:
- OIDN_DEVICE_SYCL
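For example, a configure step for a combined CPU and SYCL build might look like the following (a sketch only; using the oneAPI DPC++/C++ compiler driver icpx as the C++ compiler is an assumption here, so check the build documentation for the exact toolchain setup):
$ cmake -DOIDN_DEVICE_CPU=ON -DOIDN_DEVICE_SYCL=ON -DCMAKE_CXX_COMPILER=icpx ..
$ cmake --build .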
You can explicitly create SYCL devices for denoising acceleration using the library’s device API:
OIDNDevice oidnNewDevice(OIDNDeviceType type);
passing OIDN_DEVICE_TYPE_SYCL as the device type.
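For example:
// Explicitly request a SYCL device; check the device for errors after creation
OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_SYCL);
oidnCommitDevice(device);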
Using other Accelerator Devices
Intel Open Image Denoise works efficiently with CPU-native C/C++ applications and SYCL devices, and its API also extends to a wide range of other accelerator devices through the same OIDNDevice logical device concept.
Parameters of these physical devices can be queried using the following functions (a short enumeration example follows the declarations):
bool oidnGetPhysicalDeviceBool (int physicalDeviceID, const char* name);
int oidnGetPhysicalDeviceInt (int physicalDeviceID, const char* name);
unsigned int oidnGetPhysicalDeviceUInt (int physicalDeviceID, const char* name);
const char* oidnGetPhysicalDeviceString(int physicalDeviceID, const char* name);
const void* oidnGetPhysicalDeviceData (int physicalDeviceID, const char* name,
size_t* byteSize);
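For instance, the supported physical devices can be enumerated and inspected before a logical device is created from one of them; a brief sketch using the documented "name" parameter and oidnNewDeviceByID, which creates a logical device from a physical device ID:
#include <OpenImageDenoise/oidn.h>
#include <stdio.h>
...
// Enumerate all physical devices supported by the library on this machine
const int numDevices = oidnGetNumPhysicalDevices();
for (int i = 0; i < numDevices; ++i)
    printf("Device %d: %s\n", i, oidnGetPhysicalDeviceString(i, "name"));
// Create a logical device from a chosen physical device (here simply the first one)
OIDNDevice device = oidnNewDeviceByID(0);
oidnCommitDevice(device);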
Shared Memory
Using the Open Image Denoise API, we can directly pass pointers allocated with the unified shared memory (USM) allocators of common accelerator device APIs (e.g., sycl::malloc_device, cudaMalloc).
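As a rough sketch with a SYCL device, the images can live in USM device memory and be passed to the filter as raw pointers. Sharing the application's existing SYCL queue via newSYCLDevice is an assumption of this sketch, so consult the device API documentation for the exact interop setup:
#include <OpenImageDenoise/oidn.hpp>
#include <sycl/sycl.hpp>
...
sycl::queue queue; // the application's existing SYCL queue
// Create a denoising device that shares the application's queue (SYCL interop; see docs)
oidn::DeviceRef device = oidn::newSYCLDevice(queue);
device.commit();
// Allocate the beauty image in USM device memory and pass the raw pointer to the filter
float* colorPtr = sycl::malloc_device<float>(width * height * 3, queue);
oidn::FilterRef filter = device.newFilter("RT");
filter.setImage("color",  colorPtr, oidn::Format::Float3, width, height);
filter.setImage("output", colorPtr, oidn::Format::Float3, width, height); // in-place denoising
filter.set("hdr", true);
filter.commit();
filter.execute();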
If our accelerator device does not support USM, for instance when using DirectX* 12 or Vulkan*, we may have to use buffers instead. External buffers can be imported from graphics APIs, avoiding expensive copying through host memory, with the following library function calls:
- oidnNewSharedBufferFromFD
- oidnNewSharedBufferFromWin32Handle
- oidnNewSharedBufferFromMetal
⇒ The Intel Open Image Denoise memory access API architecture is open and flexible, ensuring it remains performant and interoperable and coexists with the existing device or accelerator programming framework of your choice.
Denoising in Action
Filtering and ray tracing are at the center of this versatile denoising library.
It starts with a generic ray tracing denoising filter suitable for denoising images rendered with Monte Carlo ray tracing methods like unidirectional and bidirectional path tracing. It also supports depth of field and motion blur. The filter is based on a convolutional neural network (CNN) and comes with a set of pre-trained models that work well with a wide range of ray tracing-based renderers and noise levels.
The typical results speak for themselves:
Before and after denoising comparison images (scenes by Evermotion*).
For denoising beauty images, it accepts either low dynamic range (LDR) or high dynamic range (HDR) images as the main input. It also accepts auxiliary feature images, such as albedo and normal, as optional inputs.
⇒ Check out some denoising code samples.
Using auxiliary feature images like albedo and normal helps preserve fine details and textures in the image, thus significantly improving denoising quality.
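At very low sample counts the auxiliary images themselves can be noisy; the library supports prefiltering them with dedicated RT filter instances and then telling the main filter that its auxiliary inputs are clean. A condensed sketch based on the prefiltering flow described in the documentation (buffer setup omitted; a normal prefilter works the same way):
// Prefilter the noisy albedo image in place with a separate RT filter
oidn::FilterRef albedoFilter = device.newFilter("RT");
albedoFilter.setImage("albedo", albedoBuf, oidn::Format::Float3, width, height);
albedoFilter.setImage("output", albedoBuf, oidn::Format::Float3, width, height);
albedoFilter.commit();
albedoFilter.execute();
// Tell the main beauty filter that its auxiliary images are already clean, then denoise
filter.set("cleanAux", true);
filter.commit();
filter.execute();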
Datasets for Training
If Intel Open Image Denoise's pre-trained models are not exactly right for your project, you can also train one yourself. The source distribution includes a PyTorch-based neural network training toolkit.
It comes with a set of ready-to-use training scripts; a sketch of a typical workflow follows the list:
- preprocess.py: Preprocesses training and validation datasets.
- train.py: Trains a model using preprocessed datasets.
- infer.py: Performs inference on a dataset using the specified training result.
- export.py: Exports a training result to the runtime model weights format.
- find_lr.py: Tool for finding the optimal minimum and maximum learning rates.
- visualize.py: Invokes TensorBoard for visualizing statistics of a training result.
- split_exr.py: Splits a multi-channel EXR image into multiple feature images.
- convert_image.py: Converts a feature image to a different image format.
- compare_image.py: Compares two feature images using the specified quality metrics.
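A typical run chains the preprocessing, training, and export steps. The arguments below are illustrative assumptions only (feature names and options vary by version), so check each script's --help output or the training documentation for the actual usage:
$ python preprocess.py hdr alb nrm --filter RT   # preprocess the raw training/validation datasets
$ python train.py hdr alb nrm --filter RT        # train a model on the preprocessed data
$ python export.py hdr alb nrm --filter RT       # export the result to the runtime weights format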
Use It for Your Next Movie
It is all about performance, open standards, and flexibility.
Whether you are a hobbyist or a VFX and CGI expert, whether you are editing a short film or a feature-length movie, Intel Open Image Denoise has you covered.
Paired with the Intel® oneAPI DPC++/C++ Compiler and Intel® oneAPI Threading Building Blocks, you can accelerate the denoising of even the most compute-intensive and artifact-ridden images. Download it today to get started.
Let us know what you think. Do you have suggestions, or are you looking for assistance?
If you encounter any issues, please contact us via the Intel Open Image Denoise GitHub Issue Tracker.
For missing features, please get in touch with us via email at openimagedenoise@googlegroups.com.
Join our mailing list for release announcements and major news regarding Intel Open Image Denoise.
Additional Resources
Open Image Denoise News
- Intel Open Image Denoise Wins Scientific and Technical Achievement Award
- Academy of Motion Picture Arts and Sciences Scientific & Technical Awards 2025
- Dilapidated Elegance: Cinesite Brings Creepy & Kooky Storytelling to Life
Open Image Denoise Details