AI Denoiser Filter
Filter
RIF_IMAGE_FILTER_AI_DENOISE
Description
The AI denoiser has been trained on RPR data. It can work in two different modes:

- Color Only; or
- Color + Albedo + Normal + Depth. This mode gives the best results.
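For the Color Only mode, only the mandatory colorImg input has to be set. Here is a minimal sketch of that mode, assuming the context, command queue, colorImg, and outputImage objects (hypothetical names) have already been created:

```cpp
// Color Only mode sketch: only the mandatory color input is supplied;
// normals, depth and albedo are simply not set.
rif_image_filter denoiseFilter = nullptr;
rifContextCreateImageFilter(context, RIF_IMAGE_FILTER_AI_DENOISE, &denoiseFilter);

rifImageFilterSetParameterImage(denoiseFilter, "colorImg", colorImg);

rifCommandQueueAttachImageFilter(queue, denoiseFilter, colorImg, outputImage);
rifContextExecuteCommandQueue(context, queue, nullptr, nullptr, nullptr);
```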
This filter uses AMD’s MIOpen machine learning library with its OpenCL backend, DirectML as the backend on Microsoft Windows®, and Metal as the backend on macOS.
The models and the database of MIOpen’s fastest convolution algorithms are located in the model directory. This folder needs to be in the working directory of the application using the AI denoiser.
The first time the denoiser is run for a specific image size, MIOpen searches for the fastest algorithms to resolve the model. This information is stored in a database at the following path on Microsoft Windows: %LOCALAPPDATA%\miopen\kernels.
Parameters
Parameter | Type | Input/Output | Description |
---|---|---|---|
colorImg | image | input | The image containing color vectors. LDR values in [0, 1] or HDR values in [0, +∞). This input is mandatory. |
normalsImg | image | input | The image containing normal vectors. Normals need to be mapped to the range [0, 1]. If coming from RPR, there is nothing to do as the data is already within the correct range. This parameter is optional. If it is used, the depth and albedo parameters are also required. |
depthImg | image | input | The image containing depth vectors. Depth needs to be normalized. If coming from RPR, use rprContextResolveFrameBuffer(context, rpr_fb, DepthNormBuffer, true); to transform the depth data into the correct space, then use the remap range filter (RIF_IMAGE_FILTER_REMAP_RANGE) to remap the depth to the range [0, 1]. This parameter is optional. If it is used, the normal and albedo parameters are also required. |
albedoImg | image | input | The image containing albedo vectors. LDR values in [0, 1]. This parameter is optional. If it is used, the normal and depth parameters are also required. |
useHDR | uint | input | Specifies whether the color and albedo inputs are in HDR or LDR format. By default, the parameter is set to |
modelPath | string | input | Path to the machine learning model files (default is |
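As an illustration of the optional parameters above, the sketch below enables HDR input and points the filter at a model directory that is not in the working directory. The string setter rifImageFilterSetParameterString and the "path/to/models" location are assumptions made for this example; verify them against your RIF headers.

```cpp
// Sketch: mark the color/albedo inputs as HDR and set an explicit model path.
rifImageFilterSetParameter1u(denoiseFilter, "useHDR", 1);

// Assumption: the string parameter setter is rifImageFilterSetParameterString,
// and "path/to/models" is a placeholder for the actual model directory.
rifImageFilterSetParameterString(denoiseFilter, "modelPath", "path/to/models");
```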
Usage Example
```cpp
rif_image_filter denoiseFilter = nullptr;
rifContextCreateImageFilter(context, RIF_IMAGE_FILTER_AI_DENOISE, &denoiseFilter);

rif_image_filter remapNormalsFilter = nullptr;
rifContextCreateImageFilter(context, RIF_IMAGE_FILTER_REMAP_RANGE, &remapNormalsFilter);

rif_image_filter remapDepthFilter = nullptr;
rifContextCreateImageFilter(context, RIF_IMAGE_FILTER_REMAP_RANGE, &remapDepthFilter);

// 1 - Color
rifImageFilterSetParameterImage(denoiseFilter, "colorImg", colorImg);
rifImageFilterSetParameter1u(denoiseFilter, "useHDR", useHDR); // defines whether colorImg and albedoImg are in HDR or LDR format

if (!useColorOnly)
{
    // 2 - Normals
    rifImageFilterSetParameterImage(denoiseFilter, "normalsImg", normalsImg);

    // 3 - Depth
    rifImageFilterSetParameterImage(denoiseFilter, "depthImg", depthImg);

    // 4 - Albedo
    rifImageFilterSetParameterImage(denoiseFilter, "albedoImg", albedoImg);

    // Remap normals to [0, 1] in place before denoising.
    rifImageFilterSetParameter1f(remapNormalsFilter, "dstLo", 0.0f);
    rifImageFilterSetParameter1f(remapNormalsFilter, "dstHi", 1.0f);
    rifCommandQueueAttachImageFilter(queue, remapNormalsFilter, normalsImg, normalsImg);

    // Remap depth to [0, 1] in place before denoising.
    rifImageFilterSetParameter1f(remapDepthFilter, "dstLo", 0.0f);
    rifImageFilterSetParameter1f(remapDepthFilter, "dstHi", 1.0f);
    rifCommandQueueAttachImageFilter(queue, remapDepthFilter, depthImg, depthImg);
}

rifCommandQueueAttachImageFilter(queue, denoiseFilter, colorImg, outputImage);
rifContextExecuteCommandQueue(context, queue, nullptr, nullptr, nullptr);
```
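Each RIF call in the example returns a rif_int status; in production code it is worth checking it against RIF_SUCCESS and releasing the filters once they are no longer needed. A minimal sketch of that pattern, assuming rifCommandQueueDetachImageFilter and rifObjectDelete are available in the RIF version being used (check your headers), with the execute call from the example repeated so its status can be checked:

```cpp
// Sketch: basic status checking and cleanup.
// rifCommandQueueDetachImageFilter and rifObjectDelete are assumed to exist
// in the RIF version in use; verify against your headers.
rif_int status = rifContextExecuteCommandQueue(context, queue, nullptr, nullptr, nullptr);
if (status != RIF_SUCCESS)
{
    // Handle the error (log it, fall back to the noisy image, etc.).
}

// Detach the filters before deleting them or resizing the inputs.
if (!useColorOnly)
{
    rifCommandQueueDetachImageFilter(queue, remapNormalsFilter);
    rifCommandQueueDetachImageFilter(queue, remapDepthFilter);
}
rifCommandQueueDetachImageFilter(queue, denoiseFilter);

rifObjectDelete(remapNormalsFilter);
rifObjectDelete(remapDepthFilter);
rifObjectDelete(denoiseFilter);
```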