Nvidia Brings AI to Ray Tracing
May 12, 2017

At the GPU Technology Conference, Nvidia founder and CEO Jensen Huang showed how the company is applying artificial intelligence to ray tracing, advancing the iterative design process by accurately predicting final renderings. (Ray tracing is a technique that uses complex math to realistically simulate how light interacts with surfaces in a scene.)
The ray tracing process generates highly realistic imagery but is computationally intensive and can leave a certain amount of noise in an image. Removing this noise while preserving sharp edges and texture detail is known in the industry as denoising. Using Nvidia Iray, Huang showed how Nvidia is the first to make high-quality denoising operate in real time by combining deep learning prediction algorithms with Pascal architecture-based Nvidia Quadro GPUs.

It’s a complete game changer for graphics-intensive industries like entertainment, product design, manufacturing, architecture, engineering and many others, he pointed out.

The technique can be applied to ray-tracing systems of many kinds. Nvidia is already integrating deep learning techniques into its own rendering products, starting with Iray.

How Iray Interactive Denoising Works
Existing algorithms for high-quality denoising take seconds to minutes per frame, which makes them impractical for interactive applications.

By predicting final images from only partly finished results, Iray AI produces accurate, photorealistic models without having to wait for the final image to be rendered.

Designers can iterate on and complete final images 4x faster, for a far quicker understanding of a final scene or model. The cumulative time savings can significantly accelerate a business’s go-to-market plans.

To achieve this, Nvidia researchers and engineers turned to a class of neural networks known as autoencoders. Autoencoders are used for increasing image resolution, compressing video and many other image-processing tasks.
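
To make the idea concrete, here is a minimal sketch of a convolutional denoising autoencoder in PyTorch. Nvidia has not published Iray's network design, so the layers and sizes below are illustrative assumptions rather than the production architecture: an encoder compresses the noisy render into a compact representation, and a decoder reconstructs a clean image from it.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Minimal convolutional autoencoder: compresses a noisy render into a
    compact representation, then reconstructs a clean image from it."""
    def __init__(self):
        super().__init__()
        # Encoder: progressively reduce spatial resolution
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # H/2 x W/2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # H/4 x W/4
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the original resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # keep pixel values in [0, 1]
        )

    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))
```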

Using the Nvidia DGX-1 AI supercomputer, the team trained a neural network to translate a noisy image into a clean reference image. In less than 24 hours, the network was trained on 15,000 image pairs with varying amounts of noise, drawn from 3,000 different scenes. Once trained, the network takes a fraction of a second to clean up noise in almost any image, even one not represented in the original training set.
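
The training loop for such a network is conceptually simple: show the model a partially converged render and penalize the difference between its output and the fully converged reference. The sketch below reuses the DenoisingAutoencoder class from above; the random tensors standing in for the rendered image pairs, the MSE loss and the Adam optimizer are all assumptions for illustration, not Nvidia's published pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-ins for the noisy/clean render pairs; a real pipeline would load
# partially and fully converged frames rendered from many scenes.
noisy = torch.rand(128, 3, 64, 64)  # partially converged renders
clean = torch.rand(128, 3, 64, 64)  # fully converged references
loader = DataLoader(TensorDataset(noisy, clean), batch_size=16, shuffle=True)

model = DenoisingAutoencoder().to(device)  # class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # pixel-wise reconstruction loss (an assumption)

for epoch in range(10):
    for noisy_batch, clean_batch in loader:
        noisy_batch = noisy_batch.to(device)
        clean_batch = clean_batch.to(device)
        pred = model(noisy_batch)          # network's guess at the clean image
        loss = loss_fn(pred, clean_batch)  # distance to the reference render
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```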

With Iray, there’s no need to worry about how the deep learning functionality works. Nvidia has already trained the network and runs GPU-accelerated inference on Iray output. Creatives just click a button and get interactive rendering with improved image quality on any Pascal or later GPU.
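
From the user's side, all of this collapses into a single inference pass over each rendered frame. Continuing the sketch above, and with the weights file and input frame as hypothetical stand-ins, denoising one frame with a pre-trained network might look like this:

```python
# Hypothetical inference step: load pre-trained weights, then clean up
# one noisy frame in a single forward pass.
model.load_state_dict(torch.load("denoiser_weights.pt"))  # hypothetical file
model.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 256, 256).to(device)  # stand-in noisy render
    denoised = model(frame)
```
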
Iray deep learning functionality will be included with the Iray SDK supplied to software companies and exposed in Iray plug-in products Nvidia will release later this year. Nvidia also plans to add an AI mode to its Mental Ray renderer.