Nvidia Showing Off New AR Capabilities & Research
July 31, 2017

LOS ANGELES — Nvidia is at the annual ACM SIGGRAPH computer graphics conference, bringing the power of AI to computer graphics with a range of capabilities that will ease content creation, speed workflows and reduce costs for content creators.
During a press event on Monday, Nvidia announced supercharged rendering with OptiX 5.0’s new AI-accelerated denoising running on the Nvidia DGX Station, which delivers the rendering performance of 150 servers. Nvidia also showed new Quadro and Titan Xp external GPU solutions that bring new creative power to artists and designers. In addition, the company revealed new AI research in the areas of facial animation, denoising, anti-aliasing and light transport.
 
Also at SIGGRAPH, Nvidia is revealing research in the Emerging Technologies area, demonstrating its work in optics and haptics for AR and VR. The company is presenting work in two areas: what researchers call “varifocal displays,” which let users focus more naturally while enjoying VR and AR experiences; and haptics, which enhances VR and AR with touch and feel. This represents the latest in a growing body of research the company has shared over the past decade at industry events such as SIGGRAPH, as well as academic venues.

Nvidia is demonstrating a pair of techniques that address vergence-accommodation conflict. The conflict arises when a viewer’s eyes, accustomed to focusing on objects at different depths in 3D space, are shown stereo images whose parallax cues suggest depth but which are displayed on a flat screen at a constant optical distance. Both techniques aim to solve this in different ways by varying the focus of virtual images in front of the user, depending on where they are looking.
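
To make the mismatch concrete, here is a minimal sketch (not drawn from Nvidia’s work) that quantifies the conflict in diopters, the unit of optical power; the distances are purely illustrative.

```python
# Minimal sketch: quantifying vergence-accommodation conflict in diopters.
# Assumes a display with a fixed focal plane; all distances illustrative.

def diopters(distance_m: float) -> float:
    """Optical power equivalent to focusing at a given distance (1/m)."""
    return 1.0 / distance_m

def va_conflict(vergence_dist_m: float, screen_dist_m: float) -> float:
    """Mismatch between where the eyes converge (the virtual object)
    and where they must accommodate (the fixed screen)."""
    return abs(diopters(vergence_dist_m) - diopters(screen_dist_m))

# A virtual object rendered 0.5 m away on a headset focused at 1.5 m:
print(f"{va_conflict(0.5, 1.5):.2f} D")  # 1.33 D of conflict

# A varifocal display drives the focal plane toward the vergence
# distance, pushing this number toward zero.
```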

The first, Varifocal Virtuality, is a new optical layout for near-eye displays. It uses a new transparent holographic back-projection screen to display virtual images that blend seamlessly with the real world. This use of holograms could lead to VR and AR displays that are radically thinner and lighter than today’s headsets.

This demonstration makes use of new research from UC Berkeley’s Banks lab, led by Martin Banks, which offers evidence that our brains use what a photographer would call chromatic aberration, the colored fringes that appear on the edges of an object, to help determine where an image is in space.

Nvidia’s demonstration shows how to take advantage of this effect to better orient a user. Virtual objects at distances the user is not fixating, which should therefore not appear in focus, are rendered with a sophisticated simulated defocus blur that accounts for the internal optics of the eye.

So, when a user is looking at a distant object, it will be in focus, while a nearby object they are not looking at will be blurrier, just as in the real world. When the user looks at the nearby object, the situation is reversed.
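
As a rough illustration of how such a depth-dependent blur might be computed, the sketch below uses a simple thin-lens circle-of-confusion approximation. Nvidia’s actual renderer is more sophisticated, notably modeling the eye’s chromatic aberration, which this omits; the function names and constants here are assumptions.

```python
import math

def blur_radius_px(focus_dist_m: float, object_dist_m: float,
                   pupil_mm: float = 4.0, px_per_deg: float = 20.0) -> float:
    """Angular blur-circle radius for an object off the focal plane,
    converted to display pixels. Blur angle ~ pupil diameter x defocus
    (in diopters); a thin-lens approximation of the eye's optics."""
    defocus = abs(1.0 / focus_dist_m - 1.0 / object_dist_m)  # diopters
    blur_rad = (pupil_mm / 1000.0) * defocus                 # radians
    return math.degrees(blur_rad) * px_per_deg / 2.0         # pixel radius

# Eyes focused at 2 m; an object at 0.4 m is rendered noticeably blurred:
print(f"{blur_radius_px(2.0, 0.4):.1f} px")  # ~4.6 px
# The same object is rendered sharp once the viewer fixates on it:
print(f"{blur_radius_px(0.4, 0.4):.1f} px")  # 0.0 px
```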

The second demonstration, Membrane VR, is a collaboration among the University of North Carolina, Nvidia, Saarland University and the Max Planck Institutes. It uses a deformable membrane mirror for each eye that, in a commercial system, could be adjusted based on where a gaze tracker detects the user is looking.

The effort, led by David Dunn, a doctoral student at UNC who is also an Nvidia intern, lets a user focus on real-world objects that are nearby or far away while also being able to see virtual objects clearly.
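
In such a system, the control problem reduces to steering each mirror’s optical power toward the viewer’s current gaze depth. The sketch below is hypothetical; none of these names or constants come from the UNC/Nvidia work, which physically deforms the membrane (for example, via air pressure) to change its curvature.

```python
# Hypothetical gaze-driven varifocal control loop (illustrative only).

def target_power(gaze_depth_m: float) -> float:
    """Optical power (diopters) needed to place the virtual image
    at the viewer's current fixation depth."""
    return 1.0 / max(gaze_depth_m, 0.1)  # clamp very near fixations

def update_membrane(current_power: float, gaze_depth_m: float,
                    gain: float = 0.3) -> float:
    """One step of a simple proportional controller toward the target."""
    error = target_power(gaze_depth_m) - current_power
    return current_power + gain * error

power = 1.0 / 1.5  # membrane currently focused at 1.5 m
for depth in (0.5, 0.5, 0.5):  # gaze settles on a nearby object
    power = update_membrane(power, depth)
print(f"{power:.2f} D, converging toward the 2.00 D target")
```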

For example, a label displaying a person’s name might appear to the user to sit directly above that person’s head, creating an experience that blends the virtual and real worlds more seamlessly.

Nvidia is also showing off two new techniques for using fluid elastomer actuators (small air chambers) to provide haptic feedback that enhances VR and AR by connecting what you see on your display to what you feel in your hand. Both were created by Cornell University in collaboration with Nvidia.

One is a prototype VR controller that lets VR users experience tactile feedback while they play, relaying a sense of texture and changing geometry. Its soft skin can safely provide force feedback, as well as simulate different textures and materials.
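
One plausible way such a controller could translate a virtual contact into air-chamber pressures is sketched below; the data types, pressure range and mapping are invented for illustration and are not the Cornell/Nvidia design.

```python
# Illustrative mapping from a virtual contact to per-chamber pressures.
from dataclasses import dataclass

@dataclass
class Contact:
    force_n: float     # normal force of the virtual contact
    roughness: float   # 0 = smooth, 1 = coarse texture
    speed_m_s: float   # hand speed across the virtual surface

def chamber_pressures(c: Contact, n_chambers: int = 4,
                      max_kpa: float = 40.0) -> list[float]:
    """Base pressure conveys contact force; an alternating ripple
    across chambers conveys roughness, scaled by sliding speed."""
    base = min(c.force_n / 10.0, 1.0) * max_kpa
    ripple = c.roughness * min(c.speed_m_s, 1.0) * 0.25 * max_kpa
    return [base + (ripple if i % 2 == 0 else -ripple)
            for i in range(n_chambers)]

# Pressing firmly while sliding across a rough virtual surface:
print(chamber_pressures(Contact(force_n=5.0, roughness=0.7, speed_m_s=0.5)))
```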

The second is a controller that changes its shape and feel as you use it. The company has integrated these novel input devices with its VR Funhouse experience.