CHICAGO—It is no exaggeration: many of us seem to have turned into skilled photographers overnight. With rapid advances in handheld devices and easy-to-use photo-editing applications, people have grown accustomed to snapping photos on their phones and tablets, and many are getting savvier and more creative about how those photos are shared and posted.
New work from Facebook researchers lets users turn the photos they take on their devices into 3D images within seconds. The team will demonstrate its innovative end-to-end system for creating and viewing 3D photos at SIGGRAPH 2020.
The 2D-to-3D photo technique has been available as a “photos feature” on Facebook since late 2018. To use it, Facebook users were originally required to capture photos with a phone equipped with a dual-lens camera. The Facebook team has now added an algorithm that automates depth estimation from a single 2D input image, so the technique works directly on any mobile device, extending the method beyond the Facebook app and removing the dual-lens-camera requirement.
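Facebook’s production depth network is not spelled out here, but the core idea, a convolutional network that maps a single RGB photo to a per-pixel depth map, can be sketched with an off-the-shelf monocular depth estimator. The example below uses the publicly available MiDaS model from PyTorch Hub purely as a stand-in; the model choice and the file name photo.jpg are illustrative assumptions, not part of Facebook’s system.

```python
import numpy as np
import torch
from PIL import Image

# Stand-in monocular depth model: MiDaS (small variant) from PyTorch Hub.
# This is NOT Facebook's production network, just an illustration of
# estimating depth from one ordinary 2D photo.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()

# Matching preprocessing (resize + normalization) published with MiDaS.
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Any ordinary photo works as input; "photo.jpg" is a placeholder path.
img = np.array(Image.open("photo.jpg").convert("RGB"))
batch = transform(img)

with torch.no_grad():
    pred = model(batch)  # one depth value per (downsampled) pixel
    # Resize the prediction back to the original photo resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().numpy()

print(depth.shape)  # same height/width as the input photo
```

The depth map produced this way is the raw material for the 3D effect: once every pixel has an estimated distance, the flat photo can be lifted into a scene that shifts convincingly as the viewer moves.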
“Over the last century, photography has gone through several tech ‘upgrades’ that increased the level of immersion. Initially, all photos were black and white and grainy, then came color photography, and then digital photography brought us higher quality and better-resolution images,” Johannes Kopf, lead author of the work and research scientist at Facebook, says. “Finally, these days we have 3D photography, which makes photos feel a lot more alive and real.”
The new framework gives users a more practical approach to 3D photography, meeting several design objectives: the technology runs on users’ own mobile devices; the conversion from a 2D input image to 3D is seamless, requires no sophisticated photographic skill, and takes only a few seconds to process; and the method is robust enough to work on almost any photo, whether new or previously taken.
To build the new system, the researchers trained a convolutional neural network (CNN) on millions of pairs of public 3D images and their accompanying depth maps, and leveraged mobile-optimization techniques developed by Facebook AI. The framework also performs geometry capture and texture inpainting on the 2D input image, filling in the content that becomes visible behind foreground objects, so the resulting 3D photos look natural as the viewpoint moves. Every automated step in the conversion is optimized to run directly on the user’s device, across a variety of makes and models, and within a phone’s limited memory and data-transfer capabilities. The best part? The 3D result is generated in a matter of seconds.
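As a rough sketch of what such supervised training might look like, the PyTorch loop below fits a toy encoder-decoder to (image, depth) pairs with an L1 loss, then scripts the model for on-device use with PyTorch’s mobile optimizer. The tiny architecture, the random stand-in data, the loss choice, and the learning rate are all illustrative assumptions; Facebook’s actual model, data, and training recipe are not described in this article.

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Toy encoder-decoder standing in for the (unpublished) production depth CNN.
class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # encode
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),     # decode to 1-channel depth
        )

    def forward(self, x):
        return self.net(x)

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Random stand-in batch; the real system would stream millions of
# (photo, depth map) pairs, e.g. harvested from dual-camera 3D photos.
images = torch.rand(8, 3, 128, 128)  # RGB inputs
depths = torch.rand(8, 1, 128, 128)  # ground-truth depth maps

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), depths)
    loss.backward()
    optimizer.step()

# Prepare the trained network for on-device inference. This is one plausible
# route; the article does not name Facebook's exact deployment toolchain.
scripted = torch.jit.script(model.eval())
optimize_for_mobile(scripted).save("tiny_depth_net.pt")
```

Scripting and mobile optimization of this kind are one way a network can be made to fit a phone’s limited memory and compute budget, which is what allows the conversion to finish in seconds on the device itself.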
Researchers at Facebook have been working on new and inventive ways to create high-quality, immersive 3D experiences, pushing the envelope in computer vision, graphics, and machine learning. In future work, the team is investigating machine-learning methods that enable high-quality depth estimation for videos taken with mobile devices.
In addition to Kopf, the research team at Facebook who collaborated on “One Shot 3D Photography” includes Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, and Michael Cohen.
Representative image for “One Shot 3D Photography” © Facebook