A unique collaboration resulted in sci-tech awards for three researchers.
Last month, two weeks before the main event, the Academy of Motion Picture Arts and Sciences presented 15 scientific and technical awards to 46 men who pioneered advances in moviemaking technology. Among the awards this year was a Scientific and Engineering Award given to Per Christensen, Michael Bunnell, and Christophe Hery for developing point-based rendering for indirect illumination and ambient occlusion.
The first film to use this rendering technique, Pirates of the Caribbean: Dead Man’s Chest, won an Oscar for the visual effects created at Industrial Light & Magic. Now available in Pixar’s RenderMan and widely adopted by visual effects studios, the point-cloud rendering technique has helped studios create realistic CG characters, objects, and environments in more than 30 films.
Simply put, the technique is a fast, point-based method for computing diffuse global illumination (color bleeding). This point-cloud solution is as much as 10 times faster than raytracing, uses less memory, produces no noise, and its computation time does not increase when surfaces are displacement-mapped, have complex shaders, or are lit by complex light sources. It owes its existence to a unique interplay among researchers at a hardware company, a software company, and two visual effects studios.
Bunnell’s Gem of an Idea
The idea originated with Michael Bunnell, now president of Fantasy Lab, a game company he founded. Nvidia had just introduced its programmable graphics processing unit (GPU), and Bunnell was working in the company’s shader compiler group. “It was a new thing and an exciting time,” he says. “We were translating human-written shaders into code that could run directly on the graphics processing chip.”
Real-time rendering made possible by the GPU opened the door to more realistic images for interactive games and more iterative shading and lighting in postproduction houses. Bunnell channeled that excitement into a chapter on shadow-mapping and anti-aliasing techniques for the first GPU Gems book.
“It wasn’t a new technique,” Bunnell says. “It was about doing something on a graphics chip in a reasonable number of steps.”
Bunnell was more interested, though, in subdivision surfaces and in the tessellation that breaks a curved surface into the triangles needed for rendering, and he began working on ways to render curved surfaces in real time. The demo group at Nvidia used his code for a product launch, and then asked if he could do something more: real-time ambient occlusion.
Ambient occlusion darkens areas on CG objects that we can see but that light doesn't reach—the soft shadow under a windowsill, for example, or under a character's nose. It is calculated from the geometry alone, without reference to the lights, and is in some ways a form of self-shadowing.
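In its standard textbook form (not spelled out in the article), the quantity is purely geometric: the occlusion at a surface point p with normal n is the cosine-weighted fraction of the hemisphere Ω above p that is blocked,

```latex
\mathrm{AO}(p) \;=\; 1 \;-\; \frac{1}{\pi} \int_{\Omega} V(p,\omega)\,(n \cdot \omega)\, d\omega
```

where V(p, ω) is 1 if a ray leaving p in direction ω escapes the scene and 0 if it hits geometry. No light position appears anywhere in the formula, which is why the result can be computed, and cached, from the geometry alone.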
At top, when Russell holds open Carl Fredricksen’s door with his little foot in Up, he owes the soft, colored shadow beneath to an award-winning point-cloud-based rendering technique. Above, the diffuse, indirect illumination also helps make Carl’s storybook house believable.
The demo group had implemented a version of ambient occlusion using notes from Hayden Landis’s SIGGRAPH 2002 course. (Landis, his ILM colleague Hilmar Koch, and Ken McGaugh, now at Double Negative, received a Technical Achievement Award from the Academy this year for advancing the technique of ambient occlusion rendering.) “The only problem [the demo team] had was that it took about eight hours to compute the ambient for a 30-second demo,” Bunnell says. “It looked good, but it was still an off-line process. Basically, they baked in the shadows.”
So, with a publication date for a new GPU Gems in the offing, Bunnell decided to tackle the problem. And by then, Nvidia’s GPUs were faster and more programmable, with branching and looping built into the chip. First, Bunnell created a point cloud from the vertices in the geometry. “I created a shadow emitter at each vertex of the geometry,” he says. “And, I had each vertex represent one-third of the area of every triangle that shared the vertex. I approximated that area, kind of treating it like a disk. Then I used a solid angle calculation that tells you what percentage of a surrounding hemisphere the disk would obscure if you were looking at that disk. That tells you how much shadow the disk creates.” He “splatted” the result, vertex by vertex, onto pixels on the screen, adding and subtracting depending on how dark the disks were. And then he realized he didn’t need to do that.
“Instead of splatting, I could make the emitters at each vertex be receivers,” Bunnell says. “I could go through the list of all these vertices and calculate one sum for that point, and accumulate the result at full floating-point precision. So, I made the points (where I did the calculations) do more than cast the shadow for ambient occlusion—they also received shadows from other data points.”
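Concretely, the per-point calculation he describes might look like the minimal C++ sketch below: one disk-shaped emitter shadowing one receiver point, with the receiver's total occlusion found by summing over all emitters. The form-factor approximation and every name in it are ours, assumed for illustration; Bunnell's GPU Gems 2 chapter derives its own, similar weighting.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// How much of the receiver's hemisphere one disk-shaped emitter
// obscures. "area" is the disk area attributed to the emitter vertex
// (one-third of each triangle sharing it, as Bunnell describes).
float diskOcclusion(Vec3 recvPos, Vec3 recvNormal,
                    Vec3 emitPos, Vec3 emitNormal, float area)
{
    const float kPi = 3.14159265f;
    Vec3 v = { emitPos.x - recvPos.x,
               emitPos.y - recvPos.y,
               emitPos.z - recvPos.z };
    float d2 = dot(v, v) + 1e-16f;              // squared distance, guarded
    float inv = 1.0f / std::sqrt(d2);
    Vec3 dir = { v.x * inv, v.y * inv, v.z * inv };

    // How squarely the disk faces the receiver, and how high it sits
    // in the receiver's hemisphere; back-facing terms clamp to zero.
    float cosE = std::fmax(0.0f, -dot(emitNormal, dir));
    float cosR = std::fmax(0.0f,  dot(recvNormal, dir));

    // Disk-to-point form-factor approximation; the "+ area" term keeps
    // the result bounded when receiver and emitter are very close.
    return (area * cosE * cosR) / (kPi * d2 + area);
}
```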
And that led to a breakthrough. “Since I had thrown the geometry away, I could combine points that were near each other into new emitters,” Bunnell says. “So, I would gather four points or so in an area and use them as an emitter. Then, I combined these emitters into parent emitters, building a hierarchy. So, if I’m far enough away from the points, I can use the sum, the total of all the children, and I don’t have to look at all the children; I can skip a whole bunch of stuff. If not, I can go down one level, and so forth. I can traverse the tree instead of going to each emitter that’s emitting a shadow value.”
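The tree traversal he describes might be sketched as follows, reusing Vec3, dot, and diskOcclusion from the previous listing. How children are merged into a parent's position, normal, and summed area, and the 4x-area distance cutoff, are assumptions made for the sketch.

```cpp
#include <vector>

// One node of the emitter hierarchy: a leaf is a per-vertex disk; an
// interior node stands in for all of its children combined.
struct EmitterNode {
    Vec3  pos, normal;                  // aggregate placement of the cluster
    float area;                         // summed area of all leaves below
    std::vector<EmitterNode> children;  // empty for a leaf emitter

    bool isLeaf() const { return children.empty(); }
};

// Gather occlusion at one receiver by traversing the tree: a cluster
// that is far away relative to its size is used whole; a near one is
// opened up one level. The 4x-area threshold is an assumed heuristic.
float gatherOcclusion(const EmitterNode &node, Vec3 recvPos, Vec3 recvNormal)
{
    Vec3 v = { node.pos.x - recvPos.x,
               node.pos.y - recvPos.y,
               node.pos.z - recvPos.z };
    if (node.isLeaf() || dot(v, v) > 4.0f * node.area)
        return diskOcclusion(recvPos, recvNormal,
                             node.pos, node.normal, node.area);

    float occ = 0.0f;
    for (const EmitterNode &child : node.children)
        occ += gatherOcclusion(child, recvPos, recvNormal);
    return occ;
}
```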
The second breakthrough was in realizing that if he ran multiple passes, he could get a closer approximation each time. “I could get an accurate result without looking at the geometry,” Bunnell says. “Then I realized if I could use this for shadowing and occlusions, I could use it as a cheap light transport technique.” That made indirect illumination—which needs to know about light—possible. And, he wrote about all this in GPU Gems 2.
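The multi-pass refinement, sketched under the same assumptions: each pass weights an emitter's shadow by the accessibility (one minus occlusion) that the emitter itself received in the previous pass, so surfaces that are themselves in shadow cast less shadow. A brute-force loop over all points stands in for the tree traversal above.

```cpp
#include <cmath>
#include <vector>

// Reuses Vec3 and diskOcclusion from the sketches above.
struct SurfacePoint {
    Vec3  pos, normal;
    float area;
    float accessibility = 1.0f;  // fully open before the first pass
    float next = 1.0f;
};

// Iterative refinement: two or three passes usually settle. The data
// layout and names are ours, not Bunnell's code.
void occlusionPasses(std::vector<SurfacePoint> &pts, int numPasses)
{
    for (int pass = 0; pass < numPasses; ++pass) {
        for (SurfacePoint &r : pts) {
            float occ = 0.0f;
            for (const SurfacePoint &e : pts) {
                if (&e == &r) continue;          // a point cannot shadow itself
                occ += e.accessibility *         // result of the previous pass
                       diskOcclusion(r.pos, r.normal, e.pos, e.normal, e.area);
            }
            r.next = std::fmax(0.0f, 1.0f - occ);
        }
        for (SurfacePoint &p : pts)
            p.accessibility = p.next;            // commit for the next pass
    }
}
```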
Pixar’s Up is one of the latest feature animations to use RenderMan’s point-based approach for color bleeding, as evidenced in the image above, but Sony’s Surf’s Up was the first. More than 30 films have used the technique for VFX and animated features.
The Next Step
Meanwhile, at ILM, Christophe Hery had developed a method of rendering subsurface scattering by using point clouds to speed the process. He used RenderMan, which had always diced, or tessellated, all geometry into micropolygons. “It does this tessellation very fast,” Hery says. “So I wrote a DSO (dynamic shared object) that could export a point cloud corresponding to the tessellation RenderMan had created. My intention was to use it only for scattering, but I learned I could store anything.”
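A baked record of the kind Hery describes might hold something like the following per micropolygon; the layout is hypothetical, for illustration only, and is not RenderMan's actual file format.

```cpp
// One baked sample per micropolygon. The "data" channel is the part
// Hery found he could repurpose: it can hold whatever the shader
// writes at bake time (albedo, irradiance, occlusion, and so on).
struct PointCloudSample {
    float position[3];   // world-space location on the surface
    float normal[3];     // shading normal at that location
    float radius;        // micropolygon area, expressed as a disk radius
    float data[3];       // arbitrary baked channel, e.g. diffuse color
};
```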
In 2004, Hery spoke at Eurographics in Sweden about how he used point clouds for scattering, and in the audience was Per Christensen, who had joined Pixar. “He came to me and said that he wanted to implement this in RenderMan,” Hery recalls. And he did. Christensen and the RenderMan team made sure the rendering software could generate a point cloud and had the appropriate supporting infrastructure. Everything was in place for the next step.
In 2005, Rene Limberger at Sony Pictures Imageworks, where work on Surf’s Up had begun, saw Christensen at SIGGRAPH. “He asked me if I would take a look at Bunnell’s article and see if I could implement it in RenderMan,” Christensen says. So Christensen created a prototype version targeted at the CPUs in a renderfarm rather than at a GPU.
“I also extended it somewhat,” Christensen says. “Mike [Bunnell] computed occlusion everywhere first, and then if something realized it was itself occluded, he would kind of subtract that out. I came up with a method that I believe is faster because it doesn’t need iterations and it computes the color bleeding more accurately. It’s a simple rasterization of the world as seen from each point. It’s as if you have a fish-eye lens at each point looking at the world and gathering all that light. Developing the prototype was quick because the point-cloud infrastructure was already in place.”
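Below is a self-contained sketch of that fish-eye gather, heavily simplified: each disk is splatted into a single cell of a small raster over the receiver's hemisphere, the nearest disk wins each cell, and the filled cells are averaged. Projecting directions onto the tangent-plane unit disk (the Nusselt analog) makes that average cosine-weighted automatically. Vec3 and dot are reused from the earlier listing; the production rasterizer is far more careful, with full disk footprints and hierarchy-aware traversal.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Disk { Vec3 pos, normal; float color[3]; };
struct Cell { float depth; float color[3]; };

// Gather color bleeding at receiver p with tangent frame (t, b, n).
void fisheyeGather(const std::vector<Disk> &cloud,
                   Vec3 p, Vec3 n, Vec3 t, Vec3 b,
                   float outColor[3])
{
    const int N = 16;                         // 16x16 fish-eye raster
    std::vector<Cell> buf(N * N, Cell{1e30f, {0, 0, 0}});

    for (const Disk &d : cloud) {
        Vec3 v = { d.pos.x - p.x, d.pos.y - p.y, d.pos.z - p.z };
        float dist = std::sqrt(dot(v, v));
        if (dist < 1e-6f) continue;           // skip the receiver itself
        Vec3 dir = { v.x / dist, v.y / dist, v.z / dist };
        if (dot(dir, n) <= 0.0f) continue;    // below the horizon
        if (dot(d.normal, dir) >= 0.0f) continue;  // back side of the disk

        // Nusselt analog: drop the direction onto the tangent-plane unit
        // disk; equal areas there carry equal cosine-weighted energy.
        float u = dot(dir, t) * 0.5f + 0.5f;
        float w = dot(dir, b) * 0.5f + 0.5f;
        int ix = std::max(0, std::min(N - 1, int(u * N)));
        int iy = std::max(0, std::min(N - 1, int(w * N)));
        Cell &c = buf[iy * N + ix];
        if (dist < c.depth) {                 // nearest disk wins the cell
            c.depth = dist;
            for (int k = 0; k < 3; ++k) c.color[k] = d.color[k];
        }
    }

    // Average the cells inside the unit disk: the result approximates
    // the cosine-weighted incoming color, i.e., the bounce light.
    float sum[3] = {0, 0, 0};
    int inside = 0;
    for (int iy = 0; iy < N; ++iy)
        for (int ix = 0; ix < N; ++ix) {
            float cu = (ix + 0.5f) / N * 2.0f - 1.0f;  // cell center in [-1,1]
            float cw = (iy + 0.5f) / N * 2.0f - 1.0f;
            if (cu * cu + cw * cw > 1.0f) continue;    // outside the disk
            ++inside;
            const Cell &c = buf[iy * N + ix];
            for (int k = 0; k < 3; ++k) sum[k] += c.color[k];
        }
    for (int k = 0; k < 3; ++k)
        outColor[k] = inside ? sum[k] / float(inside) : 0.0f;
}
```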
Christensen gave Limberger that prototype implementation to test. “And, right at the same time, I got an e-mail from Christophe Hery at ILM,” he says. “He had the same request. I said, ‘Funny you should ask. I just wrote a prototype. Give it a try and give me some feedback.’ It would have been unethical for me to tell Christophe that Rene was testing it as well, so he didn’t know the guys at Sony were doing similar work. But, Christophe picked it up quickly and put it into production right away.”
Christensen considers the close collaboration with Limberger and Hery to have been very important to the process. “They are doing film production, so they knew what would be super useful,” he says. “They did a lot of testing and feedback, and suggested many improvements that I implemented.” Pixar first implemented the color-bleeding code in a beta version of RenderMan 13 in January 2006, and released the public version in May.
“ILM had collaborated with Pixar for years,” Hery says, “but this was more.” The two exchanged ideas, feedback, and source-code snippets on a nearly daily basis.
Speed Thrills
Christensen, who considers himself a raytracing fanatic, ticks off the advantages this approach has over raytracing. “It’s an approximation, but raytracing is an approximation, too,” he says, “and both of them will eventually converge to the correct solution.”
“The effect is exactly the same,” Christensen continues. “But using the point cloud is faster. Raytracing is a sampling. If you raytrace to get ambient occlusion, you shoot all these rays, count how many hit and how many miss, and that gives you ambient. If you want color bleeding, you also have to compute the color at those hit points. That involves starting a shader to compute the color, so it’s time-consuming and expensive. With the point-based approach, you get color bleeding for free. The object [from which you generate the point cloud] already has the color and materials applied, so the point cloud has the appropriate colors built in. You just look up the pre-computed color at that point and you’re done.”
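For comparison, here is a sketch of the ray-traced ambient occlusion he describes. The ray-versus-scene query is passed in as a hypothetical hook, since it belongs to whatever renderer hosts the code; for color bleeding, every hit would additionally trigger a shader evaluation, which is exactly the cost the point-based lookup sidesteps. Vec3 and dot are reused from the earlier listing.

```cpp
#include <cmath>
#include <cstdlib>
#include <functional>

// Monte Carlo ambient occlusion: shoot cosine-distributed rays and
// count how many escape. "trace" stands in for the renderer's
// ray-vs-scene test (returns true on a hit); it is a placeholder.
float raytracedAO(Vec3 p, Vec3 n,
                  const std::function<bool(Vec3 origin, Vec3 dir)> &trace,
                  int numRays)
{
    // Build an orthonormal tangent frame (t, b, n) around the normal.
    Vec3 a = std::fabs(n.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 t = { n.y * a.z - n.z * a.y,
               n.z * a.x - n.x * a.z,
               n.x * a.y - n.y * a.x };
    float tl = std::sqrt(dot(t, t));
    t = { t.x / tl, t.y / tl, t.z / tl };
    Vec3 b = { n.y * t.z - n.z * t.y,
               n.z * t.x - n.x * t.z,
               n.x * t.y - n.y * t.x };

    int misses = 0;
    for (int i = 0; i < numRays; ++i) {
        // Cosine-weighted hemisphere sample.
        float r1 = std::rand() / (float)RAND_MAX;
        float r2 = std::rand() / (float)RAND_MAX;
        float r = std::sqrt(r1), phi = 6.2831853f * r2;
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float z = std::sqrt(1.0f - r1);
        Vec3 dir = { x * t.x + y * b.x + z * n.x,
                     x * t.y + y * b.y + z * n.y,
                     x * t.z + y * b.z + z * n.z };
        if (!trace(p, dir)) ++misses;   // the ray escaped to the sky
    }
    return misses / (float)numRays;     // 1 = fully open, 0 = fully dark
}
```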
(Top) Davy Jones was the first CG character ILM rendered using point-cloud-based indirect illumination. (Bottom) Double Negative recently used the technique for the film 2012.
Similarly, while displacement mapping slows a raytracer down, the point cloud doesn’t care, which is one reason why Hery wanted to use this method for Pirates. “We saw a three-times speedup and more for ambient, and probably four or five times faster for indirect illumination [color bleeding],” he says. “It enabled a new look, and we could do Pirates 2 without taking over the whole renderfarm.”
Hery, who is now a look development supervisor at ImageMovers Digital, adds, “There’s still no better practical solution for indirect illumination in production for RenderMan-based engines. It’s the best approach for optimizing.”
At SIGGRAPH 2009, Christensen finally met Bunnell, the man whose idea led to the sci-tech awards for the three researchers. “We had exchanged e-mail,” Christensen says, “but hadn’t talked in person. We had dinner in New Orleans. I was excited to finally meet him. The point-based approach is like all great ideas: In hindsight, it seems obvious, but somebody has to think of it. It’s absolutely brilliant.”
2009 Sci-Tech Oscars
The Scientific and Technical Awards, often called Sci-Tech Oscars, are given at three levels: Technical Achievement Award (certificate), Scientific and Engineering Award (bronze tablet), and the Academy Award of Merit (Oscar statuette). Of the 15 awards this year, 12 center on tools for rendering, on-set motion capture, and digital intermediates.
Rendering:
Scientific and Engineering Award
Per Christensen, Michael Bunnell, and Christophe Hery for the development of point-based rendering for indirect illumination and ambient occlusion. Much faster than previous raytraced methods, this computer graphics technique has enabled color-bleeding effects and realistic shadows for complex scenes in motion pictures.
Scientific and Engineering Award
Paul Debevec, Tim Hawkins, John Monos, and Dr. Mark Sagar for the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures.
Technical Achievement Award
Hayden Landis, Ken McGaugh, and Hilmar Koch for advancing the technique of ambient occlusion rendering. Ambient occlusion has enabled a new level of realism in synthesized imagery and has become a standard tool for computer graphics lighting in motion pictures.
On-set Performance Capture:
Technical Achievement Award
Steve Sullivan, Kevin Wooley, Brett Allen, and Colin Davidson for the development of the Imocap on-set performance capture system at Industrial Light & Magic.
Digital Intermediate:
Scientific and Engineering Award
Dr. Richard Kirk for the overall design and development of the Truelight real-time 3D look-up table hardware device and color management software.
Scientific and Engineering Award
Volker Massmann, Markus Hasenzahl, Dr. Klaus Anderle, and Andreas Loew for the development of the Spirit 4K/2K film scanning system as used in the digital intermediate process for motion pictures.
Scientific and Engineering Award
Michael Cieslinski, Dr. Reimar Lenz, and Bernd Brauner for the development of the ARRIscan film scanner, enabling high-resolution, high-dynamic range, pin-registered film scanning for use in the digital intermediate process.
Scientific and Engineering Award
Wolfgang Lempp, Theo Brown, Tony Sedivy, and Dr. John Quartel for the development of the Northlight film scanner, which enables high-resolution, pin-registered scanning in the motion-picture digital intermediate process.
Scientific and Engineering Award
Steve Chapman, Martin Tlaskal, Darrin Smart, and Dr. James Logie for their contributions to the development of the Baselight color-correction system, which enables real-time digital manipulation of motion-picture imagery during the digital intermediate process.
Scientific and Engineering Award
Mark Jaszberenyi, Gyula Priskin, and Tamas Perlaki for their contributions to the development of the Lustre color-correction system, which enables real-time digital manipulation of motion-picture imagery during the digital intermediate process.
Technical Achievement Award
Mark Wolforth and Tony Sedivy for their contributions to the development of the Truelight real-time 3D look-up table hardware system.
Technical Achievement Award
Dr. Klaus Anderle, Christian Baeker, and Frank Billasch for their contributions to the LUTher 3D look-up table hardware device and color-management software.