CG tools and new techniques help Director Zack Snyder film superhuman action in documentary style for Man of Steel
When Visual Effects Supervisor John Des Jardin sat with Director Zack Snyder to plot the visual effects for Warner Bros.’ Man of Steel, one thing became clear: Snyder wanted the camera in the middle of the action. “He said that he didn’t want to do the normal thing when you have stunt guys, someone gets hit, and the way we know [the superhero] goes over our heads or up into the sky is because the camera spins and then we have a digital takeover,” Des Jardin says. “Zack didn’t want the camera move to drive the animation. That’s putting the cart before the horse.”
Des Jardin, who had worked with Snyder on Watchmen and Sucker Punch, first met with the director in November 2010 to talk about Man of Steel, the first Superman movie since 2006. In addition to Snyder, Christopher Nolan, who had rebooted the Batman series with a darker version of that caped crusader, was a producer and co-writer on the film. Preproduction began in the spring of 2011. Filming began in July 2011. Postproduction ran from February 2012 to February 2013.
All told, three studios – The Moving Picture Company (MPC), Double Negative (DNeg), and Weta Digital – created the majority of the 1,500 shots, with Scanline handling an ocean-based oil rig rescue, and PLF, the previs.
ARTISTS AT The Moving Picture Company (MPC) developed digital Superman’s flying style.
MPC did the Smallville battle, Superman (actor Henry Cavill) coming out of a fortress in the Arctic and learning how to fly, the escape pod sequence, and present-day Kryptonian objects. Weta created everything on Krypton and Kryptonian interiors (see “Krypton,” below). DNeg took care of all the visual effects in Metropolis (see “Metropolis,” below), including building the gargantuan city, as well as a fight in the Indian Ocean involving a giant piece of Kryptonian equipment. In addition to the oil rig rescue, Scanline created a tornado.
“The movie readily allowed us to split work on the film into sections to give to the vendors,” Des Jardin says. “The Kryptonian things created at Weta, however, appeared in the rest of the film. And, everyone had a digital Superman. Even Scanline – they had a digital Clark Kent on the oil rig. But, MPC did the first primary digital Superman.”
Previs started with “stunt vis.” Stunt Coordinator and Second Unit Director Damon Caro, who had worked with Snyder and Des Jardin on Sucker Punch, took the first stab at choreographing the fight scenes. “He shoots his stunt vis in a training facility,” Des Jardin says. “He has sound effects, and he uses [Adobe’s] After Effects for bullet hits and sparks. Zack [Snyder] buys off on the stunt vis. Then we tear it apart, asking, for example, if we can do a wire rig here, CG there.”
Next, Des Jardin worked with Kyle Robinson, previs supervisor at PLF, where artists were building a version of Smallville and Metropolis. “Kyle, Damon, and I did a performance-capture version of Damon’s fights,” Des Jardin says. “We used three prosumer cameras to record the motion, and then Kyle hand-animated the fights for the previs. Smallville was straightforward, but Metropolis was different. It was citywide. We didn’t have that space, so we made it a Western gunslinger fight using a saloon made of boxes. A character throws a glass on the floor as a challenge. Someone gets thrown out the window.”
Krypton
People on Earth may have adopted Superman as one of their own, but he is, after all, an alien creature walking among us, as Director Zack Snyder’s Man of Steel doesn’t hesitate to remind us. We see young Clark Kent trying to cope with his ability to see through people and to hear everything, and later to use his adaptation to his advantage when, as an adult, he must fight people from Krypton, his birthplace, to save his adopted planet.
Weta Digital artists built Krypton’s cityscapes, buildings, technology, and spaceships. And then they blew up the planet. Dan Lemmon led a visual effects team that created 550 shots for Man of Steel. “A little more than a third of the film,” Lemmon says.
Working from concept art by Production Designer Alex McDowell, the artists at Weta Digital built the dreary city inside the remnants of a giant strip mine. “The city is now decayed and dead,” Lemmon says. “What’s left are big stone slabs and arches that form almost a cave, miles across with a louvered ceiling of raw stone. The main areas are the towers. We have a bit of action there, but we don’t go in and amongst the sprawling residential areas below. We did several wide shots leading to and from the digital environment.”
Outside the cities, a few digital beasts roam the planet, and Kryptonians fly on digital animals that look like a cross between dragonflies and mammals.
Although much of the environment work was similar to digital locations the studio had created for previous films, the artists created new rock faces for the alien planet. “We Lidar-scanned a new set of cliff faces for textures,” Lemmon says, “salt cliffs and volcanic cliffs. We were trying to create a unique hard and sheer rock face. It’s a dying planet. There are burned tobacco colors, not a lot of greenery.”
In addition to the terrestrial landscape, the artists created an underwater birthing chamber. “It looked like a kelp forest,” Lemmon says. “The Kryptonians grow from a specific codex of DNA instructions, so we have organic stalks with embryonic sacs with babies. They have an ambient movement. And then we put worker robots in the environment, as well.”
We see other examples of interesting Kryptonian technology, too. Snyder asked the crew to combine ancient tools and ancient organic shapes with high-tech ideas to create things that, to humans, would appear almost magical. Three-dimensional display technology suited that goal perfectly.
“On Earth, we have computer monitors and televisions,” Lemmon says. “For Krypton, we created floating, semi-silver beads that could form 3D shapes in front of the actors. If someone needed to pass along a communication, we had a sheet of this beaded liquid geometry that would assume the face of the Kryptonian.”
For a sequence in which Superman’s father explains the history of Krypton via a likeness of himself in this Kryptonian technology, Weta Digital’s simulation and animation departments worked together to create a series of bas-relief-type sculptures from these little beads.
“We ran simulations on top of stylized 3D geometry,” Lemmon says. “We would animate the stylized geometry and then use that geometry as target shapes for simulations that moved the beads. The beads could blend from one target to the other.”
The artists could dial in parameters to specify the size of the beads and the rate at which they moved, and the beads could divide and subdivide to change resolution.
“We always have some ambient buzz,” Lemmon says. “We had inherent noise built into the technique. So, getting the silver reflective surfaces to read clearly and stand out was challenging for lighting.”
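Weta’s bead pipeline is proprietary, but the behavior Lemmon describes maps onto a simple particle scheme: beads ease toward sampled positions on a target shape at a dialable rate, while low-level jitter supplies the ambient buzz. The Python sketch below is a hypothetical illustration of that idea; every function name and parameter is invented for the example.

```python
import numpy as np

def sample_targets(mesh_points, n_beads, rng):
    """Pick bead target positions from a dense sampling of the target geometry."""
    idx = rng.choice(len(mesh_points), size=n_beads, replace=False)
    return mesh_points[idx]

def step_beads(beads, targets, rate=0.15, buzz=0.01, rng=None):
    """Ease each bead toward its target, then add the ambient 'buzz' noise."""
    rng = rng or np.random.default_rng()
    beads = beads + rate * (targets - beads)             # blend toward the target shape
    beads = beads + buzz * rng.normal(size=beads.shape)  # inherent low-level noise
    return beads

rng = np.random.default_rng(0)
shape_a = rng.uniform(-1.0, 1.0, (5000, 3))  # stand-ins for dense samplings
shape_b = rng.uniform(-1.0, 1.0, (5000, 3))  # of two animated target shapes
beads = sample_targets(shape_a, 2000, rng)
targets = sample_targets(shape_b, 2000, rng)
for frame in range(48):                      # blend over two seconds at 24 fps
    beads = step_beads(beads, targets)
```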
When it came time to destroy the planet, the team found a unique method that, again, had the simulation and animation departments marry physically accurate simulations with art direction.
“We’ve seen a lot of planets blow up in movies over the years,” Lemmon says. “So we thought about how a planet might collapse if the core became unstable, and how we might translate that into visual elements. We wanted to create something that respected the natural magnetic field of planets that have molten cores.”
Thus, they cause Krypton to collapse around its equator. “It falls into the center of the planet and we get an intermediate shape, like the core of an apple when eaten around the edges,” Lemmon says. “As the planet collapses, it heats up and there’s a big nuclear reaction. We have a ring of explosions around the circumference, and you see a little twisting plane of debris. Then, the planet blows itself out into space. It ends up reading as a tiny star.”
To create the collapse from the close-up point of view of Superman’s mother, who watches the city destroyed around her, animators and simulation artists worked closely together. “We blocked the motion using keyframe shapes to get the timing, the main beats of the events story-wise,” Lemmon says. “Then we timed simulations to match the rough animation blocking. It was like laying charges on a set and setting them off in a particular order.”
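Lemmon’s “laying charges” analogy suggests a simple control structure: the animation blocking supplies the story beats as frame numbers, and each simulation fires once the playhead reaches its beat. A minimal sketch of that scheduling idea, with invented event names:

```python
# Story beats from the rough animation blocking (frame, simulation name).
# Both the frame numbers and the names here are hypothetical.
beats = [
    (40, "tower_collapse"),
    (65, "ground_fissure"),
    (90, "plaza_explosion"),
]

def fire_charges(frame, beats, started):
    """Kick off each simulation exactly once, in blocking order."""
    for start_frame, sim_name in beats:
        if frame >= start_frame and sim_name not in started:
            started.add(sim_name)
            print(f"frame {frame}: starting simulation '{sim_name}'")

started = set()
for frame in range(1, 121):
    fire_charges(frame, beats, started)
```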
In addition to building and exploding the planet, and creating digital elements for shots on the planet surface, the artists created the vessel that carries baby Superman to Earth, and other Kryptonian ships, including the Black Zero, which houses General Zod and the Kryptonian rebels. “When the ships are in orbit, they’re ours,” Lemmon says. “When they come down to Earth, we hand them off to other vendors.” – Barbara Robertson
Later, PLF’s previs artists might fly the character from the window to the top of a building. “The nut we had to crack for this movie was that in Smallville, when Superman fights Kryptonians, we wanted a human operating the camera,” Des Jardin says. Finding a way to have the camera drive the animation for the down-and-dirty fight between Superman and his equally powerful Kryptonian foes on the streets of Smallville resulted in a unique variation of virtual cinematography made possible with computer graphics.
Rather than shooting empty plates and then fitting the action of digital characters with the camera move, the postproduction team needed to continue the live-action shots when a character did something not filmable. The goal was to animate digital characters that seamlessly continued the actors’ or stunt actors’ performances on location and extended them into superhero action, and have the camera follow the action as if a filmmaker were doing a documentary.
“When we did takeovers from actors to stunt actors to animated characters for Sucker Punch, the actors were on greenscreen stages,” Des Jardin says. “But, for Smallville, we had to keep the integrity of the location in Plano, Illinois. Guillaume [Rocheron] and I said, ‘If only we could capture the environment for each setup.’”
WETA DIGITAL ARTISTS created high-tech Kryptonian displays with shape-shifting 3D beads.
Rocheron, the visual effects supervisor at MPC, and Des Jardin first tried a typical method for creating digital environments. “A couple months before principal photography, we did a proof-of-concept test,” Rocheron says. “We shot a bunch of tiles to re-create the background so we could transition to fully digital doubles and have control of the camera. The main limitation was that Zack would shoot with one camera. So, that wouldn’t give us much of the environment. With the tiles, we could cover a bit more of the environment, but it would have been time-consuming to shoot tiles for every possibility. So, that wasn’t a solution. We had to design something more advanced to completely capture the environments.”
It’s a bird. It’s a plane . . .
Superman flew thanks to the efforts of artists and animators at visual effects studios who created and performed actor Henry Cavill’s digital doubles.
For close-ups, the crew would film Cavill on a belly pan. “It’s more stable than wires for the actors, so they can do more powerful, natural actions,” says Guillaume Rocheron, visual effects supervisor at MPC, where artists prototyped Superman’s flight style and cape movement. “We often kept his face and hands and then replaced his body with CG.”
During one of the first belly pan tests, Director Zack Snyder showed the team the type of action he wanted to see. “We were doing a performance-capture session,” says John Des Jardin, visual effects supervisor. “We gave Zack a camera. He went right up to Henry, pushed between his arms, right into his face, and shook the camera. We took to heart that we needed to have that kind of frequency of shaking when we were close.”
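The production has not published how that handheld feel was authored on the digital camera, but one common approach is layered procedural jitter whose amplitude grows as the camera closes on the subject. A hypothetical sketch, with illustrative parameters only:

```python
import math

def handheld_shake(t, distance, base_amp=0.4, base_freq=1.5):
    """Return (pan, tilt) jitter in degrees at time t seconds."""
    # Closer to the subject -> stronger, busier shake.
    amp = base_amp * (1.0 + 4.0 / max(distance, 0.5))
    pan = tilt = 0.0
    for octave in range(3):  # a few layered frequencies read as handheld
        f = base_freq * (2.0 ** octave)
        a = amp / (2.0 ** octave)
        pan += a * math.sin(2.0 * math.pi * f * t + 1.3 * octave)
        tilt += a * math.sin(2.0 * math.pi * f * t * 1.7 + 0.7 * octave)
    return pan, tilt
```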
Usually, when the film crew shot Cavill on location, he didn’t wear Superman’s cape. “When we shot live action with the cape, it would fall behind him and not do anything,” Rocheron says. “But we wanted that big flare. We wanted to control that. So we had a little rig with simple dynamics that the animators could use to pose the cape and time it nicely to what Superman was doing. We’d reference illustrations by Alex Ross. Once we had a sign-off on the keyframe poses, we’d run the simulation. The keyframe poses drove the simulation.”
The nCloth solver within Autodesk’s Maya provided the dynamics. “We had a layer of custom tools to connect the animation pass to nCloth,” Rocheron says.
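MPC’s connecting layer is custom and undocumented, but the principle of keyed poses driving a solve can be sketched generically: each cape vertex is attracted toward its artist-posed target while damped dynamics keep the motion fluid. The following is an illustrative stand-in, not the studio’s tools:

```python
import numpy as np

def step_cape(pos, vel, target, stiffness=8.0, damping=0.9, dt=1.0 / 24.0):
    """Advance one frame: pull cape vertices toward the keyframed target pose."""
    gravity = np.array([0.0, -9.8, 0.0])
    force = stiffness * (target - pos) + gravity  # pose attraction plus weight
    vel = damping * (vel + force * dt)            # damped velocity update
    return pos + vel * dt, vel

# One cape of 500 vertices easing from rest toward a keyed 'flare' pose.
pos = np.zeros((500, 3))
vel = np.zeros((500, 3))
flare = np.random.default_rng(1).uniform(-1.0, 1.0, (500, 3))
for frame in range(24):
    pos, vel = step_cape(pos, vel, flare)
```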
Rocheron estimates that they added a digital cape to 70 percent of the shots. “If the cape is an extension of his body action, we added it,” he says. “The first time you see Superman in the movie, you can tell it’s him just by looking at the silhouette because the cape forms that super-iconic Alex Ross shape.” – Barbara Robertson
The result was a process they’ve named “Envirocam.” “It’s a combination of using a camera on a precise, fast, motorized nodal head pre-programmed for a repeatable process, and tools to stitch the pictures together,” Rocheron says. “We push a button and the camera captures a 50k map of the environment, a 360-degree spherical capture with 72 pictures per capture, 12 times the resolution of an HDRI. Then, we built an entire pipeline to calibrate the captured environment with 3D geometry of Smallville and project the sphere onto Smallville. Those tools were mostly within [The Foundry’s] Nuke. We also captured HDRIs, and sometimes it was easier for lighters to work from them, but the Envirocam is also a giant HDRI.”
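Rocheron doesn’t break down how the 72 frames were distributed, but a motorized nodal head covering a full sphere typically steps through a grid of pan and tilt stops. Assuming, purely for illustration, six tilt rows of 12 pan positions each:

```python
def capture_schedule(rows=6, cols=12):
    """Yield (pan, tilt) angles in degrees covering the full sphere."""
    for r in range(rows):
        tilt = -90.0 + (r + 0.5) * (180.0 / rows)  # -75 to +75 degrees
        for c in range(cols):
            pan = c * (360.0 / cols)               # 0 to 330 degrees
            yield pan, tilt

positions = list(capture_schedule())  # 72 stops for one Envirocam sphere
```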
Thus, rather than a library of plates, the team at MPC had a library of Envirocam spheres. “We could re-create the entire environment in CG in the shot lighting,” Rocheron says.
To create the geometry onto which they would project Envirocam spheres, modelers at MPC started with a Lidar scan of the location in Plano. Then, working in Autodesk’s Maya, they re-created the buildings, streets, and other elements.
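The projection step rests on standard spherical-mapping math: each point on the Lidar-derived geometry is converted to a direction from the capture position, and that direction indexes latitude and longitude in the stitched spherical image. MPC’s actual tools lived mostly in Nuke; this NumPy sketch shows only the underlying mapping:

```python
import numpy as np

def project_envirocam(points, cam_pos):
    """Return (u, v) in [0, 1] into the spherical map for each 3D point."""
    d = points - cam_pos
    d = d / np.linalg.norm(d, axis=1, keepdims=True)          # unit view directions
    u = np.arctan2(d[:, 0], d[:, 2]) / (2.0 * np.pi) + 0.5    # longitude
    v = 0.5 - np.arcsin(np.clip(d[:, 1], -1.0, 1.0)) / np.pi  # latitude
    return np.stack([u, v], axis=1)
```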
In addition to the Envirocam system, a second device dubbed a “Shandycam” captured still images of the actors to help light the digital doubles in postproduction. “It’s a pipe rig with six cameras that wrap around the target,” Des Jardin explains. “It was a quick and dirty way to get an on-set record of lighting on an actor you’ll turn into CG.”
Des Jardin describes the action on set: “We’d have four guys hold Henry [Cavill] upside down aimed at a Kryptonian. The camera operator would get that cut. Then we’d bring in the Shandycam. Bang, bang, bang. We’d shoot six rounds of stills that we could use as tileable textures for that moment of most clarity in the move. Have everyone hide. Put a tripod with the Envirocam where John Clothier [camera operator] was. And we’d take two Envirocam sets of stills.”
With those Envirocam stills, they would be able to move a digital camera through a matching digital location later. On set, it took the crew between three and five minutes for both setups and a subsequent HDRI capture.
“We tested the system in May 2011 and then shot scenes in August 2011,” Des Jardin says. “It was like a dance down the street of Plano, Illinois. Camera. Shandycam. Envirocam. We also had witness cameras, and the Kryptonians wore performance-capture suits so we could animate digital characters using their moves.”
MPC ARTISTS OFTEN replaced the actor’s real cape with a digital cape that could form iconic shapes in the live-action shots.
On set, the crew might film a shot in multiple pieces. For example, they might film Cavill walking, and then have a stunt actor playing a Kryptonian slap Cavill’s double to the ground. A takeover later would have the CG Kryptonian and Cavill’s digital double do a superhuman slide. Then, the camera would frame the real actors’ faces. If the action called for another Kryptonian to jump into the air and crash into Superman, the team at MPC could make that happen even though they had filmed nothing.
“You witness the superhumans fighting and flying,” Rocheron says. “But, it isn’t like traditional visual effects where you cut to a wide shot to better witness the action. This was continuous. It looks like the camera is continuously filming everything you see. You see Henry [Cavill] as Superman, and then he takes off to dodge a Kryptonian punch right in front of your eyes. That was the whole point. You see actors’ faces. Something super happens. But, there’s no cut to go from live action to CG. We could do this because we had the library of Envirocam spheres. It gave us a lot of freedom.”
Rocheron gives an example of how that freedom made it possible to change a shot later. On location, Cavill is in hand-to-hand combat with a stunt actor, whom they would replace with an eight-foot-tall CG character, a Kryptonian in full armor. “It’s a cool little action piece,” Rocheron says. “A traditional fistfight with Henry [Cavill] standing on the ground. But, if Superman were fighting an eight-foot-tall character, he would hover aboveground so that he’d be at the same height and could punch the guy in the face.”
Using Envirocam spheres taken on the day in the shot lighting, the postproduction artists at MPC replaced the background, animated Cavill’s digital double hovering in front of the Kryptonian, and filmed it with a digital camera move that followed Superman as he lifted into the air.
“Technically, we can put the camera anywhere now,” Rocheron says. “We never had that moment when someone said, ‘Sorry, we didn’t shoot a plate for that side of the street.’ We have the full 360-degree environment already lit. It’s a cool thing.”
Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.
Metropolis
Artists at Double Negative, led by Visual Effects Supervisor Ged Wright, created the biggest city in the world and then demolished it to street level. “We talked a lot about what that might entail,” says John Des Jardin, visual effects supervisor for Man of Steel. “They built 32 square miles of a CG city for Superman to fly through. He fights at ground level and up in the air. Ged and his team spread through Chicago taking photographs and doing Lidar scans for days. We also had a helicopter shooting the Lidar scanner downward. I thought it would tap out halfway down to street level, but was pleasantly surprised that they could use the point cloud scans to build geometry even for the streets.”
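Des Jardin doesn’t name DNeg’s reconstruction tools, but turning Lidar point clouds into renderable geometry is a well-established step. As a stand-in, here is how the open-source Open3D library performs a comparable Poisson surface reconstruction (the file names are hypothetical):

```python
import open3d as o3d

# Load one hypothetical city-block scan and estimate surface normals,
# which Poisson reconstruction requires.
pcd = o3d.io.read_point_cloud("city_block_scan.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

# Reconstruct a watertight mesh; higher depth resolves finer street detail.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("city_block_mesh.obj", mesh)
```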
Although the film crew shot footage in Chicago, particularly for crowds of people running from destruction, Double Negative artists created Metropolis by patching together three digital cities. “We had Chicago lakeside, Los Angeles, and New York,” Des Jardin says. “Three big city areas with lower-level buildings between. The destruction happens in Chicago. We flattened the center, have a ring of destroyed buildings, and then part of the city not so destroyed. For that, the actors are in a greenscreen world.” – Barbara Robertson