In two films this year, we saw actors performing in zero gravity, something that is, of course, impossible to film for real.
Director Alfonso Cuarón’s Gravity strands two astronauts (Sandra Bullock and George Clooney) in orbit without a vehicle. Their space suits are low on oxygen, and debris from an explosion flies toward them. We watch Clooney drift away, and then, for the rest of the 90-minute film, the camera follows Bullock as she tries to find a way home. There are only a few scenes in which Bullock is not in zero gravity.
In Gravity, previs often drove the cameras, lights, and sometimes even the actors’ motion on set.
In a second zero-gravity film, Writer/Director Gavin Hood’s Ender’s Game, a brilliant young teenager spends time weightless in a “battle room” learning how to lead a team that will fight a forthcoming alien invasion.
Artists at Framestore handled 95 percent of the visual effects in Gravity; Digital Domain artists provided the visual effects for Ender’s Game. To sell the illusion of weightlessness in the actors’ performances, the two studios took two very different approaches.
Tim Webber was visual effects supervisor for Gravity; Max Solomon was animation supervisor and previs’d the opening sequence. We caught up with Solomon at the VIEW Conference. “Alfonso [Cuarón] conceived the film as traditionally filmed, with actors on sets and on wires,” he says. “But, he has a very particular style: His films are immersive, with long takes, and early tests showed that a traditional shoot with post wouldn’t work. Tim [Webber] suggested using CG. Alfonso was skeptical, but there was no alternative.”
Thus, the team developed a plan in which they would move the actors only moderately and have a camera and lights orbit around them.
“Alfonso wanted total freedom of motion,” Solomon says. “There would be no sense of horizon. The characters would be free to move and shift in any direction, and the camera needed to move all around them.”
But, while it was possible to imagine moving a camera around the actors, moving lights large enough to represent light emanating from Earth presented a problem. “The Earth provided much of the light, and it would be colossal in frame,” Solomon says.
In early 2010, long before production started, Paul Debevec’s group at ICT/USC had demonstrated a light stage system in which light from LEDs surrounding an actor provided changing lighting conditions, while high-speed cameras captured the actor’s face. Later, Director of Photography Emmanuel Lubezki saw images created with LEDs on screens behind performers at a rock concert. The ideas coalesced and evolved into a lightbox that Webber designed, a cube within a cube covered with LEDs that Framestore eventually programmed with images.
The outer cube stood 20 feet high, leaving room beneath for a tilt rig and for the camera to move. The actor performed within a smaller cube inside, typically 10 x 10 x 10 feet, though it could change shape and size: its walls rode on sliders and could move in and out. Actors inside the box could see the images created with the LEDs, and the light, which could change in color and brightness, appeared to move around them.
At top, an actor in the lightbox could see images created with LEDs. At bottom, Framestore artists worked from previs to animate the wide, all-CG shots.
Importance of Previs
In 2010, when Solomon began working on the previs, two artists worked with him, but that team soon grew to 30 animators, and The Third Floor contributed previs. “Initially, we thought the previs would be a guide, but as we developed it, we realized previs would drive the lightbox and cameras on robots. It would need to be technical and precisely planned. Alfonso realized that this is where he would make his film.”
To understand how people move without gravity, the animators spent time studying reference material from NASA, running simulations, and talking to astronauts. They mapped out the shot structure, working from storyboards.
“Alfonso is one of the rare directors who imagines something and takes it all the way through,” Solomon says. “The ideas were all there in the original storyboards.”
For one shot during the opening sequence, however, the previs artists had to deviate from the camera motion planned in the storyboard. In the shot, the space station has been hit, and Bullock, who is outside in a space suit, grabs onto an arm projecting out from the station. “The camera was disconnected from Sandra [Bullock], and it was confusing,” Solomon says. “We found that in all the shots, there is no context. So, it’s hard to assess what’s happening, and that can make you nauseous in stereo. It’s better to have one thing move.” In this case, the previs artists designed the shot with the camera locked onto Bullock while debris spins around her.
Two months before production was due to start, the artists switched from previs to technical breakdowns. In this “techvis” process, the team assessed shot methodology and shoot feasibility, and then did a breakdown of camera and actor motion and lighting. For many of the shots, the team pre-programmed the camera and lights based on decisions made during previs, and for some, even the movement of the actors.
On Set
“We had three shoot methodologies,” Solomon says. “One was traditional with the camera on a crane and actors on wires or dollies. The second used the lightbox and motion control.” In the lightbox, the crew could adjust the master controls for the specially designed camera and offset Earth and sun spheres created with LEDs driven by previs. The hue, brightness, and saturation of the 1.8 million LEDs were individually controllable.
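To picture how previs could drive a wall of emissive panels, consider a minimal sketch in Python. Everything here (the function name, the panel layout, the simple disc test) is a hypothetical simplification for illustration, not Framestore’s system: each LED gets the color the actor would see looking out along that direction, with the Earth as a disc subtending the correct angle and the sun as a small bright spot.

```python
import numpy as np

# Hypothetical sketch of previs-driven LED colors; not Framestore's code.
def led_colors(led_positions, actor_pos, earth_center, earth_radius,
               earth_color, sun_dir, sun_color):
    """Color each LED by what the actor would see in that direction."""
    dirs = led_positions - actor_pos              # rays from actor to LEDs
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    colors = np.zeros((len(dirs), 3))             # default: black space

    # LEDs inside the cone subtended by the Earth get Earth light.
    to_earth = earth_center - actor_pos
    dist = np.linalg.norm(to_earth)
    cos_half_angle = np.sqrt(max(dist**2 - earth_radius**2, 0.0)) / dist
    colors[dirs @ (to_earth / dist) > cos_half_angle] = earth_color

    # LEDs almost exactly aligned with the sun direction get a sun disc.
    colors[dirs @ (sun_dir / np.linalg.norm(sun_dir)) > 0.9995] = sun_color
    return colors
```

Per frame, the previs animation would supply earth_center and sun_dir; the master controls Solomon mentions would then amount to offsets applied to those inputs before the colors are computed.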
“We shot at half-speed because of limitations on how fast the camera could move,” Solomon says. “Then we retimed after.”
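The half-speed figure falls out of simple arithmetic. If the previs camera path demands speeds beyond what the rig can deliver, the entire program is slowed until it fits, and the plates are sped back up by the same factor in post. A toy version, with a made-up rig limit (the real constraint would involve acceleration and joint limits, too):

```python
import numpy as np

def shoot_time_scale(cam_positions, fps, max_rig_speed):
    """Factor by which to slow the shoot so a camera robot can keep up.
    cam_positions: (frames, 3) previs camera path in meters."""
    speeds = np.linalg.norm(np.diff(cam_positions, axis=0), axis=1) * fps
    return max(1.0, speeds.max() / max_rig_speed)

# A result of 2.0 means shooting at half speed, as on Gravity; the
# plates are then retimed by the inverse factor to restore the timing.
```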
Motion control drove the third method of shooting. “We had the actors, lights, camera all on motion control,” Solomon says. “It was the least efficient and the least flexible, so we used it for only one or two shots.”
Meanwhile, at Framestore, artists did modeling and lighting tests for the CG space suits. “We saw them as a third character,” Solomon says. “We based them on real suits, but they needed greater range of motion. We simulated the cloth to fold, bend, and crease realistically.”
In all the exterior shots, the actors are digital characters except for their faces. After the shoot, work began on tracking the cameras, the helmets, and the bodies. “It was a massive headache,” Solomon says. “We rebuilt the previs with all the new plates, managed and adjusted the timing, then began the process of re-animating with Sandra’s and George’s performances. The performance was all in the face. By chance, the lightbox was the perfect environment. It was isolating and confusing for the actors – all the emotions they needed to express.”
DIGITAL DOMAIN corrected footage of actors on set to give the correct pivot point for movement in zero gravity.
Ender’s Game
In Ender’s Game, Asa Butterfield, the actor playing the lead character Ender Wiggin, trains in a zero-gravity room during battle school. Digital Domain provided the effects for this sci-fi action/adventure under the leadership of Visual Effects Supervisor Matthew Butler. The studio had three big advantages: First, Digital Domain was a co-producer, which gave it early involvement in the planning (see “Moving On Up,” pg. 20); second, Butler has a master’s degree in aeronautics and astronautics from MIT; and third, his college roommate was Astronaut Gregory Chamitoff, who had flown on the space shuttle Endeavour and spent months aboard the International Space Station.
Butler worked with Garrett Warren, the stunt coordinator and second unit director, on solving the zero-gravity problem. “It’s tricky,” Butler says. “It’s important to show your real actors and actresses, your heroes, so wherever possible, we wanted to shoot them for real. But, there is no zero-gravity place on Earth we could use. We wanted to shoot live-action faces, but we faced the physical limitations of reality.”
The answer was what Butler calls a “smorgasbord of solutions.” Some shots were fully CG, which removed the problem of physical reality. For others, they could use live-action shots of the actor. “But more often than not, they photographed Butterfield, and we manipulated the content back into a physically realistic situation,” Butler says. “When the actors are in the middle of the battle room, their center of mass is where it is. In zero gravity, you still have a center of mass – mass and gravity are completely independent. But, you have no weight. And, your center of mass cannot move without force. What it means is that the actor’s pivot point should be at a fixed point in space, not a fixed point on the body. There’s no point on the body that is a pivot point. But, we still have the same inertia; Newtonian physics still applies, and we had to abide by all those rules.”
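A toy calculation makes the physics concrete. Treat the body as a few lumped masses (the numbers below are round figures invented for illustration): if one part moves and no external force acts, the rest of the body must shift the opposite way so that the total center of mass never moves.

```python
import numpy as np

# Toy segment masses invented for illustration; not production data.
segments = {  # name: [mass_kg, segment COM position (m)]
    "torso": [40.0, np.array([0.0,  0.0, 0.0])],
    "legs":  [25.0, np.array([0.0, -0.8, 0.0])],
    "arms":  [10.0, np.array([0.0,  0.2, 0.0])],
}

def body_com(segs):
    total = sum(m for m, _ in segs.values())
    return sum(m * p for m, p in segs.values()) / total

com_before = body_com(segments)

# The character swings the legs 0.5 m forward...
segments["legs"][1] = segments["legs"][1] + np.array([0.5, 0.0, 0.0])

# ...so, with no external force, torso and arms must drift backward just
# enough to cancel the COM motion the legs caused.
drift = body_com(segments) - com_before            # COM error to cancel
counter_mass = segments["torso"][0] + segments["arms"][0]
total_mass = sum(m for m, _ in segments.values())
for name in ("torso", "arms"):
    segments[name][1] = segments[name][1] - drift * (total_mass / counter_mass)

assert np.allclose(body_com(segments), com_before)  # the COM never moved
```

This is exactly why a wire rig pivoting about a point on the body looks wrong: the true pivot is the center of mass, a point fixed in space, not on the actor.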
On set, Warren had rigged wires and armatures to move Butterfield. “In tight shots when you couldn’t see anything other than his head and shoulders, he was on a bicycle seat on an armature,” Butler says. “If he needed to leap, Garrett would move him on wires. The problem is that penduluming is a function of gravity, so it was hard to move him at a constant speed.”
The stunt coordinator also put Butterfield in a tuning-fork-like apparatus that held him at the waist, and used a dual-axis harness to rotate him. “But again, it didn’t allow for a correct pivot point,” Butler says. “We did our best, and then we’d get back to the ranch here, to Digital Domain, copy what we photographed, and look at what was wrong with it.”
ANIMATORS could move digital doubles freely and use a tool later to correct the pivot point and stabilize the character in 3D space.
Pivot Point
The first step was to move the actors on set as close to zero-gravity motion as the rigs allowed. Next, the artists rotomated the resulting footage, copying the movement onto 3D characters. Then, using custom tools, they computed where the center of mass was at any moment and how far it deviated from where physics says it should be. If the difference was marginal, they used the footage as shot. If not, they fixed the motion.
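Here is a sketch of what such a check might look like, with stand-in joint masses, frame rate, and fitting (Digital Domain’s tools are proprietary, so this is an assumption about the approach, not a description of it): compute the COM per frame from the rotomated skeleton, fit the constant-velocity path zero gravity demands, and measure how far the shot strays from it.

```python
import numpy as np

def com_trajectory(joints, masses):
    """Per-frame COM of a rotomated skeleton.
    joints: (frames, num_joints, 3); masses: (num_joints,)."""
    w = masses / masses.sum()
    return np.einsum("fjc,j->fc", joints, w)

def zero_g_fit(com, fps=24.0):
    """Least-squares constant-velocity path com(t) = c + v*t, which is
    what Newton's first law requires of a free-floating body, plus the
    worst per-frame deviation of the shot motion from that path."""
    t = np.arange(len(com)) / fps
    A = np.stack([np.ones_like(t), t], axis=1)
    coef, *_ = np.linalg.lstsq(A, com, rcond=None)
    path = A @ coef
    return path, np.linalg.norm(com - path, axis=1).max()

# If the worst deviation sits under some tolerance (say, a few
# centimeters), keep the plate; otherwise the motion gets corrected.
```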
“We calculated the correct pivot point,” Butler says. “It’s a complex problem to solve with a skeletal structure, but that’s what computers are for. The tools let the performers and animators do what they wanted, then we computed the motion of the center of mass. The tool came up with new animation that satisfied the laws of physics in zero gravity and kept the head pointing back to camera in the same orientation as when the actor was photographed,” Butler says. “That was important.”
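Butler doesn’t detail the solver, but because a pure translation leaves every orientation untouched, the simplest correction consistent with his description is to slide the whole character each frame so its COM lands on the fitted path; the head then keeps pointing back to camera exactly as photographed. A sketch reusing com_trajectory and zero_g_fit from above, again an assumption rather than Digital Domain’s actual tool:

```python
def correct_zero_g(joints, masses, fps=24.0):
    """Translate the character per frame so its COM follows the fitted
    constant-velocity path; all orientations are left untouched."""
    com = com_trajectory(joints, masses)
    path, _ = zero_g_fit(com, fps)
    return joints + (path - com)[:, None, :]
```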
The artists projected texture detail from the photographs onto the geometry – the 3D characters – and re-rendered the characters. “It was important to get footage as close as possible to what we wanted to achieve,” Butler says. “So, that’s what we did, and I believe it worked. We repaired all the shots that were wrong, which was probably about half of them. In tight shots, you couldn’t really tell they weren’t in zero gravity. In the medium-wide shots, we replaced nearly all of them. In the big, wide shots, they were fully synthetic.”
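Projecting photographed detail back onto re-rendered geometry is a standard camera-projection setup. A minimal sketch with a hypothetical pinhole camera (the production version would also account for lens distortion and occlusion):

```python
import numpy as np

def projection_uvs(verts, world_to_cam, focal_px, width, height):
    """Image-space UVs for projecting a photographed plate onto geometry.
    verts: (n, 3) world-space positions; world_to_cam: 4x4 matrix."""
    homo = np.c_[verts, np.ones(len(verts))]         # homogeneous coords
    cam = (world_to_cam @ homo.T).T                  # into camera space
    x = focal_px * cam[:, 0] / cam[:, 2] + width / 2
    y = focal_px * cam[:, 1] / cam[:, 2] + height / 2
    return np.stack([x / width, 1.0 - y / height], axis=1)
```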
Even the entirely CG shots, however, needed refining. “We didn’t constrain the artists,” Butler says. “They were free to move the characters where they wanted, and then we wrote a tool that corrected the pivot point and stabilized the character in 3D space.”
Facing Reality
To reproduce the actors’ faces, the team relied on scans from ICT/USC that captured the geometry of the features and the way light played across them. The CG characters didn’t need complicated expressions; when the actors needed to deliver lines, the crew filmed them and used that footage.
“On one extreme, you have fully CG characters and can make sure the physics are correct,” Butler says. “On the other, you have human actors, and we were at the peril of making what we shot dynamic. We rocked and rolled between the two and picked our sweet spots. I’m a firm believer that if you can shoot something, you should shoot it, so that’s what we did – even if we’d have to manipulate it. We’re not doing visual effects for the fun of it any more. I believe the work is successful because we had a successful marriage between live-action stunt work and synthetic manipulation.”
“In visual effects, we model reality,” Butler says. “We look at whether something behaves the way we’re used to, and what we’re used to is physics and optics. So, we write renderers and simulators. They look beautiful because they follow physical rules that define behavior. We followed the same guidelines.”
Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.