Mother Of Invention
Issue: Volume 34, Issue 3 (March 2011)

The story, written by director Simon Wells and his wife, Wendy Wells, takes a left turn from most Disney movies, in which mom plays no part: these screenwriters sent mom to Mars to nurture a colony of motherless aliens. But the spaceship snags her 9-year-old son, Milo, too. Once on the planet, Milo learns he has one night to send mom back home. Fortunately, a geeky and fat Earthman named Gribble and a rebel Martian girl help him take on the alien nation as he attempts to rescue his mom.
 
Created using performance capture, Disney’s Mars Needs Moms is the fifth such film for producer Robert Zemeckis; the second and last for ImageMovers Digital (IMD), the performance capture studio he founded; and the first for Simon Wells. Wells had directed several animated films—including The Prince of Egypt, for which he received an Annie nomination—and the live-action feature The Time Machine; he also had been a story artist for Shrek 2, Shrek the Third, Flushed Away, and other animated films.

“The method intrigued me,” Wells says. “I’d been fascinated by motion capture. I like that you get to work with actors, they get to work with one another, and that it’s performance, not voice-over in a booth. So that appealed to the part of me that liked filming live action. And I have directed a number of animated films, so having all the advantages of animation in postproduction appealed enormously to me.”

 Wells took the job on one condition: that he and Wendy could write it. “They had done some development work, but they weren’t happy with it,” Wells says. “So we took Berkeley Breathed’s book and worked directly from that. The film was always conceived as a motion-capture project.”


ImageMovers Digital translated motion data onto FACS-based expressions, such as these, which the crew implemented within a blendshape facial animation system.

Modifying the Process

Although the process Wells used was similar to the one Zemeckis developed to incorporate live-action techniques for A Christmas Carol and previous films, Wells modified it in ways that reflected his work on traditional and CG animated films. As with Zemeckis’s films, Wells captured the actors’ dialog and performances; for Mars Needs Moms, that meant the hands, faces, and bodies of as many as 13 people at once.

“We did scenes in one continuous take,” Wells says. “It’s better than live action, where you do master shots and then coverage, and each has to be set up while the actors go to their trailers, lose energy, and then have to get up to speed again.” Then, he selected the performances he liked.

“At this point, the camera angle is irrelevant,” Wells says. “You’ll see, during the credit roll at the end of the movie, the performances we captured. They are nothing like the shots in the movie.”

Wells shot the actors for five weeks starting at the end of March 2009. The crew then applied the motion data captured from the bodies in the selected performances to what they call “video game” resolution characters. But whereas Zemeckis had a rig that allowed him to steer a virtual camera using gear that resembled a real camera, Wells drew on his experience as a story artist and director for traditional animated films to create a rough cut from the performances.

“Wayne [Wahrman, editor] and I found we could imagine the shots in 3D space, so I would draw thumbnails to describe to the artists how we wanted the shots to go,” explains Wells. “We would pretty much talk our way through the sequences, sketching them out. The crew made our low-res 3D shots from that.”

With the 3D characters in the 3D sets for each shot, Wells could decide where he wanted the virtual camera. “Once you’ve built your 3D models, you have the actors in a box,” Wells says. “You can have them do their best takes over and over and over without complaining, and you can do photography to your heart’s content. We pasted the facial performances onto the low-res models so we could see the actors. It was a bit squirrely to have what looked like a mask stuck on a 3D model, but it was very useful for timing.”

Wells found the process of filmmaking using this method exhilarating and a bit dangerous. “You can wander around and look at the actors from any angle you like,” he says. “So it’s a rabbit hole that you can quickly go down. I was shooting and reshooting and reshooting the same performance again and again. It takes a certain discipline to make decisions and move the story from scene to scene.”

With the shots edited into sequences, the camera moves in place, and the overall rhythm of the movie set, Wells began working with the animation team in August 2009. Huck Wirtz supervised the crew of approximately 30 animators, who refined the performances by working with the motion-captured data applied to higher-resolution models. Here, too, Wells departed from Zemeckis’s approach.

“Bob [Zemeckis] and I have the same feeling about staying true to the actor’s performance,” Wells says. “But he was trying to get photoreal movement. I chose to take it to a degree of caricature, which was in tune with the character design. I’d take an eyebrow lift and push it a bit farther. The smiles got a bit bigger. It was exactly what the actor did, just pushed a bit.”

Wells believes that most people won’t notice the caricature, which isn’t cartoony but gives the characters more vitality. “It makes them feel a bit more alive than the straight transfer of motion data,” he says. “In terms of actual choices, the way the character behaves emotionally came entirely from the actors. That said, though, being able to translate that emotion through the medium took skill.”
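
As a rough illustration of the kind of push Wells describes, the sketch below amplifies a captured animation channel’s deviation from its neutral value. It is only a minimal Python/NumPy example with a hypothetical brow-raise curve, not IMD’s tools.

```python
import numpy as np

def exaggerate_channel(values, neutral=0.0, gain=1.15):
    """Push a captured animation channel slightly past the performance.

    values  : sampled channel values, e.g., a brow-raise weight per frame
    neutral : the channel's rest value; deviations from it are what get amplified
    gain    : 1.0 reproduces the capture exactly; ~1.1-1.2 gives a subtle push
    """
    values = np.asarray(values, dtype=float)
    return neutral + gain * (values - neutral)

# Example: a hypothetical brow-raise curve, pushed about 15 percent farther
# than the actor's performance, as in the eyebrow example Wells gives.
brow_raise = np.array([0.0, 0.2, 0.55, 0.7, 0.4, 0.1])
print(exaggerate_channel(brow_raise, gain=1.15))
```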

Refining the Performances

Wirtz organized the work by sequences, which he scheduled on a nine- to 10-week basis, and then split the work among himself and two sequence leads. “We all worked equally on everything, but if it came down to someone having to get pummeled, it was usually me. I was glad to take it, though.”

In addition, five lead animators took responsibility for particular characters, each usually handling more than one. And Craig Halperin supervised the crowd animation. “We created the motion cycles for him to use in Massive,” Wirtz says. Otherwise, the crew used Autodesk’s Maya for modeling, rigging, and animation, and Mudbox for textures.

The animators began translating the actors’ emotional performances by “cleaning up” data from the body capture first. Then, they moved on to the more difficult tasks of refining the motion-captured data for the hands and faces. “We had a great system worked out,” Wirtz says. “We’d start by showing Simon a whole scene with the data on the high-res models, but with no tweaks. We called that zero percent.”

Next, they’d adjust the eye directions, adding blinks and altering eye motion if needed, so all the characters were looking at the right spots. They also worked on the hands. “We made sure the characters grabbed what they needed to grab,” Wirtz says. They showed the result to Wells and called that stage “33 percent.”
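
The eye-direction pass Wirtz describes amounts to aiming each eye at a known world-space target. The sketch below shows the underlying geometry in Python with NumPy; the rig convention it assumes is purely illustrative, not IMD’s actual setup.

```python
import numpy as np

def eye_aim_angles(eye_pos, target_pos):
    """Return (yaw, pitch) in degrees that point an eye at a world-space target.

    Hypothetical rig convention for illustration only: the eye looks down +Z
    at rest, +X is to the character's left, and +Y is up.
    """
    d = np.asarray(target_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    d /= np.linalg.norm(d)
    yaw = np.degrees(np.arctan2(d[0], d[2]))   # swing left/right toward the target
    pitch = np.degrees(np.arcsin(d[1]))        # tilt up/down toward the target
    return yaw, pitch

# Example: an eye at the origin looking at a prop slightly up and to one side.
print(eye_aim_angles([0.0, 0.0, 0.0], [0.3, 0.2, 2.0]))
```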

Once Wells approved the 33 percent stage, the animators moved on to the mouths, making sure the lips synched to the dialog and that the facial expressions were appropriate. “We wanted to be sure the eyes caught the tone Simon wanted,” Wirtz says. That resulted in the 66 percent stage. Between 66 and 99 percent, the animators worked on the fine details.

“We did a lot of hand tuning on everything,” Wirtz says. “A lot on the faces, on anything they hold or grab, and any contacts—feet on the ground, things like that. Sometimes the data comes through perfectly and you’re amazed right out of the box, but you always have to do the eyes.”

The facial animation system used blendshapes based on FACS expressions. “It looked at where the markers were and tried to simulate the expressions,” Wirtz says. “We also based the system on studying muscle motion. The system kept evolving; we kept refining it. It takes a lot of heavy math to spit out a smile. Simon was happy with the performances by the actors on stage, so we worked hard to keep that emotion. We were definitely translating a human performance onto an animated character, and what makes it come through is that we tried to get back to the human performance.”
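
One common way a marker-driven, FACS-based blendshape system can work, in the spirit of what Wirtz describes but not necessarily IMD’s exact math, is to solve for the non-negative shape weights that best reproduce each frame’s marker displacements. A minimal sketch in Python, using SciPy’s non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

def solve_facs_weights(basis, marker_offsets):
    """Estimate FACS-style blendshape weights from facial-marker motion.

    basis          : (3*M, S) matrix; column s holds the marker displacements
                     produced by shape s (e.g., a brow raiser) at full strength
    marker_offsets : (3*M,) vector of this frame's marker displacements from
                     the neutral face
    Returns non-negative weights, one per shape, that best reproduce the frame.
    """
    weights, _residual = nnls(basis, marker_offsets)
    return np.clip(weights, 0.0, 1.0)

# Toy example: two markers (six values) driven by two hypothetical shapes.
basis = np.array([[0.0, 1.0],
                  [2.0, 0.0],
                  [0.0, 0.0],
                  [0.0, 0.5],
                  [1.0, 0.0],
                  [0.0, 0.0]])
frame = 0.5 * basis[:, 0] + 0.5 * basis[:, 1]
print(solve_facs_weights(basis, frame))  # ~[0.5, 0.5]
```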


IMD could capture data from faces, hands, and bodies from as many as 13 actors on set at one time. A crew of approximately 30 animators refined the performances once they were applied to CG models.

Keeping the characters out of the Uncanny Valley, where they look like creepy humans rather than believable characters, depends primarily on the eyes, Wirtz believes. “We paid close attention to what the eyes are doing,” he explains. “We tried to follow every tic carefully. It’s not just the eyeball itself; it’s also the flesh around the eyes. It has to be there, working correctly, all the motivational tics and quirks in the eyebrows. The other side of it is the rendering.”

Creating the Look

Rendering, along with texture painting, look development, lighting, effects animation, compositing, and stereo, fell under visual effects supervisor Kevin Baillie’s purview. Artists textured models using Adobe’s Photoshop, Maxon’s BodyPaint 3D, and Autodesk’s Mudbox for displacements. Pixar’s RenderMan produced the shading and lighting, fed by an in-house tool called Isotope that moved files from Maya.

“We know what ‘real’ looks like,” Baillie says. “Being a half-percent wrong puts a character in the Uncanny Valley. So, we made the decision to stylize the characters and have a bit of fun with them. When you have characters that are more caricatures than photoreal humans, the audience lets the character off the hook a little bit.”

To help animators see how the eyes would look once rendered, R&D engineer Mark Colbert created a virtual eye. “Usually animators work a little bit blind,” Baillie punned, and then explained that animators have geometry for the iris, pupil, cornea, and so forth. But animation packages don’t show the refraction through the cornea, which shifts the apparent position of the pupil. Thus, animators often have to adjust the eye line after rendering.

“Mark [Colbert] created a way for animators to see the effect of the changing refraction in the Maya viewport using a sphere with a bump for corneal bulge and a CGFX shader,” Baillie says. The CGFX shader produced the results in real time.
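
The effect Colbert’s preview accounted for is ordinary refraction: rays entering the cornea bend according to Snell’s law, so the pupil appears in a slightly different place than the rig geometry suggests. The Python sketch below shows that bending in vector form; the refractive index and the simple flat-interface setup are assumptions for illustration, not the production CGFX shader.

```python
import numpy as np

def refract(incident, normal, n_air=1.0, n_cornea=1.376):
    """Bend a unit view ray as it enters the cornea (Snell's law, vector form).

    The refractive index (~1.376) is a commonly cited value for the cornea;
    the geometry is purely illustrative and is not the production shader.
    """
    i = np.asarray(incident, dtype=float)
    n = np.asarray(normal, dtype=float)
    eta = n_air / n_cornea
    cos_i = -np.dot(n, i)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    return eta * i + (eta * cos_i - np.sqrt(k)) * n

# A ray hitting the corneal surface 20 degrees off-axis bends toward the
# surface normal, so the pupil appears shifted relative to the rig geometry.
view_ray = np.array([0.0, -np.sin(np.radians(20)), -np.cos(np.radians(20))])
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(view_ray, surface_normal))
```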

Much of the film takes place in the shiny-metal Mars underground, which posed an interesting rendering challenge. “RenderMan is phenomenal at displacement, motion blur, and hair, but shiny things and raytraced reflections are challenging,” Baillie says. “We couldn’t use spotlights. We had to have objects cast light. So we implemented point-based lighting techniques for reflections.”

Christophe Hery, who is now at Pixar, had joined ImageMovers Digital during production and helped the crew implement an evolution of techniques he had developed at Industrial Light & Magic, for which he had received a Sci-Tech Award (see “Bleeding Edge,” March 2010). “We rendered scenes with fully reflecting walls and floors in 15 minutes,” Baillie says. “It was unheard of. That optimization really saved our butts. We did all our indirect illumination using point clouds generated from everything in the scene that had to emit light. We’d bake the floor into a point cloud and have those points cast light.”
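
The approach Baillie describes can be sketched roughly as follows: geometry that needs to emit light is baked into a cloud of points carrying position, normal, area, and radiance, and each shading point then gathers their contributions as if every baked point were a tiny area light. The Python version below is only a brute-force sketch of that gather, not RenderMan’s actual point-based implementation.

```python
import numpy as np

def gather_from_point_cloud(shade_p, shade_n, points, normals, areas, radiances):
    """Approximate indirect light at a shading point from a baked point cloud.

    Each baked point acts as a tiny area light: its contribution falls off with
    the squared distance and with the cosine terms at both emitter and receiver.
    This is the brute-force form of the idea; production point-based renderers
    cluster the points hierarchically instead of looping over every one.
    """
    total = np.zeros(3)
    for p, n, area, radiance in zip(points, normals, areas, radiances):
        to_point = p - shade_p
        dist2 = np.dot(to_point, to_point)
        if dist2 < 1e-8:
            continue
        w = to_point / np.sqrt(dist2)
        cos_recv = max(np.dot(shade_n, w), 0.0)   # receiver faces the emitter
        cos_emit = max(np.dot(n, -w), 0.0)        # emitter faces the receiver
        total += radiance * area * cos_recv * cos_emit / (np.pi * dist2)
    return total

# Toy example: one glowing floor point lighting a surface half a unit above it.
points    = np.array([[0.0, 0.0, 0.0]])
normals   = np.array([[0.0, 1.0, 0.0]])
areas     = np.array([0.01])
radiances = np.array([[1.0, 0.8, 0.2]])
print(gather_from_point_cloud(np.array([0.0, 0.5, 0.0]), np.array([0.0, -1.0, 0.0]),
                              points, normals, areas, radiances))
```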

When the characters Gribble and Milo find themselves in an abandoned Martian city with glowing lichen and other bioluminescent vegetation, the crew handled the lighting by baking each little plant into a point cloud that cast light onto the ground. They also used point clouds for subsurface scattering on the characters’ faces.

“Christophe really helped out a lot with that,” says Baillie. “We had the characters singing on all four cylinders because we had the guy who invented the technique working with us. We started with a shadow-map version of subsurface scattering but ended up preferring the look and speed of point-cloud-based subsurface scattering.” When they reached the limitations of the point-cloud techniques—lips might glow when they touched—Pixar’s RenderMan development team jumped in to help.
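
Point-cloud subsurface scattering follows a similar pattern: the irradiance arriving at the skin is baked into points, then blurred across the surface by weighting nearby points with a falloff profile. The sketch below substitutes a simple exponential falloff for a production diffusion model; it also hints at why touching lips could glow, since the falloff is based on straight-line distance.

```python
import numpy as np

def sss_from_point_cloud(shade_p, points, irradiance, areas, mean_free_path=0.3):
    """Blur baked irradiance across the surface to fake subsurface scattering.

    A simple exponential falloff stands in for a production diffusion profile.
    Because the falloff uses straight-line distance, surfaces that merely touch
    (closed lips, for example) can leak light into each other -- the limitation
    the article mentions.
    """
    d = np.linalg.norm(points - shade_p, axis=1)
    profile = np.exp(-d / mean_free_path)             # scattering falloff
    weights = areas * profile
    return (irradiance * weights[:, None]).sum(axis=0) / (weights.sum() + 1e-8)

# Toy example: three baked skin points near the shading point, one brightly lit.
points     = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.5, 0.0, 0.0]])
irradiance = np.array([[1.0, 0.6, 0.5], [0.2, 0.1, 0.1], [0.0, 0.0, 0.0]])
areas      = np.array([0.01, 0.01, 0.01])
print(sss_from_point_cloud(np.array([0.05, 0.0, 0.0]), points, irradiance, areas))
```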

“I think the thing that had the biggest impact on the film at the end of the day, and the scariest at first, was the extensive use of indirect lighting with point clouds,” Baillie says. “The studio had put a lot of hours into a system they had used for A Christmas Carol, so it was hard to persuade them to change. But I’m super glad we pursued it.”



(Top) The design for the aliens meant they had to be CG characters; they couldn’t be people in suits. Animators scaled the mocap data appropriately. (Bottom) Gribble and Milo are two of the four human characters in the film, but they are caricatures, too, and that helps them avoid the Uncanny Valley.

Moving On

While they were still in production, the crew learned that Disney would close the studio. “It was sad,” Wells says. “I understand Disney’s decision from a business point of view. To keep the studio running, they’d have to guarantee a tent pole every year. They didn’t want to carry another standing army of 450 artists. But from the point of view of creative artists, this crew was working together so efficiently and fluidly, producing such high-quality work, and it was heartbreaking to see it broken up.”

Baillie has already joined with two other former ImageMovers Digital crew members: Ryan Tudhope, who came to IMD from The Orphanage, and Jenn Emberly, who was performance supervisor at IMD and, before that, animation supervisor at ILM. The three have founded Atomic Fiction, a visual effects studio based in Emeryville, California. For his part, Wirtz founded Bayou FX in his home state of Louisiana, in Covington, near New Orleans.

It’s likely that the former IMD’ers will continue networking in interesting ways, as they take what they have learned at the studio and expand it out into the universe. “Every once in a while a bunch of us get together and reminisce about the awesome things we did together,” Baillie says. “One of the hardest parts of my job was figuring out which amazing idea to go with. It was a constant barrage of consistent amazement.”

Barbara Robertson is an award-winning writer and a contributing editor.