The credibility of virtual humans - the possibility that CG actors could step in for real actors in more than wide shots, stunts, and brief glances - took a giant step forward when artists at Digital Domain created the title character for The Curious Case of Benjamin Button and brought home a visual effects Oscar.
Artists at that studio have continued pushing the technology and the process forward. Three pixies in Maleficent represent their latest efforts.
"We built on the foundation we established with the facial animation, skin, and eye shader work on Benjamin Button and then
Tron," says Kelly Port, visual effects supervisor at Digital Domain. "We took that to the next level. We had approximately 600 shots that made it into the film, but by far, [creating the pixies] was the place where we pushed the state of the art."
Directed by Robert Stromberg and starring Angelina Jolie in the leading role, the film tells the story of Maleficent, the villain with a heart of stone in "Sleeping Beauty." (It's Maleficent who places the curse on newborn Aurora that will take effect on her 16th birthday.) Carey Villegas was overall supervisor of the film, produced by The Moving Picture Company (MPC), Roth Films, and Walt Disney Pictures, and distributed by Disney Studios. In addition to Digital Domain, Method Studios, MPC, and The Senate provided visual effects, and The Third Floor did previs.
Early in the film, when princess Aurora is born, the king asks the three little flower pixies - fairies - to take care of her and keep her safe. In the middle of the film, the pixies transform into humans. And then later, they become little pixies again. So, sometimes they are full-size humans played by live-action actors, but when they are only three feet tall, they are CG characters.
Performance Capture
Because the flower pixies would transform into actors later in the film, the actors drove the little fairies' performances, and each pixie resembled one of the actors: Imelda Staunton (Knotgrass), Juno Temple (Thistletwit), and Lesley Manville (Flittle).
"The pixies are stylized versions of the actors," says Darren Hendler, digital effects supervisor. "Robert [Stromberg] and Carey [Villegas] wanted photoreal characters that conveyed all the essence of the actors' performances. They are not animated creatures."
Creating the stylized digital doubles was a multi-step process. The crews captured the actors' physical and facial performances, created CG models rigged to accommodate each actor's expressions, applied captured motion to the CG models, and textured, shaded, and lit the actors' faces.
For the performances, the filmmakers brought the three actors and stunt actors to Digital Domain's virtual production studio where Gary Roberts supervised the motion capture. "We were able to do three face and body captures simultaneously with the stunt actors on rigs," Port says. "We used wishbone poles to move them around. So, we had six people in the capture volume while we were capturing the three helmet-cam setups. It was pretty involved."
Each actor that was captured had two camera operators: one taking a wide shot and one taking tight shots. Six additional camera operators provided witness camera footage for wide and tight reference. Four cameras filmed the face.
"We can analyze the black-and-white HD cameras from the helmet, but they have a really wide focal length and are a bit distorted," Port says. "It isn't the best for animators smoothing out facial capture. So, we also used cameras farther away and zoomed into the face."
Pixie Style
Before the team could apply the captured motion to the CG pixies, modelers and riggers needed to build the characters.
"Traditionally, if we're just doing characters, we build the character first," Hendler says. "On this show, we built CG versions of the actors as our base so that our base was sound and really matched each actor."
Maleficent in Action
Although work on the pixies comprises a large portion of Digital Domain's artistry and development, the studio also created effects around the main character: Maleficent's wings, Angelina Jolie's digital double, and the environments through which she flies. All told, the crew created digital versions for five of Jolie's Maleficent costumes.
"Angelina Jolie had a lot of wire work and dynamic flying shots," says Darren Hendler, digital effects supervisor at Digital Domain. "One of the first things we asked production was that she have tight-fitting costumes, not free-flowing, because that would work better on the rig. On the first day of shooting, she was in a long, free-flowing chiffon piece with five-foot-long sleeves billowing in wind from fans. Her costume wound around the rig. To do the paint-out would have been so complicated, we replaced her entire body and costume. To that end, our CG costume had to match all the free-flowing dynamics."
In addition, although the crew used as much live-action footage as possible, they sometimes had to substitute a complete CG double with digital hair. "In some cases, the flying performance didn't match what [Angelina Jolie] and the director wanted, or we didn't have footage," Hendler says. "We had an opportunity to scan her at the end of the shoot with her full on-set makeup and prosthetics, so we could use that one-to-one with no changes. As we did with the pixies, we worked with side-by-side versions of the live action. Her free-flowing hair required complicated dynamics and simulations."
Even more complicated were her wings. "We modeled and rigged every feather in those wings," Hendler says.
The artists based the wings on eagle wings, albeit ones that could be art directed. "We had a lot of art direction for how the wings folded, bent, and moved," Hendler says. "We had to custom-create poses and make sure none of the thousand feathers would interpenetrate. Each sits on its neighbor in space. We wanted them to be correct."
To handle the collisions, the crew built an in-house tool kit. Animators worked with the large flight feathers and a textured view of the rest to shape the wings as directed. Then, the effects team converted the wings to full feathers and ran the simulations. "The director's brief from the beginning was that the wings had personality and life," Hendler says. "They weren't just physical wings on her back. Sometimes they aren't under her control. We had shapes for which it wouldn't be possible to simulate the wings without the feathers interpenetrating. So, we had a small team of people who removed the worst offenders." - Barbara Robertson
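The layering rule Hendler describes - each feather resting on its neighbor in space - suggests a simple way to find the "worst offenders." As a hedged sketch (Digital Domain's actual tool kit isn't public), one could flag any feather whose overlapping neighbor's tip has sunk below its resting plane; the function name and plane representation here are illustrative assumptions:

```python
import numpy as np

def flag_interpenetrations(origins, normals, tips, tol=0.0):
    """Feather k is assumed to rest on feather k-1. Flag every base
    feather whose overlapping neighbor's tip has sunk below the base
    feather's resting plane (a point-plane signed-distance test).

    origins, normals: (n, 3) point + outward normal defining each
        feather's resting plane.
    tips: (n, 3) tip position of each feather.
    Returns indices of base feathers with an interpenetrating neighbor.
    """
    # Signed distance of feather k+1's tip to feather k's plane.
    d = np.einsum("ij,ij->i", normals[:-1], tips[1:] - origins[:-1])
    return np.where(d < -tol)[0]
```

A real pipeline would test whole feather surfaces, not just tips, but the same signed-distance idea scales up.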
Thus, the artists created photoreal CG doubles for each actor early in the process, using a Light Stage system at USC's Institute for Creative Technologies (ICT) for high-resolution facial scans, and Gentle Giant for body and head scans. "Working with Paul Debevec, we got pore-level scans at ICT," Port says. "Those high-resolution scans for the base model were the best we've ever gotten."
To accommodate facial expressions, the team had the actors perform a set of FACS poses. Modelers and riggers used expressions captured in the FACS poses to sculpt shapes and design a system of control points for each pixie's face.
"We did the captures in 2012," Port says. "We got still FACS poses at ICT and poses in motion with the Disney Research group based in Zurich." The Zurich crew installed a booth on set in London to do FACS sessions supervised by Digital Domain.
"Working with the Disney Research Group in Zurich, we could see transitions from one face shape to another," Port says. "Not to the resolution ICT got, but in motion. It was very helpful, an extra layer of shape reference we didn't have before."
Separately, modelers at Digital Domain built the pixies.
FROM LEFT TO RIGHT: Head-mounted cameras captured Juno Temple’s facial expressions; a CG model of Temple was the base for the stylized model of Thistletwit; the final pixie.
"We didn't want just a smaller version of the actors," Port says. "But, finding the balance was a creative challenge. We wanted to keep the essence of the actors, so we had to decide what we needed to keep. They absolutely had to reflect back to the original actors."
To help the modelers, the team had the actors do what Port calls a "crazy, broad range of facial expressions," none of which would appear in the movie but that the team could use as character studies.
"Understanding the human face, the essence of a character, was an interesting learning process," Port says. "It's one thing to do a caricature for a static image, but in the animated world, there are expressions. We probably did a few thousand iterations."
Using a consistent strategy as much as possible, the team gave all the pixies larger heads, pointy ears, larger eyes, and slightly smaller noses, but varied the shapes, proportions, and relative distances between the eyes, nose, and mouth.
Then, they transferred the face shapes and rigs created for the CG versions of the actors to the pixie models using tool sets devised for the task. "We did most of our work on the [CG] actor's face," Hendler says, "and then transferred the face shapes to the new [pixie] facial anatomy. The pixie design changed through production, so having tool sets that automated this process was great."
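The shape transfer Hendler describes can be pictured as re-applying per-vertex deltas on the new anatomy. A minimal sketch, assuming both meshes share topology (same vertex count and order, which the article implies since the pixies were modeled from the CG actor bases); the function names and uniform scale factor are illustrative, not Digital Domain's actual tool set:

```python
import numpy as np

def transfer_shape(actor_neutral, actor_shape, pixie_neutral, scale=1.0):
    """Carry one sculpted actor expression over to the pixie head by
    re-applying its per-vertex deltas on the pixie's neutral mesh.
    `scale` is an illustrative knob for exaggerating or damping the
    expression on the new proportions."""
    return pixie_neutral + scale * (actor_shape - actor_neutral)

def transfer_library(actor_neutral, actor_shapes, pixie_neutral, scale=1.0):
    """Batch version: carry a whole dictionary of named face shapes over,
    which is what made re-running the transfer cheap as the pixie design
    changed through production."""
    return {name: transfer_shape(actor_neutral, shape, pixie_neutral, scale)
            for name, shape in actor_shapes.items()}
```

Production transfer tools also account for differing proportions region by region, but the automated re-run is the point: change the pixie neutral, press the button again.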
Each pixie's face might have as many as 3,000 shapes. "We've done a lot of work on our facial rigs," Hendler says. "In the past, we might have a linear transition between one expression, one face shape, and another. Now, that same in-between might have 10 face shapes. We have a huge number of shapes around the eyelids to have the skin unfold and lift up."
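Those in-betweens can be thought of as piecewise blends: rather than one linear ramp from neutral to the full expression, the rig interpolates between the two sculpted in-betweens that bracket the control value. A hedged sketch of that idea (not the studio's rig code):

```python
import numpy as np

def evaluate_inbetweens(neutral, inbetweens, weight):
    """Evaluate a blendshape control with sculpted in-between targets.

    inbetweens: list of (sample_weight, shape) pairs, e.g. ten
    intermediate sculpts between 0 and 1. The result blends piecewise
    between the two sculpts that bracket `weight`, so the eyelid skin
    can unfold along an artist-defined path instead of a straight line.
    """
    samples = [(0.0, neutral)] + sorted(inbetweens, key=lambda p: p[0])
    if weight <= 0.0:
        return neutral
    for (w0, s0), (w1, s1) in zip(samples, samples[1:]):
        if w0 <= weight <= w1:
            t = (weight - w0) / (w1 - w0)
            return (1 - t) * s0 + t * s1
    return samples[-1][1]  # clamp past the last sculpt
```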
MOTION-CAPTURE DATA underlies the pixies’ performance and expressions.
Each CG model of an actor and the CG model of the actor's pixie had the same rig design - the same settings. "We made sure our pixie's face matched the nuances on the actor," Hendler says. "If an animator gave the actor a 50 percent smile, the pixie would have a 50 percent smile. The animators could switch back and forth. That was crucial."
Once the animators were satisfied with the shapes and rigs on the pixie model, they could work with the data captured on the motion-capture stage.
"Transferring the data from the camera to the rig is an area we improved upon," Port says. "We're taking the data and applying it so fast now that animators can do it themselves. The tracked data comes in. Someone hits a button. The tool set processes the data and puts it onto the rig quickly. It's a darn good starting point. The challenging areas are around the eyes and mouth."
The Eyes Have It
The tool set moves the motion-capture data onto the pixie body and, separately, onto the face. "We had a moving point cloud with 200 points representing the actor's face," Hendler says. "Our solver takes those 200 points and transfers them to the [CG] face. Then, it's up to the animators to enhance, refine, and work on regions that weren't captured fully."
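A common way to move a tracked point cloud onto a blendshape rig - and plausibly the kind of solve Hendler is describing, though the studio's solver is proprietary - is a regularized least-squares fit of shape weights to the marker offsets:

```python
import numpy as np

def solve_weights(marker_deltas, shape_basis, reg=1e-3):
    """Fit blendshape weights to one frame of tracked-point offsets.

    marker_deltas: (3m,) flattened offsets of the m tracked points
        from the neutral pose for this frame.
    shape_basis: (n_shapes, 3m) effect of each rig shape, at weight
        1.0, on those same m points.
    Ridge regularization (reg) keeps the solve stable when shapes
    overlap; weights are clamped to the rig's [0, 1] range.
    """
    A = shape_basis.T                       # (3m, n_shapes)
    n = A.shape[1]
    w = np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ marker_deltas)
    return np.clip(w, 0.0, 1.0)
```

Run per frame, this yields the "darn good starting point" on the rig; the animators then refine the regions - eyes, inner mouth - that 200 points can't fully observe.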
Once the data is on the pixie rig, animators can see the pixie face moving with the actor's expressions as the actor speaks her lines. "Animators listened to the audio track and worked with Matthias Wittmann, our facial animation supervisor, to nail the mouth shapes so they hit the dialog," Port says.
The eyes were a special concern. As anyone who evaluates virtual humans and digital doubles knows, the eyes can make or break the illusion. For reference, the crew had high-speed footage of the actors' eyes filmed in extreme close-up under controlled lighting conditions.
"We realized that when someone blinks, the blinking is not just up and down," Hendler says. "When the eyelid comes down, it takes a different path. We tried to mimic a lot of detail that we saw in physically correct ways."
Blood Flow Beneath the Surface
As digital doubles edge closer and closer to photorealism, visual effects artists continue to find subtle additions that provide an extra bit of physical reality. For this film, the artists at Digital Domain sent blood flowing beneath the skin of the pixies' digital faces.
To do that, they first had the actors playing the pixies perform a range of expressions under a specific cross-polarized lighting setup at ICT to see how muscle motions affected blood flow.
"We had them hold an expression and then relax," says Darren Hendler, digital effects supervisor at Digital Domain. "Then we timed the rate at which the changes in blood flow would occur under the skin, the number of frames. This is something we'd never done before. We could see how muscle motions affected the amount of blood under the skin. If the actor scrunched her face, her face got redder. If she compressed her lips, the blood drained out."
The artists soon learned the effect has to be subtle. "If you put it in full tilt, you get a red face very quickly," says Kelly Port, visual effects supervisor at Digital Domain. "So we had to dial it in. There is a transition, a time delay, when the blood is pushed out or flowing back into the skin. To my knowledge this hasn't been represented before. Adding it is unique. As subtle as it is, it takes us one more step toward being realistic." - Barbara Robertson
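The delayed fill and drain Port describes behaves like an asymmetric exponential lag. A sketch of one frame's update on a per-region redness value, with made-up time constants (the crew timed the real fill and drain rates frame by frame from the ICT footage):

```python
import math

def blood_update(prev, target, dt, fill_time=0.25, drain_time=0.75):
    """One frame of an asymmetric exponential lag: redness chases the
    expression-driven target, but blood drains back out of the skin
    more slowly than it is pushed in. fill_time and drain_time are
    illustrative values in seconds, not measured ones."""
    tau = fill_time if target > prev else drain_time
    alpha = 1.0 - math.exp(-dt / tau)
    return prev + alpha * (target - prev)
```

Dialing the effect in, as Port describes, would amount to scaling the target redness down and tuning the two time constants until the transition reads as subtle.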
Port is particularly proud of the work the team did with the eyes. "We have done the eyes so well that you could have only the eyes on a full 100-foot screen, in motion, and they would hold up," he says. "All those little details - eye water, modeling, and shaders within the eye itself, the corners of the eyes, the wetness, the irises...all that is in there. The whole eye area was critical."
As the animators worked, they could see wrinkles and fine details in the skin on the pixies' faces. "We always want to make sure our animators work in a real-time environment, so we spent a huge amount of development effort to create a real-time version of our facial rigs," Hendler says. "We have all 3,000 face shapes with dynamic wrinkles working in real time in [Autodesk's] Maya. And we created CG effects shaders within the Maya viewport. In the past, we would run the animated characters through a light render so the animators could see what they looked like. Now, they can see how wrinkles affect the expressions."
Wrinkles and More
"One of the great things about the rigs and the fast graphics cards in our workstations is that we could implement CG effects that we usually don't see until a shot has been lit, rendered, and comp'd," Port says. "Before, the animators would work with a low-resolution animation puppet and would have to guess what the face would look like with its displacement shaders. Now, they have the benefit of real-time wrinkle maps and displacements. They could even see the blood flowing beneath the skin (see "Blood Flow Beneath the Surface" page 18), although most animators turned that off. The wrinkles and displacements, which affected the shape, were more informative. Blood flow was more important when the shots went to lighting and comp. It's pretty cool."
In addition to using Maya for animation, the team used that software program for cloth simulation, rigging, and modeling. Pixologic's ZBrush and Autodesk's Mudbox also helped with high-resolution sculpting. Chaos Group's V-Ray rendered all the shots, including, via a custom interface, shots with clouds and effects created in Side Effects' Houdini. Compositors used The Foundry's Nuke.
Cloth simulation was complex. "I had never before done a cloth simulation with thousands of lily petals," Hendler says. "And not just petals. Leaves with different thicknesses, spines, veins. We had to mimic all that into their digital wardrobes. We tried to build [real] miniature versions of the wardrobes, sewing flowers and leaves together to see how they would move. One guy spent every night for a week building a hydrangea bodice. We were all familiar with fabric, but building with organic materials was something we'd never done before. And kind of hope we never do again."
Thistletwit, for example, had a dandelion skirt with grassy layers of leaves, as well as layers of fur and hair. "She had 12 or 13 hair and fur grooms," Hendler says. "Every time she bends and moves, millions of hairs interact correctly."
Shared Environments
For the shots in which Maleficent flies through the fairy-tale world, Digital Domain artists created some of the CG environments surrounding and beneath her.
"We had volumetric cloudscapes in stereo," says Darren Hendler, digital effects supervisor at Digital Domain. "She flies down and through canyons above the water, and we see MPC's creatures flying and diving out of the water. We created water simulations with their creatures. And it was all in stereo, so there was no cheating."
The studios accomplished the shots using Alembic and deep files. "We could take a deep file from MPC and use that in [Side Effects'] Houdini to hold out our water," Hendler says. "It was some of the most complex sharing we've ever done." - Barbara Robertson
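Deep files make holdouts like this straightforward: each pixel stores a list of depth-sorted samples, and an element's visibility at depth z is the product of (1 - alpha) over every sample in front of it. A minimal sketch of that transmittance math (illustrative, not either studio's pipeline code):

```python
def holdout_transmittance(deep_samples, z):
    """Fraction of an element at depth z that survives behind another
    studio's deep image: multiply (1 - alpha) for every deep sample in
    front of z. `deep_samples` is a list of (depth, alpha) pairs for
    one pixel, as a deep file would store them."""
    t = 1.0
    for depth, alpha in deep_samples:
        if depth < z:
            t *= 1.0 - alpha
    return t
```

Because the holdout is computed per sample rather than from a flat matte, water and creatures can interleave in depth - exactly what a stereo shot with no cheating demands.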
"The bar is so high these days, the audiences are so sophisticated, there's a high threshold for the quality we have to hit to be competitive," Port says. "The computers and servers are faster, and having more processors helps. And then on top of that, we add technology that we've developed at Digital Domain over the past 10 years."
The result of that development is three digital characters that bring us closer to a photoreal digital double than ever before. "I really love the pixies," Port says. "Now that we've nailed it, I'd like to do a whole movie with just the pixies."
Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.