MARK RYLANCE’S NUANCED PERFORMANCE PROVIDED ANIMATORS WITH UNIQUE, FASCINATING CHALLENGES. DOTS ON HIS FACE CAPTURED SKIN MOVEMENT, BUT BFG’S SUBTLE EXPRESSIONS REQUIRED CAREFUL FINE-TUNING BY ANIMATORS.
When visual effects artists and animators create a CG character, particularly a humanoid character, they often cite the character's quiet performances, the tender moments, as the most difficult to achieve: the moments when a character's face needs to show emotion, not simply deliver lines. The moments when the character is most human.
Meet BFG, the big friendly giant in the eponymous movie. Created at Weta Digital based on actor Mark Rylance’s performance and voice, the CG character has the face of a thousand stories.
“Mark Rylance can say a thousand things before he opens his mouth,” says Guy Williams, visual effects supervisor at Weta Digital for The BFG. Weta Digital’s Joe Letteri was the senior visual effects supervisor, and Jamie Beard was the animation supervisor.
Steven Spielberg directed the fantasy-adventure film, which tells a kinder story of an orphan girl kidnapped by a lonely, elderly giant than the original Roald Dahl children’s book. The late Melissa Mathison, who wrote Spielberg’s E.T. the Extra-Terrestrial, provided the screenplay for the Walt Disney Pictures release.
It seems odd to conflate “subtle” and “giant,” but this is what Spielberg, Rylance, and the Weta Digital crew have achieved, making possible the all-important connection between BFG and the child Sophie (Ruby Barnhill).
In the film, BFG, a runt among giants, is an outcast who collects dreams and eats vegetables. He kidnaps Sophie after she spies him blowing dreams into a window, and then isn’t sure what to do. The other nine giants in the alternate universe that’s “Giant Country” know: They want to eat her. To defeat these child-hungry giants, Sophie and BFG devise a plot that involves the Queen of England, Buckingham Palace, and some farting of green smoke.
Weta Digital was the sole visual effects house on the show, creating digital environments and effects in addition to characters, as they had for Spielberg’s animated feature The Adventures of Tintin (2011). But, with Weta Digital’s experience on Planet of the Apes, The Hobbit, and other films in the years since, much about the process has changed.
ACTOR RUBY BARNHILL, WHO PLAYS SOPHIE, HUGS A LARGE JAR BENEATH A HUGE TABLE TO MAKE HER SEEM VERY SMALL.
MOODY MOCAP
“I think Steven was a little bit worried at first, and then he was amazed by the changes,” Letteri says. “At first he dropped into Tintin mode – do a little animation, block out the scenes. The big change was bringing that onto a live-action stage.”
The changes began with the motion-capture stage itself. “For Tintin, the sets were gray rooms with tape on the floor and chicken-wire walls and tables,” Williams says. “A prop for a mug might be a coffee can painted gray with tracking markers. But now we can capture data outside in an environment with actors on a live-action set.”
Thus, the crew decided to move the technology they had used outdoors for the Apes movies indoors for BFG. That meant the environment inside a motion-capture volume no longer needed to be chicken wire, and props no longer needed tracking markers.
“Mark Rylance is a great actor,” Letteri says. “We wanted to make him and Ruby comfortable by having the two of them act on stage, so we created a theatrical stage. It was the first time we combined the two ways we motion-capture: We dressed the motion-capture volume like a stage set. We started calling it ‘moody mocap.’ ”
The table Rylance touched in BFG’s cottage was handcrafted. He could handle real pots and pans. He could walk to a real door.
“We didn’t spend two million dollars on the set, but it was nice,” Williams says. “We could have shot it. We could dim down the lights. There was light coming in the window and firelight in the fireplace. We told [Rylance] he could pick up anything he wanted. We could track it later. We didn’t want to dilute his process to fit our technology.”
MARK RYLANCE WAS MOCAPPED IN A FURNISHED SET WITH A HUMAN-SIZE TABLE. WETA CALLS IT “MOODY MOTION CAPTURE.” A DOLL SOMETIMES STOOD IN FOR RUBY BARNHILL.
Although the crew experimented with having Rylance in a costume, he wore a motion-capture suit instead.
“He didn’t need the costume,” Letteri says. “But he did have a mesh cape when he needed it.”
He also wore Weta Digital’s facial capture system – a helmet with one camera.
“We kept the facial capture technology the same,” Letteri says. “We’re already capturing HD data; we can’t get much more out of it. Most of the work happens back at Weta Digital, trying to read the information and apply it better to the character. We use only one camera because it’s lighter weight and less obtrusive, which makes the filming easier, but it’s harder technically.”
A MATTER OF SCALE
The difference in size between the 24-foot-tall BFG and 10-year-old actor Ruby Barnhill was one reason to have the motion capture on a real set. When Rylance walked around his cottage, a doll the size Sophie would be in the film stood in for Ruby. Sets with a giant table made Ruby look Sophie’s size, and for these scenes, Rylance would stand on an 18-foot-tall riser to give her a correct eyeline.
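The riser arithmetic can be checked with a quick back-of-the-envelope calculation. The 24-foot giant and 18-foot riser figures come from the production; the actor's eye height below is an assumption added for illustration:

```python
# Back-of-the-envelope eyeline check. The 24 ft giant and 18 ft riser
# figures are from the shoot; the actor eye height is an assumption.
giant_height_ft = 24.0
riser_height_ft = 18.0
actor_eye_height_ft = 5.7   # assumed eye level for a standing adult

rylance_eyes_ft = riser_height_ft + actor_eye_height_ft
print(rylance_eyes_ft)  # 23.7 -- near the head of a 24 ft giant
```

In other words, standing on the riser put Rylance's eyes roughly where BFG's would be, so Ruby could look up at the right spot.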
“We’d build a mocap volume on top of the riser,” Williams says. “There were 30 feet between Ruby and Mark as they talked to each other, but the cameraman saw a 24-foot giant because we comp’d the giant over Mark. Whatever Mark would do, the giant would do.”
If Rylance needed to deliver dialog while walking beyond the riser, a camera fed footage into a monitor on a pole to give Ruby a correct eyeline. She could interact with his image on the screen.
This is how the process of filming the two actors would typically work. On Monday, they might do a facial and motion capture of Rylance in the cottage using witness cameras, no film. Ruby would be present to read lines to Mark. On Tuesday, they would shoot live-action plates of Ruby. If she and Mark talked, Mark would be on the riser and motion-captured again just in case.
“It gets tricky,” Williams says. “If he needed to move more than he could on the riser, we’d use the mocap from Monday so, for example, you could see BFG walk from the door to the table, and then we’d hand that off to the live-action, real-time mocap. That way, the cameraman could follow BFG walking and see if he ducked down and talked to Ruby. There was a little pop, but he could see the action. A lot of effort went into figuring out how to shoot each day.”
When other giants, who are twice as tall as BFG, appeared in a shot, filming would extend to three days, with the 50-foot giant capture on Monday, the 24-foot BFG capture on Tuesday, and live-action shots of four-foot Sophie on Wednesday.
To put one of the large giants in BFG’s cottage, the crew built a chicken-wire cottage with a full ceiling at half-scale on the motion-capture stage.
“Jemaine Clement (Fleshlumpeater) had to almost crawl through the door, and he ends up in a slumped position,” Williams says. “We wanted to capture that. It was awkward for Fleshlumpeater, and that contempt for being inside the BFG cottage showed.”
A simulcam setup gave Spielberg, the DP, and the camera operator a view with CG giants blended in real time into the live-action footage.
“Because Steven had blocked out the movie to plan it, we had time to think about the scale differences and make the connections work,” Letteri says. “So when he wanted to try something new, we had a kickback of different techniques that meant Mark and Ruby could always work together. Steven started using [the simulcam] to have freedom to explore new ideas.”
The crew also set up a virtual camera tent for Spielberg, and whenever they shot a motion-capture scene, they’d bring the edited motion applied to the CG character into the tent.
“He could go in there and rehash the scene to decide if he needed pickups before the stage was struck,” Letteri says. “When there were set changes going on, he’d be in the tent editing or figuring out scenes. It allowed for a really creative process.”
Because the simulcam was always turned on, the crew had some unintentionally funny moments.
“Between takes, Steven would climb the ladder to the riser to talk to Ruby and Mark,” Williams says. “In the camera, we’d see him and Ruby talking, but when he turned to Mark, we’d see BFG in the camera gesturing to Steven like an actor. It was a great peek behind the curtain.”
ARTISTS AT WETA DIGITAL CREATED NINE 50-FOOT GIANTS IN ADDITION TO THE 24-FOOT BFG, WHICH MEANT SHOTS WITH THE LIVE-ACTION SOPHIE HAD THREE SCALES.
BUILDING GIANTS
Modelers based BFG’s design on artwork for his body and on Rylance for his face. For the other nine giants, they started with rough previs models from The Third Floor, illustrations from Dahl’s book, and concept art from Weta Workshop.
“The audience wouldn’t have a lot of time to get to know them, so they needed to be unique,” Letteri says. “And we needed to get them into 3D quickly. So we brought in the Weta Digital design team to build 3D models and get them into animation.”
As for software, the crew refined the facial rigging system.
“There are a lot of different ways to translate what muscles are doing, and sometimes taking the long way around is best,” Letteri says. “We took another pass at analyzing and better understanding our muscles and what influences them, and found better ways to combine them in motion.”
Even though Weta Digital’s motion editing software translates captured data onto the character’s face, giving the animation team a starting point for the facial expressions, the data also moves onto animation curves so that animators can refine the expressions. In addition to having the captured performances, the animators referenced video from the facial camera and witness cameras.
“No computer can translate an actor’s performance directly,” Letteri says. “It never has. I won’t say it never will, but there are too many things we don’t understand. You have to look at a performance as an artist. When I watch BFG, I ask whether I get the same feeling as when I watched Mark on stage with Ruby. Artists have to be involved to get that right. Mark works very subtly. He allows an expression to unfold. He gives the audience time to absorb what he’s doing. The timing for each part of the face and how it overlaps is really important. We had to pay attention to that.”
MOTION DATA
A crew that would top 100 animators and motion editors moved the data onto the giants, then refined and augmented it. Some augmentation was necessary because of the limited data available.
“We don’t often motion-capture hands or get the movement of individual fingers,” Beard says. “We don’t get ear animation.”
But, much of the artistic work focused on facial animation – that of BFG and of the other giants.
“It was really interesting to work on an actor, Mark Rylance, who underplays,” Beard says. “That was more of a challenge than working on the other giants. Steven asked those actors to overplay and give a humorous performance.”
Williams provides an example of Rylance’s underplayed performance.
“When Sophie says, ‘Please let me go,’ BFG wipes his face and stares at her,” Williams recounts. “As he says, ‘No,’ you can see he wants to say yes. There’s all this emotion going through his face, but he has only a two-letter word to read.”
Each actor was painted with facial capture dots corresponding to 12 major muscle groups. The dots tracked skin movement, and that skin movement implied the underlying muscle movement. The motion edit software and the motion edit team translated the data and applied it to muscles in the CG model.
“We use tracking dots on the face to infer how the skin and muscles move, but we’re still left guessing about what’s going on under the skin,” Beard says. “It comes down to interpolation. Capturing the movement of the skin is like the tip of the iceberg. Yes, you can see the cheek moving, but what’s driving it? The mystery of the performance still has to be unraveled. This isn’t about pressing a button. Animators have to solve that final riddle.”
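In facial pipelines generally, one common way to pose that interpolation problem is as a least-squares fit: treat each muscle group as a basis vector of marker displacements, and solve for the activation weights that best explain the tracked offsets. The sketch below illustrates that general idea under assumed names; it is not Weta Digital's proprietary solver:

```python
import numpy as np

def solve_muscle_weights(marker_offsets, muscle_basis):
    """Infer muscle-group activations from tracked skin-marker offsets.

    marker_offsets: (3*M,) flattened XYZ displacements of M face markers
    muscle_basis:   (3*M, K) each column is one muscle group's effect
                    on the markers at full activation
    Returns K activation weights, solved by least squares and clipped
    to [0, 1]. (Illustrative only; real solvers add many constraints.)
    """
    weights, *_ = np.linalg.lstsq(muscle_basis, marker_offsets, rcond=None)
    return np.clip(weights, 0.0, 1.0)

# Toy example: 4 markers (12 coordinates), 2 "muscle groups".
rng = np.random.default_rng(0)
basis = rng.normal(size=(12, 2))     # made-up deformation basis
true_w = np.array([0.7, 0.3])        # ground-truth activations
observed = basis @ true_w            # what the dots would report
recovered = solve_muscle_weights(observed, basis)
print(recovered)                     # recovers approximately [0.7, 0.3]
```

As Beard notes, a solve like this only gets you so far: the same marker motion can be explained by more than one muscle combination, which is why animators still have to judge the result by eye.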
NOT JUST A SMILE
Beard provides an example to show how data tracked from dots on the surface of someone’s face can be misleading.
“There’s a shot in which Mark does a warm smile to the camera for the longest time,” he says. “We couldn’t understand what we were doing wrong with our BFG face. It turned out that Mark was doing nothing with his mouth.”
The animators realized what was actually going on in Rylance’s face when they covered his eyes in the video footage: It no longer looked like Rylance was smiling. But when they uncovered his eyes, he looked happy.
“The complexity that goes beyond how the skin moves is really tricky,” Beard says. “Particularly on shots like that. There’s a tendency to rely on the mouth if you want smiling, but in this shot his mouth wasn’t doing anything. He had the apple cheeks usually created by the mouth, but he had used the musculature around his eyes.”
This, Beard points out, is not unlike what art students learn in drawing classes.
“Art teachers drill into you that you don’t draw the surface, you draw what’s underneath,” Beard says. “That gives you the ability to understand how things are shaped and how to render them in 3D. You need the same skill for life drawing. You need to know the bone structure of a skull to draw a face accurately. Muscles are angled and positioned in certain ways at different depths, and when you understand that, you can understand the flow of the skin. The tracking data gets us 80 percent there. It gives us the flow of the skin. That last 20 percent is an uphill struggle, but that’s where detail and the underlying structure are paramount.”
ANIMATORS DISCOVERED THAT RYLANCE’S HAPPY FACE RELIES MORE ON MUSCLES AROUND HIS EYES THAN A SMILE.
GIANT GIANTS
The other nine giants are twice the size of BFG. Stunt actor and choreographer Terry Notary worked with the actors to help them perform as big, heavy characters. On set, motion-captured actors had properly scaled props, and some wore weights.
Motion editors used Weta Digital’s software to translate the captured data and scale it to the proper size for the giant CG characters. Animators then tweaked the characters’ weight by modifying the motion-capture data.
“We worked with subtle details,” Beard says. “The arc of something heavy tends to be straighter and not change direction as much. So, if an actor swung his arm, we made sure the arm was as smooth as if it were a massive weighted item. And sometimes, we tried to slow things down – particularly for giants in the background.”
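The two adjustments Beard describes, straighter arcs and slower timing, can be sketched as a toy motion-edit pass: smooth the captured curve, then resample it onto a longer timeline. This is a minimal illustration with hypothetical names, not Weta's motion-editing tools:

```python
import numpy as np

def add_heft(positions, smooth_window=5, slowdown=1.5):
    """Make a captured 1-D motion curve read 'heavier' (toy sketch).

    positions: (N,) array of a joint's position per frame.
    1. A moving average flattens jittery direction changes, giving
       the straighter arc of a massive limb.
    2. Resampling onto a longer timeline slows the motion down.
    """
    positions = np.asarray(positions, dtype=float)
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(positions, kernel, mode="same")
    n = len(smoothed)
    new_t = np.linspace(0, n - 1, int(n * slowdown))
    return np.interp(new_t, np.arange(n), smoothed)

# A noisy 30-frame "arm swing" becomes a smoother, 45-frame swing.
rng = np.random.default_rng(1)
arm_swing = np.sin(np.linspace(0, np.pi, 30)) + 0.05 * rng.normal(size=30)
heavy = add_heft(arm_swing)
print(len(heavy))  # 45 frames: the same swing, slower and smoother
```

The same curve, played back over more frames with its direction changes softened, reads as something with far more mass, which is the effect the team was after for the background giants.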
Similarly, the animators sometimes exaggerated the actors’ facial expressions to increase the contrast between BFG and the bigger giants.
“Overall, when someone spoke, we maintained their original performance,” Beard says. “But, there was an opportunity to have more fun with the facial performances, to amp them up a bit more, because there was more humor associated with them. Steven wanted big expressions, to make sure they stayed fun and not scary.”
Once animated, all the giants moved into the simulation pipeline. First, a muscle simulation moved the skin. Then, multiple sims moved the hair and clothes.
“BFG wears a shirt with suspenders and a vest, and all those layers and all parts of those layers are simulated,” Williams says. “We have threads hanging off threadbare clothes. We have buttons that flop around. The suspenders are rope ladders – strings with rigid bars between. It was an insane level of detail. We ran a hair simulation in parallel with clothes that have to shrink onto the characters a layer at a time. If a shirt bunches up, it pushes the suspenders’ rope bridge out. It took days to do.”
INSIDE AND OUT
WETA DIGITAL’S MANUKA RENDERING SOFTWARE MADE DIGITAL ENVIRONMENTS WITH TENS OF MILLIONS OF POLYGONS POSSIBLE. LIGHTING ARTISTS USED FOG AS A COMPOSITIONAL ELEMENT IN MUCH THE SAME WAY A LIVE-ACTION DP MIGHT DO.
The movie was largely shot on stages, with Weta Digital artists extending the sets digitally.
“Even though we had shots on London streets, we built them indoors in Vancouver,” Williams says. “We didn’t want to put a 10-year-old girl on dark London streets. And that way we could shoot whenever we wanted. We also had second-unit footage for Giant Country.” That footage was useful for elements – water crashing onto a rocky coastline – and for reference.
To render the characters and the digital environments, the team used the studio’s proprietary Manuka software, a path tracer.
“We were able to put pebbles on the floor between the stones in the cottage,” Williams says. “We had hundreds of human-sized, itty-bitty books in the giant’s cottage. Bowls full of padlocks around the room. All this interesting detail. Manuka just goes in and renders it no matter how much you put into it.”
The same level of detail extended outside. “We modeled blades of grass, millions and millions for Giant Country,” Williams says. “Shrubs, abandoned cars, trees, Ferris wheels. We used to talk about millions of polygons. Now we’re talking about tens of millions.”
Manuka also helped the artists create the atmosphere Spielberg wanted for these shots.
“We started using creative ways to light the scenes,” Letteri says. “In the past, we might have dressed fog in during compositing, but for this movie, we were lighting with it just like we would have on set. We’d fill an environment with smoke using volumes created by specifying a fog density. We’d hide lights in the virtual sets. The lights would light up the fog, scatter around, and light up an interior. Because Manuka is a path tracer, it interacts with everything. It was a whole different way of lighting.”
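The fog-as-light behavior Letteri describes follows from how a path tracer treats a participating medium: light attenuates as it travels through the fog, and some of it scatters toward the camera along the way, so the fog itself appears to glow around hidden lights. A minimal illustration, assuming homogeneous fog and a single scatter event (a real renderer such as Manuka handles far more):

```python
import math

def fog_transmittance(density, distance):
    """Beer-Lambert transmittance through homogeneous fog: the fraction
    of light surviving `distance` through fog with extinction
    coefficient `density` (per unit length)."""
    return math.exp(-density * distance)

def in_scattered_light(light_intensity, density, dist_to_light, dist_along_ray):
    """Crude single-scatter estimate of how much a hidden light
    'lights up the fog' as seen along a camera ray (illustrative only)."""
    reach = fog_transmittance(density, dist_to_light)   # light -> scatter point
    back = fog_transmittance(density, dist_along_ray)   # scatter point -> camera
    # Scattering at the point is proportional to the fog density.
    return light_intensity * reach * density * back

# Denser fog both glows more near a light and occludes more behind it.
print(fog_transmittance(0.05, 10))          # ~0.61 of light survives 10 units
print(in_scattered_light(100, 0.05, 4, 6))  # fog glow seen by the camera
```

Because every light interacts with the volume this way, hiding lights in the virtual set and letting them scatter through the fog produces the soft, motivated interior lighting Letteri describes, rather than fog painted in during compositing.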
DREAM CATCHERS
During the film, the relationship between BFG and Sophie builds through BFG’s ability to create dreams.
“Basically, we made the dreams with light, liquid-y, particle-y light,” Williams says. “There’s a dream cavern in the back of BFG’s cottage, and our Wayne Stables did an amazing job of rendering all the dreams, shiny objects, and glass. The dreams are almost like dust motes lit with invisible light. It’s all done through 3D. The technology to create the dreams wasn’t complex, but the visual storytelling was. You want to feel it more than see it, like something you see out of the corner of an eye. We tease a cloud of particles into a shape that’s a little pantomime story. You might see a glimpse of a couple holding hands, and then it explodes back into a little dream of light.”
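The beat Williams describes, teasing a cloud of particles into a shape and then letting it explode back into light, can be sketched as a two-phase particle update. This is a toy under assumed parameters, not the production effect:

```python
import numpy as np

def dream_step(particles, targets, t, gather_until=0.6):
    """One step of a toy 'dream' particle animation.

    Before `gather_until` (normalized time t in [0, 1]), each particle
    eases toward its point on the target shape, forming the little
    pantomime image; after, the cloud pushes outward from the shape's
    center and 'explodes back into a little dream of light'.
    particles, targets: (N, 3) arrays of positions.
    """
    if t < gather_until:
        return particles + 0.15 * (targets - particles)  # ease toward shape
    center = targets.mean(axis=0)
    return particles + 0.1 * (particles - center)        # scatter outward

rng = np.random.default_rng(2)
cloud = rng.normal(size=(200, 3))          # loose cloud of dream motes
shape = rng.normal(size=(200, 3)) * 0.2    # stand-in target silhouette
for frame in range(60):
    cloud = dream_step(cloud, shape, frame / 60)
print(np.abs(cloud).mean() > np.abs(shape).mean())  # True: scattered wide again
```

The storytelling difficulty Williams mentions lives in the targets: choosing shape points and timing so the glimpse of an image reads before the cloud dissolves.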
The BFG gave Weta Digital artists the opportunity to stretch their talents, creating environments and effects that range from wide landscapes to little dreams, and from the giant characters’ broad humor to BFG’s subtle emotions.
“I’m proud of what the team has done,” Williams adds. “At one point, Steven [Spielberg] asked me why Weta is so good. He wondered if it’s our software. But it isn’t the software. It’s the passion. These projects are made by teams, and our group is phenomenally talented. If Mark Rylance lays himself bare, it’s stupid for us to carve out the most economical way to realize his performance. We have to be as passionate as he is. We don’t stop until it’s right.”
Barbara Robertson (BarbaraRR@comcast.net) is an award-winning writer and a contributing editor for CGW.