Heroic Effects
Computer Graphics World, Volume 30, Issue 11 (November 2007)

Imageworks uses performance capture to power stylized CG humans for Beowulf, a next-gen 'live-action' animation film.
By Barbara Robertson
 
NO ONE knows whether the legendary warrior Beowulf actually existed—the epic poem describing his heroic acts mixes myth with history—but scholars agree that the characters in “Beowulf” are based on real people. So, too, are the digital characters in director Robert Zemeckis’s film of the same name.

The myth and the film follow the tragic hero through three acts: Beowulf defeats the monster Grendel and becomes the mightiest warrior of them all, Grendel’s mother seduces him, and he fights a dragon and saves his people, but at a personal cost.

For the film, Sony Pictures Imageworks drew on its experience in creating animated films and visual effects for live-action films to blend live-action performances captured from a large cast of actors into an entirely CG film. As the studio had previously done for The Polar Express (see “Locomotion,” December 2004), which Zemeckis also directed, and Monster House (see “This Old House,” July 2006), which he produced, Imageworks captured the performances of all the actors playing the human characters in the film, including those in crowds, and then applied that data to CG characters.

People will ask why, with all the digital characters based on live-action performances, Zemeckis didn’t make a live-action film. Some might answer that he did. In fact, Paramount Pictures is promoting the film not as an animated feature, but as “digitally enhanced live action.” Others might see the film as next-generation animation.

Beowulf, for example, looks nothing like actor Ray Winstone, who plays the legendary hero in the film. Winstone is 5 feet 10 inches tall; Beowulf, 6 feet 3 inches. Winstone is 50 years old, and Beowulf ages from 25 to 70 years old in the film. Other digital humans, however, more closely resemble the actors who perform them—Grendel’s mother channels Angelina Jolie, for example, and King Hrothgar looks very much like Anthony Hopkins. But, none of the characters look photoreal.

“The [digital] characters in Beowulf are always stylized,” says Jerome Chen, senior visual effects supervisor at Imageworks. “They’re mythic. Bob [Zemeckis] likes to see realistic movement on characters that are not meant to look real, but he likes to direct actors.”

The stylized look extends also to the almost entirely CG backgrounds—the only live-action elements in the film are a few film clips of water layered into digital water to heighten the illusion. In addition to the human characters, Imageworks also created the monster Grendel, whose performance is based on that of actor Crispin Glover, and an enormous, fire-breathing digital dragon.

The people, that is, the digital humans, presented the biggest challenge. The crew needed to find a place between photoreal and stylized realism that kept the audience in the story. And, they needed to create hundreds of characters. To do this, they developed more efficient and sophisticated ways to capture the actors’ performances and shaped tools that helped artists work more quickly, leaning on techniques created for previous shows as much as possible.

 
Setting the Stage
The actors worked in a 25x35-foot “volume,” that is, a motion-capture stage, which was 10 feet longer than the area used for Monster House, and which the crew expanded by another 20 feet when they captured the horses’ movements. The larger area, along with the ability to capture and record more actors’ performances at one time, moved shooting along more quickly than on Polar and Monster House. This was an especially big advantage for crowd scenes: Zemeckis could direct as many as 21 people on stage.
 

Like many of the characters in Beowulf, Queen Wealthow (Robin Wright Penn) resembles the actor who provided the voice for the stylized digital character. A crew of approximately 60 people worked on hair and cloth for the all-digital cast.

To capture these performances, the crew used a Vicon-based optical system with 244 MX40 cameras and a version of that company’s Blade software customized for this project. It took the crew two months to build a rig around the volume and set up the performance-capture stage. “We placed each camera manually,” says Demian Gordon, motion-capture supervisor. “It took a week just to focus and aim the cameras, and a week to run the cables.” When they had to extend the volume for the horses, it took another week to change the rig.

The performance-capture crew placed the cameras at three primary heights—one foot, three feet, and five feet above the floor—although they could mount them as high as 16 feet. Once they had placed the cameras, the crew built a wall around the rig to protect it from the actors. “If an actor throws something, we don’t want it to hit the rig, which might cause us to have to recalibrate the cameras,” Gordon says. The cameras peeked out through portholes in the wall, which had an unexpected benefit.

“Normally, one camera looking at another sees the lights, so we have to screen out those spotlights from the data,” Gordon says. “But because the cameras were in the portholes and off axis enough that they didn’t see the spotlights, we got more data.” As before, the cameras captured markers, small reflective dots worn by the actors and applied to props. Software translated the markers into data that moved an actor’s digital doppelganger or a prop once the data was applied to a CG model.

For this film, the crew captured more data for each character in Beowulf than they had for characters in previous films. They had recorded only the bodies, faces, and three fingers on the actors’ hands for Polar and no fingers for the cartoony characters in Monster House. By contrast, Beowulf actors wore Lycra suits, gloves, and a hat that altogether held 78 markers on their body, 25 on each hand, and 121 on their face, plus an eye tracker that captured eye movement. Over the suits, the actors sometimes wore costumes made of tulle and other materials the cameras could see through so that, for example, a warrior could fling a cape and a woman could lift the bottom of her skirt and run.

To give Zemeckis a quicker look at the digital performances for this production, the crew devoted 44 of the cameras to a real-time system embedded within the regular system. “We call it ‘near time,’” says Gordon. “We used two sizes of markers to screen the data: small markers for faces, hands, and props, and larger markers on the body for real-time data.”
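A minimal sketch of that screening idea, with invented names and an assumed 10mm size cutoff (the article specifies only that small markers served faces, hands, and props, and large markers served the real-time body stream):

```python
# Hypothetical sketch of the "near time" screening: reconstructed markers
# are split by physical size so that only the large body markers feed the
# real-time solver, while small face/hand/prop markers wait for the full
# offline solve. All names and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Marker:
    label: str
    position: tuple      # (x, y, z) in millimeters
    diameter_mm: float   # estimated physical size from reconstruction

BODY_MARKER_MIN_DIAMETER = 10.0  # assumed cutoff between small and large markers

def split_for_near_time(markers):
    """Route large markers to the real-time stream, small ones to offline."""
    realtime = [m for m in markers if m.diameter_mm >= BODY_MARKER_MIN_DIAMETER]
    offline = [m for m in markers if m.diameter_mm < BODY_MARKER_MIN_DIAMETER]
    return realtime, offline

frame = [Marker("LSHO", (102.0, 1540.0, 88.0), 14.0),
         Marker("brow_L3", (98.5, 1705.2, 120.1), 4.0)]
realtime, offline = split_for_near_time(frame)
print(len(realtime), "markers to near-time solver;", len(offline), "held for offline")
```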

In addition, for the first time, the crew fitted the actors with an electrooculogram device that tracked the movement of the actors’ eyes. Electrodes applied around each eye measured the electrical charge of the eye muscles and sent that data to a wearable PC the size of a cell phone. “If you were to draw a crosshair from the electrodes, the pupil would be dead center,” says Gordon. Software then converted that data into animation curves.
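The article doesn’t describe the conversion itself, but electrooculogram voltage varies roughly linearly with gaze angle over a limited range, so a sketch of the idea might look like this (the calibration constants, sample rate, and names are all assumptions):

```python
# A minimal sketch of turning electrooculogram samples into eye-rotation
# animation curves. The linear volts-to-degrees calibration is a common
# simplification, not Imageworks' actual method.

HORIZ_DEGREES_PER_VOLT = 40.0   # hypothetical calibration constants
VERT_DEGREES_PER_VOLT = 35.0

def eog_to_gaze_curve(samples, rate_hz=60.0):
    """samples: list of (h_volts, v_volts) pairs from the wearable recorder.
    Returns (time, yaw_degrees, pitch_degrees) keys for an animation curve."""
    keys = []
    for i, (h, v) in enumerate(samples):
        t = i / rate_hz
        keys.append((t, h * HORIZ_DEGREES_PER_VOLT, v * VERT_DEGREES_PER_VOLT))
    return keys

# A quick glance to screen-left and back to center:
curve = eog_to_gaze_curve([(0.0, 0.0), (-0.3, 0.02), (-0.28, 0.01), (0.0, 0.0)])
for t, yaw, pitch in curve:
    print(f"t={t:.3f}s  yaw={yaw:+.1f}  pitch={pitch:+.1f}")
```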
 

(At top, left) Performances for the characters in this shot, including the horse, began with mocap. (Middle) Animators created the dragon’s performance by hand. (Right) Effects artists adapted techniques from Ghost Rider to control the dragon’s fire. At bottom is the final image.


Data also came streaming in from mead cups, swords, and other props. “We captured 250 types of props,” says Gordon. “And each had a unique marker set.”

To organize all this so that the software could tell which dots belonged to which prop, and to set up the capture sessions quickly, the performance-capture team developed a digital numbering system that used colors, Braille, and bar codes. “We had a visual coding system whereby the color neon pink might represent zero and orange might represent one,” says Gordon. “We also used Braille letters so the computer could tell the difference between props.”
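As a sketch of how such a color code could read as a number, here is one possible decoding, where only the two digits Gordon names (neon pink as zero, orange as one) come from the article; the remaining colors and the base are invented:

```python
# Hypothetical sketch of the color-digit idea: each prop carries an ordered
# pattern of colored markers, and the pattern decodes to a prop number.
COLOR_DIGITS = {
    "neon_pink": 0,   # from the article
    "orange": 1,      # from the article
    "yellow": 2,      # the remaining digits are assumptions
    "green": 3,
    "cyan": 4,
}

def decode_prop_id(marker_colors):
    """Read an ordered run of marker colors as a base-N prop number."""
    base = len(COLOR_DIGITS)
    prop_id = 0
    for color in marker_colors:
        prop_id = prop_id * base + COLOR_DIGITS[color]
    return prop_id

# A mead cup tagged orange, neon pink, orange would decode as:
print(decode_prop_id(["orange", "neon_pink", "orange"]))  # 1*25 + 0*5 + 1 = 26
```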

Bar codes assigned to each actor and prop automated the setup. “We checked the props and actors in and out of the volume by scanning the bar codes,” says Gordon. “The database sees the bar code and gets the calibration and the templates, knowing which actors and props are in the shot. No one had to do this manually.”
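A minimal sketch of that check-in flow, with an invented database schema and asset names standing in for whatever Imageworks actually used:

```python
# Scanning a badge or prop bar code pulls its calibration and marker
# template from a database, so the session needs no manual setup.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE assets (
    barcode TEXT PRIMARY KEY,
    name TEXT, kind TEXT, template TEXT, calibration TEXT)""")
conn.execute("INSERT INTO assets VALUES ('A0042', 'Ray Winstone', 'actor', "
             "'body78_hands50_face121', 'cal_2006_11_03.xml')")
conn.execute("INSERT INTO assets VALUES ('P0117', 'mead cup 3', 'prop', "
             "'cup_5marker', 'cal_props_r2.xml')")

def check_in(barcode, active_session):
    """Scan a code at the volume door; load its template into the session."""
    row = conn.execute("SELECT name, kind, template, calibration FROM assets "
                       "WHERE barcode = ?", (barcode,)).fetchone()
    if row is None:
        raise KeyError(f"unknown barcode {barcode}")
    active_session[barcode] = dict(zip(("name", "kind", "template", "calibration"), row))
    return row[0]

session = {}
print(check_in("A0042", session), "is now in the volume")
```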


Digital Dailies
Data from the performance-capture stage streamed into approximately 80 computers, often as much as 2.5TB of data a day. In addition to the motion-capture data, the crew also wrangled video from nine camera operators who moved around the stage during the capture sessions.

The real-time data, captured by 44 cameras from the large markers on the actors’ bodies, moved into a system that semiautomatically applied the data to the appropriate relatively low-resolution CG models. The crew put those characters into rough CG environments within Autodesk’s MotionBuilder for Zemeckis to review at the end of each day, much as he might have looked at shot dailies on a live-action set. The difference, however, was that he and cinematographer Robert Presley, who had also been director of photography for The Polar Express, created camera moves for the shots by once again using the studio’s wheels system to control the virtual world with a real camera. Imageworks called the result, which mimicked the layout step in an animated film, the “director’s layout.”

Once Zemeckis had selected the shots he wanted, those and the matching video from the nine roving cameras moved to production editorial, where editors working on an Avid system cut the shots into scenes. “That gives us a rough edit,” Gordon says. “We had the framing, we knew how long the shots were, and we knew who was in them.”

At that point, two teams of performance-capture integrators—one for body data, the other for facial animation—cleaned up the data and put the featureless dots onto high-resolution rigs for the digital character’s body and face. The shots then rolled back into the wheels system for final camera moves and approvals, and, once approved, on into Autodesk’s Maya, where animators adjusted the performances and improved the facial animation.

“Animators spent about 25 percent of their time on body-related issues,” says Kenn McDonald, animation supervisor. “The rest of the time they worked on facial animation.”

Imageworks animators create facial expressions using a pose-based system in which the motion-capture data applies values to each pose. In addition to these blendshape poses, animators could control individual muscles.

“During integration, if an actor is smiling but the data does not reproduce that smile exactly, we had tools to tell the program to weight the data in a particular direction or exclude poses that interfered,” McDonald says, “and then, we would run the solve again. Applying the facial data is a combination of alchemy and artistry.”
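A minimal sketch of this kind of pose-based solve, with invented data and a plain least-squares solver standing in for Imageworks’ certainly more sophisticated one: per-pose weights are fit to the captured markers, and a pose that interferes can be excluded before the solve is run again.

```python
# Fit per-pose weights so the weighted sum of pose marker offsets best
# matches the captured facial markers; optionally exclude a pose and rerun.
import numpy as np

rng = np.random.default_rng(0)
n_markers, n_poses = 121, 4                  # 121 face markers, a few poses
pose_basis = rng.normal(size=(3 * n_markers, n_poses))    # offsets per pose

true_weights = np.array([0.8, 0.1, 0.0, 0.3])             # e.g. mostly "smile"
captured = pose_basis @ true_weights + rng.normal(scale=0.01, size=3 * n_markers)

def solve_pose_weights(basis, markers, excluded=()):
    """Least-squares pose weights, with some poses removed from the solve."""
    keep = [i for i in range(basis.shape[1]) if i not in excluded]
    w, *_ = np.linalg.lstsq(basis[:, keep], markers, rcond=None)
    full = np.zeros(basis.shape[1])
    full[keep] = w
    return full

print(solve_pose_weights(pose_basis, captured))                 # first solve
print(solve_pose_weights(pose_basis, captured, excluded=(2,)))  # rerun, pose 2 excluded
```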

For Grendel, McDonald created a facial rig with limited movement on one side of the creature’s face. Thus, when the integrators applied data captured from Glover, the face reacted as if it were partially paralyzed.
 

Director Robert Zemeckis directed the performances of the actors who played the digital characters and then later framed the shots using a system with which a camera operator could control a virtual camera by moving a real camera head.

During the performance-capture session, a camera dedicated to each primary actor provided close-up video reference so that later the animators could see video images on planes within a Maya scene file. “They could compare what they were doing with the actor frame by frame,” McDonald says. “The major difference between using performance-capture data and hand animating is that the animators don’t have to figure out the timing issues. The performance capture gave the animators a basis, and then the animators concentrated on getting as much performance out of the digital puppet as possible. It’s that last 10 or 15 percent of doing an animated shot that’s always the most difficult.”

Crowds of as many as 125 people populate several of the scenes—in the mead hall, for example. Because Zemeckis had specific actions he wanted people in the crowds to perform, Imageworks captured actors on the performance-capture set, including their facial performances, in groups of 18 to 20 people at a time.

Modelers working in Maya and Pixologic’s ZBrush built the crowds using variations of nine male models and six female models. Each of the 15 base models had one facial variation—a longer nose, higher brow, sharper chin, for example. “We changed areas that weren’t largely affected by facial capture,” says Sean Phillips, digital effects supervisor. Variations in hairstyles, facial hair for the men, skin coloring, and costumes helped alter the look of the base models.  
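An illustrative sketch (all asset names invented) of how 15 base models plus hair, skin, and costume variations fan out into a large crowd:

```python
# Combining a small set of base models with appearance variations yields
# far more distinct-looking crowd members than the model count suggests.
import random

base_models = [f"male_{i:02d}" for i in range(9)] + [f"female_{i:02d}" for i in range(6)]
hairstyles = ["braids", "cropped", "long_unkempt", "balding"]
skin_tones = ["pale", "ruddy", "tanned"]
costumes = ["peasant_wool", "warrior_leather", "court_linen"]

random.seed(7)
def crowd_member():
    return (random.choice(base_models), random.choice(hairstyles),
            random.choice(skin_tones), random.choice(costumes))

print(len(base_models) * len(hairstyles) * len(skin_tones) * len(costumes),
      "distinct combinations from 15 base models")
print([crowd_member() for _ in range(3)])
```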

At the start, each of the base models had a unique rig tied to the actor who performed the majority of the shots. However, Zemeckis’s shot selections often resulted in one character being performed by several actors, and in a few actors performing several characters.

Hairstyles
Sho Igarashi led a team that created and animated hair and cloth for the primary characters and the characters in the crowds; the team grew to 60 people at the peak of production. Imageworks’ hair system has evolved over several years to handle such animals as Stuart Little and such digital doubles as Superman, but it was the work on the characters in the animated feature Open Season that helped style the female characters’ hair in Beowulf. “With a technique we developed for Open Season, modelers could make a hair volume that we could grow hair through and off,” says Phillips. “That helped with the braids.” The braids also had IK chains so animators could edit the simulations as needed.
 

Animators spent most of their time working on facial expressions using the captured data and video reference shot during the performance-capture session.
 
To handle the number of characters that needed hair in Beowulf, Igarashi’s team developed a method to take the hair from one character and use it on another, like a wig that moved from model to model. “We would focus on the hero characters first, creating their braids, beards, and their long, unclean hair,” Igarashi says, “and then clone their hair onto the background characters.” They could make the hair and sideburns on the clones longer or shorter, and cut beards into goatees.
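A sketch of that “wig” idea, with an invented groom format: guide hairs authored on a hero character are copied to a background character and reshaped with simple length and trim controls.

```python
# Clone a hero groom onto a background character, scaling lengths and
# trimming regions (e.g. cutting a beard down into a goatee). The data
# layout here is hypothetical, not Imageworks' hair system.
from copy import deepcopy

hero_groom = {
    "beard": {"guide_count": 400, "length_cm": 18.0, "curl": 0.2},
    "scalp": {"guide_count": 1200, "length_cm": 35.0, "curl": 0.1},
}

def clone_groom(groom, length_scale=1.0, trims=None):
    """Copy a groom to another character, scaling or trimming regions."""
    clone = deepcopy(groom)
    for region, params in clone.items():
        params["length_cm"] *= length_scale
        if trims and region in trims:
            params["length_cm"] = min(params["length_cm"], trims[region])
    return clone

background_groom = clone_groom(hero_groom, length_scale=0.8, trims={"beard": 6.0})
print(background_groom)
```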

The effects team also gave the primary characters body hair—peach fuzz for the female characters, chest and arm hair for the male characters. “It was most important for the female characters,” Igarashi says. “It kept them looking soft.”

Costumes
Each of the characters in the film, from Beowulf’s warriors to members of the queen’s court to peasants, wore a period costume made from layers of clothing that had to move. “Creating the clothes for Beowulf was the culmination of everything we’ve tested before,” says Igarashi. “Beowulf’s warriors wore chest plates, chain mail, and boots. And, we had to represent the volume of fabric in dresses and capes accurately.”

Usually, Imageworks modelers build clothes on the models. For Beowulf, though, the cloth team worked with costume designers who dressed the cast of actors. “We spent months taking pictures of actors and body doubles in costumes,” says Igarashi, “hundreds of costumes.”

Then, using patterns the designers created, the cloth team reproduced the costumes piece by piece. “It’s more tedious,” says Phillips. “It’s not the way a modeler is used to working.” But, it resulted in clothes with the correct volume of fabric, which was more cloth than they expected in the dresses and capes.

The cloth team used the clothes they built for simulation only. When they felt confident they had the correct volume, they sent the costumes to modelers, who created more than 100 layered costumes as renderable models. Once the costumes were built, custom tools allowed the artists to rebuild them procedurally and accommodate changes quickly.

“We made leaps of understanding in how to build a costume, how to build layers, what’s seen and not seen,” says Igarashi, “also, in the logistics of building an army of costumes so that we had control over dynamics in a way that was modular and shareable.”

For efficiency and artistic control, one artist took responsibility for both hair and cloth simulation on a character. “There was a lot of interaction between hair and clothes,” says Igarashi, “so it made sense to have one person making decisions within the context of a shot.” An artist wouldn’t need to perfect a cloth simulation, for example, that the character’s hair would later hide. Typically, cloth simulations ran first, and then the artists ran the simulations for the hair, which riggers had set up with controls that could cause it to stick a bit as it moved over the clothes.
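A sketch of that ordering, with illustrative function names and a made-up friction value standing in for Imageworks’ tools: cloth runs first so the hair pass can collide against the settled garments.

```python
# Run the cloth pass per garment, cache it, then run the hair pass against
# those caches with a small friction term so hair sticks slightly to cloth.

def simulate_shot(character, frames, hair_cloth_friction=0.3):
    cloth_caches = {}
    for garment in character["garments"]:           # cloth pass first
        cloth_caches[garment] = run_cloth_sim(garment, frames)
    # hair pass second, colliding against the cached cloth
    return run_hair_sim(character["groom"], frames,
                        colliders=list(cloth_caches.values()),
                        friction=hair_cloth_friction)

# Stand-in simulators so the sketch runs end to end:
def run_cloth_sim(garment, frames):
    return f"cache/{garment}.{frames[0]}-{frames[-1]}"

def run_hair_sim(groom, frames, colliders, friction):
    return f"cache/{groom}_f{friction}.{frames[0]}-{frames[-1]}"

wealthow = {"garments": ["dress_under", "dress_over", "cape"], "groom": "wealthow_braids"}
print(simulate_shot(wealthow, frames=range(101, 241)))
```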
 
The Way They Look
Although the characters, costumes, and hair might move perfectly, the illusion would be spoiled if they looked unreal. “The film was never intended to look like it was photographed,” says Phillips. “Any time we presented something to Bob [Zemeckis] that leaned toward photorealism, he’d respond to that, but he didn’t want it to look exactly like that. We wanted it to look real but not photographic, like an old master painting.”
 

Effects artists simulated the clothing first, then CG hair.

With this in mind, approximately 20 texture painters, shader writers, technical directors, and other artists on the human-look dev team created the texture and color of the human characters’ skin and their eyes. Many of the artists had previously worked on digital doubles for Superman and Spider-Man, and on the characters in The Polar Express and Final Fantasy. “We had a brain trust of people who had created digital humans,” says Phillips.

Although the human-look dev team photographed the actors using the same diffuse lighting rig and cameras with polarized filters with which they had captured photos of Tobey Maguire and others for Spider-Man 3, texture painters still painted all the textures by hand. “We used the photographs as a starting point,” explains Phillips. “But we developed a different scheme for painting the characters. We had a similar texture space from one character to the next.” By keeping the UVs the same from one character to another, they could share texture maps among the characters.

“We could morph from one into another,” says Phillips. “Some of the guys thought we’d see weird stretching, but we didn’t have as many problems as we thought we would. It worked particularly well for background characters.”
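The morph Phillips describes works only because every character shares the same UV layout, so a map painted for one face lines up texel-for-texel with another. A minimal numpy sketch of that blend, with random stand-in maps:

```python
# Linear-blend two skin textures; this is only meaningful because the
# shared UV layout makes the maps correspond point for point.
import numpy as np

H, W = 512, 512
skin_a = np.random.default_rng(1).random((H, W, 3)).astype(np.float32)
skin_b = np.random.default_rng(2).random((H, W, 3)).astype(np.float32)

def morph_textures(tex_a, tex_b, t):
    """Blend factor t in [0, 1]: 0 returns tex_a, 1 returns tex_b."""
    assert tex_a.shape == tex_b.shape, "shared UV layout implies equal map shapes"
    return (1.0 - t) * tex_a + t * tex_b

background_skin = morph_textures(skin_a, skin_b, 0.35)
print(background_skin.shape, background_skin.dtype)
```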

For the skin, lighters relied on a Pixar RenderMan shader derived from a technique published in Nvidia’s GPU Gems 2 book that approximates raytracing. “We called it our indirect lighting model,” Phillips says. “It emulated the bounce light you get from some raytracing shaders, but it’s more efficient.”
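The article doesn’t detail the shader, so as a stand-in here is wrap lighting, one common cheap approximation of soft bounce: the diffuse term is allowed to wrap past the terminator instead of clamping to zero at 90 degrees. This is illustrative only, not the GPU Gems 2 technique Imageworks actually derived their shader from.

```python
# Compare plain Lambert diffuse with a "wrapped" diffuse term that stays
# positive slightly past the terminator, faking soft indirect fill.
import numpy as np

def wrap_diffuse(normal, light_dir, wrap=0.5):
    """Diffuse term that stays positive slightly past 90 degrees."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max((np.dot(n, l) + wrap) / (1.0 + wrap), 0.0)

n = np.array([0.0, 1.0, 0.0])
for angle in (0, 60, 90, 110):
    rad = np.radians(angle)
    l = np.array([np.sin(rad), np.cos(rad), 0.0])
    print(f"{angle:>3} deg  plain={max(np.cos(rad), 0):.2f}  wrapped={wrap_diffuse(n, l):.2f}")
```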

For the eyes, the artists used a fully refracting model, which the studio had been using for digital doubles in live-action films. Close-ups of the actors’ eyes taken during the performance-capture sessions provided reference. “We all became experts on human eye anatomy,” says Phillips. “Some of the hardest things were getting the contact shadow on the eyeball from the eyelid to look right. It doesn’t sound like much, but it was something we struggled with. We didn’t want to have to adjust it on a shot-by-shot basis.”
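A fully refracting eye model bends rays at the cornea so the iris appears correctly displaced. A minimal sketch of that single refraction step using Snell’s law; the cornea index of roughly 1.376 is a standard physiological value, and everything else here is simplified rather than Imageworks’ model:

```python
# Refract a ray entering the cornea using the vector form of Snell's law.
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.376):
    """Refract a unit incident vector through a surface with unit normal."""
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(n, i)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    return eta * i + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

ray = refract(np.array([0.3, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(ray)  # the ray bends toward the normal as it enters the denser cornea
```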

If Zemeckis had wanted the look to be photoreal and Imageworks had succeeded, then someone might argue that he could have created the film as easily, perhaps more easily, with photography. But that wasn’t the goal. Because Beowulf is a legend, populating the film with stylized people enhances the mythic feeling. Even so, while some might disagree about whether Beowulf is a next-generation animated film or a next-generation live-action film, no one would disagree that the characters in Beowulf represent another step in using computer graphics to create intriguing digital humans. 

 
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
 
 

CG SETS, EFFECTS, AND A DRAGON
Imageworks created everything in Beowulf with computer graphics, including the sets, the environments, the trees, fire, water, and one golden dragon.
 
The film opens with Beowulf arriving by ship, but most of the action happens in Geatland, especially in Grendel’s cave and Heorot, the mead hall. “We used a lot of 3D matte paintings integrated with 3D geometry in the foreground for the large outdoor sets, where the characters are never in one place very long,” says Sean Phillips, digital effects supervisor. Matte painters worked in Adobe’s Photoshop and Maxon’s Cinema 4D. Modelers created the mead hall, though, in Autodesk’s Maya, in full 3D.
 

For water in wide shots, the effects artists used a wide water plane with diffuse elements painted in by matte painters, and added specular highlights from 3D water. “That worked well for the aerial shots,” Phillips says.

However, to create water inside Grendel’s cave, Theo Vandernoot, effects animation supervisor, applied a shader he wrote to a flat plane. Artists could comb the direction of ripples on the plane. A Side Effects Software Houdini particle simulation provided splashes on top. For the crashing ocean water surrounding Beowulf’s boat, the crew used a grid-based shader borrowed from Surf’s Up (see “Radical, Dude,” June 2007) and particle-based foam, sea spray, rain, and fog. But, for small fluids—honey-wine (mead) spilling from a cup, water thrown onto the floor—they used a proprietary fluid simulator written inside Houdini.
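A sketch of the “combable” ripple idea: a flat plane whose height comes from a sine wave, with a direction the artist paints to steer the ripples. The math here is generic water-shading practice, not Vandernoot’s actual shader.

```python
# Height of a directional ripple field on a flat plane; comb_angle is the
# artist-painted direction the ripples travel.
import numpy as np

def ripple_height(x, z, comb_angle, t, amplitude=0.02, wavelength=0.5, speed=1.2):
    """Ripple height at point (x, z) and time t (meters and seconds)."""
    d = np.array([np.cos(comb_angle), np.sin(comb_angle)])   # combed direction
    phase = 2.0 * np.pi * ((x * d[0] + z * d[1]) / wavelength - speed * t)
    return amplitude * np.sin(phase)

# Ripples combed 30 degrees off axis, sampled along a line across the cave pool:
for x in np.linspace(0.0, 1.0, 5):
    print(f"x={x:.2f}  h={ripple_height(x, 0.0, np.radians(30), t=0.0):+.4f}")
```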

Technology adapted from another film, Ghost Rider (see “Blazing Effects,” February 2007), helped the artists create fire. “We used basically the same technique—Maya-generated fluid fields directed by a Houdini wrapper for all the interactive squirting and splashing fire,” says Vandernoot. “We had 30 scripts all calling each other.”

This particularly helped with the dragon, an entirely hand-animated CG creature that flies with bat-like wings and breathes fire. Two animators blocked out the dragon’s performance and then handed those files to other animators who refined the animation in much the same way as they refined performance-captured animation. For scenes with Beowulf riding the dragon, the crew captured actor Ray Winstone riding a gimbaled motion-control rig.

The most innovative work on the part of the effects crew may have been in how they delivered the effects. They rendered only the atmospherics—the smoke and volumetric steam—and large-scale water. Otherwise, they did test composites and shipped to the lighters only effects entities, as bits of code. “You could think of it as a bin that contains all the information for rendering, like a snippet of a RIB file,” Vandernoot says. Thus, the lighters could dress a set with effects—fire in torches, for example—and then hit the render button.
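A sketch of delivering an effect as renderable description rather than pixels, in the spirit of the RIB-snippet “bin” Vandernoot describes. The file names here are invented; ReadArchive is a standard RIB request that pulls a pre-baked archive into the scene at render time.

```python
# Emit a RIB snippet that places a pre-baked torch-fire archive at a point
# in the set; a lighter can drop these in and simply hit render.
def torch_fire_snippet(position, archive="effects/torch_fire_v3.rib"):
    x, y, z = position
    return (f"AttributeBegin\n"
            f"  Translate {x} {y} {z}\n"
            f'  ReadArchive "{archive}"\n'
            f"AttributeEnd\n")

# Dressing the mead-hall set with two torches:
for pos in [(-4.0, 2.5, 0.0), (4.0, 2.5, 0.0)]:
    print(torch_fire_snippet(pos))
```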

As with performance capture, costume design, hairstyling, textures, and skin rendering, the crew had to become more efficient to create the volume of computer graphics needed for Beowulf.

“We had to design the artistic environment to get more iterations,” says Jerome Chen, visual effects supervisor. “The higher the number of iterations, the better the work.” –BR