Crafty Effects
Issue: Volume 39, Issue 4 (Jul/Aug 2016)




Translating interactive video games into passive cinema has never been easy, but with nearly $400 million in box-office receipts after two weeks in theaters, Warcraft has to be considered a success. It is already the biggest video game movie ever and will likely be the first to earn more than $400 million worldwide.

The film accomplished that despite dismal reviews from critics who, as a whole, found the story lacking. Directed by Duncan Jones, Universal Pictures’ Warcraft: The Beginning might be the first film for which critics found the CG characters more compelling than the live-action actors with whom they share the screen.

LA Times critic Justin Chang notes the “skillful use of performance-capture technology to bring a fictional race to credible life (Kebbell’s Durotan is an expressive standout).” He then draws a comparison to the human characters: “Jones pointedly introduces Durotan and his fellow Orcs first, and we soon grasp that, Gul’dan’s unchecked megalomania notwithstanding, the horde is bent on survival rather than domination. There’s also the unfortunate fact that most of the human characters are far less memorably realized.”

The Village Voice’s Alan Scherstuhl: “…the filmmakers honor the games’ multi-various perspectives by casting an Orc (Toby Kebbell) as Warcraft’s first on-screen hero – and depicting Orc marriage with big-lug tenderness. These scenes are… the only ones in the film that suggest that anyone in Azeroth (or the Orc world invading it) has ever felt anything besides go-here/fight-that.”

Manohla Dargis, The New York Times: “It says something about Mr. Jones’ choices that he gives Durotan so much screen time and that Mr. Kebbell, with the help of the special-effects wizards, makes good use of that time with a nuanced, moist-eyed turn that evokes old-studio gladiators like Victor Mature. Durotan is a beautiful brute, and all the more human for it.”

The Orcs are beastly humanoid characters with tiny heads, tusks, and enormous hands, and they are always CG characters. But these critics – and others not quoted – haven’t labeled the Orcs as CG characters. In fact, the critics talk about the characters through the performances of the motion-captured actors. It’s a quiet tribute to the visual effects artists on the film.

As anyone creating visual effects for films knows, there is more to bringing a character to life than transferring motion-capture data from an actor to a digital model. Mr. Kebbell’s nuanced, moist-eyed turn would have been far less successful if his character had gazed into the camera with lifeless eyes, straw hair, and silicone skin.

Bill Westenhofer was overall visual effects supervisor for Warcraft, with Jason Smith and Jeff White the visual effects supervisors at Industrial Light & Magic. ILM was the primary vendor, creating 100 Orc characters, including the aforementioned Durotan and seven other heroes. 


Although Warcraft takes place in many environments that World of Warcraft players will find familiar, these locations were more often sets than CG.

“This is a movie that people will assume has a lot of CG environments,” Smith says. “It’s not true. The forest was a really large set, 100 by 50 feet, with giant, full-size trees that had six- to eight-foot-diameter trunks. We could get 50 to 80 feet of trees in camera. We filled in pools of blue in the distance.”

Similarly, another environment, the Stormwind Market, was a set with the first story and a half of all the buildings on a stage. Visual effects artists added the top floors and extended the city into the distance. They also added background shelves to a Karazhan library, a ceiling to the energy chamber, and extended other sets. Even the battles take place on a large set 50 to 100 feet long. ILM artists added the CG characters and mountains in the distances to live-action plates shot on set.

“One interesting thing was creating the idea of a small world,” Smith says. “A character might be standing in the snow, but in the distance was desert red rock in one direction, and in the other direction, a forest. We used that in the film to conjure up the feeling of the game. If you walk a little while from a snowy area, you’ll be in a desert. The game really did define a lot of how the places in the world should look.”

Thus, ILM’s work in San Francisco, Singapore, and Vancouver centered on the Orcs.

“Our Orcs had to hold up half the movie,” Smith says. “I wanted them to be as enchanting as the fantasy makeup characters I loved growing up, but their eyes are very human. To get their humanity right, we had to level up in a lot of areas.”

Those areas included skin shading, facial capture, hair grooming and simulation, and lighting.



For Warcraft, ILM artists used Pixar’s RenderMan – the Reyes version, not the newer RIS version.

“We started development three years ago,” Smith says. “We were definitely still a Reyes show.”

The advances they made in skin shading were more artistic than technical.

“The developments we made were in tuning the skin relative to real-world photography,” Smith says. “We started with photo reference of a real person and looked at how much red was bleeding across the line, the transition from sunlit to shadow. When we compared our skin to the photographs, we found that our scatter [subsurface scattering color] tended to be a little too neutral. If we didn’t have enough red in the scatter, enough blood, the skin felt like wax and it was all one color. So we really pushed the input to the system. At a certain depth, the scattering picks up a nice red hue and it warms the skin.”

Some of the Orcs are colored green, and for those, the artists used yellow rather than red. “Jeff [White] and I lived this already with the Hulk. If you scatter green, the skin really does look like silicone. If you add red, it looks gray. What we found on Hulk is that the right way to approach scattering is to think about adding warmth. On a green person, that’s by moving toward yellow.”
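The principle Smith describes – warming the subsurface scatter color toward blood-red for flesh tones, or toward yellow for green skin, rather than leaving it neutral – can be illustrated with a small sketch. This is not ILM’s shader; the color values, the depth ramp, and the green-skin test are illustrative assumptions.

```python
# Illustrative sketch only (not ILM's skin shader): shift the subsurface
# scatter tint toward a warm hue as light penetrates deeper, instead of
# a neutral tint that reads as wax.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def scatter_color(surface_rgb, depth, max_depth=1.0):
    """Warm the scatter tint with depth.

    surface_rgb: linear RGB of the skin surface (0..1).
    depth: scattering depth in arbitrary units (0 = surface).
    """
    r, g, b = surface_rgb
    # Hypothetical warm targets: yellow if the skin is predominantly
    # green, otherwise red (the "blood" hue Smith describes).
    warm_target = (1.0, 0.9, 0.1) if g > r else (0.8, 0.15, 0.1)
    t = min(depth / max_depth, 1.0)
    return lerp(surface_rgb, warm_target, t)

# A neutral flesh tone warms toward red at depth...
print(scatter_color((0.6, 0.45, 0.4), 0.8))
# ...while green Orc skin warms toward yellow rather than red.
print(scatter_color((0.3, 0.55, 0.3), 0.8))
```

The point of the sketch is only the direction of the shift: deep scatter picks up warmth, so the skin stops looking like wax or silicone.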

As they had done for Hulk, the crew took a life cast of an actor – in this case Robert Kazinsky, who plays the character Orgrim – and from that extracted pore-level detail.


ILM has perfected its method for on-set motion capture since first using IMocap for Davy Jones in Pirates of the Caribbean: Dead Man’s Chest.

“On this film, we would be working indoors on soundstages, so we decided to get the best of both worlds,” says Animation Supervisor Hal Hickel. Because the greenscreen sets were enormous, the motion-capture team was able to hide cameras in the large trees and other props to create a motion-capture volume as they would on a typical motion-capture stage.

“We designed the system around captur-ing the performances in the environment,” White says. “We wanted to get the reaction to the environment in the actors’ faces.”

The actors wore suits with active and passive retro-reflective markers. Director Jones could see the characters visualized on set.

“We’d get a great mocap performance,” Jones says. “The actors would leave. And we’d play back the low-res version of what the characters had done. We could see the shots and recompose them to move the camera around or get it into a close-up.”

Riggers assigned each Orc a skeleton based on the shape of its outer geometry, using a system named BlockParty. The creature’s muscles moved based on its performance; simulation changed the muscle stiffness.

For facial capture and animation, the actors wore a helmet with cameras pointed at their faces. The motion-capture and animation teams at ILM then used an updated version of the studio’s Muse software, which had been developed for Turtles, to move the data onto the CG characters.

“Turtles had more demand for editability,” Smith says, “for splicing different takes. Our goal was to translate the performance directly from the actors.”

For the humanoid Warcraft characters, the crew referenced their work on Davy Jones, for which Hickel had received an Oscar for Best Visual Effects.

“The key to our work on Warcraft was the philosophy we developed and the things we learned working with [Davy Jones actor] Bill Nighy,” Hickel says. “Some people want to change everything: Start with the first line of one take, then use the third line of another. Duncan [Jones, director] calls the result a Frankenstein performance. We found the more you do that, the less believable and authentic the performance feels. There is a dense web of connections between the expressions on an actor’s face and their body language. The tilt of a head, a blink all add up to something that starts to not work anymore if you rearrange it. We treated motion capture as live action. What you get on the day is what you get. We don’t stomp on it and modify it. We carry it like a fragile thing through the process.”

To accomplish that, ILM built a CG version of each actor who would be motion-captured, and before moving data onto their Orc character, the crew first applied the data to their CG doppelganger. Only when satisfied that the data caught the actor’s facial expressions accurately on the digital double did the team move it onto the Orc.

“We want to ensure that what the actor did reads on the Orc to the full extent,” Hickel says. “We use the same mesh but re-form it into the Orc shape. There are shared landmarks on both models. There was still significant animator input, but it generally was to add things lacking in the facial capture (neck stress, swallowing, and so forth), or to ensure the human lip sync transferred properly to the larger, tusked Orc mouths.”

The advancements the technical crew made to Muse were in improving the fidelity of the data as it moved from actor to CG model. They updated the rigid tracking to solve the overall motion of the head and jaw. Then, a component dubbed SnapSolve moved the performance point-by-point, frame-by-frame onto the character. Animators worked with simplified offset controls on top of the SnapSolve. They could, but rarely needed to, have the data moved completely onto animation controls in ILM’s proprietary facial animation software Fez.
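Because the actor’s digital double and the Orc share mesh topology, as Hickel explains, the point-by-point transfer can be pictured as carrying each vertex’s offset from the actor’s neutral pose over to the Orc’s neutral shape. The sketch below is a generic delta-transfer illustration under that assumption – it is not SnapSolve, and the function name and data layout are invented.

```python
# Illustrative sketch (not ILM's SnapSolve): with shared topology, a
# captured facial frame can be expressed as per-vertex deltas from the
# actor's neutral pose and re-applied on top of the Orc's neutral shape.

def retarget_frame(actor_neutral, actor_frame, orc_neutral):
    """Move one captured facial frame from actor mesh to Orc mesh.

    All arguments are lists of (x, y, z) vertex positions with
    identical ordering (shared topology).
    """
    orc_frame = []
    for an, af, on in zip(actor_neutral, actor_frame, orc_neutral):
        delta = tuple(f - n for f, n in zip(af, an))  # actor's offset this frame
        orc_frame.append(tuple(o + d for o, d in zip(on, delta)))
    return orc_frame

# Toy example: one vertex moves +0.2 in y on the actor; the same
# offset lands on the corresponding Orc vertex.
actor_neutral = [(0.0, 0.0, 0.0)]
actor_frame   = [(0.0, 0.2, 0.0)]
orc_neutral   = [(0.0, 1.0, 0.5)]
print(retarget_frame(actor_neutral, actor_frame, orc_neutral))  # [(0.0, 1.2, 0.5)]
```

The animator-facing offset controls the article mentions would then sit on top of a result like this, rather than replacing it.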


At SIGGRAPH, ILM’s R&D Engineers Stephen Bowline and Andrew Johnson will present a talk on the studio’s new system, HairCraft, which the pair developed for Warcraft. The system was also used to great success to create the bear’s fur in The Revenant.

Hair systems are not new to ILM. In fact, 20 years ago, ILM’s Carl Frederick and Jeff Yost created a state-of-the-art hair grooming system for Jumanji’s animals and then improved that system to create the ape in Mighty Joe Young. Hair systems at ILM and at other studios have helped make digital doubles, CG characters, and CG animals believable in numerous live-action and animated films over the years, many of which have received Oscar nominations and awards. But, surprisingly, one area that demanded new technology for Warcraft was hair, fur, and feathers. What prompted developing a new system for this film?


“Three different important reasons,” Bowline says. “The complexity of the grooms, the number of assets and creatures with hair, and the scale and turnaround time. Back when we did Mighty Joe Young, everything was controlled with texture maps. It was very labor-intensive. For Warcraft, we had 287 photoreal assets that had to be created and dressed with hair. Forty-two of those were unique creatures, not variants. And of those, 22 had different costumes. We didn’t just have creatures with hair; we had costumes with hair, fur, and pelts. And, they all required simulation.”

To untangle the complexity, Bowline and Johnson combined two strengths: direct artist manipulation with sculpting tools and a procedural system.

“Our artists are used to manipulating curves and to using a node-based procedural system within Zeno,” Bowline says. “So we took that interactive ability of an artist to sculpt very specific and art-directed silhouettes, an envelope of curves, whether for a creature or digital double, and used that as input to a procedural system where we can add detail.”


The process begins with modelers who place curves to define flow lines and create shapes, sculpting and manipulating the curves individually or in groups.

“They might create an envelope with a small number of curve guides, say 1,000, to create a gross shape,” Bowline says. “Those 1,000 would become 10,000 procedurally and could be used to direct the rest of the hair, which could be in the millions.”
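The densification Bowline describes – a thousand sculpted guides directing millions of render hairs – rests on interpolation: each render hair blends the shapes of nearby guides. The sketch below illustrates one common approach, inverse-distance blending; HairCraft’s actual interpolation scheme is not described in the article, so treat the weighting and data layout as assumptions.

```python
# Illustrative sketch (not HairCraft): build one render hair by blending
# the shapes of nearby guide curves with inverse-distance weights.

import math

def blend_hair(root, guides):
    """Build one render hair at scalp position `root`.

    guides: list of (root_xy, curve) where curve is a list of (x, y, z)
    offsets from the guide's root; all curves share point count.
    """
    # Inverse-distance weights to each guide root (epsilon avoids /0).
    weights = [1.0 / (math.dist(root, gr) + 1e-6) for gr, _ in guides]
    total = sum(weights)
    npts = len(guides[0][1])
    hair = []
    for i in range(npts):
        x = sum(w * g[1][i][0] for w, g in zip(weights, guides)) / total
        y = sum(w * g[1][i][1] for w, g in zip(weights, guides)) / total
        z = sum(w * g[1][i][2] for w, g in zip(weights, guides)) / total
        hair.append((x, y, z))
    return hair

# Two guides leaning opposite ways; a root midway between them
# yields an averaged, upright hair.
guides = [((0.0, 0.0), [(0.0, 0.0, 0.0), (0.5, 0.0, 1.0)]),
          ((2.0, 0.0), [(0.0, 0.0, 0.0), (-0.5, 0.0, 1.0)])]
print(blend_hair((1.0, 0.0), guides))  # [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
```

Run at scale, a function like this is what lets 1,000 sculpted guides stand in for millions of final hairs.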


The modelers also might sculpt volumes using their choice of software tools – Pixologic’s ZBrush, The Foundry’s Mari, ILM’s Zeno, or other software – and place them on a CG character. The volumes would later be filled with hair.

“We had characters with intricate braids, so the modeling staff would create the tubes and we’d add detail procedurally,” Bowline says. “We’d also use volumes for fur-lined cuffs, animal pelts over a character’s shoulder, and other costume parts. We could simulate those volumes with a cloth engine.”

Look-development artists would then use the curves and tubes placed by the modelers to generate specific hair grooms.
“They aren’t just manipulating guide curve sculpts,” Bowline says. “It’s more technical than that. They spent most of their time working in the node-based system, although some created new curves and sculpted, as well. They look at the artwork and decompose a hairstyle or groom into many components we call layers. Some hair might be clumped together, some might be filler underneath, some wild hairs might corkscrew off. They create an array of the types of hair – fine, thick, and so forth – and then work on those individually, adding layers and sub-layers to build complex grooms.”

One layer might have tufts of hair. Other layers might define frizzy, coarse, or twisty hair. Once created, the artists often saved pre-defined layers as templates in a library of options that others on the crew could load as a starting point.
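The layer-and-template workflow can be pictured as named parameter sets saved to a shared library. This is only a toy illustration of the idea – the layer names, parameters, and library functions below are all invented, not HairCraft’s node graph.

```python
# Illustrative sketch (not HairCraft): a groom decomposed into named
# layers, saved as a template other artists can load and override.

LIBRARY = {}  # stands in for a studio-wide template library

def save_template(name, layers):
    LIBRARY[name] = [dict(l) for l in layers]  # copy so later edits don't leak

def load_template(name, **overrides):
    layers = [dict(l) for l in LIBRARY[name]]
    for layer in layers:
        layer.update({k: v for k, v in overrides.items() if k in layer})
    return layers

# An artist builds a coarse beard groom from stacked layers...
beard = [
    {"layer": "clumps", "count": 5000,   "curl": 0.2},
    {"layer": "filler", "count": 200000, "curl": 0.05},
    {"layer": "strays", "count": 300,    "curl": 0.8},
]
save_template("orc_beard", beard)

# ...and a colleague loads it as a starting point, dialing up curl.
wild = load_template("orc_beard", curl=0.5)
print([l["curl"] for l in wild])  # [0.5, 0.5, 0.5]
```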

The geometry from these procedural, node-based graphs developed by the look-development team moves on next to creature development for simulation.

“HairCraft interfaces directly with our Physbam simulation engine,” Bowline says. “It’s a round trip. We feed the geometry through Physbam, and then the simulation goes right back into the procedural graph to deform all the rendered hair, the instanced hairs. We can dial it up or down to have more detail when a character is close to camera, and less when it’s farther.”

The artists can also directly access individual hairs and select appropriate populations of hairs to perform the sim. They might, for example, send individual stray hairs wiggling in the breeze, rather than moving with the rest of the hair.
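The “dial it up or down” behavior and the ability to select populations such as stray hairs can be read as a level-of-detail choice over which guides get simulated. The sketch below illustrates that reading; the distance thresholds, stride heuristic, and stray flag are assumptions, not ILM’s pipeline.

```python
# Illustrative sketch (not ILM's pipeline): choose which guide curves to
# simulate based on distance to camera, always keeping tagged strays so
# they can move independently of the rest of the hair.

def select_sim_guides(guides, camera_distance, near=5.0, far=50.0):
    """Pick the guide curves to simulate for one shot.

    guides: list of dicts with an "id" and a "stray" flag.
    Near camera we simulate every guide; far away, a sparse subset.
    """
    t = min(max((camera_distance - near) / (far - near), 0.0), 1.0)
    stride = 1 + int(t * 9)  # every guide up close, every 10th far away
    return [g for i, g in enumerate(guides)
            if i % stride == 0 or g["stray"]]

guides = [{"id": i, "stray": (i == 7)} for i in range(20)]
print(len(select_sim_guides(guides, camera_distance=3.0)))   # 20: close-up, all guides
print(len(select_sim_guides(guides, camera_distance=60.0)))  # 3: sparse subset plus the stray
```

Because the procedural graph regenerates the render hairs from whatever guides are simulated, this dial needs no new setup per shot – which is the efficiency the article describes.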


Orc Design

Working from concept art provided by Duncan Jones’ production team, Christian Alzmann and a team of concept artists at ILM devised detailed costumes and make-up looks for each of the five clans.

“We wanted to make them distinguishable from one another,” Alzmann says. For example, one uses tree bark for armor, another uses bones, and a third, metal. Colors change from one clan to another.

“If they are on the battleground, we wanted people to say, OK, they have ram horns, metal, and black and red on their faces,” Alzmann says. “That’s Blackrock.”

All the Orcs also have bruises, piercings, scars, and tattoos.

“They have massive hands, so we made sure the tattoos look like someone drew them with a fist,” Alzmann says.


“Look-dev artists spend a long time crafting the style,” Bowline says. “We don’t want the simulation to smear that out and average the dynamics.”

In addition to using HairCraft for the fantastical creatures in Warcraft, the ILM artists also used the system to create the extraordinary realism of the bear in The Revenant. Although the engineers developed HairCraft for Warcraft, the long production schedule meant the system was used on The Revenant before Warcraft’s release.

“One of the reasons we could pull off the realism in Revenant was the interface with the dynamics, the hybrid procedural/interactive system,” Bowline says. “Being able to sculpt a still image of hair on something is completely different than having it move realistically with the character.”


ILM primarily used two types of dynamic systems to move the hair. One is a strand-based system with articulated rigid curves developed by R&D engineer Christopher Twigg some years ago.

“The other thing we do is simulate higher-level geometry like tetrahedral meshes inside the procedural system,” Bowline says.


“This is good for hair in beards that moves together in a giant Tet mesh without much individual hair movement. We simulated the braids like rigid chains that deform the tube geometry; the hair grows inside the tubes.”

They can also generate this higher-level geometry along the length of each curve; that is, create a tetrahedral mesh for each curve. “It’s an old technique that we developed eight years ago, but it wasn’t fully procedural,” Bowline explains. “Now it’s integrated and works extremely well. We used it for The Revenant. The strand-based system is faster, but it has fewer degrees of freedom – we can bend and twist, but there isn’t much stretch. With the tetrahedral mesh, we have many degrees of freedom, and it responds well to collisions. It takes longer; however, the results are extraordinary.”
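The trade-off Bowline lays out – strand solver: fast but limited to bend and twist; tetrahedral embedding: richer motion and collisions but slower – amounts to a per-population budget decision. The sketch below makes that decision explicit; the per-guide cost numbers and function are hypothetical, used only to show the shape of the choice.

```python
# Illustrative sketch (not ILM's API): choose a hair solver for a
# population of guides by weighing degrees of freedom against cost.
# The per-guide timings below are made-up placeholders.

def pick_solver(guide_count, needs_stretch,
                frame_budget_s=60.0, tet_cost_s=0.02):
    """Strand solver: fast, bend/twist only. Tet mesh: stretch and
    robust collisions, but roughly an order of magnitude slower."""
    if needs_stretch and guide_count * tet_cost_s <= frame_budget_s:
        return "tet_mesh"
    return "strand"

# A beard with 2,000 guides that should move as a volume: affordable tet mesh.
print(pick_solver(2000, needs_stretch=True))    # tet_mesh
# A close-up paw needing every one of 50,000 guides: fall back to strands,
# as in the Revenant shot the article describes.
print(pick_solver(50000, needs_stretch=True))   # strand
```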

In some cases, the artists had the two simulation systems working together.

“One of our artists is fond of the tetrahedral embedding, of having a Tet mesh per hair,” Bowline says. “On the Revenant, though, we had a shot where the bear’s paw was close to camera. The simulation artist needed a lot of curves to get accurate collisions, so even though she had set it up to use the tetrahedral mesh, she could switch to the strand solver to simulate every guide on the paw. These things can be combined.”

By integrating the procedural system with the dynamics, ILM helped the artists become more efficient without sacrificing complexity.

“In Warcraft, there’s a creature called the Frostwolf that has 7.5 million hairs,” Bowline says. “We don’t simulate 7.5 million hairs, so the ones we do sim have to be the right ones. We want hair in tufts or in a clump to move together, but we don’t want to interpolate across tufts. We also don’t want a small number of hairs to move a large number of tufts.”

“If you look at hair simulation in some films, you see too much uniformity,” Bowline adds. “Hair moves together too much because you can simulate only so many things. But, we can move the dial on a shot-by-shot basis. If a creature is close to the camera, we want to simulate every single guide in that frame. We want high-resolution collisions and heterogeneous behavior. But, we don’t want the simulation artist to have to build a different setup. With the procedural system, we can get to the right population and the right number of hairs.”


For the crowd simulation in Warcraft, Industrial Light & Magic partnered with Hybride.
“They used their in-house tools and [Softimage] XSI,” says Jason Smith, visual effects supervisor, “and worked with a huge library of actions captured at Animatrik’s motion-capture studio. They could quickly randomize the characters with, say, half one clan and half the other, or maybe 10 percent female and 90 percent male, and populate huge battle scenes quickly with hundreds of characters.”
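The quick randomization Smith describes – half one clan and half the other, 10 percent female, each agent pulling from a library of captured actions – is straightforward to picture as weighted sampling. This sketch is an illustration of the idea only, not Hybride’s tools; the clan names, ratios, and clip names are taken from or modeled on the article, and the seeding is an added assumption for reproducibility.

```python
# Illustrative sketch (not Hybride's tools): populate a battle crowd
# with given clan and gender proportions, each agent assigned a motion
# clip from a capture library.

import random

def populate_crowd(n, clan_weights, female_ratio, clip_library, seed=0):
    rng = random.Random(seed)  # seeded so the crowd is reproducible per shot
    clans = list(clan_weights)
    weights = [clan_weights[c] for c in clans]
    crowd = []
    for _ in range(n):
        crowd.append({
            "clan": rng.choices(clans, weights)[0],
            "sex": "F" if rng.random() < female_ratio else "M",
            "clip": rng.choice(clip_library),
        })
    return crowd

crowd = populate_crowd(
    n=500,
    clan_weights={"Frostwolf": 0.5, "Blackrock": 0.5},  # half and half
    female_ratio=0.10,                                   # 10 percent female
    clip_library=["charge", "swing", "block", "roar"],
)
print(len(crowd), sum(a["sex"] == "F" for a in crowd))
```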



Ultimately, fitting believable humanoid characters into live-action plates depends on the skill of compositing artists, who rely on the ability of lighting artists to match the light in the photographed plates. To help these artists, ILM came up with a new technique called LightCraft for creating a 3D representation of the lighting in the live-action environment.

“Normally when we capture lighting in an environment, we use a sphere,” Smith says. “With LightCraft, we take one sphere like normal, and then also take one a foot higher so we have a little parallax between them. Using that parallax, we can look at an image and decide how far away each piece is.”
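Smith’s two-sphere setup is a triangulation: the same light appears at slightly different elevation angles in probes captured a foot apart, and that parallax gives its distance. The sketch below works through the geometry for a single light; it illustrates the principle only and is not LightCraft’s implementation.

```python
# Illustrative sketch of the parallax principle behind LightCraft (not
# ILM's tool): a light seen from two probes one foot apart vertically
# shifts in elevation angle, and that shift triangulates its distance.

import math

def light_distance(elev_low, elev_high, baseline=1.0):
    """Horizontal distance to a light from two probe captures.

    elev_low / elev_high: elevation angle (radians) of the light as seen
    from the lower and upper probe; baseline: vertical gap in feet.
    """
    # tan(elev_low) = h / d  and  tan(elev_high) = (h - baseline) / d,
    # so  d = baseline / (tan(elev_low) - tan(elev_high)).
    return baseline / (math.tan(elev_low) - math.tan(elev_high))

# A light 10 ft away horizontally, 5 ft above the lower probe:
theta1 = math.atan2(5.0, 10.0)
theta2 = math.atan2(4.0, 10.0)  # the probe one foot higher sees it one foot lower
print(round(light_distance(theta1, theta2), 3))  # 10.0
```

A distant light shows almost no parallax between the two captures, while a nearby one shifts noticeably – which is exactly what lets the artists place reconstructed lights at the right depth.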

Thus, the lighting artists could use the toolset to construct lights at different depths.

“LightCraft is just a toolset to push forward our ability to get a more true representation of where the lights are on set and how far away they are,” Smith says. “This is especially important when we have a character moving through an environment.”
Although developed originally for Warcraft, the film’s long postproduction schedule gave crews on other films moving through ILM a chance to use and improve that technique and others. Star Wars: Episode 7 used LightCraft and pushed it even further, according to Smith. As did The Revenant.

“We delivered the bulk of Warcraft a year ago, in April 2015,” Smith says. “I’ve been on Warcraft-related things since 2013. Up to [April 2015], it had been in production for a year and a half, and filming had happened six to eight months before.”

After delivering Warcraft in 2015, Smith moved on to The Revenant, and received an Oscar nomination for his work supervising ILM’s visual effects on that film. Then, Warcraft came back.

“We had an opportunity to do another delivery, so we took some shots we wanted to improve to support the story more cleanly,” he says. Small improvements. For example, they added more green to characters affected by fel magic that were near a fire, to single them out.

Green or not, tusked and heavy-handed or not, in the end, the tools ILM developed for Warcraft helped the artists create and perform characters so believable that critics don’t dwell on the fact that they’re digital or CG, but instead laud the performances of the actors ILM motion-captured.

For the invisible visual effects artists, that’s high praise indeed.

Barbara Robertson is an award-winning writer and a contributing editor for CGW.