Produced by Basil Iwanyk and Polly Johnsen, the 3D CGI-filled film is directed by Jonathan Liebesman, with Nick Davis returning as production visual effects supervisor. Also returning from “Clash” to “Wrath” for more CG fun with gods and monsters is Framestore, which used computer graphics and intense animation to create two elaborate visual effects sequences for the production. A decade after his heroic defeat of the monstrous Kraken, Perseus (Sam Worthington) is attempting to live a quieter life as a village fisherman and the sole parent to his 10-year-old son, Helius. But Perseus cannot ignore his true calling when Hades (Ralph Fiennes) and Ares (Édgar Ramírez) make a deal with Kronos to capture Zeus (Liam Neeson). The Titans' strength grows as Zeus's remaining godly powers are siphoned, and hell is unleashed on earth. Enlisting the help of the warrior-queen Andromeda (Rosamund Pike), Poseidon's demigod son Agenor (Toby Kebbell) and fallen god Hephaestus (Bill Nighy), Perseus bravely embarks on a treacherous quest into the underworld to rescue Zeus, overthrow the Titans and save mankind.
Framestore created some 300 shots for “Wrath of the Titans.” Leading the team was Visual Effects Supervisor Jonathan Fawkner. “We were chiefly tasked with two sequences,” he says, “the first involving an encounter between Perseus’s team and a trio of Cyclops, the second (which follows the first closely within the film’s chronology) concerning Perseus’s group’s assault on The Labyrinth, a towering maze that provides access to Tartarus wherein Zeus is imprisoned. So we had two very contrasting types of visual effects to deliver: photorealistic near-humans, albeit giant ones, interacting with environments and human actors; and an impossibly vast, constantly moving architectural environment.”
An Eye for Detail
“For the three Cyclops, we decided from the start that performance capture was the only route, but that we’d play it a little differently from usual,” explains Fawkner. “An initial session was recorded before the shoot to explore the behavior of the Cyclops and inform the cast and crew. Then the plates for the sequence were shot in forest land in Dorking, England, during April 2011, without mocap referenced directly in camera, relying instead on the tried and tested ‘tennis ball on a stick’ technique, which provided the most flexibility for the director and cast.”
By the time the team started the actual performance-capture sessions at Shepperton Studios, they had a sequence in the can, cut and camera tracked. “We had an accurate scan of the set, and by carefully laying out proxy trees and other obstacles, and by matching the topology with a movable sloping deck, we were able to composite the Cyclops into the plate live, providing an incredibly intuitive and comprehensible tool,” says Fawkner.
Former international rugby player Martin Bayfield was cast for the mocap shoot, not least because, at 6 feet 10 inches tall, he had something of an edge when it came to playing giants. Involved from an early stage was Animation Supervisor Paul Chung. “Since Martin Bayfield’s performances would be used for all three—very different—Cyclops characters, I did a lot of research into how each of them might move and perform,” says Chung. “I wrote some biographical notes on each family member to give Martin material to inform his performances: the hot-headed, muscular younger brother, his fatter older sibling who always has to rescue him from the scrapes he gets into, and their dad, of whom both of them are a bit scared—that sort of thing. Bayfield took all this on board and gave excellent physical performances.”
Bayfield would study the plate, the timing and the rhythm, and, with Fawkner directing, attempt to hit his marks in each shot. The team would try as many variations as time allowed, and the various takes were delivered to the client to pore over and make selects from within hours of the capture, ready for editing in the normal fashion. Says Fawkner: “The massive benefit of doing the capture this way was that the animation was effectively blocked and locked very early on, leaving more time to finesse the details.”
Motion capture has a bit of a reputation, not unearned, for creating a slightly unnatural “mocap look.” There are a lot of different elements that contribute to that, from the actor wearing the markers and where they are placed, to how you solve it, to how you put it on the character. There are maybe a dozen stages that this data goes through, and detail can be lost at any one of them. So Framestore has developed a new pipeline.
The group at the studio used several witness cameras to complement the mocap input, increasing its accuracy. With “Wrath,” Framestore also became the first company in the world to forge a partnership with IKinema, which makes software for transferring mocap onto a creature of a different scale. Framestore built additional tools on top of the IKinema software to help quality-control the solving process, neatly comparing each solve with the witness-camera footage. The solver proved not only incredibly accurate but also newly flexible, streamlining the entire mocap solving and retargeting pipeline.
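In rough terms, retargeting a performance onto a much larger skeleton means carrying the joint rotations across unchanged while rescaling the root motion. The following Python sketch is a toy illustration of that idea only, not IKinema's actual full-body solver; every name in it is hypothetical:

```python
# Toy sketch of mocap retargeting to a larger creature. This is an
# illustration of the general idea, not IKinema's solver: copy joint
# rotations as-is and scale root translation by the hip-height ratio.

def retarget_frame(frame, actor_hip_height, creature_hip_height):
    """Copy local joint rotations unchanged; scale any translation so the
    creature covers proportionally the same ground as the actor."""
    scale = creature_hip_height / actor_hip_height
    out = {}
    for joint, channels in frame.items():
        rotation = channels["rotation"]           # local rotations transfer directly
        translation = channels.get("translation") # usually only the root translates
        if translation is not None:
            translation = [c * scale for c in translation]
        out[joint] = {"rotation": rotation, "translation": translation}
    return out

# One captured frame (hypothetical data): a 1.1 m actor hip height
# retargeted onto a 4.4 m giant, i.e. root motion scaled 4x.
actor_frame = {
    "hips": {"rotation": (0.0, 15.0, 0.0), "translation": [0.2, 0.95, 1.4]},
    "spine": {"rotation": (5.0, 0.0, 0.0)},
}
giant = retarget_frame(actor_frame, actor_hip_height=1.1, creature_hip_height=4.4)
print(giant["hips"]["translation"])
```

Real solvers such as IKinema's also have to handle differing limb proportions and foot contacts, which a uniform scale like this cannot.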
Nicolas Scapel, head of rigging, takes up the story. “So we got the action perfectly from Martin Bayfield the actor to Martin Bayfield the digital model. The key to great mocap is how you hand it over to animation. Many studios have a mocap department with a large motion-editing team; they can’t do technical animation, so they fiddle with the mocap to make it work in the shots, and then it goes to animation, which is often less than thrilled with what it gets. We want to give the animators something as close as possible to the raw performance and let them work it up from there.”
Scapel’s rigging team (leads Laurie Brugger and Matthew Goutte, and Rigger Mauro Giacomazzo) had also worked together on the acclaimed Dobby and Kreacher appearances in the final “Potter” films, but this was a whole new level of intensity.
“We soon realized that these creatures were much more dynamic than Dobby and Kreacher, and that anatomically there was lots more to do,” recalls Scapel. “We ended up with about 10 times the amount of data that we generated for ‘Potter.’ Our hero or base mesh was about half a million polygons, and each of the maybe 50 different maps we used was the equivalent of a 900-megapixel image. We also had four months less time than we did for ‘Potter,’ so it was quite a challenge.”
Creating the Cyclops’ anatomy and musculature involved a massive amount of detail, given that the creatures are frequently shown in extreme close-up. Creature Modeler Scott Eaton was with the team for much of the project, and his anatomical expertise was invaluable. He did detailed sculpts of all three characters, plus inverse meshes representing their internal anatomy.
The team used a generic human for the base, cutting out a spot for the eye patch, and then bumping up the resolution. The rigging team gave controls to the animators, allowing them to control the muscle tension, as the same action can look completely different depending on how tense the muscles are.
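An animator-facing tension control of the kind described can be pictured as a simple blend between a relaxed and a flexed sculpt of the same muscle region. The sketch below is hypothetical and much simpler than a production rig; the function and data names are invented:

```python
# Hypothetical sketch of an animator-facing "tension" control: linearly
# blend each vertex between a relaxed and a flexed sculpt of the same
# muscle region. A production rig layers this with many other systems.

def apply_tension(relaxed, flexed, tension):
    """tension 0.0 = relaxed sculpt, 1.0 = fully flexed sculpt.
    Values outside [0, 1] are clamped."""
    t = max(0.0, min(1.0, tension))
    return [tuple(r + t * (f - r) for r, f in zip(rv, fv))
            for rv, fv in zip(relaxed, flexed)]

# Two invented vertices of a bicep region: flexing raises and bulges them.
relaxed = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
flexed = [(0.0, 0.2, 0.0), (1.0, 0.3, 0.1)]
print(apply_tension(relaxed, flexed, 0.5))  # halfway between the sculpts
```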
The result of all this was a rig with as much detail as possible but no dynamics of its own, so the creature effects team ran simulations of the fat and muscle jiggle and of the skin sliding.
To achieve this, they extended the approach they had used on Dobby and Kreacher: a volume mesh providing a certain thickness under the skin, so that they were simulating a volume rather than doing a typical surface simulation. This gives the effect of flesh having mass. Says CG Supervisor Mark Wilson, “Nicolas Scapel and I worked hard together to ensure that, on the shading and rendering side, we anchored the displacement of the mesh to the movement of the skin, so that the two worked together and it doesn’t look like a series of surface events, so that the Cyclops has internal organs, bones and muscles.”
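The effect of flesh lagging behind and overshooting the animation can be illustrated with a one-dimensional damped spring. This is only a toy, surface-level sketch of secondary motion, not the volume-based solver Framestore describes:

```python
# Toy jiggle: each point is dragged toward its animated target by a
# damped spring, so the flesh lags, overshoots and settles. This is a
# one-dimensional illustration of secondary motion only -- the actual
# setup simulates a volume mesh with real thickness under the skin.

def jiggle(targets, stiffness=0.25, damping=0.7):
    """targets: per-frame target positions for one point (floats).
    Returns the simulated positions, trailing behind the animation."""
    pos = targets[0]
    vel = 0.0
    out = []
    for t in targets:
        vel = damping * vel + stiffness * (t - pos)  # spring toward target
        pos += vel
        out.append(pos)
    return out

# A sudden move: the animated point jumps to 1.0 and stops; the simulated
# flesh arrives late, overshoots past 1.0, then settles back.
track = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print([round(p, 3) for p in jiggle(track)])
```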
Notes Fawkner, “From the outset I wanted to push what has been termed ‘physically plausible lighting,’ if not perfectly ‘physically realistic lighting.’ The lights and materials have physically plausible characteristics in a way that [Pixar’s] RenderMan hasn’t previously been able to do. RenderMan 16 allowed us to raytrace the whole character, the weapons and the interactive effects. Paying close attention to the environment, the position and intensity of the light sources, all of which were surveyed and photographed per slate, provided a deeply rewarding result in a way that doesn't compare with a more traditional RenderMan pipeline.”
Fabulously detailed textures—created by a team led by Michael Borhi—meant the group could get close enough to see the Cyclops’ fingerprints. Building on the experience and heritage that Framestore has established, the studio retooled its skin and hair pipelines, optimizing them for the demands of raytraced global illumination.
Says Wilson: “RenderMan 16 didn’t actually come out until after we’d nearly completed the project, so we were using pre-release versions throughout, in some cases developing tools and routines in-house that would later be supported natively by the software, which was a little nerve-wracking at times, but Pixar was enormously helpful.”
The creatures’ faces posed many of the biggest challenges. The one element of pure CG animation, informed by live reference material but not by mocap, the faces were designed to be appealing without being cartoony. But so much of our reading of facial expressions derives from the brow line and the eyelids, and the way they combine to wrinkle and convey emotional information. So if you have only one eye, there’s a big problem.
The team played around with different eye elements and ended up with a large central eye flanked by two modified tear ducts. Anger, fear, sadness and pain all had to be expressed, so they created a highly flexible brow that could be treated either as a left- or right-eye feature, or as a hybrid of both. They also developed a system that reads the amount of stretch and compression in the skin and automatically generates appropriate wrinkles; this system was also used successfully on the hands.
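A wrinkle system driven by skin stretch and compression can be sketched as measuring how much a skin edge has shortened against its rest length, then mapping that compression to a wrinkle-map intensity. The thresholds and names below are invented for illustration, not Framestore's actual values:

```python
# Sketch of a compression-driven wrinkle weight, in the spirit of the
# system described above. All thresholds are invented: no wrinkle until
# an edge compresses below 95% of rest length, full wrinkle by 75%.

import math

def wrinkle_weight(rest_len, current_len, onset=0.95, full=0.75):
    """Map an edge's compression ratio to a 0-1 wrinkle-map intensity.
    Stretched or barely compressed edges get zero."""
    ratio = current_len / rest_len
    if ratio >= onset:
        return 0.0
    if ratio <= full:
        return 1.0
    return (onset - ratio) / (onset - full)  # linear ramp in between

# One skin edge at rest and in a compressed pose (hypothetical points).
rest = math.dist((0, 0, 0), (1, 0, 0))      # rest length 1.0
posed = math.dist((0, 0, 0), (0.85, 0, 0))  # compressed to 0.85
print(wrinkle_weight(rest, posed))          # partial wrinkle, about 0.5
```

In practice the resulting weight would drive a displacement or normal map so wrinkles appear only where the skin actually bunches up.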
Notes Compositing Supervisor Chris Zeh, “The Cyclops was—surprisingly, perhaps—more straightforward than the Labyrinth work. This was largely because the renders were very good—they’d put a lot of time, thought and R&D into making the creatures look as though they were there in the plate, which was very helpful to my team. It was somewhat complicated by the decision to switch from the darker, foggier look they’d shot to brighter sunlight, but we re-lit it very successfully.”
Amazing Stories
The Labyrinth is first seen by Perseus and his men from a distance: an impossibly tall tower of circular layers in constant revolution, rotating in different directions and grinding against one another. They climb to the top, where there is a battle with the evil Ares (Édgar Ramírez). After gaining entry, the heroes are separated inside the cavernous interior until Perseus fights and defeats the Minotaur, whereupon the entire interior realigns itself and a gateway to Tartarus appears.
“The long shots of the tower were a straightforward, albeit enormous, modeling job,” explains Fawkner. “And the fight at the top involved a lot of shockwaves, god weapons flashing and so forth.” This part of the sequence was further complicated by a decision, taken quite late in production, to ratchet up the tension during the fight by creating a magical door that Hephaestus (Bill Nighy) has to open. This was shot against greenscreen, with the Framestore team adding the stone door puzzle elements.
“On the Labyrinth as a whole, we usually body-tracked the actors and placed CG dummies in situ,” continues Fawkner, “so that when the dummies were lit correctly and looked like the guy on the plate, you knew you were in a good place. I wanted to bring volumetric lighting into the picture, too, as a complement to the physical lighting I’d already got in place. We were using Arnold Renderer for this sequence, which chews through geometry like nobody’s business. We actually had the film’s gaffer come in and talk to us about how he had lit the (relatively contained) set, and how he would have liked to light the Labyrinth had it been a physical set. This brought a welcome element of realism and practicality, but was also creatively really rewarding.”
The final shot of the sequence involves the set breaking apart to reveal the gargantuan chamber that houses the Labyrinth before reforming into the gateway to Tartarus. With nothing more than a single figure on 1,000 frames of greenscreen, Framestore needed to bring all of its disciplines together. The team re-created the entire Minotaur set, lighting and set dressing; then the effects team, led by Rob Richardson, proceeded to animate and decimate it. Side Effects Houdini and Autodesk Maya simulations of falling columns and crashing stone, combined with hundreds of 2D elements and an ever-moving, ever-changing lighting scheme, provided a memorable and fitting finale to Framestore’s work.