By Barbara Robertson
The largest setting in which a unique world could be created is, of course, the imagination. And few people have supplied the imagination with as rich a set of materials for envisioning a fantasy world as has J. R. R. Tolkien in his trilogy, The Lord of the Rings. Some 50 million people have bought The Lord of the Rings since its publication in 1954; in 1999, Amazon.com readers chose the three-volume epic as the best book of the millennium.
Thus, turning this magnum opus into a movie is an intoxicating idea. If the movie could tap into the essence of what made the books so captivating, it would be extraordinary. But the very nature of Tolkien's densely written work, which takes place in mythical Middle-earth where elves, wizards, goblins, trolls, and furry-footed hobbits live, battle, tell legendary tales of ancient times, and take long journeys through fantastic landscapes, makes it difficult to imagine how it could be condensed into a film. On December 19, the world will discover whether director Peter Jackson has succeeded in doing just that when New Line Cinema releases the first installment of The Lord of the Rings trilogy, The Fellowship of the Ring. (The next two installments are scheduled for December 2002 and 2003.)
In The Fellowship of the Ring, the hobbit Frodo Baggins (actor Elijah Wood) begins his journey to destroy the powerful Ring by returning it to the Cracks of Doom. Accompanying him are the other eight members of the Fellowship: two men, Aragorn/Strider (Viggo Mortensen) and Boromir (Sean Bean); the elf Legolas (Orlando Bloom); a dwarf, Gimli (John Rhys-Davies); and three other hobbits, Sam (Sean Astin), Pippin (Billy Boyd), and Merry (Dominic Monaghan). In addition, the film stars Liv Tyler as the elf princess Arwen, Cate Blanchett as the elf queen Galadriel, and Ian McKellen as the wizard Gandalf.
At top, the actors playing the Fellowship and Gandalf were filmed against a bluescreen background and placed in a miniature set. At right, they escape the CG monster Balrog's fiery foot in a miniature set. In the scene at bottom, Frodo and Aragorn are digital doubles.
It would be impossible, of course, to create The Lord of the Rings without using effects. Physical effects were created in New Zealand, largely by Weta Limited, with Weta Digital, a separate arm, responsible for digital effects. All three installments were shot simultaneously in New Zealand, and shooting was completed in December 2000. The 480 visual effects shots for The Fellowship of the Ring were completed in October 2001, but work on the following installments will continue for the next two years. "This movie is kind of an effects person's dream," says Jim Rygiel, visual effects supervisor. "We have miniatures, pyrotechnics, bluescreen, practical elements, and CG elements all mixed and matched. Sometimes the Fellowship might be digital; sometimes they might be real; sometimes they are scale doubles."
Similarly, locations were created with various methods. Many, such as Hobbiton and the mountain Weathertop, were found in New Zealand or created from existing landscapes. Other locations were created with 58 miniature sets, into which bluescreen and digital characters were composited. "Richard Taylor [director of Weta's Workshop] quite correctly called them 'bigatures,' because they often would fill the sound stage," says producer Barrie M. Osborne, who also produced The Matrix. Some of the most dramatic sequences in "film one," as the crew calls The Fellowship of the Ring, however, were created largely with computer graphics, particularly scenes in the Mines of Moria and on the battlefields of Mordor. In addition, a short sequence in which wild horses, formed from the huge waves of a flooding river, allow Frodo and Arwen to escape from the Ringwraiths was made possible with computer graphics at Digital Domain; it is one of the few CG sequences not created at Weta Digital.
CG techniques were also used to create previsualizations and to make possible otherwise impossible sweeping camera moves on miniature sets. "We've done every possible form of effect that there is on this film, whether analog or digital," says Ellen M. Somers, associate producer, who was producer/supervisor for What Dreams May Come. "I don't think we've missed one technique yet. Normally there are four or five R&D items, things that need to be developed for a film. Maybe six or seven if you're really pushing it. For film one alone, we had 27."
To plan camera moves for scenes with the CG cave troll, animatics with scaled representations of the troll and the actors were fed into VR goggles; a box motion-captured in real time became the virtual camera.
With this in mind, Osborne, Rygiel, Somers, and others involved with the production of The Fellowship of the Ring singled out a few areas that illustrate particularly innovative CG techniques and technology: methods for creating CG characters, crowd animation software, motion capture techniques, and digital grading, which helped balance and change colors of the final film.
We meet the three largest CG creatures in the Mines of Moria: the octopus-like Watcher who guards the gate, the 10-foot-tall cave troll who protects the goblins, and the fiery Balrog, a giant composed of "shadow and light." Also, in film one we get a glimpse of Gollum, a deformed, hobbit-like character with a warped mind, who will figure prominently in the second film.
All the major CG creatures started as clay maquettes. "These [maquettes] are very detailed," says Matt Aitken, digital models supervisor. "The Gollum maquette had every pore on Gollum's skin."
To capture this detail, the modelers used a handheld scanner, now marketed as the Polhemus FastScan, which produced a dense cloud of points that represented the surface. The modelers then built NURBS models in Alias|Wavefront's PowerAnimator on top of the point cloud. Because the resulting models, which preserved detail from the maquette, were too heavy to use, the modeling team developed a technique based on the 1996 Siggraph paper "Fitting Smooth Surfaces to Dense Polygon Meshes" by Venkat Krishnamurthy and Marc Levoy to reproduce the scanned detail using lower-resolution models. Simply put, they created a low-resolution surface with the same basic shape as the denser, high-resolution surface, aligned the two surfaces, calculated the distance between them across the surfaces, and used that calculation to create height values for a displacement map. Once the high-res surface was encapsulated in a displacement map, that model was no longer needed. The low-res model was used for animation, skinning, and lighting, and the displacement map, which is an image file, was applied at render time.
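To make the displacement-baking idea concrete, here is a minimal sketch in Python, assuming simple point samples and NumPy; the actual pipeline fit NURBS surfaces in PowerAnimator, so every name here is illustrative:

```python
import numpy as np

def bake_displacement(low_points, low_normals, high_points):
    """For each sample on the low-res surface, find the closest point on the
    dense scanned surface and record the signed offset along the normal.
    Sampled in the surface's UV space and written to an image file, these
    heights become the displacement map."""
    heights = np.empty(len(low_points))
    for i, (p, n) in enumerate(zip(low_points, low_normals)):
        # Closest high-res point to this low-res sample (brute force here;
        # a real tool would use a spatial index such as a k-d tree).
        d = high_points - p
        j = np.argmin((d * d).sum(axis=1))
        # Signed distance along the low-res normal: positive = raised detail.
        heights[i] = np.dot(high_points[j] - p, n)
    return heights
```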
From left to right: First, the scanned data of the cave troll maquette is converted to a high-res NURBS model with 3.7 million CVs. Next, a low-res version is modeled with 28,000 CVs. Third, to restore surface detail, a displacement map calculated as the distance between the two surfaces is applied at render time.
"You go right back to the look of the original," Aitken says. For the cave troll, the NURBS model created for the surface that would become the displacement map had 3.7 million control vertices (CVs) on 250 NURBS patches; the low-res version had 28,000 CVs. "We developed this technique in '97 and '98, and we were very excited about it," Aitken says. "The main author of the paper went on to create a company called Paraform, but their software wasn't available when we needed it."
Beneath the surface, the modelers placed a skeleton: the bones and joints that were used for animation. Then, to help make characters look real as they moved, the graphics software development team at Weta Digital wrote Maya plug-ins (C++ programs) to create muscles that would expand and contract with movement and cause the skin to slide in realistic ways.
To create muscles, a creature TD (technical director) would start by specifying beginning and ending attachment points and points in between that would help define the shape as the muscle moved. Muscles could be attached to other muscles, to surfaces modeled as bones, or to other NURBS surfaces. Once the points were specified, the muscle plug-in automatically created an adjustable shape, a NURBS surface, to fit. Then, as the attachment points moved, the muscle changed its shape as needed. "It maintains its volume," explains Richard Addison-Wood, graphics software development supervisor. In addition, the muscles had optional dynamic behavior. "You can control the springiness of a muscle so that it will vibrate quickly when it comes to a stop, or jiggle more loosely," he says.
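The volume-preserving behavior Addison-Wood describes can be illustrated with a toy model, sketched here in Python. This is not Weta's plug-in, just a cylinder between two attachment points that thickens as it shortens:

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class Muscle:
    """Toy muscle: a cylinder between two attachment points that keeps its
    volume (pi * r^2 * L) constant, so it bulges as it shortens."""
    def __init__(self, origin, insertion, rest_radius):
        self.volume = math.pi * rest_radius ** 2 * _dist(origin, insertion)

    def radius_at(self, origin, insertion):
        """Solve pi * r^2 * L = V for r given the current length L."""
        length = max(_dist(origin, insertion), 1e-6)
        return math.sqrt(self.volume / (math.pi * length))

# A bicep-like muscle thickens when its attachment points move closer:
bicep = Muscle((0, 0, 0), (0, 3, 0), rest_radius=0.5)
print(bicep.radius_at((0, 0, 0), (0, 2, 0)))  # ~0.61, fatter when contracted
```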
In this all-CG image, the goblins (Orcs), including those scaling the walls, were defined and animated with Massive, a behavioral animation program, and rendered with Weta Digital's Grunt.
To turn the NURBS patches on the surface into a continuous skin, that is, to make the patches act as if they were a single piece of geometry, the development team wrote a second plug-in. "Other people call this stitching or skinning," says Addison-Wood. "I use the term 'mending' to describe it."
A third plug-in written by the development group controlled the skin movement over muscles and bones. "Basically, [the plug-in] looks at the volume between the skin surface and the nearby muscles and bones underneath the skin," Addison-Wood says. "You could think of the gap between where the skin is and where [the plug-in] sees muscles and bones nearby as a fat layer made of little deformed cubes." The plug-in evaluated the distance between the skin and the bones and muscles underneath and tried to maintain the volume and the relationships. In action this meant, for example, if an animator bent a character at the waist, the muscles and fat in its stomach would push the skin out, and the skin would slide appropriately. Parameter maps, which looked like texture maps painted with shades of gray, were used to specify the stretchiness of the skin for various areas of the body.
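A rough sketch of the fat-layer idea, in Python with invented names: each skin sample tries to restore its rest gap to the nearest underlying muscle or bone, with a painted stretchiness value controlling how strictly it complies. The real plug-in worked on the NURBS skin and preserved the volume of little deformed cubes in the gap, so this shows only the flavor:

```python
import numpy as np

def relax_skin(skin, anatomy, rest_gap, stretchiness, iterations=10):
    """Nudge each skin vertex toward the offset that restores its rest gap
    to the nearest muscle/bone sample. `stretchiness` is a per-vertex value
    in [0, 1], painted like a grayscale parameter map: 0 = rigid, 1 = loose."""
    skin = skin.astype(np.float64).copy()
    for _ in range(iterations):
        for i in range(len(skin)):
            d = anatomy - skin[i]
            j = np.argmin((d * d).sum(axis=1))   # nearest anatomy sample
            away = skin[i] - anatomy[j]
            gap = np.linalg.norm(away)
            if gap < 1e-9:
                continue
            # Push out (or pull in) along the gap direction toward the rest
            # distance; looser skin complies less each step, so it slides.
            correction = (rest_gap[i] - gap) * (1.0 - stretchiness[i])
            skin[i] += (away / gap) * correction
    return skin
```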
For the monster Balrog, a cloud of smoke often hid such details. In fact, the 25-foot-tall brute seems to be composed entirely of the fire billowing out from deep, black crevices in his skin and the smoke that surrounds him. To create this fiery fiend, Gray Horsfield, environment department head, used sprites (little 2D cards) onto which the team put clips, 100 to 150 frames long, of painted fire and filmed fire footage. These cards were texture mapped onto particles, which were used to create an animated, general fire shape. "We've probably got 5000 images of fire organized into clips and categorized by the way the fire behaves," Horsfield says. "We want fire coming from different parts of the Balrog body to behave in different ways."
To get the fire on all the sprite cards to point in the correct direction while being driven by the particles, the team assigned orientations to the sprites in Maya and then, via MEL scripts, used the screen velocity of the particles and keyframe animation to control the flow. The team used similar techniques to create the Balrog's smoke, replacing the fire on the cards with footage of smoke. Finally, they composited several layers of fire and smoke, sometimes as many as 33, using Nothing Real's Shake to create the final frames.
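The orientation trick can be sketched as follows, assuming a NumPy view matrix; the production version lived in MEL scripts inside Maya, so every name here is a stand-in:

```python
import numpy as np

def sprite_roll(world_velocity, view_matrix):
    """In-plane rotation (degrees) that points a fire card's 'up' axis along
    the particle's velocity as seen on screen. Returns None when there is
    negligible screen motion, so the keyframed orientation is kept instead."""
    # Rotate the velocity into camera space and drop the depth component.
    v_cam = view_matrix[:3, :3] @ np.asarray(world_velocity, dtype=float)
    vx, vy = v_cam[0], v_cam[1]
    if vx * vx + vy * vy < 1e-8:
        return None
    # Angle measured from the screen-up axis (0 = flames rising straight up).
    return np.degrees(np.arctan2(vx, vy))
```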
To create the final image (below), the crew filmed foreground actors against a bluescreen background (at top right) and then composited them with CG warriors and a painted background. The image below right illustrates where the CG warriors were placed.
"The trick for getting this to work with a creature that's so big and makes such extreme motion was that he was animated in slow motion," Horsfield says. Slowing him down by a factor of five or ten gave the team time to control particle dynamics. Once the particles were under control, they sped up the animation to normal speed.
To choreograph the animation and plan camera moves, Randall William Cook, director of animation, used innovative previsualization techniques. For example, to choreograph a sequence in the Mines of Moria in which the Fellowship fights the cave troll, he had the previz team capture the motion of people acting the part of the CG troll and of the actors who would fight with him. Then they put animated CG representations of the characters into a scene, scaled them properly (the troll is 10 feet tall, the hobbits are three feet six inches, and the men are approximately six feet tall), and added further choreography using "chess piece animation," as Cook calls it, to plan the blocking. When the choreography was set, Cook and Jackson began planning camera moves for the three-minute scene by playing it back in VR goggles.
"Initially, I wanted to use the VR goggles to get a sense of immediacy," Cook says. "There was something about the level of interaction that was a little different and more spontaneous. The fight was in a small space. I wanted the feeling of claustrophobia and confinement." Also, although the crew wasn't able to use the data for motion-control cameras, it's a side benefit they hope to use for the second film.
To plan the camera moves, Cook and Jackson, wearing the VR goggles, held a box that represented a camera. The box was motion captured in real time, and a 3D virtual camera was attached to its CG representation. The view from the virtual camera was fed into the VR goggles, giving Cook and Jackson a view of the choreography through the camera so that they could experiment with moves as they planned the shots.
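A minimal sketch of such a virtual-camera rig, in Python with stand-in types for the mocap feed; the real setup streamed live capture data, and all names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple   # (x, y, z) in capture-volume units
    rotation: tuple   # (rx, ry, rz) Euler angles, degrees

def box_to_camera(box_pose: Pose, world_scale: float) -> Pose:
    """Map the handheld box's captured pose onto the virtual camera. Scaling
    the translation lets a hand-sized move read as a crane-sized move inside
    the scaled-down choreography."""
    x, y, z = box_pose.position
    return Pose((x * world_scale, y * world_scale, z * world_scale),
                box_pose.rotation)

# Each frame: sample the box, convert, render, and feed the goggles, e.g.
#   cam = box_to_camera(mocap.sample_box(), world_scale=12.0)
#   goggles.show(render(scene, cam))
```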
With the exception of Gollum, who says a few lines, the CG characters in this film don't talk. Even so, Cook had the animators create a lip synch track for each. "Sometimes Peter [Jackson] would come up with dialog, sometimes the animators would," he says. "Although the characters don't move their mouths, it helped the animators show the characters' thought processes."
These digital Uruk-hai warriors "hum" as they walk inside the Massive program so they don't bump into each other. Once the warriors start fighting, they use sight to determine how to respond.
A crew of 12 animators used Maya to keyframe the animation for the "hero" creatures. The animation department also edited and signed off on motion captured for digital doubles, for Gollum, and for basic movements used by the Massive program. "With keyframing, the computer is working for the animators," Cook says. "In Massive, the animators are working for the computer."
The brainchild of Stephen Regelous, Massive generates artificial intelligence agents that respond to their environment. "They select which motion to execute, modify that motion, and blend the motion on the fly as they run around in response to the environment," Regelous says. The software was primarily used to create battle scenes with as many as 70,000 fighting warriors, each a Massive agent with its own "brain," but it also helped animate digital doubles and a flock of crows.
Regelous began working on Massive in 1996 at Jackson's request. Once the software was ready, Weta Digital's crowd department began creating brains and bodies, generating libraries of motions, and designing variations. It took two years of this pre-production work before the Massive agents could be used in shots.
The agents were built with primitives that have physical properties. To allow for physical simulations, the program has rigid body dynamics built into it so that, for example, warriors fall believably onto rough ground. Because the goal for this movie was to have each agent (warrior, goblin, digital double) look and act uniquely, the Massive crew designed a variety of tools, pieces of clothing, skin colors, and so forth for each type of agent. Included in Massive are methods for easily generating variations to change geometry or characteristics. "We have dozens and dozens of variables for each agent, which can be anything from how muddy his boots are to how aggressive he is," says Regelous. "We can change proportions of the skeleton, shader parameters, and brain variables." The agents' bodies are assembled as they are rendered by "Grunt," an A-buffer renderer developed specifically for this purpose by Weta's Jon Allitt, who leads development for Massive agents.
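Per-agent variation of this kind can be illustrated with a short sketch; the variables and ranges below are invented for illustration, not Massive's actual parameters:

```python
import random

def make_agent_variation(seed):
    """One agent's personal bundle of variables; seeding per agent keeps the
    variation deterministic, so the same crowd renders the same way twice."""
    rng = random.Random(seed)
    return {
        "height_scale":   rng.uniform(0.92, 1.08),   # skeleton proportions
        "shoulder_width": rng.uniform(0.95, 1.05),
        "boot_muddiness": rng.uniform(0.0, 1.0),     # shader parameter
        "skin_tint":      rng.choice(["pale", "ashen", "green-grey"]),
        "aggression":     rng.uniform(0.3, 1.0),     # brain variable
    }

# warriors = [make_agent_variation(i) for i in range(70_000)]
```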
At the same time the agents' bodies were being created, the crew also created their movements. "First, we list what each type of agent has to do and break that down into specific actions such as, for the warriors, strike, side step, pull the weapon back, and block," Regelous says. "And each of these will have variations because the agent will be in different contexts."
Of course, this meant that hundreds of motions needed to be captured. "We generally have between 150 and 350 moves for each type of agent," Regelous says. An animation-blending engine built into Massive modified the moves on the fly to let the agents aim a weapon, for example, or grab another agent. "The agents are able to control their limbs with inverse kinematics," he says. Massive's Tree Planner program helped create this complex network of motions, and the result became part of the brain.
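At its simplest, blending between two motion clips looks something like the sketch below; Massive's engine also retargets the motion and layers inverse kinematics on the limbs, so this shows only the cross-fade:

```python
def blend_pose(pose_a, pose_b, t):
    """Linear cross-fade between two poses given as {joint: angle} dicts;
    t=0 gives clip A, t=1 gives clip B. Angles here are single floats;
    production code would interpolate quaternions instead."""
    return {joint: (1.0 - t) * pose_a[joint] + t * pose_b[joint]
            for joint in pose_a}

# e.g. fade from 'walk' into 'strike' over six frames:
#   for f in range(6):
#       skeleton.apply(blend_pose(walk[f], strike[f], f / 5.0))
```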
The brains were built with modules, which are networks of input and output nodes, rule nodes, and fuzzy logic nodes; the brains typically have 6000 to 8000 of these nodes. The agents can "see" scanline rendered images of their surroundings, "hear" frequencies of sound, and determine where the ground is underfoot, and then respond based on rules that use fuzzy values to approximate the way people make decisions. Each type of agent has a particular brain, and each agent in an animation has its own brain and therefore its own, unique responses. The agents make decisions at a rate of 24 frames per second, choosing, for example, to strike an Orc in the midriff when its weapon is at a certain height and it's most vulnerable.
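The flavor of such fuzzy-rule decisions can be suggested with a toy example in Python; the membership functions and the single rule below are invented, whereas a real brain chains thousands of nodes:

```python
def fuzzy(value, low, high):
    """Fuzzy membership: 0 below `low`, 1 above `high`, linear in between."""
    return min(1.0, max(0.0, (value - low) / (high - low)))

def strike_desire(enemy_distance, enemy_weapon_height, my_health):
    """How strongly an agent 'wants' to strike, re-evaluated every frame
    (Massive agents decide 24 times per second)."""
    close   = 1.0 - fuzzy(enemy_distance, 1.0, 4.0)   # meters to the enemy
    exposed = fuzzy(enemy_weapon_height, 1.5, 2.2)    # weapon raised high
    healthy = fuzzy(my_health, 0.2, 0.8)              # 0..1 health fraction
    # Fuzzy AND as a minimum: strike only when the enemy is close AND
    # exposed AND this agent is healthy enough to commit.
    return min(close, exposed, healthy)

# The agent plays whichever action currently has the highest desire:
#   if strike_desire(2.0, 2.0, 0.9) > block_desire:  play("strike")
```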
Massive was used in The Fellowship of the Ring for thousands of warriors that, once set loose on a battlefield, would find an enemy, pick a fight, and fight to the death. It helped digital doubles of the Fellowship navigate a steep staircase in Lothlorien, and helped the Orcs (aka goblins) taunt the Fellowship and climb pillars in the Mines of Moria. The agents can be placed in scenes within big circles drawn over a terrain, in rows and columns, or in particular places. "Once they're in place, we just let them go," says Regelous.
Of course, where they really go is into the pipeline. According to Jon Labrie, chief technical officer, the studio has 800 processors, of which 420 are on the Linux-based render wall. Some 90 percent of the machines are dual-processor SGI Intel-based systems. In addition, there are some 50 SGI 330s (half running Red Hat Linux, the other half running NT 4), 125 SGI Octane workstations, two Origin 2000 file servers, three Network Appliance file servers, and between 15 and 20 Macintoshes. "We have 46TB of information living on DLT tape in a StorageTek tape robot," Labrie adds. The software list includes Photoshop (Adobe), Matador (Avid), After Effects (Adobe), Liberty Paint (Chyron), Eddie (Softimage), Maya (Alias|Wavefront), 3D Equalizer (Science-D-Visions), Commotion (Puffin), Shake (Nothing Real), Houdini (Side Effects Software), and RenderMan (Pixar Animation Studios), with 3ds max (Discreet) used for previsualization. The studio developed a lot of proprietary code in-house and also licensed custom software written by independent people and companies for this film.
To allow dramatic camera moves for sets such as these Pillars of Argonath, Weta created 3D environmental extensions and 360-degree 3D environmental projections. |
For example, the techniques that enable motion to be captured from human actors, applied to scaled CG characters, and composited into background plates in real time were designed for and by Weta Digital, but were developed and are owned by Giant Studios. (Giant's motion capture equipment allowed actors being captured to move around in an area as large as 23 meters by 9 meters, according to Weta mocap supervisors Greg Allen and Francois Laroche.)
Similarly, a digital grading project managed by The Post House (Hamburg) used software developed by ColorFront (Budapest) for this production. With this software, Jackson could digitally match colors across a sequence whose shots may have been filmed at different times of day, with multiple cameras, in various weather conditions during the year-and-a-half shooting schedule. It could also help smooth transitions from bright daylight locations to monochrome underground scenes, and even lighten shadows on eyes or change the color of lipstick. "We can design a look for a scene," says The Post House's Peter Doyle.
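One simple, classic way to nudge a shot toward a reference, a crude stand-in for what the ColorFront tools do far more subtly, is per-channel mean and standard-deviation matching:

```python
import numpy as np

def match_color(shot, reference):
    """Shift `shot` so each RGB channel shares the reference's mean and
    standard deviation. Both arguments are float arrays of shape (H, W, 3)
    with values in [0, 1]."""
    out = shot.astype(np.float64).copy()
    for c in range(3):
        s_mean, s_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (out[..., c] - s_mean) / max(s_std, 1e-6) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```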
"You can see the beginning of a whole new way of filmmaking," says Somers. "I think were going to see more start ups for production-only jobs."
For Wayne Stables, a 3D supervisor who joined Weta seven years ago when there were three people working in a little house in Wellington, New Zealand, the biggest thing the studio has had to deal with is the scope of the job. "I don't know of any effects film that has the range of diverse effects that this film has," he says. He points to effects-laden sequences like those in Dwarrowdelf with the Balrog, to the Massive army scenes, to 3D backgrounds created to allow big camera moves, and to elements such as the fireworks at Bilbo Baggins's "eleventy-first" birthday party. When production on The Lord of the Rings started, Weta Digital had 14 people. Now there are 252. The studio has grown in other ways as well.
"We learned an awful lot," Stable says. "We can lift stuff up to a new level now." And even though this film isn't yet released, they're already working on new challenges. In the next film, they have to deal with Gollum, an all CG character who has a major speaking part and will appear in around 400 shots, and they'll need to create many more battle scenes.
"We're using Gollum as our constant reminder about what we've got to strive toward," Stables says. A Tolkien reader might suggest that this is only as it should be.
Barbara Robertson is Senior Editor, West Coast, for Computer Graphics World.