People walked out of the world premiere of King Kong in 1933, horrified by a spectacular effects scene: The star, a monstrous gorilla, shook a group of sailors off a log and into a pit, where they were devoured by giant spiders. As a result, director Merian C. Cooper cut the shocking scene. But director Peter Jackson reprised that sequence for his remake, Universal Pictures’ King Kong, and, with the tastes of 21st-century audiences in mind, he not only filled the pit with giant CG spiders, he also had huge grasshopper-like insects attack the digital sailors and gigantic slimy slugs swallow them whole. It’s one of many sequences during the three-hour epic in which Jackson pays homage to the original 90-minute film by adding the blend of spectacle and emotion he mastered so successfully for The Lord of the Rings.
To tell Kong’s story, Jackson relied on state-of-the-art visual effects, as he did for LOTR, and as Cooper did in 1933 for Kong. Weta Digital, the three-time visual effects Oscar-winning studio (for the LOTR trilogy), worked on more than 3000 shots for Kong, which were whittled down to approximately 2500 in the final cut.
“We created more creatures for Kong than for the entire trilogy,” says Joe Letteri, senior visual effects supervisor for Kong, who has garnered two Oscars (The Lord of the Rings: The Return of the King and The Two Towers) and one Oscar nomination (I, Robot) while at Weta. More than 40 types of digital creatures act in the film, from the creepy pit denizens to Kong himself: The giant gorilla is always digital.
In the film, the craggy, bloody, battle-scarred gorilla with mud in his fur is always digital. His performance was created with a blend of keyframe animation and motion-captured data. (All images courtesy Weta Digital / Universal Studios.)
Letteri singles out four areas in which he believes Weta pushed the state of the art for King Kong: Kong himself, Skull Island’s digital forest (created with miniatures and 3D plants), the ocean simulation, and a reproduction of 1933 New York City. Kong fights for survival in the fabricated jungles of Skull Island, wades through CG water with blood and mud sticking to his fur, and crashes through the streets of a virtual New York. Unlike LOTR, which was shot on location in New Zealand, Kong’s world is largely digital. To build that world, Weta developed new software and plug-ins for Alias’s Maya, Apple’s Shake, and Pixar’s RenderMan, the three major tools used for the film.
Animation director Christian Rivers began working on Kong by supervising a small, tight group of animators who “fleshed out a lot of the gorilla’s character,” as he puts it. Using Maya, the group started with the film’s famous climax, in which the beleaguered gorilla clings to the Empire State Building as he’s attacked by biplanes. Next, they worked on Kong’s fight with three T. rex dinosaurs, and then moved on to other key sequences.
“[Jackson’s] way of working was to discuss ideas in a story meeting and send the animators off to create little vignettes,” Rivers says. Eventually, the animators created Maya animatics for the shots Jackson deemed best so that he could use digital cameras to design camera moves in the 3D environment.
For his part, Rivers moved on to supervise and direct Kong’s performance and, working alongside animation supervisor Atsushi Sato, that of the creatures interacting with Kong. A team of approximately 50 animators and actor Andy Serkis, who had been motion-captured for LOTR’s Gollum, created the star’s performance: a blend of motion-captured data and keyframe animation. “We captured Andy for many Kong sequences, excluding the crazy stunts,” says Rivers. “We used his ideas for the dramatic emotional scenes. But to create the weight and physics of a 25-foot gorilla, we also had to keyframe him. And sometimes the director wanted performances that were more practical to keyframe.”
On set, Serkis wore arm extensions and a Lycra suit padded into a gorilla’s physiology. To act with Naomi Watts, who plays Ann Darrow (the Fay Wray role in the original film), he was lifted 15 feet off the ground in a cherry picker. When he roared, his miked voice was pitched down in frequency.
Actress Naomi Watts’s greenscreen image (left, courtesy Pierre Vinet / Universal Studios) was composited with the digital gorilla in front of a digital New York City for this sequence.
When Serkis duplicated his performance on a motion-capture stage, his facial expressions and body movements were acquired simultaneously. Rivers notes that data from the areas around his eyes and brows was most useful. Rule-based “facial action coding” software developed at Weta turned the data from Serkis’s emotional facial expressions into gorilla expressions. “We could take [Serkis] straight in, body and face, or we could animate Kong, or use some combination of the two,” says Letteri.
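Weta has not published the internals of that software, but the rule-based mapping can be sketched in a few lines of Python: tracked human expression intensities pass through a table of rules that convert them into gorilla expression-shape weights. Every name, gain, and rule below is invented for illustration; this is the concept, not Weta’s code.

```python
# Hypothetical sketch of rule-based facial retargeting: map human
# action-unit intensities (from motion capture) to gorilla blendshape
# weights. All action units, shapes, gains, and caps are invented.

# Each rule: human action unit -> list of (gorilla shape, gain, cap).
RETARGET_RULES = {
    "brow_raise":    [("kong_brow_up", 1.2, 1.0)],
    "brow_furrow":   [("kong_brow_down", 1.0, 1.0),
                      ("kong_nose_wrinkle", 0.4, 0.6)],
    "jaw_open":      [("kong_jaw_open", 0.8, 1.0)],
    "lip_corner_up": [("kong_lip_curl", 0.5, 0.7)],
}

def retarget(human_aus):
    """Convert {action_unit: intensity in [0, 1]} to gorilla shape weights."""
    weights = {}
    for au, intensity in human_aus.items():
        for shape, gain, cap in RETARGET_RULES.get(au, []):
            w = min(intensity * gain, cap)
            weights[shape] = max(weights.get(shape, 0.0), w)
    return weights

frame = {"brow_furrow": 0.9, "jaw_open": 0.3}
print(retarget(frame))
# -> kong_brow_down 0.9, kong_nose_wrinkle ~0.36, kong_jaw_open ~0.24
```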
Kong’s appearance was as crucial as his performance. The filmmakers imagined him as the last of his species, living alone in the jungles of Skull Island. “No one was around to groom him, so he was a matted, dirty creature,” says Martin Hill, 3D sequence lead. “The first maquette was made of yak hair.”
To create Kong’s digital fur, Weta developed a proprietary, deformer-based system that allowed various departments to work with different elements (guide hairs, expressions, and so forth). Each hair was a RenderMan curve. Groomers started by painting texture maps in areas where they wanted the fur to grow. The maps, vertex data, or expressions specified hair density, length (number of CVs), and thickness. “At this stage, we had a porcupine-looking monkey,” says Martin Preston, fur software developer. Texture maps also defined the fur’s color.
To style the hair, groomers specified the frizziness with deformers that controlled where particular hairs would bend. A pelting system, also controlled with deformers, grouped the hair into clumps. “We had 30,000 to 40,000 clumps on Kong’s head alone,” says Preston. “It isn’t a solid mass of hair.”
The stylists positioned the clumps by placing points on the model’s surface, by painting maps, and by having software randomly distribute a number of clumps. “We have a whole collection of plug-in deformers to control layers of clumping, grooming curves, and so on,” says Preston. “On his arm, there are 10 levels of deformers.”
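To make the grooming ideas concrete, here is a minimal Python/NumPy sketch of clumping on a flat test patch: scattered hairs are assigned to the nearest clump center, and a deformer pulls each strand toward its clump more strongly toward the tip. The counts and strengths are stand-ins; the production system layered many such deformers.

```python
import numpy as np

# Illustrative sketch (not Weta's code) of clump-based grooming on a
# flat patch: hairs are scattered, assigned to the nearest clump
# center, and a "clumping deformer" pulls each strand's CVs toward the
# clump center with a weight that grows along the strand.

rng = np.random.default_rng(7)
n_hairs, n_clumps, n_cvs = 2000, 40, 5     # CVs per hair = strand resolution

roots  = rng.uniform(0.0, 1.0, size=(n_hairs, 2))    # hair root positions
clumps = rng.uniform(0.0, 1.0, size=(n_clumps, 2))   # clump centers

# Assign each hair to its nearest clump center.
d2 = ((roots[:, None, :] - clumps[None, :, :]) ** 2).sum(-1)
owner = d2.argmin(axis=1)

# Grow straight strands up the z-axis, then apply the clump deformer:
# blend the xy position toward the clump center, more strongly at the tip.
t = np.linspace(0.0, 1.0, n_cvs)                     # 0 at root, 1 at tip
strands = np.zeros((n_hairs, n_cvs, 3))
clump_strength = 0.8
for i in range(n_hairs):
    xy = roots[i] + (clumps[owner[i]] - roots[i]) * (clump_strength * t)[:, None]
    strands[i, :, :2] = xy
    strands[i, :, 2] = t * 0.1                       # hair length

print(strands.shape)  # (2000, 5, 3): each strand is a curve of 5 CVs
```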
To wedge blood, mud, and tree trunks (the gorilla is huge, after all) into Kong’s thick hair, the fur team used instanced geometry. “We had 2000 leaves and 2000 bits of mud and dried blood in Kong’s hair,” says Preston. Because these elements were generated at the same time as the hair using the same methods, they traveled along with the moving hair. Maya dynamics animated the hair; the simulation crew used a separate set of deformers and scripts to push the fur around when Kong moved.
“What they were all building is a program that executes in RenderMan,” says Preston. “It has all the instructions for growing the fur.” The program, a dynamic shared object (DSO) they named Bonobo, took charge once the lighting TDs had set the lights and applied the shader. “It happens at the end, so at any point in memory there is a limited amount of fur,” says Preston. “It grows, on average, four million hairs to cover Kong.”
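The deferred-generation idea behind a DSO like Bonobo can be mimicked with a plain Python generator: each patch of fur is grown only when the consumer asks for it and can be freed as soon as it has been shaded, so memory never holds all four million hairs at once. This is an analogy to the concept, not Bonobo itself.

```python
import numpy as np

# Illustrative analogy for render-time fur growth: a generator that
# produces one patch of hair roots at a time, so only a bounded amount
# of fur exists in memory while the "renderer" consumes the stream.

def grow_patch(patch_id, hairs_per_patch=10_000, seed=0):
    """Grow one patch's hairs from its stored grooming instructions."""
    rng = np.random.default_rng(seed + patch_id)
    roots = rng.uniform(size=(hairs_per_patch, 2))
    # ... real grooming/clumping deformers would run here ...
    return roots

def fur_stream(n_patches):
    for patch_id in range(n_patches):
        patch = grow_patch(patch_id)   # built only when requested
        yield patch_id, patch          # shaded, then free to be discarded

total = 0
for patch_id, patch in fur_stream(400):   # ~4 million hairs overall
    total += len(patch)                   # stand-in for shading the patch
print(total)                              # 4000000, never all in memory at once
```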
Because Kong appears in lighting conditions ranging from the hot, tropical sun to nighttime New York City, the shading team wrote one overall hair shader that incorporated shader algorithms for any type of hair, including that for Naomi Watts’s digital double.
“We started by implementing, in its entirety, a SIGGRAPH 2003 paper written by Stephen R. Marschner and others called ‘Light Scattering from Human Hair Fibers’,” Hill says. Previously, CG shading models dealt only with primary highlights on hair, Hill explains, but the Marschner paper added two elements: light that refracts into the hair and comes out the other side, and light that goes into the hair, reflects internally, and comes out the same side it went in. “It more accurately models the math for what happens to a light ray entering a hair strand rather than a cylinder, refracting and reflecting inside it,” he explains.
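The longitudinal part of that model is commonly written as three Gaussian lobes over the half-angle between the light and view directions, with each lobe shifted because the hair’s cuticle scales tilt the fiber surface. The sketch below uses values typical of the paper’s published parameter ranges, not Weta’s settings, and omits the azimuthal scattering and absorption terms that scale each lobe in the full model.

```python
import numpy as np

# Longitudinal lobes of a Marschner-style hair model: R (surface
# reflection), TT (transmitted through the fiber), TRT (one internal
# bounce). Shifts and widths follow the paper's relations:
# alpha_tt = -alpha_r / 2, alpha_trt = -3 * alpha_r / 2,
# beta_tt = beta_r / 2, beta_trt = 2 * beta_r.

def gaussian(x, mean, width):
    return np.exp(-0.5 * ((x - mean) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

def longitudinal_lobes(theta_h, alpha_r=np.radians(-7.5), beta_r=np.radians(7.5)):
    """theta_h: longitudinal half-angle between light and view directions."""
    m_r   = gaussian(theta_h, alpha_r,              beta_r)        # primary highlight
    m_tt  = gaussian(theta_h, -alpha_r / 2.0,       beta_r / 2.0)  # exits the far side
    m_trt = gaussian(theta_h, -3.0 * alpha_r / 2.0, 2.0 * beta_r)  # exits the near side
    return m_r, m_tt, m_trt

r, tt, trt = longitudinal_lobes(np.radians(5.0))
print(f"R={r:.2f}  TT={tt:.2f}  TRT={trt:.2f}")  # relative lobe strengths at 5 degrees
```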
This math produced what Hill calls “shampoo commercial” hair, perfect for Watts. For Kong’s coarse, matted hair, the crew added displacements and noise. Also, deep shadows, ambient occlusion, and reflection baked on a per-groom level helped give the hair depth and volume. “Each groom has a 3D occlusion map,” says Hill. For specular lighting, math representing an isosurface around the hair clumps provided a layer used to render highlights.
“Fur is very dependent on the groom that defines the surface,” notes Letteri. “You can’t separate the shader from the fur. That’s what was hard: figuring out whether [a problem] was due to the shader or the groom.”
Kong has around four million hairs on his body that Weta Digital rendered using a new shading model that executes in RenderMan.
Kong’s fur had to react to the environment around him, so the team tweaked the shader depending on his surroundings. For example, they gave Kong’s matted, dirty hair a watery sheen at the end of the capture sequence. Also, for shots of Kong on the reflective Empire State Building, the shading crew added a reflection component to the hair and used a different reflection occlusion version for the fur. “We couldn’t use the same methods as we did for the building,” Hill says. “Because you see through his fur strands, we needed more volumetric reflection occlusion.”
To test the shaders, the team shrank the digital gorilla to a realistic size, rendered that Kong, and put him into nature films to see how well he fit into natural environments, particularly dappled light filtering through trees, as it would on Skull Island, and early-morning light, to emulate Kong’s big scene on the Empire State Building, which takes place at dawn.
Jackson’s Weta Workshop, which created models, miniatures, props, and so forth for the film, designed Skull Island to imitate the matte paintings from the 1933 film rather than a real jungle. Nearly half the shots, including a Brontosaurus stampede, a fight between Kong and three T. rex dinosaurs, the spider pit, and Kong’s capture, take place on this island. “Nearly every Skull Island shot is a combination of elements filmed on a miniature set with digital enhancements,” says Eric Saindon, digital effects supervisor.
To create Kong-sized digital trees, the crew used the same hair system as it did for the gorilla’s hair. “We started with a single hair for the trunk, grew hairs off that for branches, and continued until we had enough to create a canopy,” says Saindon, “and then we did the same thing with leaves.” For interaction between characters and jungle elements, the team often used Maya Paint Effects to create the environment, rendering it with custom software that fed the output mesh straight into RenderMan. Maya’s hair solver handled the dynamics.
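In miniature, the “trees as hair” recursion looks like this: each strand sprouts shorter, thinner child strands until a final generation, where leaves would attach. The branch counts, angles, and falloffs below are arbitrary illustration values, not Weta’s.

```python
import numpy as np

# Illustrative sketch of growing a tree as recursive "hairs": one
# thick strand for the trunk, child strands sprouting off it for
# branches, shrinking each generation until the canopy level.

rng = np.random.default_rng(3)

def grow(base, direction, length, depth, branches=4):
    tip = base + direction * length
    strands = [(base, tip, depth)]           # (start, end, generation)
    if depth == 0:
        return strands                       # leaf level: attach leaf cards here
    for _ in range(branches):
        t = rng.uniform(0.3, 1.0)            # where on the parent to sprout
        spawn = base + direction * length * t
        d = direction + rng.normal(0.0, 0.45, size=3)
        d /= np.linalg.norm(d)
        strands += grow(spawn, d, length * 0.55, depth - 1)
    return strands

trunk = grow(np.zeros(3), np.array([0.0, 0.0, 1.0]), 10.0, depth=4)
print(len(trunk))  # 1 trunk + 4 + 16 + 64 + 256 branches = 341 strands
```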
“When Kong and the T. rexes come down through the vines and land in a swamp, we have a montage that’s similar to the 1933 movie, in which Kong cracks the T. rex’s jaw open,” says Saindon. “We didn’t have any plates. We built the walls, ground, plants…everything in 3D, and used Paint Effects for most of that sequence.”
Weta’s custom Maya plug-in, named Putty, helped with surface-to-surface collisions, particularly at the end of the Brontosaurus stampede, in which dinosaurs and sailors all land in a pile on top of one another. “We also used it for things like footprints and trees,” says Saindon. “It doesn’t do the collisions; it just tells the system when to look for collisions.” For example, rather than having a tree branch constantly check whether it is colliding with a creature, the plug-in told the system when a creature was near.
Weta extended the small Skull Island jungle set with filmed elements of miniature trees on 2D cards, matte paintings, and 3D plants. The dinosaur is one of 42 digital creatures in Kong.
For shots on Skull Island that didn’t require much camera movement or creature interaction, compositors created the jungle using filmed elements of miniature trees. “[Jackson] wanted to give Skull Island a sense of life,” says Erik Winquist, digital compositing supervisor. “So, all the elements were shot with wind. Hopefully, it looks like a living place.”
Using a custom 3D interface that Weta created for Shake, compositors built virtual dioramas: They imported rough scene geometry along with the camera the TDs used, and then placed cards carrying the filmed tree elements into the 3D scene. “We had a 100-square-foot set,” says Winquist. “Everything beyond that had to be created by somebody.”
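At its core, the diorama technique places textured quads in 3D and projects them through the shot camera. A bare-bones pinhole projection, with made-up camera values rather than anything from the production, shows where a tree card’s corners land in normalized screen space:

```python
import numpy as np

# Minimal pinhole projection for a "card in 3D" sketch: camera-space
# points (z pointing forward) map to normalized device coordinates.
# Focal length and film width here are illustrative values only.

def project(points, focal=35.0, film_width=36.0):
    """Project camera-space points to NDC x,y by perspective divide."""
    pts = np.asarray(points, dtype=float)
    return (focal / film_width) * pts[:, :2] / pts[:, 2:3]

# A 4-meter-wide, 6-meter-tall tree card, 20 meters from the camera.
card = [[1.0, 0.0, 20.0], [5.0, 0.0, 20.0],
        [5.0, 6.0, 20.0], [1.0, 6.0, 20.0]]
print(project(card))   # screen footprint of the card's four corners
```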
Similarly, shots of the boat on which the protagonists travel to Skull Island, and later return with the captured Kong, were created from a mixture of elements: plates filmed on stage using a full-scale model, water created from color-corrected filmed elements that compositors warped, and a blend of 3D and 2D digital water simulations.
“We mainly used a Tessendorf-style water simulation as a starting point to get calm water simulations,” says Ben Snow, visual effects supervisor, referring to Jerry Tessendorf’s SIGGRAPH course notes (2001, updated 2004) titled ‘Simulating Ocean Water.’ “Then, we developed water-simulator deformers in Maya that we implemented as RenderMan shaders and Shake plug-ins so we could put the same values into all three and get the same patterns.”
For rougher seas, the crew further developed the tools. “In previous films, we used a single wind-speed characteristic, but as we refined the tools, we added off-axis wind direction, which gave us a great deal of complexity,” says Christopher Horvath, 3D CG supervisor. Using data gathered from buoys in the North Atlantic, the simulation team added turbulence to surface waves in the ocean, pushing smaller waves differently from big ones and moving waves parallel to the wind as well as perpendicular to it.
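A Tessendorf-style setup builds the ocean heightfield in Fourier space from a Phillips spectrum, whose |k·w|² term aligns wave energy with the wind; blending in a second, off-axis wind direction is one plausible reading of the refinement Horvath describes, not a confirmed detail. The NumPy sketch below is illustrative only: it takes the real part of the inverse FFT as a shortcut instead of enforcing Hermitian symmetry, and none of the constants are Weta’s.

```python
import numpy as np

# Sketch of a Phillips-spectrum ocean heightfield (one frame), with
# two wind directions blended for off-axis complexity.

N, size, g = 128, 200.0, 9.81
wind_a = np.array([1.0, 0.0])                  # primary wind direction
wind_b = np.array([0.6, 0.8])                  # off-axis wind direction
V = 12.0                                       # wind speed (m/s)
L = V * V / g                                  # largest wave from this wind

k1 = np.fft.fftfreq(N, d=size / N) * 2 * np.pi
kx, ky = np.meshgrid(k1, k1)
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1e-6                                 # avoid divide by zero at DC

def phillips(wind_dir):
    cos = (kx * wind_dir[0] + ky * wind_dir[1]) / k   # |k.w| alignment term
    return np.exp(-1.0 / (k * L) ** 2) / k**4 * cos**2

# Blend two wind directions for off-axis chop.
P = 0.7 * phillips(wind_a) + 0.3 * phillips(wind_b)

rng = np.random.default_rng(1)
h0 = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(P / 2)
height = np.real(np.fft.ifft2(h0))             # shortcut: real part as heightfield
print(height.shape, round(float(height.std()), 6))
```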
For the most difficult water scene, however, in which Kong is captured while splashing in waist-deep water, the crew composited filmed elements. “Even with ridiculously full-blown sims, the photos looked better,” Horvath says. “So we concentrated on giving compositors tools to place filmed elements where the splashes would be, and added CG splashes behind to beef up and tie the elements together.”
New water-simulation tools that allowed wind to blow from more than one direction, and that emulated cresting foam, helped create a CG ocean as well as the interaction of the water around the boat.
Rather than creating tools that plug together in Maya, Horvath’s team created a development suite with tools that use Maya controls. “We built new tools on the fly from various components and would recode and recompile on a shot-by-shot basis,” Horvath says. “We have libraries of fluid tools written in C++ that we can reassemble into new tools quickly.”
One such tool was written by 3D digital water TD Chris Young, who had not written code before this project. His tool creates cresting foam: the sharp bits at the tops of waves that gradually decay and move with the waves. “It’s the most evolutionary in terms of pushing CG water technology forward,” says Horvath. “It has 100 controls, and the resulting foam is magnificent. In the past, we used particles or hand-painted the foam. This has a real feel to it because it’s based in science.”
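Young’s tool is proprietary, but a physically based crest test is commonly some measure of how steeply the surface folds, with the foam value decaying and re-seeding frame to frame. Here is a stripped-down sketch, with invented thresholds standing in for the tool’s 100 controls:

```python
import numpy as np

# Illustrative foam sketch: mark crests where the surface is steepest,
# then let the foam map decay over time while new foam is born at the
# crests. All thresholds and gains are invented.

def foam_step(foam, height, dx, crest_thresh=0.4, gain=1.5, decay=0.9):
    """Advance the foam map one frame from the current heightfield."""
    gy, gx = np.gradient(height, dx)
    steep = np.sqrt(gx**2 + gy**2)             # crude crest/steepness detector
    birth = np.clip((steep - crest_thresh) * gain, 0.0, 1.0)
    return np.clip(foam * decay + birth, 0.0, 1.0)

rng = np.random.default_rng(0)
height = rng.normal(scale=0.5, size=(64, 64))  # stand-in for a wave heightfield
foam = np.zeros((64, 64))
for _ in range(24):                            # one second at 24 fps
    foam = foam_step(foam, height, dx=1.0)
print(round(float(foam.max()), 3))
```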
When Kong is captured, he is taken to New York, where he’s exhibited on stage. He escapes, tears through Times Square, crashes through the city all night looking for Ann, and then, at daybreak, climbs the Empire State Building. Although Watts was filmed running on an interior set piece, and other parts of the city were built for the actors, most of the city was constructed of 3D models, with a 360-degree matte painting in the far background. “In 40 percent of the shots, there is some piece of a set (storefront windows, for example),” says Dan Lemmon, digital effects supervisor. “But, we did digital set extensions above that level. For the rest, we created everything from scratch.”
To build 1933 New York, the crew started with a low-resolution polygonal map of modern-day New York that gave them the skyline for the entire city. Using a dataset of modern New York that had information about the year each building was constructed, they culled all the structures built after 1933. Then, referencing a set of photographs from an aerial survey in the ’30s, they added the buildings that had been torn down. “We created all of Manhattan, the shoreline of New Jersey, and the shoreline of Brooklyn in 3D,” says Lemmon. “We built historically accurate, low-res building models in a format that would be sympathetic to a script that 3D CG supervisor Chris White was writing to add architectural elements.”
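The era filter itself is conceptually simple, as this toy version with invented field names shows: keep every surveyed building erected by 1933, then append structures that appear in the 1930s aerial photographs but have since been demolished.

```python
# Hypothetical sketch of the 1933 era filter. Field names are
# invented; the building facts themselves are historical.

modern_survey = [
    {"name": "Empire State Building", "built": 1931},
    {"name": "Chrysler Building",     "built": 1930},
    {"name": "Seagram Building",      "built": 1958},   # too new: culled
]
demolished_since_1933 = [
    {"name": "Hotel Astor", "built": 1904},             # restored from photos
]

city_1933 = [b for b in modern_survey if b["built"] <= 1933]
city_1933 += demolished_since_1933
print([b["name"] for b in city_1933])
# ['Empire State Building', 'Chrysler Building', 'Hotel Astor']
```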
Although modelers constructed such signature buildings as the Empire State Building by hand, most of the buildings were constructed using White’s script, called CityBot (Urban Development Software), or ’bot for short, and a library of historically accurate architectural elements.
“I wrote rules based on the reference photos from the ’30s that told the ’bot what to put where,” says White. The ’bot added appropriate architectural details such as windows, ledges, and doorways, thereby creating the mass of the city. The city planners then populated New York with 3D vehicles and people. Massive Software’s crowd-simulation software managed the vehicular and pedestrian traffic unless the shots required hero animation.
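A rules-based detailer in the spirit of CityBot can be sketched as a lookup from building style to library elements placed per floor and per bay. All the rules and element names below are invented for illustration; White’s actual rules were derived from the ’30s reference photos.

```python
# Toy rules-based facade detailer. Styles, elements, and rule fields
# are invented, not CityBot's.

RULES = {
    "tenement": {"window": "double_hung", "ledge_every": 1, "fire_escape": True},
    "office":   {"window": "steel_sash",  "ledge_every": 4, "fire_escape": False},
}

def detail_building(floors, bays, style):
    """Place library elements on a building according to its style rule."""
    rule = RULES[style]
    elements = []
    for floor in range(floors):
        for bay in range(bays):
            elements.append((floor, bay, rule["window"]))
        if floor % rule["ledge_every"] == 0:
            elements.append((floor, "all", "ledge"))
    if rule["fire_escape"]:
        elements.append(("street", 0, "fire_escape"))
    return elements

print(len(detail_building(floors=6, bays=4, style="tenement")))  # 31 elements
```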
Weta extended the New York City set and built a digital replica of 1933 Manhattan with a rules-based system that handled architectural details and textures.
When the camera was at street level, the crew dressed the sidewalks and alleys with mailboxes, fire hydrants, trash cans, bits of paper, and so forth. For street-level set extensions, they used White’s CityBot with additional rules for such elements as stairwells and fire escapes.
To texture the buildings, 3D lighting TD Michael Baltazar wrote software that made rough guesses for material types based on luminance values in black-and-white photos. To render the resulting 90,000 buildings, 3D sequence lead Jean Matthews created a system to bake buildings into textures. “The ’bot would build a building, and all the details would be rendered into textures for displacement maps and so forth, so the building was rendered with textures rather than 3D geometry,” explains White. “We probably have 400,000 textures.” A procedural weathering system added rain and snow.
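Baltazar’s software is unpublished, but luminance-based material guessing can be illustrated by bucketing a monochrome photograph’s pixel values into rough classes; the thresholds and class names below are purely invented.

```python
import numpy as np

# Hypothetical sketch of luminance-based material guessing: bucket a
# black-and-white photo's values into rough material classes.

def guess_materials(gray):
    """gray: 2D array of luminance in [0, 1] -> material label per pixel."""
    labels = np.full(gray.shape, "brick", dtype=object)    # default guess
    labels[gray < 0.15] = "window_glass"                   # dark openings
    labels[(gray >= 0.15) & (gray <= 0.35)] = "cast_iron"  # mid-dark storefronts
    labels[gray > 0.75] = "limestone"                      # bright facades
    return labels

photo = np.array([[0.1, 0.3],
                  [0.5, 0.9]])
print(guess_materials(photo))
# [['window_glass' 'cast_iron']
#  ['brick' 'limestone']]
```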
Film critic Roger Ebert writes that the sophisticated effects created by Willis O’Brien and others for the 1933 Kong “pointed the way toward the current era of special effects, science fiction, cataclysmic destruction, and nonstop shocks…movies and countless other stories in which heroes are terrified by skillful special effects.”
Letteri believes the work Weta did for Kong could also lead to new types of films. “Creating a title character like Kong, who has such a complex performance without dialog, means we can make creatures that people have not been able to think of before,” he says. “And, if you can build a city like New York, you can build any city, past, present, or future, on any planet. [For Kong] we turned the camera down a street, told the software what kind of neighborhood or architecture to build, and it constructed the city for us. We didn’t have to texture every building by hand.”
Letteri adds, “Having these come together opens up interesting possibilities for what we might be able to do in the future.”
Barbara Robertson is an award-winning journalist and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.