Imageworks uses performance capture and hand animation to create ‘dollhouse’ realism.
Any animator who has spent months sculpting blend shapes or posing IK handles will tell you that the phrase “computer animation” is not only a misnomer for the art form, but a borderline insult. Computers don’t animate anything; people do. One reason for the misconception is that digital characters usually lack the hands-on tangibility that makes stop-motion puppets feel handcrafted and unique. It’s this handmade charm and tactile reality of stop motion that first-time director Gil Kenan, backed by executive producers Steven Spielberg and Robert Zemeckis, wanted to introduce to the CG medium in Sony Pictures Imageworks’ latest feature, Monster House.
Kenan wanted the digital characters to feel as though human hands had labored on them, even if it meant preserving the fingerprints the sculptor left in the clay maquettes from which they were scanned. “I wanted the audience to feel like it could relate to every character in every environment by reaching out and touching them, so that meant devising an entirely new approach to putting together a computer-generated movie,” says the 29-year-old director, who has spent the last four years living every aspiring filmmaker’s dream.
Kenan was fresh out of film school at UCLA when his short movie “The Lark” was noticed by Zemeckis; he and Spielberg had opted against developing Monster House as a live-action film because the anthropomorphic house at the center of the story could only be brought to life through animation. They needed a director who could handle the challenges of animation and of directing the partially motion-captured performances of the CG characters. Kenan’s “The Lark,” which featured a stop-motion bird and rotoscoped live actors performing against 2D animated backgrounds made in Adobe’s After Effects, earned him the job. Not bad for a film shot on DV and edited in Apple’s Final Cut for a mere $400.
Monster House utilizes the same performance-capture system pioneered for 2004’s The Polar Express (see “Locomotion,” December 2004, pg. 16), which lent itself perfectly to capturing the weight and physicality of the characters. Unlike The Polar Express, however, Monster House’s world is far more stylized, blending the childlike elements of Rankin/Bass with the stop-motion work of Ray Harryhausen (Clash of the Titans) to forge a kind of “dollhouse” realism in which characters feel more doll-like than CG creations.
The story follows a boy named DJ (played by Mitchel Musso) who is obsessed with a mysterious house across the street that is owned by the meanest old man in the neighborhood, Horace Nebbercracker (Steve Buscemi). When DJ and his friend Chowder (Sam Lerner) try to recover their basketball from Nebbercracker’s lawn, the old man goes berserk, lifting DJ off the ground before collapsing dead on top of him. That’s when the house comes alive, devouring anyone and anything that comes its way. DJ and Chowder do their best to alert those living nearby, but their warnings fall on deaf ears, namely those of Zee (Maggie Gyllenhaal), the world’s worst baby-sitter, her apathetic headbanger boyfriend Bones (Jason Lee), and two witless police officers. It’s up to DJ, Chowder, and a prep-schooler named Jenny (Spencer Locke) to save the neighborhood.
Stop-Motion CGI
In an era when CG characters can boast millions of volumetric hairs and scenes can be rendered with hundreds of lights, Kenan’s plan to never let the computer’s “inhuman” ability to process data defeat the human connection to the film was audacious, and it had repercussions throughout the production. “In my first conversation with my visual effects supervisor, Jay Redd, we decided to remove motion blur from the entire movie,” says Kenan. “Motion blur has been used as a crutch in CG animation for so long, and what you lose is that amazing Harryhausen staccato effect, where things have a real connection to the ground, and a real weight and gravity to them. I want everything to feel planted and tangible and connected to the world.”
In fact, the choice to turn off motion blur was a direct nod to stop motion, notes Redd. “When you’re seeing every frame sharp, when your brain is registering every pose, every eye shape, every dart of an eye or finger movement, it makes the film feel handmade.”
Forging the film’s hand-labored look began during the modeling phase, where sculptor Leo Rijin fashioned clay maquettes of each character, ranging in size from 12 to 16 inches. Under normal circumstances, a modeler would then work from scans to build one half of the character, duplicate it to the other side, and stitch the halves together. Not here. “That drove me crazy when I first found out that’s how they did it,” says Kenan. “I insisted that both halves of the original clay maquettes be modeled. It was really crucial that all the characters had distinct features and weren’t perfectly symmetrical. If you look closely at DJ, you’ll see that one of his nostrils is a little wonky, and that’s just because [Rijin] had a little accident in sculpting his nose, but it constitutes what we identify as a particular characteristic, and it makes a big difference.”
Modelers gave characters the unique facial structures of the actors voicing them, so the artists could more easily incorporate nuanced facial expressions.
Modeled with polygons, subdivision surfaces, and bits of NURBS, the asymmetrical character geometry also demanded asymmetrical rigging. “We couldn’t just rig one side and flop it over to the other side because the model was different on the other side,” says animation supervisor Troy Saliba. “Each side had to be rigged independently.”
To ensure that the character models manifested the slightest nuances of an actor’s facial performance during the motion capture process, modelers infused the models with the unique facial structure of each actor. For instance, artists gave Mr. Nebbercracker big, walnut shell eyelids, and tailored the eyebrows to match those of Steve Buscemi. They also accentuated the nasolabial furrow so Buscemi’s frequent sneering translated perfectly during motion capture.
Carrying flashlights and armed with Super Soakers, DJ, flanked by his friends Jenny and Chowder, explores the strange happenings inside the Monster House. The characters and everything else in the film look handmade rather than CG-perfect.
Continuing with this stripped-down aesthetic, the filmmakers also avoided hyper-realistic cloth and hair simulation, relying instead on simple geometry for hair and tubular geometry for clothes. “[Redd] and his team came up with simplified hair. It doesn’t move, and you don’t think about it; you shouldn’t think about it. You don’t think about an actor’s hair in a feature; if you are, you aren’t watching the movie,” says Kenan. In the same vein, the team kept the eyes simple and graphic. Adds Redd: “When the eyes become complex, you tend to want more, and then you start to get into shiny skin, subsurface scattering shaders, and all these photo real qualities we weren’t interested in doing.”
Actors and Animators United
The biggest revision to the performance-capture process used for The Polar Express occurred with facial animation. Whereas the character rigs for The Polar Express were geared almost exclusively toward motion capture, Saliba made sure Monster House’s rigs were equally capable of responding to motion capture and hand animation. As a result, the animation is always a blend between actor and animator. In The Polar Express, the motion-captured data from the 150-plus markers on each actor’s face was applied directly to the corresponding patches of facial geometry on the character, shifting the mesh around without any intermediary enhancement. For Monster House, however, the team processed the same facial-marker data through a new proprietary Facial Action Coding System (FACS) muscle system, also used on Superman Returns (see “Leaps Tall Orders”). Developed by Mark Sagar, now with Weta Digital, the system uses Paul Ekman and W.V. Friesen’s catalog of facial expressions based on muscle movements to solve motion-captured face data.
“Imagine it is a library of over 100 facial poses, such as ‘inner eyebrow up’ or ‘outer eyebrow up.’ It harnesses the complete facial range of a person and divides it into individually numbered poses, so that it can choose any number of them, such as 4, 36, 37, and 94, and combine them in different percentages to create a complete facial shape,” explains Saliba.
To set up the system, the team videotaped each person acting out the 100-plus FACS poses, and then hand-keyed a blend shape for each pose to create a FACS library for each actor. Using the motion-captured data, the FACS solver then selects the correct poses and combines them in various percentages to create an expression on a frame-by-frame basis. Once the FACS solver has created these expressions, animators use Imageworks’ proprietary Character Facial System (CFS) to fine-tune them with blend-shape controls.
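To make the arithmetic behind that description concrete, here is a minimal sketch (not Imageworks’ FACS or CFS code; the function name and array shapes are assumptions) of how per-frame pose weights might be combined into a facial shape by summing weighted blend-shape deltas over a neutral mesh:

```python
import numpy as np

def apply_facs_frame(neutral, pose_deltas, frame_weights):
    """Combine FACS-style pose shapes into one facial expression.

    neutral       -- (V, 3) array of rest-pose vertex positions
    pose_deltas   -- dict mapping a pose id (e.g. 4, 36, 94) to a (V, 3)
                     array of offsets from the neutral pose
    frame_weights -- dict mapping a pose id to the 0..1 weight the solver
                     chose for this frame
    """
    face = neutral.copy()
    for pose_id, weight in frame_weights.items():
        # Each activated pose contributes a percentage of its full shape.
        face = face + weight * pose_deltas[pose_id]
    return face

# e.g., 40 percent of an "inner brow up" pose (4) layered with 75 percent of a
# smile-corner pose (12), before any hand adjustment is added on top:
# expression = apply_facs_frame(neutral, facs_library, {4: 0.40, 12: 0.75})
```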
“Simplicity was my mandate going in because I knew we were doing a lot of editing and animation to the motion capture. I wanted to have rigs that were geared toward animation and not set up so that they would only work with motion capture,” says Saliba. “That was one of the problems they ran into on The Polar Express, and I didn’t want to be in that position because I knew our film was more stylized, meaning the animation department was going to be leaving an indelible fingerprint on top of the motion capture.”
As the FACS solver accessed the various poses and set values on them, it didn’t touch the CFS rig, giving artists two layers of control over the motion-captured performances. (They could also key frame the FACS poses manually if a performance failed to capture properly or the director wanted something different.) “We can use these FACS poses as a foundation to build our animation on. For instance, instead of trying to combine 11 or 12 different muscle blend shapes to create a smile [with our CFS], we can go into the FACS poses and find the smile shape and start with that, using the CFS to make it more organic,” explains Saliba.
While the animators were aiming for a puppet-like feel in the animation, they utilized proprietary sculpt deformers, known as Tweak Clusters, to give the body animation a touch of squash and stretch, and a more graphic look. In all, animators had 11 types of Tweak Clusters in their arsenal: Some incorporated lattice deformers, while others added or subtracted volume, giving the portly Officer Landers, for example, a wobbling belly.
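As a rough illustration of what such a sculpt-deformer layer does (a generic sketch with invented names, not the actual Tweak Cluster implementation), a volume-preserving squash and stretch can be expressed as scaling points along one axis while compensating across it:

```python
import numpy as np

def squash_stretch(points, pivot, axis, stretch):
    """Roughly volume-preserving squash/stretch about a pivot.

    points  -- (V, 3) vertex positions
    pivot   -- (3,) point the deformation is centered on
    axis    -- (3,) direction to stretch along (e.g. the belly's up axis)
    stretch -- values > 1 stretch along the axis, values < 1 squash
    """
    pivot = np.asarray(pivot, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    local = np.asarray(points, dtype=float) - pivot
    along = local @ axis                      # component along the axis
    across = local - np.outer(along, axis)    # component across the axis
    # Scale along the axis; widen or narrow across it to keep bulk roughly constant.
    return pivot + np.outer(along * stretch, axis) + across / np.sqrt(stretch)
```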
Motion Capture
Using 200 Vicon mocap cameras, Imageworks captured the faces and bodies of the actors as they performed on a 20x20x16-foot stage—nearly double the stage volume used for The Polar Express. “We had only one stage this time [as opposed to the three used for The Polar Express], and one of our goals was to capture within a larger space both face and body data simultaneously for a longer amount of time, so we didn’t have to break up the motion. To do that, we needed a larger stage, because characters are always running across the street or climbing stairs in our film,” says Redd.
Breaking with standard practice, artists modeled both sides of the maquette, rather than building only half and mirroring it to the other side. This gave the characters distinctive features, but required separate rigging for each side of the character, as well.
The body performances were analyzed and mapped to the character skeletons using Autodesk’s MotionBuilder, and output as an Autodesk Maya file for animation. Kenan also used six video cameras to record the scenes, which he edited together to create a live-action previz to make sure the story was working on a purely character level. Animators also used the video footage as reference as they shaped and sculpted the motion-captured data. Unlike those of The Polar Express, the finished performances for Monster House, Kenan says, were a completely organic collaboration between actor and animator. “It became such a stew between keyframed and motion-captured animation that it’s almost impossible to discern between the two while watching the film,” he says.
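That blend can be pictured as additive animation layering: the solved mocap curve forms the base, and the animator’s keyed offsets ride on top of it. A toy sketch (the names and the simple additive model are assumptions, not the studio’s pipeline):

```python
def blended_channel(mocap_samples, key_offsets, blend=1.0):
    """Layer hand-keyed offsets on top of a motion-captured channel.

    mocap_samples -- per-frame values solved from the capture
                     (e.g. a joint's rotateX curve)
    key_offsets   -- per-frame additive offsets keyed by the animator
                     (0.0 wherever the capture was left untouched)
    blend         -- how much of the animation layer to apply
    """
    return [m + blend * k for m, k in zip(mocap_samples, key_offsets)]
```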
On the mocap stage, wire-mesh and foam-core props, which were invisible to the infrared cameras, represented the various sets, including the exteriors and interiors of DJ’s home and the Monster House. Of course, the greatest challenge with the motion-capture process itself lies in correcting eyeline problems and other proportional discrepancies between the actors and their digital characters. “We were very conscious at the beginning about casting actors with proportions similar to those of their digital characters,” says Redd. “So we cast kids as the kids. Maggie Gyllenhaal is very similar in height to Zee, Jason Lee is the same size as Bones, and so a lot of our eyelines usually worked very well.”
According to Redd, the kids running across the street tended to be the hardest animations to capture because it was difficult to endow their strides with a believable sense of weight. “We would capture five or six volumes at a time of the kids running across the 20x20-foot volume, and then edit them together. Capturing scenes involving four or five characters in the volume was also challenging. When the cops show up and the kids crowd around the car, those are the hardest to deal with because the actors are all in close proximity, and the cameras can’t see through them because they’re optical-based. Most times we’d employ [keyframe] animation to solve it.”
The set of images at the top of the page depicts a model of the Monster House in its pristine condition (first) and during a stage of disruption (second). The image to the right shows the hundreds of controls on the house that allowed the animators access to every little detail. Below is that same model as it appeared in the film.
Once possessed, Nebbercracker’s house becomes a character in itself, and had to be every bit as emotive and expressive as the humans. The crew videotaped Kathleen Turner, who plays the house, rampaging through foam-core props on the stage. While the animators took cues from her performance, the entire house was keyframed. Artists rigged the house with more than 40,000 controls (mostly IK), including base controls that could torque the overall shape of the house, and finer controls for moving, rotating, stretching, and breaking every plank, shingle, stairway, railing, siding, brick, and floorboard, and even the trees on the front lawn. The house has four specifically rigged states: calm, slightly broken down, articulate, and uprooted with tree arms. “We had to plan out how the house would perform, the faces it could make, the emotions it needed to convey,” says Redd. “This resulted in many meetings and story sessions, determining exactly which boards had to break, which could bend, where the joints were, how the windows would twist, how the gutters, steps, and bricks would react, and so forth.”
Autumn in the Air
After the animations were completed and the camera movements were blocked out in Maya, Kenan, director of photography Xavier Perez Grobet, and camera operator Paul Babin shot the scenes virtually using Wheels, a virtual camera system developed at Imageworks for The Polar Express. Standing on the mocap stage, the filmmakers shot the scenes as if they were filming live action, using a camera head as an input device to control the virtual camera in MotionBuilder. Turning the wheels on the camera head, they could control the pan, tilt, and roll just as they could with a real camera.
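Conceptually, each wheel is just an input device that accumulates rotation on one axis of the CG camera. The class below is a toy model of that mapping, not the actual Wheels software; the name and the degrees-per-tick scale are invented for illustration:

```python
class WheelsCamera:
    """Toy model of a wheel-driven virtual camera head."""

    def __init__(self, degrees_per_tick=0.05):
        self.degrees_per_tick = degrees_per_tick
        self.pan = 0.0    # rotation about the vertical axis
        self.tilt = 0.0   # rotation about the horizontal axis
        self.roll = 0.0   # rotation about the lens axis

    def turn(self, pan_ticks=0, tilt_ticks=0, roll_ticks=0):
        # Each wheel tick nudges one rotation channel, so the virtual camera
        # inherits the operator's timing, hesitations, and overshoots.
        self.pan += pan_ticks * self.degrees_per_tick
        self.tilt += tilt_ticks * self.degrees_per_tick
        self.roll += roll_ticks * self.degrees_per_tick
        return self.pan, self.tilt, self.roll
```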
Autumn is more than just a season in the movie; it played a crucial role in setting a mood, with its subdued sunlight and the nearly bare trees that look as if they could reach out and grab someone.
Because so much of the film involves children running across streets and through houses, Imageworks also developed a shoulder-mounted steadicam for Monster House. This allowed the filmmakers to add a more human, handheld feel to the camera movements, better capturing the urgency and emotional charge of a scene. “Monster House has tons of action, in addition to being a very scary film, so we needed a handheld camera to add tension to its movement,” says Redd. “It all goes back to our mandate of making the film feel handmade. We wanted the camera to have the little quirks and pops that humans give it.”
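One generic way to fake that handheld quality on a purely CG camera (offered only as a sketch, not Imageworks’ rig, which derives the motion from a real operator) is to layer a smoothed random walk onto the rotation channels:

```python
import random

def handheld_jitter(frames, amplitude=0.15, drift=0.9, seed=7):
    """Per-frame rotational wobble (in degrees) for a handheld feel.

    A smoothed random walk: each frame keeps most of the previous offset
    (drift) and adds a small random nudge, giving low-frequency sway plus
    the tiny pops of a shoulder-mounted camera.
    """
    rng = random.Random(seed)
    value, curve = 0.0, []
    for _ in range(frames):
        value = drift * value + rng.uniform(-amplitude, amplitude)
        curve.append(value)
    return curve
```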
The autumnal atmosphere of Halloween was also a crucial character in the film, with the pale sky of day, the blazing orange sun at dusk, the deep blue shadows of night, the naked, skeletal trees that grasp like talons for the kids, and the flurry of leaves swirling across the ground. “From a dramatic storytelling point of view, we look for what creates the best mood. The sky is very blue in the fall, the sun never gets very high, and the shadows are always very long, so those are the cues people will pick up on,” says Redd. “The film unfolds over a day and a half, jumping from 2 pm to 7 pm in successive scenes. Using angle of light and color of shadow, we could tell the audience what time it was without throwing up subtitles.”
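As a rough illustration of how sun angle alone can read as time of day, the sketch below (with invented sunrise and sunset times and a deliberately low autumn peak elevation) maps an hour to a sun elevation and a relative shadow length:

```python
import math

def fall_sun(hour, max_elevation_deg=35.0):
    """Rough sun elevation (degrees) and relative shadow length for a fall day.

    Assumes sunrise at 7:00 and sunset at 18:00, with the sun peaking at a
    low autumn elevation so shadows stay long; all numbers are illustrative.
    """
    sunrise, sunset = 7.0, 18.0
    t = max(0.0, min(1.0, (hour - sunrise) / (sunset - sunrise)))  # 0..1 across the day
    elevation = max_elevation_deg * math.sin(math.pi * t)
    # Shadow length of a unit-height object; effectively infinite at night.
    shadow = float("inf") if elevation <= 0.0 else 1.0 / math.tan(math.radians(elevation))
    return elevation, shadow

# print(fall_sun(14))  # early afternoon: higher sun, shorter shadows
# print(fall_sun(17))  # near dusk: low sun, very long shadows
```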
To make sure the characters and environments reflected this atmospheric lighting, Imageworks developed new global illumination, radiosity, and raytracing software to light the movie as if it were shot on a practical, live-action stage. The software combines refraction, reflection, indirect diffusion schemes, flags, and complex bounce lighting to produce the kind of photorealism a dollhouse might have, even without being completely “real.”
“Bounce light is perhaps the single most important visual component, next to control of shadow color and length,” explains Redd. “After all, we are making a scary movie. It was the key to super-blue shadows, and making this film feel handmade. It’s important to show how the color of the carpet or the wall in DJ’s room affects his skin color in order to get that tangible feeling.”
For the characters in Monster House, Imageworks used its proprietary FACS muscle system, which draws on a catalog of facial expressions based on muscle movements to solve mocap face data.
Artists also strategically placed lights in Maya so they would only cast shadows from certain objects. For instance, really long shadows are cast by the trees but not by the kids, so the tree shadows loom ominously over the children as if they’re about to attack. Astonishingly, the team used only one light in most scenes—the sun—and then arranged bounce cards around the characters, mimicking the way a live-action movie is shot. “I took my lighters down to a small stage, brought in a couple of grips and a camera operator, and threw lights on miniature sets, fogged the scene, and got them thinking about how to transcend a cliché night or day look,” says Redd. “I wanted them to tell the story with color, time of day, angles of sun and shadow, atmosphere, and bounce cards.”
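In Maya, that kind of selective shadowing can be handled per object through the shape node’s render stats. The snippet below is a minimal sketch with hypothetical node names, not the production setup:

```python
import maya.cmds as cmds

def limit_shadow_casters(shadow_casters, non_casters):
    """Toggle the per-shape 'Casts Shadows' render stat."""
    for shape in shadow_casters:
        cmds.setAttr(shape + ".castsShadows", 1)   # e.g. the leafless trees
    for shape in non_casters:
        cmds.setAttr(shape + ".castsShadows", 0)   # e.g. the kids

# Hypothetical node names, for illustration only:
# limit_shadow_casters(["oakTreeShape1"], ["djShape", "chowderShape"])
```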
When the kids make their way inside the Monster House, they use volumetric flashlights to illuminate their way. The filmmakers, however, took care not to let the lights reveal too much, so as not to give away any “secrets.”
Once the kids infiltrate the house, they wield volumetric flashlights to illuminate the structure’s interior. Because the story demanded tight control over what was revealed to the audience, the artists used dust clouds, cuculoris, and barn doors to block parts of the set from view. For the plumes of dust cast up by the kids’ footfalls, which thicken the light inside the creaky old house, the artists used Imageworks’ proprietary sprite renderer, called SPLAT (Sony Pictures Layered Art Technology). SPLAT also generated the fluid effects for the Super Soakers the kids carry inside.
All the particle simulations were first done in Side Effects Software’s Houdini or Maya, and then rendered by SPLAT using specific sprites. Because the film is teeming with destruction effects, the filmmakers needed a way to generate massive dust clouds and volumetric effects, yet maintain fine control over their scale and detail. “SPLAT allows us to dial in looks and styles quickly and see the results rapidly, without having to calculate tons of volumetric information,” says Redd. “We can create ‘volumes’ with the types and number of sprites that we use. There are many variables that we can tweak to get a specific ‘look’ for Monster House’s dust clouds, fire, and so forth.”
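Sprite-based volume rendering of this kind boils down to sorting camera-facing cards back to front and compositing them with the standard “over” operation. A per-pixel sketch of that accumulation (a generic illustration, not SPLAT’s actual algorithm):

```python
def composite_sprites(sprites):
    """Accumulate camera-facing sprites into one pixel, painter-style.

    sprites -- list of (depth, (r, g, b), alpha) tuples, one per sprite
               sample covering this pixel, with alpha in 0..1.
    Returns the composited (r, g, b, a): sort back to front, then apply
    the standard "over" operation repeatedly.
    """
    r = g = b = a = 0.0
    for _depth, (sr, sg, sb), sa in sorted(sprites, key=lambda s: -s[0]):
        r = sr * sa + r * (1.0 - sa)
        g = sg * sa + g * (1.0 - sa)
        b = sb * sa + b * (1.0 - sa)
        a = sa + a * (1.0 - sa)
    return r, g, b, a
```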
Strategically placed lights inside Maya cast shadows from specific objects—the tall, leafless trees extend long shadows while the children cast none—providing an ominous feel.
Aside from the destruction clouds, another big effects challenge was the ubiquitous ‘Pigpen’ dust that settles around the Monster House. Each cloud had to be individually sculpted for every shot in order to hide or reveal the particular action of the house. “I worked with our effects team to plot out the location, speed, height, and width of the dust clouds from shot to shot,” says Redd. “Continuity was very important.”
Landscaping
The film climaxes with the emergence of the “Constance Ghost,” an elegant and graceful apparition—encircled dramatically in a 360-degree camera move—that had to be recognizable as a person. To this end, the artists created keyframed animations as a guide for the simulations, producing dozens of wisps, tendrils, and ringlets of spectral smoke that were rendered and lit, and ultimately coalesced into the finished apparition via Imageworks’ proprietary compositing software, Bonzai.
In addition to Bonzai, the artists also used Adobe’s Photoshop and Maxon Computer’s Cinema 4D to create the many digital matte paintings that furnish the shots with background trees, the sky, and deep vistas. Nevertheless, there are almost no static backgrounds in Monster House. The environment is perpetually alive with moving bushes and leaves gently blowing and scattering across the ground. For leaves, grass, rocks, and bricks, Imageworks used Houdini and exported the surfaces into the studio’s internal geometry format for dynamic simulation or effects.
The house, once it is possessed, transforms from a structure to a main character, and is every bit as emotive and expressive as DJ and the other CG actors.
For surfacing the characters and environments, texture lead Dennis Bredow and his team used Maxon’s BodyPaint. “We used it for almost everything on the movie, including the myriad layers of dirt, decay, color fade, scratches, dents, dings, scrapes, and so forth,” says Redd. “We spent a lot of time on the character makeup; we coordinated costumes with the color of each character’s face, the rosy qualities of their cheeks, freckles, marks, bumps, and more. [The texturing process] even involved revealing fingerprints on the characters’ skin. It adds another layer of doll-like realism to the whole world without trying to create real, living flesh. Our characters get beat up and dirty over time, so we had to map out when, where, and how each character would become dirtier and dirtier. Lots of tricks were used with alpha channels and layered shaders to control the continuity.”
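The layered-shader approach Redd describes amounts to compositing grime passes over a clean base through alpha masks that can be dialed up from shot to shot. A minimal sketch (the array shapes and names are assumptions, for illustration only):

```python
import numpy as np

def layer_grime(base_color, layers):
    """Composite dirt and decay layers over a base texture using alpha masks.

    base_color -- (H, W, 3) float array, the clean painted texture
    layers     -- list of (color, alpha) pairs, each (H, W, 3) and (H, W),
                  ordered bottom to top; raising an alpha over successive
                  shots makes a character read as progressively dirtier
    """
    out = base_color.copy()
    for color, alpha in layers:
        a = alpha[..., None]          # broadcast the mask across RGB
        out = color * a + out * (1.0 - a)
    return out
```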
To give depth and realism to surfaces for extreme close-ups, the artists added procedural bump and displacement maps to some of the painted textures. Finally, to replace the perfections of CGI with the living chemistry of film stock, the team added film grain to every shot and created a diffusion filter to soften the highlights.
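A generic version of that finishing pass, with film grain plus a crude highlight diffusion and parameter values chosen purely for illustration, might look like this:

```python
import numpy as np

def filmic_finish(image, grain_amount=0.02, threshold=0.8, mix=0.25, seed=0):
    """Add film-style grain and soften the highlights of a rendered frame.

    image -- (H, W, 3) float array in 0..1. A cheap 3x3 box blur stands in
             for the diffusion filter; the production filter is not documented here.
    """
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, grain_amount, image.shape)

    # Isolate everything above the highlight threshold and blur it slightly.
    highlights = np.clip(image - threshold, 0.0, None)
    for axis in (0, 1):
        highlights = (np.roll(highlights, 1, axis) + highlights
                      + np.roll(highlights, -1, axis)) / 3.0

    return np.clip(image + grain + mix * highlights, 0.0, 1.0)
```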
The CG artists avoided giving the characters hyper-realistic hair and clothing, and instead used simple geometry for the hair and tubular geometry for the clothing. This worked well with the movie’s stripped-down aesthetic.
Monster House represents a significant step forward in the evolution of the motion-capture process pioneered on The Polar Express, primarily by giving animators far greater control over the actors’ performances. “It is turning into a new kind of hybrid medium,” says Redd. “Motion capture is really an immaculate reference for the animators. The analogy I use is to Disney’s use of filmed reference for the dances in Snow White. No one would say Snow White is a terribly animated film. For us, the motion capture is the DNA, the substance of a performance.”
In a summer flooded with CG features, each competing for technical supremacy, Monster House also steps off the beaten digital path to assert, not hide, the authorship of the human hands behind it. “I wanted to make a film that didn’t have the signatures of CG,” adds Redd. “No film is computer-generated, and we wanted our film to show that.”
Martin McEachern is an award-winning journalist and contributing editor for Computer Graphics World. He can be reached at martin@globility.com.
Building a Career
In an interview with contributing editor Martin McEachern, Monster House director Gil Kenan shares his fairy-tale journey from a $400 student film to a multimillion-dollar feature film with two of the industry’s biggest names: Robert Zemeckis and Steven Spielberg.
How did you become the director of Monster House?
I won an award at UCLA with my student film “The Lark,” so it was screened at the Directors Guild of America. It was my first screening outside of my apartment! One fateful day, my movie made it onto the desk of Robert Zemeckis. He saw it, liked it, and told me about Monster House, which he’d been developing with Steven Spielberg for about eight years.
Were you aiming for a stop-motion aesthetic from the outset?
That's definitely something I was going for, mostly because traditional CG work is not really appealing to me as a filmmaker. I've had a really difficult time with CG. That's why for my student films, I avoided Maya like the plague, because you can fancy things up all you want, but at the end of the day, it all ends up feeling similar. It doesn't have a tangible feel.
How did you imbue the characters with that tangible feel?
In my first conversation with effects supervisor Jay Redd (who's a master at texturing), we talked about wanting to feel the fingerprints of the sculptor on all our characters' faces. And that's all there, sometimes preserved right from the source scans. Individually, you wouldn't notice all those human ‘touches,' but the point is that when you add them all together, it makes a human connection between the film and the audience, and that for me is what's been lacking in CG films. It's the idea that it's not all computer-processed; it took humans to make every model, every environment, and every character. And because the movie stars humans, I wanted that sense of humanity to exude from every frame.
How did you anthropomorphize the house?
I started out by casting the house just like I cast the kids. I went to my production designer, Ed Verreaux (who was Spielberg's storyboard artist at the beginning of his career, and the first person to draw E.T. and design Elliot's bedroom), and together we drove around Los Angeles, Pasadena, and Glendale, taking pictures of houses that would fit the character of our house. Then we reassembled them on a desk, and took one window from one house, a door from another, a porch from the next, and kind of built the face, an ideal facade for this house. The first night I read the script, I just drew like crazy, making a bunch of drawings to show Zemeckis what I wanted the house to look like, and those original drawings remained very similar to the finished look of the house.
The houses seem to have the artificiality of a studio backlot. Is that something you were aiming for?
Yeah. I really wanted this movie to take place in an idealized, movie-style suburban reality. I was always a big fan of the studio backlot suburbs. For me, that always defined a certain feeling and emotion. In fact, on my second house hunt with Ed, we commandeered a golf cart and let loose on the Universal backlot. We drove down that street from every awesome suburban movie ever made, taking pictures and just getting the feeling of it.
Is that where The ‘Burbs was shot?
Absolutely! It's also Wisteria Lane on Desperate Housewives. I think there's something magical about it. We also ended up at the Psycho house. I jumped out of the golf cart, took a chance at trying the door, and found it miraculously unlocked. I went inside, started goofing about and rolling around on the floors. I couldn’t believe I was actually in the Psycho house! The funniest part is when I came out: There was a tour group driving by in a tram, and they freaked out because just as they were passing by, there was this weirdo with big eyebrows jumping out of the door.
Did Wheels, Imageworks' virtual camera system, help you capture the emotion of a scene better than you could by animating the camera in Maya?
Well, there's one huge flaw with traditional CG animation, and it has driven me crazy since the early days, and that’s the weightlessness of the camera. As a filmmaker, it has always been the most frustrating thing, because I feel like the weight of the camera is not just an aesthetic thing, it's an emotional thing; there's a gravity to it and a connection to the actors and the scene and the story. It was really important that I shoot this film with the philosophy of a live-action film, which is the philosophy of narrative filmmaking as it's been learned during the last 100 years—but still not forgetting that when it’s absolutely necessary for the story, I can go nuts and break all those laws of narrative filmmaking.
How did you give the camera this connection to the actors?
The first thing I did was hire the least technical, the least computer-knowledgeable cinematographer I could find, someone who made films purely emotionally without any artifice, and that was Xavier Perez Grobet, who shot Before Night Falls. That film is purely emotional filmmaking and cinematography. He came in without knowing anything about Wheels or mocap; all he knew was traditional filmmaking. It was a really good marriage between us because I felt comfortable in 3D space, and so together we could fulfill our goal of shooting the film with a real sense of weight and gravity. We had four camera operators at Imageworks who worked with us for five months to place the cameras and get all the coverage. I then went back to my cutting room and edited all that coverage into the finished cut.
Are there differences between directing actors for live action and directing them for motion-captured performances?
You have to work harder to help them imagine the world, but as soon as the actors were able to embrace the theatrical nature of mocap performance, they were able to imagine themselves purely in character. I find that when you strip away a lot of the stuff—take away costumes, wigs, sets, and props—the first couple of days can be scary because you've taken away their safety net, but shortly thereafter, a kind of transformation takes place where the actors become purely concentrated on character. And in many ways, you get a really heightened sense of character.
Have you heard of a Danish director who made a film called Dogville by doing that exact same thing—stripping his actors of sets and elaborate costumes?
Yeah, Lars von Trier. And that's the perfect analogy. It's like black-box theater. Focusing on character helped me a lot in bridging the gap between the mocap performance and the finished animation. I worked with all the actors to get a sense of heightened physicality in the performances and mannerisms, making sure they were a bit broader than normal because I knew it would take that extra 5 to 10 percent to translate to the digital characters.
What advancements would you like to see in the motion-captured filmmaking process?
As these movies become technologically advanced, there’s this strange kind of show-off race that’s been happening, and most of it is about how many lights they can fit into a scene, how many amazing hair simulations they can stick on a character, and I feel that's wrong. Every movie should create its own technology. For instance, global illumination was extremely important to our film, to give our [deliberately artificial world] a real sense of existence, so we developed new technology for that. A film should define its own technology and not vice versa. That’s where you get into trouble. I feel like mocap is a tool to serve the story on this film and nothing more. That's where I want to see it go, to become something that's not talked about and just appreciated by an audience for facilitating a great, communicating performance.