The Three Laws of Robotics
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Fifty years ago, Isaac Asimov's collection of nine stories about evolving generations of robots and mechanical men was published in book form as I, Robot. One of those stories, "Runaround," introduced the Three Laws of Robotics, which Asimov used to present intellectual puzzles, as have many books and movies since, including the 20th Century Fox film I, Robot, scheduled for release this month.
Although "three laws" puzzles are at the core of this film, director Alex Proyas gives moviegoers looking for more than brain candy plenty of action scenes as well. Based on Asimov's book,
I, Robot stars Will Smith as Del Spooner, a technophobic cop investigating the murder of Dr. Miles Hogenmiller, a scientist at U.S. Robotics (USR). Could Sonny, a new model NS-5 robot, have violated the first law? The answer is not simple and not simply intellectual. Set in Chicago, 2035, the film features a robot war between two generations of mechanical men and scenes of Spooner fending off truckloads of robots.
Image: The new, translucent NS-5 robots created at Digital Domain look identical coming out of the factory, with one exception: the lead character, Sonny, has blue eyes.
The production and robot designs by Patrick Tatopoulos, who designed the visually stunning Dark City (also directed by Proyas), brought into sharp focus one of the first laws of digital visual effects: An effects studio can never have too much compute power. Indeed, consider the processing capabilities required to create and render CG buildings of stainless steel, glass, and white marble, populated with animated robots whose mechanical parts sit inside translucent shells that reflect and refract the bright, luminous environment.
All told, five studios worked on 1000 visual effects shots, but of those, two studios, Digital Domain and Weta Digital, did the heavy lifting—the shots with 3D characters and digital environments. "We used CG throughout," says John Nelson, visual effects supervisor, "from the inception of the movie, to conceptual art, all the way to final renderings. We had many scenes that were completely CG."
Image Engine handled previz before and during location filming in Vancouver, BC; Pixel Liberation Front took over when the crew moved back to Los Angeles. "We prevized all the big action beats," says Nelson, who won an Oscar for Gladiator. "We have huge action sequences. So I was like a short-order cook, designing sequences by cutting the previz on the Avid. We almost had I, Robot, the animated version." Nelson gave all shots with Sonny and most shots with the other NS-5 robots to Digital Domain; Weta Digital handled set extensions, shots with the NS-4s, and "anything big and wide."
For Sonny, Proyas had two requirements, one physical, one emotional. "Alex wanted Sonny designed so he couldn't be a guy in a suit," says Nelson. Thus, Sonny is translucent and has a waist only one vertebra wide. "And he wanted Sonny to be very emotional so you would connect," Nelson adds. "Cinematic robots tend to be foolish or cold. So, even though we had a cast of thousands and huge set extensions to do, job one was creating an emotional performance for Sonny."
To help create that emotional performance, actor Alan Tudyk, wearing a green suit, acted alongside Will Smith during filming. Later, his performance was motion-captured. "Alan was our Andy Serkis," says Andy Jones, animation director at Digital Domain, referring to the actor who played Gollum in the Lord of the Rings trilogy.
In addition to Tudyk, Proyas directed other actors playing NS-5s on set. For these secondary robots, Digital Domain motion-captured one actor and various stunt people and then created performances to match the green-suited actors. "For each shot, we had five setups," says Nelson. "Rehearse with the proxies [green-suited actors], shoot with the proxies, shoot with lead actors acting to air, shoot the clean plate, and shoot a reference pass." In the latter setup, a stand-in robot plus gray and chrome balls served as CG lighting references. The crew also shot multiple exposures for high dynamic range images (HDRI); Digital Domain built a compact, portable, fish-eye motion-control head, dubbed "Robotile," to capture them.
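For readers curious about the HDRI step, the sketch below shows one common way bracketed exposures can be merged into a single radiance map. It is a generic illustration with made-up shutter times and a simple hat-shaped weighting function, not Digital Domain's actual pipeline.

```python
# Minimal sketch: merging bracketed exposures into an HDR radiance map.
# Illustrates the general HDRI technique the article describes; the
# shutter times and weighting are hypothetical assumptions.
import numpy as np

def merge_exposures(images, shutter_times):
    """Merge linearized exposures (float arrays in [0, 1]) into radiance.

    Each pixel's radiance estimate is a weighted average of value/time,
    with a 'hat' weight that trusts mid-range pixels most and ignores
    clipped highlights and noisy shadows.
    """
    radiance = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(radiance)
    for img, t in zip(images, shutter_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weight, peak at mid-gray
        radiance += w * img / t             # pixel value / exposure time
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-6)

# Usage: three synthetic exposures of the same scene, about a stop apart.
scene = np.random.rand(4, 4)                        # 'true' radiance
times = [1 / 250, 1 / 125, 1 / 60]                  # shutter speeds
shots = [np.clip(scene * t * 250, 0, 1) for t in times]
hdr = merge_exposures(shots, times)
```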
Image: Bloodied digital robots attack Will Smith's digital Audi in an all-CG sequence created at Weta Digital.
For motion capture, Digital Domain used equipment and software from Motion Analysis; for animation, Alias's Maya. "Motion Analysis handled retargeting of the data onto a moving skeleton, and we built a Maya plug-in that transfers that onto our puppet— an FK/IK switching skeleton that animators use," says Jones. "Animators often hate mocap because it's all FK. But if you take the FK out of the joints, you end up with smooth, spongy movements. Our plug-in transferred the data and kept the hardness, the impact from the motion capture."
Jones continues: "We have a series of controls that animators can use to rotate the feet, hands, shoulders, and head that reduce the number of curves animators have to deal with, and this speeds things up. The plug-in baked the mocap data onto these controls."
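The sketch below illustrates the general idea in Maya terms: constrain a reduced set of animator controls to the mocap skeleton, bake the result to keyframes, and discard the constraints. All node names are hypothetical, and Digital Domain's actual plug-in was proprietary; this must run inside Maya.

```python
# Hypothetical sketch of baking mocap joint data onto a reduced set of
# animator controls, in the spirit of the plug-in Jones describes.
# Joint and control names are invented for illustration.
import maya.cmds as cmds

CONTROLS = {  # mocap joint -> animator control (hypothetical names)
    "mocap_hand_L": "ctrl_hand_L",
    "mocap_foot_L": "ctrl_foot_L",
    "mocap_head":   "ctrl_head",
}

def bake_mocap_to_controls(start, end):
    constraints = []
    for joint, ctrl in CONTROLS.items():
        # Drive each control from its mocap joint...
        constraints.append(cmds.parentConstraint(joint, ctrl)[0])
    # ...then bake the motion to keyframes on the controls, so animators
    # can edit fewer curves while keeping the captured impacts.
    cmds.bakeResults(list(CONTROLS.values()), time=(start, end),
                     simulation=True, preserveOutsideKeys=True)
    cmds.delete(constraints)   # remove the temporary constraints

bake_mocap_to_controls(1, 240)
```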
One thing the group learned was that the robot needed a new shoulder joint. Originally designed as a simple ball joint, it limited Sonny's movement. "When Sonny shoots a gun, we used his shoulder to give him a lock-and-load feeling," says Jones. They also played with his spine, twisting Sonny 180 degrees at his waist to reinforce the fact that he's a machine, not a man.
Image: Sonny's body was animated using mocap data from actor Alan Tudyk's on-set performance. The robot's intentionally bland face made it difficult for animators to give the robot an emotional performance, but the lighting crew helped with specular highlights.
For Sonny's facial animation, Digital Domain animators used a combination of shape-blending tools and clusters to pull points as they emulated video reference of Tudyk delivering lines of dialog. Because Tudyk's face didn't match Sonny's, the animators had to interpret Tudyk's performance. Also, the robot's design made creating an emotional performance difficult. "Erick Miller wrote tools that allowed us to blend with spline interpolation, which created a more organic curve movement on the skin," Jones says. "We couldn't put too many wrinkles in his face because they wanted an innocent baby look."
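To see why spline interpolation reads as more organic than linear blending, consider the toy comparison below, which runs a Catmull-Rom curve through a set of blend-shape weight keys. The keys and the choice of Catmull-Rom are illustrative assumptions, not a description of Miller's tools.

```python
# Sketch: linear vs. spline interpolation of blend-shape weights.
# A spline eases through the keys, avoiding the mechanical look of
# straight-line blending between facial poses.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline segment between p1 and p2, 0 <= t <= 1."""
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Hypothetical weight keys for a 'smile' shape over four keyframes.
keys = [0.0, 0.2, 0.9, 1.0]
for t in np.linspace(0, 1, 5):
    linear = keys[1] + (keys[2] - keys[1]) * t           # straight blend
    spline = catmull_rom(keys[0], keys[1], keys[2], keys[3], t)
    print(f"t={t:.2f}  linear={linear:.3f}  spline={spline:.3f}")
```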
Thus, bringing Sonny to life became a dance between animation and light. "He was difficult to light without making his expressions disappear," says Jones. "He had a physical eyebrow, but because there was no dark line for the shape, we were always moving the specular hit to form the brow."
Erik Nash, visual effects supervisor at Digital Domain, managed the lighting. "Andy [Jones] was the keeper of the emotion," says Nash. "I was the keeper of realism. A lot of the time the two goals would be in direct opposition." Adding to the mix was the need to see the machine man's internal mechanisms while keeping enough opacity in the face to allow subtle nuances of expression.
Most of Digital Domain's 520 shots included Sonny; some had thousands of his NS-5 look-alikes. The studio worked for 14 months to create them. "I asked the crew if it was the most complex CG creation they'd worked on, and they all said absolutely," Nash says.
For flexibility in creating the NS-5 look, the team opted for rendering the robots in many layers that were composited in D2 Software's Nuke, developed at Digital Domain. "Other than baking in the key-light direction and the ratio of key light to fill light, all the aspects of how the robot looked were adjustable in compositing," Nash says. "If we had tried to render the robots with a final look, we would never have finished."
The metallic understructure of the robot and the translucent shell each had diffuse, specular, and reflection layers. In addition, the shell had a Fresnel layer that told the compositing software the angle of the surface to the camera. "The optical properties of the shell were very complicated because of the refraction, reflection, and diffusion, so we had to find clever ways to get around doing real raytracing and refraction calculations," says Nash.
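A Fresnel layer of the kind described can be approximated per pixel with Schlick's formula, as in the sketch below; the normal-incidence reflectance F0 is an assumed value, and the layout of the inputs is hypothetical.

```python
# Minimal sketch of a Fresnel pass: per pixel, how much the shell
# faces the camera, via Schlick's approximation (1994).
import numpy as np

def fresnel_pass(normals, view_dir, f0=0.04):
    """normals: (H, W, 3) unit surface normals; view_dir: (3,) unit
    vector toward the camera. Returns (H, W) reflectance weights."""
    cos_theta = np.clip(np.tensordot(normals, view_dir, axes=([2], [0])),
                        0.0, 1.0)
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A glancing surface reflects far more than a camera-facing one:
n = np.array([[[0.0, 0.0, 1.0],      # facing the camera
               [0.98, 0.0, 0.2]]])   # nearly edge-on
print(fresnel_pass(n, np.array([0.0, 0.0, 1.0])))
```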
The robot layers were rendered with Pixar's RenderMan in the OpenEXR format, whose extended range of color values preserved detail in both the bright and the dark areas. "We wrote a special plug-in for Nuke, which we dubbed 'Make Bot,' that handled the 40 or 50 layers that make up each robot," says Nash.
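The sketch below suggests how a "Make Bot"-style recombine might work in principle: sum the layers in floating point with per-layer gains, modulating reflection by the Fresnel pass. The layer names and gains are invented; the real plug-in and its 40 to 50 layers are proprietary to Digital Domain.

```python
# Hedged sketch of recombining rendered robot layers with adjustable
# gains, in the spirit of the 'Make Bot' workflow described.
import numpy as np

H, W = 4, 4  # stand-ins for rendered EXR layers (float, unclipped)
layers = {name: np.random.rand(H, W, 3) * 2.0    # values may exceed 1.0
          for name in ("shell_diffuse", "shell_specular",
                       "shell_reflection", "frame_diffuse",
                       "frame_specular")}
fresnel = np.random.rand(H, W, 1)                # facing-angle weight

def make_bot(layers, fresnel, gains):
    """Sum weighted layers; reflection is modulated by the Fresnel
    pass so glancing angles reflect more, as on the real shell."""
    out = np.zeros((H, W, 3))
    for name, img in layers.items():
        if name == "shell_reflection":
            img = img * fresnel
        out += gains.get(name, 1.0) * img
    return out

# Compositors can re-balance the look per shot without re-rendering:
beauty = make_bot(layers, fresnel,
                  {"shell_specular": 1.4, "shell_reflection": 0.8})
```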
Once the team had all the layers working together, it began working on the problem of putting the robot into the environment. "Alex wanted the robot to be a chameleon," says Nash. "His appearance changed dramatically in different lighting conditions. So for each environment, we had to go through the process of discovery."
Unfortunately, the sets were not lit with the robots in mind. "The robots looked best in three-quarters backlight," Nash says, "which is not how you'd light people. We'd wheel our stand-in onto the set, shoot a couple feet of film, and invariably the director of photography would lean over after seeing the robot in the set lit for the actors and say, 'Now you're going to make it look better than that, aren't you?'"
Although much of their effort went into Sonny, Digital Domain worked on a sequence near the end with crowds of robots climbing buildings, shots with a one-armed robot fighting Will Smith, a factory floor shot with 1500 robots, and "a ton" of set extensions. Both Weta and Digital Domain had shots in a digital glass skyscraper that was USR's corporate headquarters. Digital Domain's version was built and rendered in NewTek's LightWave. "It looks like a glass wing standing on end," Nash says. "The whole inside of the building reflects everything." In a sequence on an elevated walkway, the robots are surrounded on three sides by glass that reflects the opposite wall, the walkway, the city outside, the robots climbing the building, and the robots inside, which are also reflecting and refracting everything. "It's a big action sequence with the camera always moving," he says, "so we had to render motion blur into the background plates. We brought in extra renderfarms for a couple of months."
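One standard way to render motion blur, shown in the hypothetical sketch below, is to average sub-frame samples across the shutter interval; the toy "renderer" here just sweeps a bright pixel across a one-line image, and none of it reflects the studios' actual renderers.

```python
# Sketch of accumulation motion blur: average several sub-frame
# samples over the shutter interval so moving objects smear, as they
# must in rendered background plates under a moving camera.
import numpy as np

def render_frame(t):
    """Stand-in renderer: a bright pixel sweeping across a 1 x 8 image."""
    img = np.zeros(8)
    img[int(t * 8) % 8] = 1.0
    return img

def motion_blurred(frame, shutter=0.5, samples=8):
    ts = frame + shutter * np.arange(samples) / samples
    return np.mean([render_frame(t) for t in ts], axis=0)

print(motion_blurred(0.0))  # energy smeared along the motion path
```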
Meanwhile, Weta worked on the big, wide shots, including the film's "establishing" shot. "The old-style robots were getting replaced with the new robots that had the translucent faces over mechanical skeletons," says Joe Letteri, visual effects supervisor. This sequence involved placing robots into plate photography, but as the movie progressed, so did the complexity of the shots.
Of the 310 shots created at Weta, the tunnel chase sequence was, arguably, the most interesting. The sequence, comprising some 90 shots, takes place in a completely CG environment: a 10-lane tunnel running under Chicago where Will Smith, driving a futuristic Audi, is attacked by two gigantic truckloads of robots.
"We're equipped for doing big scenes with lots of layers and interaction between lots of elements," says Eric Saindon, CG supervisor at Weta, "but the tunnel sequence really put our pipeline to the test. Most of the shots were all CG, including Will Smith." The studio borrowed Digital Domain's digital double of Will Smith. "We used the textures for the digital double, but not the shaders because they use a different method for rendering and compositing," says Saindon.
Weta uses a combination of Maya for modeling, RenderMan for rendering, and Apple's Shake for compositing, preferring to create more of the lighting and the look in rendering than in compositing. "We hammered on Gollum for months, so we could put him in a scene and light him, and he would work like a real character," says Letteri. "But we couldn't do that with the robots because of all the translucency and transparency, so in this film we finished the lighting in Shake."
Image: Weta Digital created the tunnel, trucks, car, robots, and sometimes Will Smith's double for this fast-moving sequence with hundreds of lights and reflections.
In the tunnel, for example, the crew had to deal with hundreds of light sources. "We had to light it with as many lights as there would be in a real tunnel," says Letteri, "hundreds and hundreds of lights because it's so huge, and the car is moving at 200 mph."
While the car is moving, light is bouncing off its shiny silver exterior, off the stainless steel trucks, the walls and ceiling of the tunnel, and, of course, the robots. "It was quite a process," says Saindon. "We had lots of raytracing—the car into the truck, robots into the truck into the car—and lots of movement. If we had too much straight lighting, it looked like a good video game, not a tunnel, so we used global illumination on the tunnel to fill in the areas and make it feel like light was bouncing around. With the new RenderMan, we were able to cache the occlusion passes, so we cached the lighting into the geometry without calculating it on every frame."
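The caching idea Saindon describes can be illustrated with a toy occlusion function: compute the expensive hemisphere sampling once per vertex of the static tunnel geometry, then reuse the stored value on every subsequent frame. The sampling below is a stand-in, not RenderMan's.

```python
# Sketch of caching occlusion for static geometry so it is not
# recomputed on every frame of a 90-shot sequence.
import numpy as np

_occlusion_cache = {}

def occlusion(vertex_id, sample_fn, samples=64):
    """Return cached occlusion for a vertex, computing it on first use."""
    if vertex_id not in _occlusion_cache:
        # Fraction of hemisphere rays blocked (the expensive part).
        hits = sum(sample_fn(vertex_id) for _ in range(samples))
        _occlusion_cache[vertex_id] = hits / samples
    return _occlusion_cache[vertex_id]

# Toy visibility test: pretend ~30% of rays hit nearby geometry.
rng = np.random.default_rng(0)
toy_hit = lambda v: rng.random() < 0.3

frame1 = [occlusion(v, toy_hit) for v in range(1000)]   # computed once
frame2 = [occlusion(v, toy_hit) for v in range(1000)]   # pure cache hits
assert frame1 == frame2
```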
Much of the work went into making the environment look imperfect. "We started going toward 'Utopian,'" says Erik Winquist, compositing supervisor at Weta, "but it looked too fake." Also, the crew developed a rig to pulse lights on the CG objects and added color by making some of the lights orange and some slightly green.
In addition to the neighborhood robots and the tunnel sequence, Weta handled the robot wars, using Massive Software's Massive crowd program and Jon Allitt's Grunt rendering software, much as the studio did for Lord of the Rings. "We did hero animation for about 70 robots," says Saindon, "and then used Massive for the war with the NS-5s fighting the NS-4s. When the robots are turning, looking, and walking, that's all mocap and Massive, but the big fighting shots where they're ripping heads off are all hero-animated."
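As a rough illustration of the split between crowd logic and hero animation, the sketch below gives an agent a few hand-written rules for picking a mocap clip and flags close combat for hand animation. Massive's real agents use fuzzy-logic brains; these rules and clip names are invented.

```python
# Toy crowd-agent brain: pick a mocap clip from simple geometric rules,
# and flag close-contact moments for hero (hand) animation.
import math

def choose_clip(agent_pos, agent_heading, enemy_pos):
    dx = enemy_pos[0] - agent_pos[0]
    dy = enemy_pos[1] - agent_pos[1]
    if math.hypot(dx, dy) < 1.5:
        return "hero"                       # hand this moment to an animator
    # Signed angle from the agent's heading to the enemy, in [-pi, pi].
    bearing = math.atan2(dy, dx) - agent_heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    if abs(bearing) < 0.3:
        return "walk"                       # enemy ahead: close the distance
    return "turn_left" if bearing > 0 else "turn_right"

print(choose_clip((0, 0), 0.0, (10, 4)))    # -> 'turn_left'
```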
The studio also created the battleground. "One of the biggest things for these shots was building the environments," says Letteri. "We used Lidar data of Chicago, then added hero buildings and background buildings."
For Digital Domain, the film provided the studio a chance to flex its character muscles, and for Weta, a chance to leave organic Middle Earth and enter the future. "I think this film is pushing the state of the art," says Nelson, "particularly the subsurface of the robot and the kind of action shots we were able to do. We have thousands of robots and a very emotional performance by a robot that is a digital character."
Using a 50-year-old, 20th-century story and 21st-century computer graphics, these studios and the rest of the team have advanced the future of visual effects.
Barbara Robertson is a contributing editor of Computer Graphics World and a freelance journalist specializing in computer graphics, visual effects, and animation. She can be reached at BarbaraRR@comcast.net.
While I, Robot was being filmed, the crew used General Lift's EncodaCam system to put live-action actors on greenscreen stages into CG environments. With this system, encoders on cameras sent data through motion-control heads into a virtual-set system that provided real-time renders of digital sets based on the camera move. Those sets were mixed with the videotape feed from the film camera. "It was great to look at Will Smith on the greenscreen stage and then look at the monitor and see him amongst a thousand robots," says Erik Nash, visual effects supervisor at Digital Domain. "It helped people keep their bearings." That helped the director and actors, but it also helped the camera operator, who could, for example, look 50 floors down from a 12-foot ledge on a greenscreen stage.
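Conceptually, the on-set composite works like the sketch below: key out the green in the live frame and lay the result over a CG render driven by the camera-encoder data. This is a generic chroma-key illustration, not General Lift's software.

```python
# Sketch of a real-time virtual-set composite: keyed live action over
# a CG render of the digital set at the encoder-reported camera pose.
import numpy as np

def green_key(frame):
    """Crude chroma key: alpha goes to 0 where green dominates.
    frame is (H, W, 3) float RGB."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    alpha = np.clip(1.0 - (g - np.maximum(r, b)) * 4.0, 0.0, 1.0)
    return alpha[..., None]

def composite(live, cg_render):
    a = green_key(live)
    return live * a + cg_render * (1.0 - a)

# Per video frame: render the virtual set from the encoders' pan/tilt/
# zoom, then composite the keyed actor over it for the on-set monitor.
live = np.random.rand(4, 4, 3)
cg = np.random.rand(4, 4, 3)
monitor = composite(live, cg)
```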
"We could start with the actors on set and in real time follow them into full-digital set extensions. That enabled our camera operator to find shots on CG sets," explains John Nelson, visual effects supervisor. "Being able to see shots on set brings a whole new hands-on approach to something that's usually decided in a dark room late at night by tired people."
The system gave editors something more to work with than people in green rooms, and gave the visual effects crew shots with complete camera moves in them. "Normally, when the camera operator pans off the greenscreen, the camera stops and the half-finished shot goes to editorial, but they can't see anything," says Brian Van't Hul, the on-set visual effects supervisor for Weta, where many of I, Robot's huge digital sets were created. "So the visual effects vendor has to figure out what the camera move should be. But with this, the editors get quick composites created and recorded on set." —BR
Image: General Lift's EncodaCam system made it possible to see digital proxies of robots with actor Will Smith on a virtual set during Smith's greenscreen performance.
Alias www.alias.com
Apple www.apple.com
Avid www.avid.com
Digital Domain www.d2software.com
General Lift www.encodacam.com
Massive Software www.massivesoftware.com
Motion Analysis www.motionanalysis.com
NewTek www.newtek.com
Pixar www.pixar.com