“It was one of those secret-passion projects,” says Blue Sky Studios’ Carlos Saldanha, who directed the Twentieth Century Fox Animation feature Rio. “That’s where I’m from, Rio de Janeiro, Brazil. I thought it would be a great place to set an animation. It has culture, color, a city, the forest, a jungle. So, I started to create an idea there as early as 2002.”
News articles about penguins that lose their signals and wash up on the beach in Rio sparked Saldanha’s first idea. He had just finished co-directing Ice Age and “Gone Nutty,” a short film for which he received an Oscar nomination. “I loved the concept of a character that’s a fish out of water, that moves from the cold to the tropics and finds his heart in Rio,” he says.
But, Robots (2005), which he co-directed, and a sequel, Ice Age: The Meltdown (2006), which he directed, intervened. So, the story set in Rio landed on the shelf. When Saldanha finished the second Ice Age, he picked up the story again, but by that time, penguins had starred in so many films that the studio nixed the flightless birds as main characters. If that weren’t enough, a third sequel, Ice Age: Dawn of the Dinosaurs (2009), which he directed, sent the story back to the shelf again.
“I was so sure it could be a fun project, though, that I kept the idea alive,” Saldanha says. “There was a macaw character that I loved in the original idea, so I decided to make him come from the cold in America.” And, when Saldanha discovered the rare Spix’s Macaw, originally from Brazil but now alive only in captivity, he had a concept he could work with for Blu, his star (voiced by Jesse Eisenberg).
“Blu is an almost human-like bird, raised in a bookshop by Linda,” Saldanha explains. “He can read and research books.” One day, Tulio (Rodrigo Santoro), an ornithologist, comes into the shop and tells Linda (Leslie Mann) that Blu is the last male of his kind. But, there is a female blue macaw in Rio. “He says to Linda that she must come to Rio to meet her,” Saldanha says. So, they hatch their plan.
Once in Rio, Blu meets Jewel (Anne Hathaway), a street-smart blue macaw. And then, the feathers fly. When a kidnapper nabs Blu and Jewel, Tulio and Linda must search the city for the birds. “It is a parallel romance,” Saldanha says. “And, a journey for both Linda and Blu to find their wings and their heart in the story.” Bruce Anderson wrote the screenplay, which takes the characters from the beach to the jungle and into Rio’s famous Carnival and Sambadrome, filling the film with music and color.
“I was very picky,” Saldanha says. “I tried to keep the film very true to the city so it felt authentic. But most of all, I wanted to capture the vibe, the state of mind. As soon as the movie was greenlit, I brought in a gazillion books about Rio and gave lectures every day to immerse the crew in the culture. But it wasn’t enough. People missed the point. So, I grabbed six people, including the head of story, the art director, and cinematographer, and we went to Rio for Carnival. We all had costumes, and we paraded. We went to all the locations, and I explained how to get from point A to point B. No one had been to Rio before, so everyone was Blu. When we came back, I had six allies to help disseminate the story.”
Although most of the crew on Rio had worked on Blue Sky’s previous films, this adventure provided three new technical challenges: birds, humans, and crowds.
Birds and Their Feathers
For the birds, the goal was to create characters that could fly and talk at the same time, and that were comedic yet retained a hint of reality.
“We brought in bird experts and learned that macaws are very loyal and very smart, so that affected Blu’s interaction with Linda,” Saldanha says. “We also learned that cockatoos love eating chicken, which is creepy but a great thing to do with Nigel because he’s a villain.” Because cockatoos puff up and release powder when they get angry, Nigel (Jemaine Clement) does, too.
As for flying, although the domesticated Blu hasn’t flown in his Minnesota bookstore, the birds in Rio do. “We wanted the birds to use their wings like arms, but we didn’t want them to be cartoony,” Saldanha says. In addition, the drawings from production designer Sergio Pablos showed birds using flight feathers at the ends of their wings like fingers.
“Wings are somewhat like arms, but the bird anatomy is so different,” says rigging supervisor Adam Burr. “The bone structure is different. But, the director wanted the birds to fold their wings into the body and present a neat silhouette. Our complication was in defining one rig that could do both, to make the transition smooth within a shot.”
The trickiest part of the rigging system to solve was the folding. Modelers working in Autodesk’s Maya built the wings, but because they weren’t yet rigged, they couldn’t fold them. “When you look at many birds, when the wing is folded, the body has one clean silhouette,” Burr says. “You might not be able to tell where the wing is. If the wing is too long or too short, it doesn’t fold right.” So, the modelers and riggers worked back and forth within Maya on prototypes to perfect the proportions.
To meet a second challenge, of having feathers act like fingers, the riggers added controls for each feather at the tips of the wings. “We rigged the flight feathers so they could curl up as if they were fingers on a hand and still lay cleanly,” says rigging supervisor Justin Leach.
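The specific controls in Blue Sky’s rig are proprietary, but the idea of a single attribute that curls a chain of feather joints like fingers can be sketched in a few lines of Maya Python. Everything here, from the node names to the 60-degree limit, is an illustrative assumption, not the production setup:

```python
import maya.cmds as cmds

def add_feather_curl(wing_ctrl, feather_joints, max_curl=60.0):
    """Add a 0-1 'curl' attribute on the wing control that rotates each
    feather joint progressively, so tips curl like fingers on a hand
    and lay cleanly flat again at zero."""
    if not cmds.attributeQuery('curl', node=wing_ctrl, exists=True):
        cmds.addAttr(wing_ctrl, longName='curl', attributeType='double',
                     minValue=0.0, maxValue=1.0, defaultValue=0.0,
                     keyable=True)
    driver = wing_ctrl + '.curl'
    for i, jnt in enumerate(feather_joints):
        # Joints farther along the feather curl more, like finger joints.
        weight = float(i + 1) / len(feather_joints)
        cmds.setDrivenKeyframe(jnt + '.rotateZ', currentDriver=driver,
                               driverValue=0.0, value=0.0)
        cmds.setDrivenKeyframe(jnt + '.rotateZ', currentDriver=driver,
                               driverValue=1.0, value=max_curl * weight)

# Hypothetical usage:
# add_feather_curl('wing_L_ctrl', ['feather1_jnt', 'feather2_jnt', 'feather3_jnt'])
```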
Feather geometry drove the position of all the hairs on the feather. “You can see that particularly on the cockatoo, which has crest feathers as well as fingers,” says Eric Maurer, fur supervisor. “The animators could squash and stretch, and all the little feather barbs would follow. Usually, when we do feathers, we draw every barb as fur hair. But, having quills with all the little hairs was unwieldy for animators who wanted real geometry they could pose with traditional tools.”
That and the number of feathered friends prompted the crew to develop new methods for describing the feathers. Previously, the fur team drew each barb on each feather, even when that meant drawing millions of little barbs. For this film, they placed quills for each feather using the studio’s fur grooming tools, and then, with a plug-in developed within Blue Sky’s rendering software CGI Studio, utilized that quill to place and deform a “fur file.”
“You could think of the quills as equivalent to guide hairs, but we don’t instance feathers,” Maurer says. “We have a fur file that describes the quill, and a fur file that describes different types of feather. We combine the two. We use the quill to place and deform the feather description, and then replace the quill with the appropriate feather macro. We can also blend between other feather descriptions.”
At Blue Sky, rather than describing guide hairs and interpolating them, the fur system interpolates only the motion. “Historically, the fur artists describe every single hair, and we solve each hair [feather] uniquely,” Maurer says. “The fur artists have the flexibility to create what they want, and I think it gives us a richness to our fur descriptions. But it has also created some overhead. The macro feather system lightened that legacy workflow and reduced the number of unique hairs we were drawing. We used to cook a fur groom with 10 million hairs overnight. Now, we’re cooking 10,000 quills in less than an hour.”
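Blue Sky’s fur files and the CGI Studio plug-in are proprietary, but the quill idea itself, describing a feather once and then placing and deforming that description along each quill instead of drawing every barb, can be approximated in a short NumPy sketch. The names and the simplistic deformation are assumptions:

```python
import numpy as np

def place_feather_on_quill(quill_pts, feather_barbs):
    """quill_pts: (N, 3) points along one quill, root to tip.
    feather_barbs: list of (M, 3) barb curves in the canonical feather's
    local space, where x in [0, 1] is the position along the quill.
    Returns the barbs bent into the quill's world space."""
    seg_len = np.linalg.norm(np.diff(quill_pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])
    arc /= arc[-1]  # normalized arc length along the quill
    out = []
    for barb in feather_barbs:
        t = np.clip(barb[:, 0], 0.0, 1.0)
        # Position along the deformed quill at parameter t.
        px = np.interp(t, arc, quill_pts[:, 0])
        py = np.interp(t, arc, quill_pts[:, 1])
        pz = np.interp(t, arc, quill_pts[:, 2])
        # Offset by the barb's local y/z (a crude frame; a production
        # version would transport a proper frame along the quill).
        out.append(np.stack([px, py + barb[:, 1], pz + barb[:, 2]], axis=1))
    return out
```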
For senior animator Pete Paquette, the birds rank among the most difficult creatures he has animated. “We paid a lot of attention to how they flew from one frame to the next,” he says. “Because we had control of literally every feather on the wing, it took a lot of attention to detail. If we didn’t touch a feather, we could tell.”
The faces presented riggers with the same challenge of finding the right balance between realism and cartoon. “Birds talk, but you can’t see expression in their faces,” Burr says. “We wanted them to do more. We discovered that seeing the corner of the mouth where the beak transitions into the cheek was important; we could give them a smile that realistic birds don’t have.”
An implied brow ridge also helped with facial expressions, and, of course, so did the eyes. “We have eyelid controls for hand-sculpting shapes across the eyeball,” Leach says. “And rigs that squash and stretch the eye but keep the pupil and iris in perfect circles.”
A Bully Solution
Among the other important non-human characters in the film is a bulldog named Luiz (Tracy Morgan). Paquette was lead animator for Luiz, one of Saldanha’s favorite characters. “I can’t pick one favorite,” Saldanha says, “because that would not be honoring the other characters in the film. But, Luiz is one of the surprises in the movie. And, I’ve always wanted to play with a character that had a drooling problem.”
The modeling and rigging challenge for the bulldog was in managing the mechanics for a fat dog with long jowls. “His neck area was the biggest challenge,” Leach says, “especially when he was looking around. Rigging collisions are time-consuming; it means sculpting corrective shapes.”
Working with the riggers in pre-production was Paquette, who describes the job of the lead animator as fostering a character to get it ready for production and finding out what works and what doesn’t. “If it doesn’t work, we have to go back to the drawing board, and that happened with Luiz,” he says. “He was a secondary character, and then suddenly, he had a major role in the film.” At first, Paquette tried to use the original model, but that dog hadn’t been designed to talk. “I did a quick lip-sync test, and it looked rudimentary, almost sub-video-game quality,” Paquette says. “We knew we couldn’t do this.”
The redesign took nearly two months. The modelers changed the dog’s face to make him seem more appealing, moving the eyes closer together, making the pupils bigger and the brows thicker. They changed his mouth, and they made him fleshier. “It took a while to pinpoint the things that would make him feel more like a dog, but we got to the point where, standing by himself, he felt like an organic bulldog,” Paquette says. “And then once we changed the model, we had to re-rig it from scratch to make sure the skin was weighted correctly to the skeleton.”
As for the drool, “we did trial and error,” Paquette says. “We tried to see if it would be worthwhile for the animators to do it, but it became too time-consuming.” Instead, the animators turned the drool over to the simulation team, which sent it elsewhere. “It’s just smoke and mirrors from the effects department,” says Keith Stichweh, character simulation supervisor.
With the new rig and new model in place, Paquette created key poses for Luiz’s performance. He started by looking at video reference of bulldogs and at friends’ bulldogs. Then, he pitched key shots to the director using pencil tests. “I like to draw, and the pencil tests cut the time in half or more because it’s such a quick way to do gestures, choreography, poses, blocking,” he says. “I can go through multiple iterations, while someone working on a model might take double the time to get one.”
And all that work gave Luiz the opportunity to play a critical role. Although the kidnappers have chained Blu and Jewel together, they manage to meet Rafael (George Lopez), a toucan. “When Raffi saw the two birds chained together, he knew to take them to Luiz,” Paquette says. “Luiz doesn’t really know what he’s doing, but he frees them, and Blu rides on his back to the Carnival. Luiz’s role is to help Blu and Jewel get to the place they need to be faster.”
Camera Moves
To help create the live-action feeling of Rio, director Carlos Saldanha’s friend Renato Falcão, a Brazilian-born live-action cinematographer, worked with the layout artists who created Rio’s camera moves. “Having a director of photography is new to Blue Sky,” says Rob Cardone, layout supervisor. “The initial idea was that he would consult on camera, lighting, and story arcs like a live-action DP would. But as production went on, the focus for our DP became more about the camera phase.”
Layout artists set the initial camera and staging, working from storyboards and a low-resolution, exploratory version of the set that was often as simple as cubes and cylinders. They work with camera rigs and lenses that mimic live-action cameras.
“We have a dolly track the camera can follow, a crane,” says Karyn Monschein, layout technical lead. “We can add camera shake and subtle movements. And on this film, we added things to fake depth of field in our playblast.”
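As a rough illustration of that kind of layout rig, the Maya Python sketch below path-animates a group along a dolly curve and layers small keyframed shake on the camera underneath it. The curve name, lens choice, and shake values are hypothetical, and Blue Sky’s actual rigs are far more involved:

```python
import random
import maya.cmds as cmds

def build_dolly_camera(path_curve, start=1, end=120, shake=0.05):
    """Path-animate a camera group along a dolly curve and add subtle
    keyframed shake on the camera itself."""
    cam, cam_shape = cmds.camera(focalLength=35)
    rig = cmds.group(cam, name='camDolly_grp')
    cmds.pathAnimation(rig, curve=path_curve, fractionMode=True,
                       follow=True, startTimeU=start, endTimeU=end)
    random.seed(7)  # repeatable shake
    for t in range(start, end + 1, 4):
        for axis in ('rotateX', 'rotateY'):
            cmds.setKeyframe(cam, attribute=axis, time=t,
                             value=random.uniform(-shake, shake))
    return cam

# cam = build_dolly_camera('dollyTrack_crv')  # hypothetical curve name
```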
Once the director approves the camera moves in the low-resolution set, the files move to set design and modeling. When real geometry for props and set pieces is in place, the layout team checks the camera and the sets, and sends the sequences for approval. At this stage, the cameras pointing at the action in one set might extend over several shots, so once the director has approved the sequences, the layout artists break the massive files into individual files for animation. “Each animator gets one camera per file with only the characters and set pieces in their shot,” Monschein says.
Two camera artists attend all the animation dailies and take notes so that if the camera moves need to change, the layout artists can make those changes. “We’ve learned that if everyone can change the camera, everyone will change the camera,” Cardone says. Once the director approves the shot in animation, everything is locked down so the rest of the pipeline can finish.
“I think that the camera in this movie is the most exciting we’ve produced,” Cardone says. “We’ve had the most freedom, and it shows. The camera is really dynamic.” –Barbara Robertson
Chain Reaction
Before Luiz frees the two birds, the animators had to create performances for Blu, Jewel, and the chain between them. For the chain, they relied on a system designed by the character simulation department, a new department created for this film. Because the studio uses Maya, the simulation department decided to start with nCloth and customize it as needed. “We didn’t have a cloth-simulation solution,” says Stichweh. “We had eight months to put it together and integrate it into a pipeline the company had developed over the past 20 years.”
One customization handled the chain. “Think of a ribbon made with polygon faces chained together, a 50x1 ribbon of polygons,” Stichweh says. “I can turn this ribbon into cloth, control stretching and shearing and the center, and constrain links. It’s like a poor man’s RBD (rigid-body dynamics).”
In addition, an IK-based rigging system gave animators the ability to pose the chain. “But we needed a third method that was a blend between the two systems,” Stichweh says. “And the blend needed to look natural.” The goal was to give animators the ability to attach two shackles, run a simulation in which the chain would act normally, and then transfer that motion to a keyframed ribbon. “When we achieved that, we bridged the two worlds,” Stichweh says. “Animators could grab a section of the chain that is a hand-animated joint. We had ramps and blends from that point on to transition linearly or with a spline into a simulation. It was almost like picking a necklace off the ground.”
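Stripped of the Maya nCloth and rigging machinery, the ramped blend itself reduces to a per-link weight between two sets of positions. A minimal NumPy sketch, assuming the chain is just an array of link positions, might look like this:

```python
import numpy as np

def blend_chain(anim_pts, sim_pts, grab_index, falloff=10):
    """anim_pts, sim_pts: (N, 3) link positions from the keyframed ribbon
    and from the simulation. Links near grab_index (the section in the
    animator's hand) follow animation; the weight ramps smoothly toward
    pure simulation farther away."""
    anim_pts, sim_pts = np.asarray(anim_pts), np.asarray(sim_pts)
    idx = np.arange(len(anim_pts))
    w = np.clip(np.abs(idx - grab_index) / float(falloff), 0.0, 1.0)
    w = w * w * (3.0 - 2.0 * w)  # smoothstep, i.e. a spline-style ramp
    return anim_pts * (1.0 - w)[:, None] + sim_pts * w[:, None]
```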
In other words, the tools allowed a simulation to pass through a set of key poses. “I knew it would be great,” says CG supervisor Rob Cavaleri, “when in a test shot the animators had Blu and Jewel jump over a car, and in the middle of the jump, the chain formed a heart and then relaxed and was smooth.”
The Human Touch
The Carnival—during which a crowd of 50,000 people watches a parade with thousands of costumed dancers riding on floats and performing in between—represented a huge technical achievement for the crew, as did Linda and Tulio, the human stars of the film. “We did a little bit of work on humans for the first Ice Age,” says Saldanha, “the baby and the family. But they were crude in terms of elements. They couldn’t talk. And we didn’t do cloth simulation. In this film, the humans play out a parallel story with acting performances and cloth simulation, which was technically challenging for us. So, we definitely built more controls to give the animators freedom to create subtle eye movements and facial expressions.”
The riggers started with model sheets drawn by Pablos that showed various facial expressions, creating blendshapes that made it possible for animators to match those expressions, and then they designed the rig to allow more.
For animators used to pushing facial expressions on the previous cartoony films, that freedom was an issue at first—cartoony expressions on humans can look grotesque. “We had a lot of back and forth with animation to discover what made the most appealing faces,” Burr says. But, the riggers didn’t limit the animators—they had to limit themselves.
“We build a lot of functionality into our rigs,” Leach says. “The animators can pretty much sculpt the characters on a shot-by-shot basis; they can use our controls to sculpt the rig based on the camera angle. And, on top of that, the animators can add in shot deformers. They can push the rig far beyond what it is supposed to do.”
Creating the humans’ skin also pushed the crew into new research areas. Blue Sky is justifiably proud of its proprietary programming language and renderer, CGI Studio (also called “Studio”), a raytracer designed to simulate light accurately. Studio produces motion blur, reflection, refraction, global illumination, diffuse reflection, and depth of field using principles of physics. Furthermore, rather than creating complex and detailed surfaces with texture maps, artists at Blue Sky develop most materials procedurally.
Mathematical Materials
“We’ll use texture maps for graphics on signs, license plates, things like that,” says materials supervisor Brian Hill. “But we generated 98 percent of what you see on the screen procedurally. It’s a different way of approaching surfaces that we started with Robots (see “Mech Believe,” March 2005), and have kept going that way since. We like it. Because it’s mathematically driven, it’s resolution-independent—if you procedurally generate a gradient between two points for a bump signal and zoom into it, it’s still gradating from zero to one between two points.”
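Hill’s gradient example is easy to make concrete. The function below is a resolution-independent bump signal in the sense he describes: zooming in just means evaluating the same math at finer points, with no texels to exhaust. (This is plain Python, not CGI Studio’s scripting language, which is proprietary.)

```python
import numpy as np

def gradient(p, a, b):
    """Value in [0, 1] for point p projected onto the segment a->b."""
    a, b, p = map(np.asarray, (a, b, p))
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return float(np.clip(t, 0.0, 1.0))

# Sampling a zoomed-in region just evaluates the same function at finer
# points; it still ramps from zero to one between the two endpoints.
print(gradient((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))    # 0.5
print(gradient((0.125, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.125
```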
However, because the crew had yet to create procedural skin shaders for humans, they turned to Carl Ludwig’s R&D department for help. Ludwig, now vice president and CTO at Blue Sky, co-founded the studio in 1987 with a group of people who had worked on TRON at MAGI/Synthavision. He and physicist Eugene Troubetzkoy developed the proprietary software and renderer.
“To create the skin shader, Carl [Ludwig] used the same transmittance code base we had been using to give leaves and foliage a translucent look,” Hill says. “It’s kind of the equivalent of what other studios call subsurface scattering or translucency. Carl rewrote the software, plussed it up, and handed this new amazing feature to us. One of our senior materials technical directors set up the core skin we used on every single human in the movie.”
The transmittance produced the subsurface glow for the skin. To create the surface, materials artists used procedural tools, dialing parameters that added pores, shininess, and so forth to a basic material. “The materials properties are a script, a giant text file written in the Studio scripting language,” Hill says. “In that file, we describe the color, specularity, roughness, density, transmittance, all those things.”
In practice, a materials artist might start with a base color, add three-dimensional noise to modulate that color channel, adjust the frequency of the noise, and modulate the hue, saturation, and value of the color. Then, the artist might add another procedure that changes that result, and a third that modulates the result of those two, and so forth.
“To create pores, we throw on a cell-noise procedure, flip it around, and play with tangents and constants to get it to look like pores,” Hill says. Another procedure mixes two materials to create, for example, rosy cheeks. And, using volume procedures, they can add freckles and moles in specific areas.
“Imagine a spherical volume with everything inside as a one and everything outside as a zero,” Hill explains. “I can pop Linda onto the screen, locate her cheek in space, and put a sphere there that I scale, rotate, and translate to a specific location. Then I can sculpt some cell noise to make freckles, and put little cells inside the sphere. We combine procedures in all sorts of ways.”
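CGI Studio materials are scripts in Blue Sky’s own language, so the following Python stand-in is only a sketch of that stacking idea: a base color modulated by noise, cell noise flipped into pores, and a transformed spherical volume masking freckles to one spot. The noise functions and every constant are illustrative assumptions:

```python
import numpy as np

def hash_noise(p):
    """Cheap value-noise stand-in: deterministic pseudo-random in [0, 1)."""
    return (np.sin(np.dot(np.asarray(p), [12.9898, 78.233, 37.719]))
            * 43758.5453) % 1.0

def cell_pores(p, freq=40.0):
    """Cell-ish signal 'flipped around' so cell centers read as pores."""
    cell = np.floor(np.asarray(p) * freq)
    return 1.0 - hash_noise(cell)  # dark dots on a light field

def sphere_mask(p, center, radius):
    """1 inside a translated, scaled sphere; 0 outside: a volume mask."""
    return 1.0 if np.linalg.norm(np.asarray(p) - np.asarray(center)) < radius else 0.0

def skin_value(p):
    base = 0.8                                            # base tone channel
    base *= 0.9 + 0.1 * hash_noise(np.asarray(p) * 3.0)   # low-freq mottling
    base -= 0.15 * cell_pores(p)                          # pores darken locally
    freckle = sphere_mask(p, center=(0.3, 0.1, 0.0), radius=0.08)
    base -= 0.2 * freckle * hash_noise(np.asarray(p) * 25.0)  # freckles in the sphere
    return float(np.clip(base, 0.0, 1.0))
```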
An interactive renderer makes it possible for the materials artists to see the results of their tinkering quickly. “You can work strictly with the script, or, if you want to look and interact with an image, you can load up the quick render,” Hill says. “It grabs 10 or 15 machines to be your slave, and distributes the rendering across those machines so you can render an area over and over as you tweak the numbers.”
Rio in Stereo
The stereo team starts laying out their cameras at the same time as the layout artists create the initial camera moves. “As soon as layout completes a sequence, we create the left eye,” says Dan Abramovich, lead stereoscopic technical director. “We have special tools all across the pipeline—proprietary software for the camera and how it interacts with the renderer, tools for setting up the camera and controlling the variables, sequence-based tools, viewing tools, rendering tools. We’re able to view stereo imagery all the way from [Autodesk’s] Maya to [The Foundry’s] Nuke.”
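The proprietary tools aside, the core of any left-eye/right-eye setup is the standard off-axis camera math: offset the second eye by the interaxial distance and shift its film back so both frustums converge at the zero-parallax plane. A sketch of that textbook calculation (not Blue Sky’s actual code) follows:

```python
def right_eye_offsets(interaxial, focal_length_mm, convergence_dist,
                      filmback_width_mm=36.0):
    """Given a left eye at the origin, return (translate_x,
    film_offset_fraction) for the right eye. interaxial and
    convergence_dist share the same scene units; focal length and film
    back are in millimeters."""
    tx = interaxial  # slide the right eye over by the interaxial distance
    # Film-back shift that recenters the zero-parallax plane, keeping
    # the two frustums parallel but off-axis.
    shift_mm = (interaxial * focal_length_mm) / convergence_dist
    return tx, shift_mm / filmback_width_mm

# e.g., a 6.5 cm interaxial, 35 mm lens, converging 5 m away (units: cm):
# print(right_eye_offsets(6.5, 35.0, 500.0))  # -> (6.5, ~0.0126)
```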
The major change with this film was in how the artists used depth of field. They began experimenting with the technique in the third Ice Age film but removed it, and then approached the idea again with Rio.
“We defocus the background,” Abramovich says. “And, we use a lot of rack focus. In some sequences, defocus became a visual effect. Instead of re-adjusting the defocus, we’d let it stay and offset colors to create a magical shimmering effect. We leapt off into a new creative area. We had the defocus and the depth. Now we had beautiful light blooms. We animated colors between the left and right eyes. The slight offset gives the image a pixie-dust feeling that you see only in stereo.”
The magic happened accidentally. “We discovered a slight difference between left and right, and we decided to keep it if the background was defocused because it felt magical. If they were in focus, it could be a problem. The shimmering lends itself to highly reflective material, like water. We also used it to create a nightclub feel, creating a stylized defocus with the club lights. It feels magical, but there’s a hugely technical process behind it. It’s never a home run. We had to put a lot of work into it.” –Barbara Robertson
Cloth Simulation
To move the characters’ costumes, the simulation crew evaluated several cloth simulators before settling on Maya nCloth. “We’re a Maya studio for the most part,” says Stichweh, “so staying within the same 3D package was a positive thing.”
Then, they adapted the program to Blue Sky’s style of animation. “Our style is cartoony with squash and stretch, but simulation, by default, follows physically based rules,” Stichweh says. “We created a lot of pre-simulation and post-simulation tools to adjust the sim to the animation style. Our bar for whether the simulation was successful was whether something reads as cloth but isn’t distracting.”
In sum, the system the crew created used garments rigged to move with a character’s body so that the animators could turn a switch and see that silhouette. “The cloth had the rig weighting of the body so, for example, a shirt bent with an arm,” Stichweh explains. “It wasn’t really simulation; it looked like Saturday-morning cartoon cloth.”
Then, a system called “Anim to Cloth” ran a script that automatically prepped the clothing for simulation. “We realized the task lists were the same all the time, so we automated the prep,” Stichweh says. “The system could even grab the default library for a particular garment, say Linda in a raincoat, and submit the simulation without anyone touching the file.”
The rig had a switch that would send clothing that didn’t need simulation—costumes worn by background characters in a crowd, for example—straight on to rendering. “If there was a simulation cache, it went to simulation,” Stichweh says. “No one had to keep track. If we cared about five characters out of 50 in a shot, we’d tag those up front.”
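A toy version of that routing logic, with every function and tag name hypothetical since the actual scripts are internal pipeline code, might read like this:

```python
def prep_for_simulation(garment):
    """Stand-in for the automated 'Anim to Cloth' prep task list (hypothetical)."""
    garment['prepped'] = True

def submit_simulation(garment, settings):
    """Stand-in for a farm submission that returns a cache path (hypothetical)."""
    return '/caches/%s.ncache' % garment['name']

def route_garment(garment, tagged_for_sim, sim_defaults):
    """Send tagged garments through simulation; everything else keeps
    its rig-weighted 'Saturday-morning cartoon' cloth and renders as-is."""
    if garment['name'] in tagged_for_sim:
        prep_for_simulation(garment)
        settings = sim_defaults.get(garment.get('type'), {})
        garment['cache'] = submit_simulation(garment, settings)  # renderer reads this
    return garment

# Only the five characters we care about in a 50-character shot get tagged:
# route_garment({'name': 'linda_raincoat', 'type': 'raincoat'},
#               {'linda_raincoat'}, {'raincoat': {'stretch': 20}})
```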
Stichweh likes to think of the system they developed as a set of on/off ramps to the highway that is the studio’s legacy pipeline. “Our communication in and out was just caches,” he says. “That way, we could do pretty much what we wanted without worrying about complying with the pipeline.”
Road to Rio
Systems such as that made it possible to bring Rio alive. “The task was larger than we expected,” Saldanha says. “We had a crowded beach. We had a parade with thousands of people partying. The animators had a list of I don’t know how many cycles. I was impressed with the work the crew put into it. It felt like we were shooting live action in Rio.”
Modelers working from real topography and maps simplified the landscape to include the main landmarks and then built the city with a combination of procedural tools and hand-built models. “I wanted the city to look from afar like you’re there,” Saldanha says.
To add such details as a mosaic-tiled sidewalk on the Copacabana Beach, the materials artists again used procedural techniques rather than texture maps. “It’s harder than making something organic,” Hill says. “We had to fit together little black-and-white tiles into an intentional pattern. Procedurally generating that to the degree of handmade-ness that Carlos [Saldanha] wanted was difficult.”
In fact, it took a collaborative effort between R&D, effects, and the materials department to create a method that worked. “Imagine a curvy spline in the middle of a box, with everything to the left black and to the right white,” Hill says. “We fed a series of these curves and random tile shapes into a stacking algorithm one of our effects TDs wrote. It fit the tiles together like a puzzle, keeping the least amount of grout between.” The same technique paved the streets with cobblestones.
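The production stacking algorithm is proprietary, but a much-simplified sketch conveys the flavor: pack random-width tiles greedily along rows and color each one by which side of a boundary curve its center lands on. The wavy “spline” and all the dimensions below are invented for illustration:

```python
import math
import random

def boundary(y):
    """Stand-in for the curvy spline: x of the black/white edge at height y."""
    return 5.0 + 2.0 * math.sin(y * 0.8)

def pack_row(y, width=10.0, grout=0.05, rng=random):
    """Greedily pack random-width tiles across one row; each tile is
    (x_start, x_end, color): black left of the curve, white right."""
    tiles, x = [], 0.0
    while x < width:
        w = rng.uniform(0.3, 0.6)          # irregular tiles, reduced to widths
        x_end = min(x + w, width)
        color = 'black' if (x + x_end) / 2.0 < boundary(y) else 'white'
        tiles.append((x, x_end, color))
        x = x_end + grout                  # keep as little grout as possible
    return tiles

mosaic = [pack_row(row * 0.5) for row in range(20)]
```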
The team also developed procedural methods for the forests and jungles. “We couldn’t use displacement to create trees at a distance because the RAM footprint would have been too much,” Hill says. “But we wanted that degree of detail. So R&D and the materials artists came up with an implicit surface function that created tiny implicit surfaces shaped like trees.” Procedural rules dictated that the plants would grow only on horizontal surfaces, and various parameters controlled frequency and scale. “We could sprinkle trees all over Rio with no RAM footprint at all,” Hill says. “But in the extreme foreground, the procedural modeling crew did all the work.”
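The implicit-surface function itself lived inside CGI Studio, but the scattering rules Hill mentions, horizontal surfaces only, with frequency and scale controls, are simple to sketch. The thresholds and parameter names below are assumptions:

```python
import numpy as np

def scatter_trees(points, normals, frequency=0.5, scale=(0.5, 1.5), seed=3):
    """points, normals: (N, 3) candidate surface samples. Returns a list
    of (position, tree_scale) where a distant tree is allowed to grow."""
    rng = np.random.default_rng(seed)
    up = np.array([0.0, 1.0, 0.0])
    trees = []
    for p, n in zip(points, normals):
        n = n / np.linalg.norm(n)
        if np.dot(n, up) < 0.95:      # grow only on near-horizontal surfaces
            continue
        if rng.random() > frequency:  # density control
            continue
        trees.append((p, rng.uniform(*scale)))  # per-tree scale variation
    return trees
```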
Similarly, the water washing onto the beach combined procedurally animated materials and particle effects. “Most of the shots were fairly far away, so we didn’t have to create super-close waves,” Cavaleri says. “The crowds, though, were an enormous task. It takes so much effort across so many departments to get the data to a reasonable number, and a lot of up-front homework to pull together the assets, place the performance cycles, and pass them through the Maya pipeline into our renderer in an efficient way. And then, we needed a variety of techniques to render everything efficiently. We heavily leveraged our voxel technique.”
A crowd of approximately 50,000 people enters the stands of the Sambadrome, and then the parade-goers march through. “There are maybe 100 on floats, and we had 10 or 12 floats, and then of course we had parade-goers in between separated by costume and color,” Cavaleri says. “They could be guys wearing alligator costumes, people on stilts with palm trees. We had amazing costumes.” The crew stored animation cycles for each dance in containers that included three to five cycles. For many of the costumes, each cycle went through a simulation pass.
To render the trees, crowds, hair, fur...entire landscapes, the crew relied on a proprietary voxel system developed by Maurice Van Swaaij. “We project a kind of camera-space voxel grid over a scene, cut up everything into voxel-sized pieces, and filter into each voxel,” Maurer says. “Instead of the raytracer tracking to hairs or leaves, it tracks to a voxel body. For fur, we’d store the average orientation, color, and density in each voxel. For geometry, we store the average normal. In addition, for this show, we started to voxelize material properties, specular, roughness, and transmittance.”
The voxel rendering method provided two advantages in particular: pre-filtering and RAM efficiency. “When we’re filling those voxels, we might have part of a hair land in one voxel and part in another, for example,” Maurer says. “So we end up with a 3D voxel body without abrupt changes from voxel to voxel.”
As for RAM efficiency, Maurer points to shots in the Sambadrome. “With a raytracer, you need everything in RAM all the time,” he says. “It isn’t like [Pixar’s] RenderMan, where you can break up a scene into parts and deal with packets. So, 30,000 characters would need 30GB of RAM or more for a scene, and we had an 8GB RAM budget per frame. By pre-processing all that data into a voxel body, we reduced the RAM to 5 or 6 gigs, and the rendering wasn’t noisy. The trade-off was the pre-processing time, so we’re always balancing that.” Compositors working in The Foundry’s Nuke assembled layers of the parade rendered independently.
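As a back-of-envelope illustration of that pre-filtering, the NumPy sketch below bins point samples (standing in for hairs) into a voxel grid and averages color while accumulating density, which is the sense in which millions of hairs collapse into one compact voxel body. Van Swaaij’s system is proprietary; this only shows the averaging idea:

```python
import numpy as np

def voxelize(points, colors, densities, grid_min, grid_max, res=64):
    """points: (N, 3) camera-space samples (e.g., along hairs). Returns
    per-voxel mean color and accumulated density on a res^3 grid."""
    points = np.asarray(points, dtype=float)
    gmin = np.asarray(grid_min, dtype=float)
    span = np.asarray(grid_max, dtype=float) - gmin
    idx = np.clip(((points - gmin) / span * res).astype(int), 0, res - 1)
    color = np.zeros((res, res, res, 3))
    dens = np.zeros((res, res, res))
    count = np.zeros((res, res, res))
    for (i, j, k), c, d in zip(idx, colors, densities):
        color[i, j, k] += c
        dens[i, j, k] += d
        count[i, j, k] += 1
    filled = count > 0
    color[filled] /= count[filled][:, None]  # average color per voxel
    return color, dens
```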
The last scene to wind its way through rendering and into compositing was a romantic sequence. “The two birds are beginning to fall in love,” Saldanha says. “The scene takes place in a mountainy neighborhood with stone streets. There’s a trolley and petals flying around. It’s a super-expensive sequence to put together, and I was biting my nails, afraid it wouldn’t happen. But, it was worth the wait.”
Saldanha, who had held his passion for Rio and Rio at bay since 2002, could well have said that about the entire film.
Rio marks a transition for the crew, too. “We moved forward on so many fronts, so many areas of new technology,” Burr says. “It’s the biggest step this studio has taken in terms of movies.” And the crew promises that this is only the first.
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.