Similarly, in a region near but far, another culture tells the tale of mythical creatures just as frightening, with peculiarly tiny feet, misshapen, hairless bodies, and small, squeaky voices – enough to send shivers down your spine. That is, if you are a yeti and talking about (gulp!)… humans!
The CG animated feature-film Smallfoot, from Warner Bros. Pictures and Warner Animation Group, turns one particular myth on its head, one in which humans are thought to be imaginary, mythical creatures – at least in the land of the yeti. But legend turns to reality for both man and beast when the young yeti Migo briefly encounters a human who accidentally parachutes into the yeti world before descending over a cliff, through the clouds, and into the human realm below. When Migo tells the elder yetis of his discovery, they do not believe him, so he sets off on an adventure below the clouds to locate a smallfoot and prove their existence once and for all.
“We’ve all heard stories of these mysterious creatures with strange habits. But what about our own strange habits? Let’s face it, we’re weird creatures in many ways. And it’s fun to take a comedic look at that from different perspectives… a yeti perspective,” says director Karey Kirkpatrick (Over the Hedge), who also co-wrote the film.
The film was animated by Sony Pictures Imageworks, whose main campus in Vancouver performed the majority of the work, assisted by the LA facility. Karl Herbst served as visual effects supervisor.
Production got off to a quick start in January 2016 but soon slowed while the film underwent revisions. Meanwhile, the team at Sony Pictures Imageworks continued its work on snow and hair development, two of the film’s larger technical challenges, as production geared up again last October. All told, production spanned 13 months and wrapped this past August. The film hit theaters in late September.
Character Development
The film’s characters come from two very different worlds – one hidden above the clouds (home of the yeti) and one below (inhabited by humans). Outwardly, Smallfoot’s two species are about as different as you can get. Inwardly, they are scarily similar. Migo (Channing Tatum) is a young, happy-go-lucky yeti who, despite everything he has been told, still believes in the existence of the smallfoot, even before he discovers Percy (James Corden). He is joined by other yetis in the clandestine organization SES (Smallfoot Evidentiary Society), including Meechee (Zendaya), who hopes the legend is true; Gwangi (LeBron James), a burly, wild-haired yeti who loves conspiracies of all kinds; the science nerd Kolka (Gina Rodriguez); and the annoying Fleem (Ely Henry). Among the adult yetis are Meechee’s dad, the Stonekeeper (Common), who is dedicated to maintaining the status quo of the land and squashes any talk of the smallfoot’s existence, lest the laws of their society, which are written in stone, be suddenly questioned; and Migo’s very non-curious dad, Dorgle (Danny DeVito). Of course, there is also the human Percy, a former TV personality hoping to get back into the spotlight with his new “discovery”; his non-believer assistant, Brenda (Yara Shahidi); as well as animals such as yaks, bears, goats, and even bioluminescent furry snails with fiber-optic hairs that light up.
When development on the movie initially started, production designer Ron Kurniawan and art director Devin Crane, along with Herbst, examined various styles for the cast. “The aesthetic we wanted was something slightly right of center, if you had Cloudy with a Chance of Meatballs, which is more stylized, on the left and The Good Dinosaur, more realistic, on the right. Meaning, a little more ‘cartoony realistic,’” explains Herbst.
Make no mistake, though, the film bears threads of Looney Tunes inspiration throughout – from the shape of the characters’ eyes to the simplified shape language of their bodies. The Smallfoot yetis are not scary; they are soft and lovable. Their body structure is based on ovals, from their torsos to their eyes, with pear-shaped faces that are quite appealing. A new software system enabled the artists to squash, smear, and stretch the head and eyes while the iris and pupil shape remained the same, eliminating the need for animators to spend countless hours counter-animating the iris and pupil to keep them circular when the eyeballs became more egg-shaped.
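To illustrate the underlying idea in a minimal way (a simplified sketch, not Imageworks’ actual rigging code), the trick amounts to counter-scaling the iris and pupil by the inverse of whatever non-uniform squash or stretch is applied to the eyeball, so the composed result stays circular without any hand animation:

```python
def iris_counter_scale(eye_scale_x, eye_scale_y):
    """Counter-scale to apply to the iris/pupil so that, after the eyeball's
    non-uniform squash/stretch, the iris still reads as a perfect circle."""
    return 1.0 / eye_scale_x, 1.0 / eye_scale_y

# Hypothetical example: the eye is stretched to 140% width and squashed to 60% height.
sx, sy = 1.4, 0.6
cx, cy = iris_counter_scale(sx, sy)
# Composed iris scale is (1.4 * 0.714..., 0.6 * 1.666...) = (1.0, 1.0): still circular,
# with no counter-animation needed from the animator.
print(round(sx * cx, 3), round(sy * cy, 3))
```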
The yetis also have long legs, which they use for leaping, in addition to wild, physical comedic action indicative of Looney Tunes style. When the animators and storyboard artists would ask Kirkpatrick how far they could push things in terms of this type of animation, his response was “Make me tell you you’ve gone too far.”
Imageworks has a history of this type of animation – squashing and stretching characters and pushing them to the extreme – as evidenced in the Hotel Transylvania franchise, Cloudy, and others. “That’s something we’ve developed and improved over the last eight to 10 years as something that’s in our tool kit,” says Herbst. “Our rigging team knows how to create rigs that can support these types of performances.”
The rigs for Smallfoot were custom built for each character, from the giant yetis to the smaller humans. As Herbst points out, the scale between the character types is exaggerated, which made it difficult at times to frame the shot with the human Percy next to the yeti characters. “Part of our rigging system for this squash and stretch allows us to change the proportions and scale of the characters on the fly, so the animators had a lot of freedom to change that relationship in scenes,” he explains. For instance, there are moments in the movie when the filmmakers wanted Percy to feel like a mouse compared to the yeti Migo. And then there were other times when they wanted the characters to be face to face without it feeling overwhelming. The ability to change scale at any given time was key, whether it was to stretch, or to squash, the snow giants to fit the scene.
The front end of the animation pipeline is based on Autodesk’s Maya, which was used for modeling and animation, augmented by a range of custom tools, solvers, and deformers developed in-house. Hair simulation is done within Maya Nucleus, with custom tools built on top of it, as is the case for grooming. On the back end, the effects are all driven through SideFX’s Houdini, while the studio renders with its own in-house version of Arnold. Lighting is done in Foundry’s Katana, shaders are developed in Open Shading Language (OSL), and compositing is done in Foundry’s Nuke.
As for the generic yetis (and humans) in the film, the team developed a mix-and-match process using the studio’s Kami geometry instance system, whereby procedurals could be layered on top of the hair at render time. And they would have different base hair grooms – short hair or long hair for the arms, legs, and mid-torso, while the shoulder area comprised one of three shapes (U, V, or an arc). Using the procedurals added different looks at render time to define the hair as being curly, wavy, or silky. Adding horns, head hair, and beards helped create the entire yeti village.
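A rough sketch of how such a mix-and-match crowd recipe can work, using hypothetical component names rather than the actual Kami setup:

```python
import random

# Hypothetical component libraries; the real system layers render-time
# procedurals onto base grooms, but the mix-and-match logic is similar.
BODY_GROOMS = ["short", "long"]              # arms, legs, mid-torso
SHOULDER_SHAPES = ["U", "V", "arc"]
HAIR_STYLES = ["curly", "wavy", "silky"]     # render-time procedural look
EXTRAS = ["horns", "head_hair", "beard", None]

def build_generic_yeti(seed):
    """Assemble one background yeti from interchangeable parts.

    Seeding by crowd-member ID keeps each villager's look stable
    from frame to frame and shot to shot."""
    rng = random.Random(seed)
    return {
        "body_groom": rng.choice(BODY_GROOMS),
        "shoulder_shape": rng.choice(SHOULDER_SHAPES),
        "style_procedural": rng.choice(HAIR_STYLES),
        "extra": rng.choice(EXTRAS),
    }

# A small village of distinct-looking yetis built from a handful of base assets.
village = [build_generic_yeti(i) for i in range(20)]
```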
Hairy Beast
One of the most difficult elements to animate in CG is hair, and in this film, the majority of the characters are covered in thick fur. “There was a lot of R&D,” says Kirkpatrick. “Just to get the hair looking real and moving would have taken 200 hours for just one frame, so we had to find a way to get that time down.”
For several years, Imageworks had successfully tackled hair and fur using its legacy pipeline. However, when the crew assessed the huge amount of fur they were facing in this film, Herbst, along with look-dev lead Nicki Lavender and head of effects Henrik Karlsson, began reassessing that setup, looking for a new, more efficient solution in terms of simulation and especially shaders.
“When we looked at how much hair we were going to have to do, we sat down and said, ‘We need to tear this down to bare bones and look at it to see if we can improve it,’” says Herbst. “It was limiting the resolution of how fine and silky you could make the hair look, and simultaneously it was overdriving opacity in a ray tracer, which shot our rendering times through the roof when we were trying to make it do more than it had done in the past.”
So, they created a new hair-shading system specifically for animal, rather than human, hair, one that used a multi-scattering approach. Instead of ribbons that looked like clumps of hair, they now had individual strands, resulting in much higher fidelity in terms of detail and qualities, like texture and softness, for a wider range of hair/fur types.
With the previous method, the opacity of a white character’s hair had to be very thin so a lot of light could transmit through it, but that increased render times because the rays had to travel deep into the hair volume before being stopped by the body of the character. By making the hairs truly opaque and writing the new shader to transport light through the volume while preventing the shadows from becoming too dense within it, the rays no longer penetrate deep into the volume, which enabled the artists to add more hair.
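A back-of-envelope way to see why the old approach was so costly (an illustrative model only, not the production shader math): if a shadow ray terminates at each strand with a probability equal to the strand’s opacity, the expected number of strand intersections per ray is roughly one over that opacity.

```python
def expected_strand_hits(strand_opacity):
    """Average number of strand intersections a shadow ray makes before it
    terminates, assuming the ray stops at each strand with probability
    `strand_opacity` (a simple geometric model, for illustration only)."""
    return 1.0 / strand_opacity

# Thin, semi-transparent ribbons: roughly 20 intersections per shadow ray.
print(expected_strand_hits(0.05))
# Fully opaque strands, with scattering handled inside the shader: one intersection.
print(expected_strand_hits(1.0))
```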
Without question, the amount of hair on each yeti is impressive. Migo alone has 3.2 million individual strands of hair, while Meechee and Fleem have 2.5 million each, Kolka has five million, the robed Stonekeeper sports 1.3 million, and the curly-haired Gwangi has a whopping nine million (3,000 of which are simulation hairs).
On the simulation side, Karlsson and his team developed layers of tools that worked inside the Maya Nucleus solver to drive the control hairs. The studio’s plug-in to its version of Arnold, called Kami, then takes that information and generates and deforms the hair on the fly at render time. “So the control hairs are generated in Maya and that gets sent to the renderer for the lighters, and then they control the number of final hairs that we render,” explains Herbst, pointing out that Meechee, for instance, has over 200 constraints controlling her hair, “which, in the past, we wouldn’t have done because it would have taken too long to solve. So, we built layers of simulation on top of each other to help control the amount of time it would take to solve each step and not have to start at the beginning every time we had to stop and restart.”
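The general pattern of expanding a handful of simulated guide hairs into dense render hair can be sketched as follows; the array shapes and blending scheme here are illustrative assumptions, not the actual Kami implementation:

```python
import numpy as np

def interpolate_render_hairs(control_hairs, weights):
    """Generate dense render hairs by blending sparse, simulated control hairs.

    control_hairs: (C, P, 3) array of C simulated guide curves, P points each.
    weights:       (N, C) array; each render hair is a convex combination of
                   the guides (each row sums to 1).
    Returns an (N, P, 3) array of final hair curves, built at render time so
    only the guides ever need to be simulated or stored."""
    return np.einsum("nc,cpd->npd", weights, control_hairs)

# Toy example: 3 guide hairs with 5 points each, expanded to 4 render hairs.
guides = np.random.rand(3, 5, 3)
w = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.7, 0.3],
              [0.2, 0.3, 0.5]])
render_hairs = interpolate_render_hairs(guides, w)
print(render_hairs.shape)  # (4, 5, 3)
```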
In all, a number of developments resulted in the ability to render all the hair. When the team started working on the film, the studio was pursuing new developments within the Arnold renderer that came to fruition only after the crew had started on Smallfoot, “but we were banking on them,” says Herbst. One of the biggest was adaptive sampling. With most ray tracers out of the box, artists dictate how many anti-aliasing rays will be thrown equally at each pixel in the frame, no matter the pixel’s complexity. This can result in a frame that looks good except for some noise in one spot that can’t be controlled on its own.
“We came up with a system that looks at each area, and if the area doesn’t need any more resolving, it stops the rays and concentrates those rays in the area that needs it,” Herbst explains. “So you could have three-quarters or even 90 percent of your frame that is solved very, very quickly, and then the rest of the rendering power is only going toward the complicated area. That was a huge plus for us. If we did it the old way, the render times would have been through the roof, resulting in hundreds of hours. By doing this, coupled with the next development, we were able to get render times back into production standards.”
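In general terms (this is a toy illustration, not Arnold’s implementation), adaptive sampling gives every pixel a small base budget, estimates how noisy each pixel still is, and spends additional rays only where that estimate remains above a threshold:

```python
import numpy as np

def adaptive_sample(render_pixel, width, height,
                    base_spp=4, max_spp=256, noise_threshold=0.01):
    """Toy adaptive sampler: every pixel gets a few base samples, then extra
    rays are spent only where the pixel's noise estimate is still too high.

    `render_pixel(x, y)` is assumed to return one stochastic radiance sample."""
    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            samples = [render_pixel(x, y) for _ in range(base_spp)]
            # Keep adding samples while the estimated error of the mean is
            # above the threshold and the per-pixel budget is not exhausted.
            while (np.std(samples) / np.sqrt(len(samples)) > noise_threshold
                   and len(samples) < max_spp):
                samples.append(render_pixel(x, y))
            image[y, x] = np.mean(samples)
    return image

# Flat regions converge after the base samples; only noisy ones eat the budget.
img = adaptive_sample(lambda x, y: np.random.normal(0.5, 0.2 if x > 4 else 0.0),
                      width=8, height=2)
```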
That next development, which occurred facility-wide, enabled the group to continue with part of a render, picking up from a certain point, rather than start over from scratch when moving from a mid-level render with a lower anti-alias setting, used to get a feel for the lighting, to a committed higher-resolution frame. “We would do everything at a lower quality, and then, looking at that in final composite, lighters could actually pick which layers needed to go back into the queue and get up-res’d, and it would pick up from where it previously left off,” explains Herbst. “So if that medium-quality version had already burned 20 to 30 hours of clock time, you didn’t have to lose that. It would just start from there and keep going upward.”
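The principle behind such resumable, progressive rendering can be sketched as follows; the class and file format here are hypothetical stand-ins for the studio’s actual system:

```python
import numpy as np

class ResumableRender:
    """Toy progressive renderer whose state can be saved and picked up later,
    so samples burned on a medium-quality proof are never thrown away."""

    def __init__(self, width, height):
        self.accum = np.zeros((height, width))  # running sum of samples
        self.count = 0                          # samples accumulated so far

    def add_samples(self, render_pass, n):
        """Accumulate n more full-frame passes; `render_pass()` is assumed to
        return one stochastic image the size of the frame."""
        for _ in range(n):
            self.accum += render_pass()
            self.count += 1

    def image(self):
        return self.accum / max(self.count, 1)

    def save(self, path):
        np.savez(path, accum=self.accum, count=self.count)

    @classmethod
    def load(cls, path):
        data = np.load(path)
        height, width = data["accum"].shape
        render = cls(width, height)
        render.accum, render.count = data["accum"], int(data["count"])
        return render

# Proof at low quality, inspect, then resume and up-res without losing work.
r = ResumableRender(8, 8)
r.add_samples(lambda: np.random.rand(8, 8), n=16)
r.save("proof.npz")
r2 = ResumableRender.load("proof.npz")
r2.add_samples(lambda: np.random.rand(8, 8), n=48)  # picks up where it left off
```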
This was especially important because, in addition to the furry characters, the way the team did interactive snow also carried a very high render cost. But because of the adaptive sampling, layered with what the studio calls its “Plus AA” for proofing and continuing a frame’s rendering, the team was able to solve almost all of its noise issues without using as much rendering power as it had in the past.
Heart of Stone
Giving the yetis a nice head (and body) of hair – and maintaining it through the extreme form of comedic animation – was a multi-step process. After the character animation was completed, the group ran simulations on each character to capture the natural dynamics of the fur.
Meechee was one of the more complex characters in this regard, with her long, flowing hair that creates a kind of dress, along with another layer on top that looks like a shawl, then a braid on top of that. All of these elements overlap and interact with each other, so when there was a change to the character, those simulations had to be rerun.
Herbst believes that one of the most difficult characters to create for Smallfoot was the Stonekeeper. “One of the worst things to do in any animation pipeline is circle back in the middle to finish a character, and, unfortunately, that was really the only way we could solve Stonekeeper’s requirements,” he says.
Stonekeeper is a hairy character who wears a robe of stones. He also sports head hair that is very long, with layers of braids, and a long, braided beard. “The pipeline here went through animation, into our character effects team to simulate a cloth patch of what would be underlying the stone so it felt more like a robe – a really heavy robe,” says Herbst. “Then that would go to our effects team. While our effects team was working with that robe, adding the stones, the character effects team was actually starting the hair simulation, to get those moving. So many departments worked on Stonekeeper.”
The effects department would do a rigid-body collision simulation for the stones on the robe. Once that was approved, the character effects team would merge the layers together and finish the final simulations for the hair. If something was not working in terms of performance, then the model would be sent back to animation, and the process would begin again.
Rocky Mountain High
Smallfoot takes place in the snow-covered mountains. Initially, the film underwent a number of revisions, including script rewrites, so Kirkpatrick and his team were unable to home in on the specific environments until later in production. As a solution, the group first started building set pieces they knew would exist in this world – which would be snowy, icy, and rocky. They also knew there would be a Yeti Mountain and a human city. Everything else was done on the fly as the environmental artists received sequences from the story department.
The big environment (literally) is Yeti Mountain. It is a beautiful, natural environment of blue sky and billowing clouds – a frozen paradise located high in the Himalayas. For this location, the modelers began by building the outside of the mountain and all the pieces that would later be used to construct the yeti village. From those kit pieces, the group could make just about any other environment that was needed.
“We could go from the top of Yeti Mountain out to this thing called the Ice Field, from which Migo launches himself down through the clouds and into the so-called New World, the human world,” says Herbst. “In the human world, we changed some of the proportions of size, but all of the language of rocks and trees and so on are interchangeable in both locations. So, we were able to assemble them as needed. We never really did any procedural texturing on the fly at render time in any of our movies before. So for this film, we developed a new set of tools that ran on top of some of our other existing tools to leverage that procedural texturing.”
Another significant location in the film is the human city – built from preset parts. “Karey [Kirkpatrick] comes from a live-action directing background, so he would tell us, ‘Build all of this out for me and then I’m going to find my shot.’ That’s not something that you normally want to do in animation. But, we also knew that we needed big vista shots, big helicopter moves and things like that to show the relationship with the city and the mountain. And, to show the scale of the yetis,” says Herbst. “You need those ‘Godzilla’ moments in the film.”
’Snow Joke
Just as hair/fur is a notoriously difficult element for animators, so, too, is water in all its forms: liquid, vapor, and frozen. And where there are yetis, there, of course, is snow and ice. And lots of it! In fact, the film features three different densities of snow: falling snow, surface snow, and effects snow kicked up by active feet.
But blanketing the entire environment in freshly fallen snow is time-consuming if done by hand. “Everywhere you see ice and snow in the movie, even on ledges, none of it is hand painted. All of it is actually created procedurally,” explains Herbst. This allowed the team to interchange set pieces at any given moment or even swap ice to rock, or rock to snow, quite easily.
Initially, the effects artists produced a simulation of a snowfall with wind direction, let the snow accumulate, and then restructured the set and skinned the results. But that turned out to be costly and complicated. “Theo Vandernoot, head of the effects team, suggested we just do an ambient occlusion process instead, and still have a wind direction associated with it to break up the snow and give us drifts,” says Herbst.
This new snow-padding system works within Houdini and contains tools for accumulating large swaths of snow into any environment, based on programmed variables such as wind direction, amount of snowfall, and relative stickiness of the objects to be coated, thereby varying the look throughout the movie.
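A simplified sketch of occlusion-based snow accumulation (illustrative only; the parameter names and weighting are assumptions, not the production Houdini tool):

```python
import numpy as np

def snow_depth(normals, occlusion, wind_dir, snowfall=1.0, stickiness=1.0):
    """Per-point snow thickness for a procedural snow-padding pass.

    normals:    (N, 3) unit surface normals.
    occlusion:  (N,) ambient-occlusion values, 0 = fully covered, 1 = open sky.
    wind_dir:   unit vector of the wind's travel; slopes facing into the wind
                collect extra drift.
    Returns (N,) depths used to displace or build up the snow blanket."""
    up = np.array([0.0, 1.0, 0.0])
    exposure = np.clip(normals @ up, 0.0, 1.0)                     # upward-facing surfaces
    drift = 0.5 * np.clip(normals @ -np.asarray(wind_dir), 0.0, 1.0)  # windward slopes
    return snowfall * stickiness * occlusion * (exposure + drift)

# Example: a flat ledge and a vertical wall, with a light wind blowing along +x.
n = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
print(snow_depth(n, occlusion=np.array([1.0, 0.8]), wind_dir=[1.0, 0.0, 0.0]))
```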
“It gave us flexibility. When Karey found his shot, we could still move things around in the set,” says Herbst. “If we had already modeled all the snow in, then we would’ve had to remodel the snow again. But by dropping it in and letting it accumulate, we could move objects around and control the amount of snow in each shot. We did this to all sets in both the yeti and human worlds.”
According to Herbst, Kirkpatrick wanted a lot of atmosphere in each shot, which was generated by the effects team through cornice-driven snow, whereby volumetric snow rips off from the blanket edges and sort of just hangs in the air. “Effects did simulations of all this volumetric snow, and we cached all that as hundreds of pieces per location,” he adds. “Then in any given shot, the lighters could go in and selectively choose the ones we wanted for a given framing of the environment and have that in the background to help create the depth we wanted, so we have all this moving texture in the background instead of just flat color.”
In addition, the effects team used a few different systems to tackle interaction with the snow. For footprints, they used a displacement system that took historical data of a character’s feet in a shot and interpolated that back out of the shot and across a sequence. It would displace at render time and add bits of granular chunks of snow around the edges of the displacement so the prints would not appear too smooth and perfect.
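The footprint idea can be sketched as a depth map accumulated from each frame’s foot contacts and later applied as displacement; the parameters below are hypothetical:

```python
import numpy as np

def accumulate_footprints(foot_contacts, resolution=256, ground_size=10.0,
                          print_radius=0.3, print_depth=0.15):
    """Build a footprint depth map from the history of foot contacts in a shot.

    foot_contacts: iterable of (x, z) ground positions where a foot landed.
    Returns a (resolution, resolution) map of snow depression depths, which
    would be applied as displacement at render time; granular chunks would be
    scattered separately along the rims to keep them from looking too smooth."""
    depth = np.zeros((resolution, resolution))
    xs = np.linspace(0.0, ground_size, resolution)
    gx, gz = np.meshgrid(xs, xs)
    for (fx, fz) in foot_contacts:
        dist = np.sqrt((gx - fx) ** 2 + (gz - fz) ** 2)
        crater = print_depth * np.clip(1.0 - dist / print_radius, 0.0, 1.0)
        depth = np.maximum(depth, crater)   # prints overlap, they never stack
    return depth

# A short stride across the set; each frame's contact adds to the history.
prints = accumulate_footprints([(2.0, 5.0), (2.8, 5.2), (3.6, 5.0)])
```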
The group also devised a new system, called Katyusha, that offers artists a more efficient way of producing high-resolution granular snow by combining a rigid-body destruction system with a fluids solver. Its name comes from doing each step of the process in small chunks versus a single, large simulation. “Katyusha lets us do a collision rigid-body simulation that breaks up all these chunks that stay stuck together or tumble and break off into a subsequent number of smaller chunks, or even eventually kind of collapse back into flakes of snow,” says Herbst. This was used in a scene in which a plane drops into the snow and scrapes through it for a long distance, while Migo dives into the snow and tunnels underneath to keep from getting run over.
Rather than rendering this type of snow as geometry, Imageworks rendered it as a volume for a few reasons. First, if left as chunks, the snow looked more like ice because of the rigid, sharp edges. Also, the volume enabled the light to transport through it better, so the crew could mix and match the various types of snow. “We could have the volume, chunks made of geometry, and we could have the grain really small, as well as dust in the air, and they can all render together in the same light,” Herbst notes.
Meanwhile, not all of the yetis are “white”; some have more of a purple, brown, or peach hue. Still, making mostly light-haired yetis pop against their environment of snow and ice was difficult, a feat largely accomplished through lighting techniques. A lot of the action occurs in the morning or afternoon, so there is a lot of color. At times, the lighters would enhance the snow color, making it more golden, or enhance a character’s color with a rim light to create a visual break between it and the snow. Generally, they kept the snow a shade darker, more on the gray side, so the yetis would be the brightest objects in the scene.
Evolutionary Tale
The story premise behind the existence of these two worlds – the yetis’ and the humans’ – is that each species evolved along its own path, separated from the other visually and by location. But to make this film, the animation team had to conquer a mountain of technical issues, evolving a number of methodologies, including those for creating hair/fur and snow/ice.
In addition to these “big” evolutionary steps, there were a host of smaller ones – all of which laid the groundwork for a compelling, animated film that no doubt will melt many hearts.