CTO Carl Ludwig of Blue Sky Studios discusses the latest technology used to render Epic’s rich environment and magical characters
In February 1986, six people who had worked at MAGI in Elmsford, New York, on Disney’s Tron decided to form a computer animation company. The founders – Carl Ludwig, Dr. Eugene Troubetzkoy, Chris Wedge, Alison Brown, David Brown, and Michael Ferraro – named it Blue Sky Studios. Ludwig was an electrical engineer who had worked for NASA on tracking systems for the Apollo mission’s lunar module. Troubetzkoy had put a PhD in theoretical physics to work creating computer simulations of nuclear particle behavior. Wedge was a classically trained animator with a master’s degree in computer graphics from Ohio State University.
Working for months without pay in the early days, Troubetzkoy, Ferraro, and Ludwig developed proprietary physics-based rendering software called CGI Studio. During the next 10 years, the small studio survived as an effects house for commercials and feature films. Then, Wedge persuaded the Blue Sky team to work on a short animated film.
That film, “Bunny,” won an Oscar in 1998, and caused ripples in CG circles: Ludwig and Troubetzkoy’s software had made possible the first use of radiosity throughout an animated film. The physically based rendering gave the film a unique, natural look. And that film transformed Blue Sky Studios, soon after acquired by Twentieth Century Fox, into a feature animation studio now known especially for its Ice Age films. The studio’s most recent animated feature, Epic, sends the CG camera from the human world into a backyard as seen by the tiny characters who live there. (Read a Q&A with Chris Wedge in the May/June 2013 issue of CGW.)
CGW Contributing Editor Barbara Robertson spoke with Carl Ludwig, vice president and chief technology officer at Blue Sky Studios, about the innovative studio’s latest film, Epic.
Did you develop new technology specifically for this film?
We’re in constant development; we develop things all along, but this is the first film where we’ve really pulled out all the stops and could show what we do. The human animation is extraordinary. The lighting is incredible. We have extensive use of subsurface diffusion that allowed us to do forest scenes with glowing leaves and appropriate shadows. The leaves glow when sunlight hits them.
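To give a sense of what that leaf glow involves, here is a minimal sketch of one common way back-lit translucency on a thin surface is approximated: light arriving at the far side is transmitted and attenuated by thickness. It is illustrative only, not Blue Sky’s CGI Studio shader; the function leafTranslucency and the thickness and extinction values are hypothetical.

```cpp
// A hedged sketch of back-lit leaf translucency, not CGI Studio code.
// Light striking the far side of a thin surface is transmitted and
// attenuated by thickness (Beer-Lambert); the constants are made up.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// n: shading normal (unit), l: direction toward the light (unit),
// thickness: path length through the leaf, sigma_t: extinction coefficient.
double leafTranslucency(const Vec3& n, const Vec3& l, double thickness, double sigma_t) {
    double backLit = std::max(0.0, -dot(n, l));           // light hitting the far side
    double attenuation = std::exp(-sigma_t * thickness);   // Beer-Lambert falloff
    return backLit * attenuation;
}

int main() {
    Vec3 normal  = {0.0, 0.0, 1.0};    // leaf facing the camera
    Vec3 toLight = {0.0, 0.0, -1.0};   // sun directly behind the leaf
    std::printf("transmitted fraction: %.3f\n",
                leafTranslucency(normal, toLight, 0.0002 /*m*/, 900.0 /*1/m*/));
    return 0;
}
```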
Chris [Wedge] was very clear about what he wanted. It was a challenge, but he had faith that we could do it, and we did. And, it wasn’t just people in research, it was rigging, it was every department. Everyone stepped up to the plate. Chris wanted to take advantage of our strength, and we certainly did in this movie. It’s beautiful. Gorgeous. The amount of detail is amazing.
Is the extensive use of subsurface diffusion due to faster hardware?
In general, it’s just that we’ve moved on with our development. We started with raytracing in 1987, from our inception, because it simulates the way light behaves, and that allows you to simulate the physical reality in nature: the subsurface diffusion, the way the shadows play, radiosity. We have all that, and have been doing it for years. Everyone said we were crazy. Now, everyone is jumping on the bandwagon, but we have a big lead.
Each film brings us another notch up, and for this film, we really pushed it. We can do almost real-time lighting now – we can change one light at a time and recalculate shadows – and that helps with production speed.
Didn’t the amazing amount of detail affect rendering speed, though?
Yes. But, there is a reason we can do a tremendous amount of detail. Most people raytrace the polygons. We don’t do that. We don’t subdivide into polygons. We trace directly against bicubic patches; we solve the intersections directly and quickly, and that makes a huge difference. It saves a lot of memory. It took years and years to develop this capability; it didn’t come easy. But we’ve been doing it for years, and we’re cashing in on it now.
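Intersecting a ray with a curved patch directly, rather than with a pile of triangles, can be sketched in a few dozen lines. The example below is not CGI Studio code; it assumes a bicubic Bezier patch, rewrites the ray as two intersecting planes, and runs Newton iteration on the patch parameters (u, v). The names evalPatch and intersect, the tolerances, and the test scene are all illustrative.

```cpp
// Hedged sketch of direct ray/bicubic-patch intersection via Newton iteration.
#include <array>
#include <cmath>
#include <cstdio>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Cubic Bernstein basis and its derivative.
void bernstein(double t, double b[4], double db[4]) {
    double s = 1.0 - t;
    b[0] = s*s*s;     db[0] = -3.0*s*s;
    b[1] = 3.0*t*s*s; db[1] = 3.0*s*s - 6.0*t*s;
    b[2] = 3.0*t*t*s; db[2] = 6.0*t*s - 3.0*t*t;
    b[3] = t*t*t;     db[3] = 3.0*t*t;
}

// Evaluate a bicubic Bezier patch and its partial derivatives at (u, v).
void evalPatch(const std::array<std::array<Vec3,4>,4>& P, double u, double v,
               Vec3& S, Vec3& Su, Vec3& Sv) {
    double bu[4], dbu[4], bv[4], dbv[4];
    bernstein(u, bu, dbu);
    bernstein(v, bv, dbv);
    S = Su = Sv = {0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            S  = S  + P[i][j] * (bu[i]  * bv[j]);
            Su = Su + P[i][j] * (dbu[i] * bv[j]);
            Sv = Sv + P[i][j] * (bu[i]  * dbv[j]);
        }
}

// The ray (o, d) is rewritten as the intersection of two planes through o with
// normals n1 and n2; Newton iteration solves n1.(S(u,v)-o) = n2.(S(u,v)-o) = 0.
bool intersect(const std::array<std::array<Vec3,4>,4>& P,
               const Vec3& o, const Vec3& d, double& u, double& v, double& t) {
    Vec3 axis = std::fabs(d.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 n1 = cross(d, axis);
    Vec3 n2 = cross(d, n1);
    for (int iter = 0; iter < 16; ++iter) {
        Vec3 S, Su, Sv;
        evalPatch(P, u, v, S, Su, Sv);
        double f = dot(n1, S - o), g = dot(n2, S - o);
        if (std::fabs(f) + std::fabs(g) < 1e-10) {
            t = dot(d, S - o) / dot(d, d);
            return t > 0.0 && u >= 0.0 && u <= 1.0 && v >= 0.0 && v <= 1.0;
        }
        double a = dot(n1, Su), b = dot(n1, Sv);   // 2x2 Jacobian,
        double c = dot(n2, Su), e = dot(n2, Sv);   // solved by Cramer's rule
        double det = a*e - b*c;
        if (std::fabs(det) < 1e-14) return false;
        u -= ( e*f - b*g) / det;
        v -= (-c*f + a*g) / det;
    }
    return false;
}

int main() {
    // A simple flat patch spanning the unit square at z = 0.
    std::array<std::array<Vec3,4>,4> P;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            P[i][j] = {i / 3.0, j / 3.0, 0.0};
    double u = 0.5, v = 0.5, t = 0.0;
    if (intersect(P, {0.25, 0.75, 1.0}, {0.0, 0.0, -1.0}, u, v, t))
        std::printf("hit at u=%.3f v=%.3f t=%.3f\n", u, v, t);
    return 0;
}
```

Because the hit is solved analytically on the patch, no tessellated polygons ever have to be generated or stored, which is where the memory savings Ludwig describes come from.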
We also have a powerful voxel capability that allows us to render fine detail at huge distances. Saving memory is extraordinarily important, and we do that very well. It allows us to render a complex scene and still contain it within the allotted memory.
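Ludwig does not detail the voxel system, but the memory argument can be illustrated with a generic sparse grid: only occupied cells are stored, so fine geometry baked into voxels for distant views costs a fraction of the source geometry. The SparseVoxelGrid type, the hashed cell key, and the density payload below are assumptions made for the example, not the studio’s format.

```cpp
// Hedged sketch of a sparse voxel grid: only occupied cells take memory.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Pack integer cell coordinates into one 64-bit key (21 bits per axis;
// coordinates that differ by 2^21 would collide, which is fine for a sketch).
uint64_t cellKey(int64_t x, int64_t y, int64_t z) {
    const uint64_t mask = (1ull << 21) - 1;
    return ((uint64_t)x & mask) | (((uint64_t)y & mask) << 21) | (((uint64_t)z & mask) << 42);
}

struct SparseVoxelGrid {
    double cellSize;                              // world-space size of one voxel
    std::unordered_map<uint64_t, float> density;  // only occupied cells are stored

    void splat(double x, double y, double z, float amount) {
        density[cellKey((int64_t)std::floor(x / cellSize),
                        (int64_t)std::floor(y / cellSize),
                        (int64_t)std::floor(z / cellSize))] += amount;
    }
    float lookup(double x, double y, double z) const {
        auto it = density.find(cellKey((int64_t)std::floor(x / cellSize),
                                       (int64_t)std::floor(y / cellSize),
                                       (int64_t)std::floor(z / cellSize)));
        return it == density.end() ? 0.0f : it->second;
    }
};

int main() {
    SparseVoxelGrid grid{0.05, {}};               // 5 cm voxels
    grid.splat(1.02, 0.31, 4.72, 1.0f);           // bake a bit of distant foliage
    std::printf("cells stored: %zu, sample: %.1f\n",
                grid.density.size(), grid.lookup(1.03, 0.32, 4.73));
    return 0;
}
```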
And, the raytracer is built with intelligence in mind. It’s not a simple path tracer that starts a ray and lets it go. At every intersection, we make decisions about what to do next, how many rays should be fired next, and where they should be fired.
How is the raytracer making decisions?
You could consider it an information-gathering exercise. The ray reaches an intersection and asks, ‘What do I do now? What information do I need? What don’t I need?’ It makes decisions based on what it knows so far and what it needs to know, and decides what to do. Maybe the ray reaches a surface and there are a number of lights on that surface. It asks which lights contribute more and which contribute less. That’s a very simple decision to make.
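One standard way to make the kind of light decision he describes is to estimate each light’s potential contribution at the hit point, then spend shadow rays in proportion to those estimates rather than sampling every light equally. The sketch below is a generic importance-sampling illustration, not CGI Studio’s logic; estimate, pickLight, and the simple point-light model are hypothetical.

```cpp
// Hedged sketch: choose which light to sample at a hit point based on a cheap
// estimate of its contribution, instead of firing a shadow ray at every light.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { double x, y, z; };
struct PointLight { Vec3 position; double intensity; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Cheap estimate ignoring occlusion: cosine term over squared distance.
double estimate(const PointLight& L, const Vec3& p, const Vec3& n) {
    Vec3 d{L.position.x - p.x, L.position.y - p.y, L.position.z - p.z};
    double dist2 = dot(d, d);
    double cosTheta = std::max(0.0, dot(n, d) / std::sqrt(dist2));
    return L.intensity * cosTheta / dist2;
}

// Pick one light with probability proportional to its estimate; the caller
// divides the sampled light's contribution by the returned pdf.
int pickLight(const std::vector<PointLight>& lights, const Vec3& p, const Vec3& n,
              std::mt19937& rng, double& pdf) {
    std::vector<double> w(lights.size());
    double total = 0.0;
    for (size_t i = 0; i < lights.size(); ++i) total += (w[i] = estimate(lights[i], p, n));
    if (total <= 0.0) { pdf = 0.0; return -1; }          // nothing contributes
    std::uniform_real_distribution<double> uni(0.0, total);
    double r = uni(rng), accum = 0.0;
    for (size_t i = 0; i < lights.size(); ++i) {
        accum += w[i];
        if (r <= accum) { pdf = w[i] / total; return (int)i; }
    }
    pdf = w.back() / total;
    return (int)lights.size() - 1;
}

int main() {
    std::vector<PointLight> lights = {{{0, 5, 0}, 100.0}, {{50, 5, 0}, 100.0}};
    Vec3 p{0, 0, 0}, n{0, 1, 0};
    std::mt19937 rng(7);
    double pdf;
    int chosen = pickLight(lights, p, n, rng, pdf);
    std::printf("chose light %d with pdf %.3f\n", chosen, pdf);
    return 0;
}
```

In a full renderer, the chosen light’s contribution would then be divided by the returned probability so the overall estimate stays unbiased.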
We spent years trying to get something to look right, and then we figured out how to make it faster. When CPUs were slow and memory was sparse, we had to really concentrate on those things. We laid that groundwork at a time when there was no other way to do it, and it serves us well now. And, all that background of making the renderer faster led to this way of making decisions properly.
Tell us about your interactive lighting.
A number of years ago, we threaded all the code, and now machines have multiple cores. So, now we can bring in a scene, and once we have it, we can move one light at a time and re-render it. Of course, the first render takes longer, and if you change more than one light at a time, it slows down, but you usually change only one at a time. I have a frame in front of me right now with water flowing into a little pond. It is at half-resolution. I can interactively change one light at a time and render it in two seconds, changing shadows and everything. We’re happy with the way things are working. The other thing we do is handle materials differently.
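One way to get that behavior, sketched below under stated assumptions, relies on the fact that illumination is linear in each light’s emission: cache a per-light contribution image, re-render only the layer for the light that was edited, and re-sum the layers. This is a generic illustration, not Blue Sky’s relighting tool; Relighter, updateLight, and composite are hypothetical names, and the per-light render is faked.

```cpp
// Hedged sketch of incremental relighting via cached per-light layers.
#include <cstdio>
#include <vector>

using Image = std::vector<float>;  // one grayscale buffer, width*height pixels

struct Relighter {
    int width, height, numLights;
    std::vector<Image> layers;     // cached contribution of each light

    Relighter(int w, int h, int n)
        : width(w), height(h), numLights(n), layers(n, Image(w * h, 0.0f)) {}

    // renderLight stands in for the expensive per-light render (shadows included).
    template <class RenderFn>
    void updateLight(int lightIndex, RenderFn renderLight) {
        layers[lightIndex] = renderLight(lightIndex);   // only this layer is redone
    }

    Image composite() const {                           // cheap: just a sum
        Image out(width * height, 0.0f);
        for (const Image& layer : layers)
            for (size_t i = 0; i < out.size(); ++i) out[i] += layer[i];
        return out;
    }
};

int main() {
    Relighter r(4, 4, 3);                               // tiny 4x4 frame, 3 lights
    auto fakeRender = [&](int light) { return Image(16, 0.1f * (light + 1)); };
    for (int i = 0; i < 3; ++i) r.updateLight(i, fakeRender);    // first full render
    r.updateLight(1, [&](int) { return Image(16, 0.5f); });      // artist edits light 1
    std::printf("pixel 0 after edit: %.2f\n", r.composite()[0]); // 0.1 + 0.5 + 0.3
    return 0;
}
```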
In what way do you handle materials differently?
Most people map materials on. We do some mapping, but we handle most materials procedurally. The surface of a rock with a little moss is done procedurally. The beauty of this is that it saves memory. We began using procedural shaders with Robots, and now we have a powerful materials department able to create the shaders. At times, we need maps. But, procedural shaders work especially well for organic things, which have a tendency to be random. And, in this film, we have an organic forest with leaves, grass, bushes, water, boulders, all kinds of things. You name it, it’s in there. The other nice thing is that we have a global lighting solution.
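For a flavor of what a procedural material looks like in code, the sketch below drives a moss-on-rock blend from a hash-based value noise evaluated at the surface position, so no texture map is stored. It is a generic illustration, not a CGI Studio shader; latticeValue, valueNoise, mossAmount, and the chosen frequency and colors are made up for the example.

```cpp
// Hedged sketch of a procedural moss-on-rock shader driven by value noise.
#include <cmath>
#include <cstdio>

// Integer-lattice hash turned into a pseudo-random value in [0, 1).
double latticeValue(int x, int y, int z) {
    unsigned int h = (unsigned int)x * 374761393u + (unsigned int)y * 668265263u
                   + (unsigned int)z * 1440662683u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h ^ (h >> 16)) / 4294967296.0;
}

double lerp(double a, double b, double t) { return a + (b - a) * t; }

// Trilinearly interpolated value noise at a 3D point.
double valueNoise(double x, double y, double z) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y), zi = (int)std::floor(z);
    double tx = x - xi, ty = y - yi, tz = z - zi;
    double c00 = lerp(latticeValue(xi, yi, zi),         latticeValue(xi + 1, yi, zi),         tx);
    double c10 = lerp(latticeValue(xi, yi + 1, zi),     latticeValue(xi + 1, yi + 1, zi),     tx);
    double c01 = lerp(latticeValue(xi, yi, zi + 1),     latticeValue(xi + 1, yi, zi + 1),     tx);
    double c11 = lerp(latticeValue(xi, yi + 1, zi + 1), latticeValue(xi + 1, yi + 1, zi + 1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}

// Moss coverage: more noise-driven moss on upward-facing parts of the rock.
double mossAmount(double px, double py, double pz, double normalY) {
    double n = valueNoise(px * 8.0, py * 8.0, pz * 8.0);  // frequency picked by eye
    double upFacing = std::fmax(0.0, normalY);
    return std::fmin(1.0, n * upFacing * 1.5);
}

int main() {
    // Shade one point on the rock: blend gray rock with green moss.
    double moss = mossAmount(0.37, 1.12, -0.58, 0.9);
    double r = lerp(0.45, 0.15, moss), g = lerp(0.45, 0.50, moss), b = lerp(0.45, 0.12, moss);
    std::printf("moss=%.2f  rgb=(%.2f, %.2f, %.2f)\n", moss, r, g, b);
    return 0;
}
```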
A global lighting solution?
I don’t want the artists to be clerks who have to keep track of details. I want them to be able to set up the lighting and know how the materials will interact with the light. I want software that allows them to be artists. So, the renderer allows them to set up a scene and, in the case of a forest, a little glen, let’s say, you have sunlight coming through. You decide where the sunlight is – you have a skylight. Then, you may see a dark area. So, you light that with a soft light and that’s it. I want the lighters to think the way a live-action director would: Go in, set up the lights, have a couple reflectors for fill, and that’s it. They don’t always do that. Sometimes they go crazy with a zillion lights.
How many people do you have working in R&D?
Only 14, including me and Eugene [Troubetzkoy]. It’s a small group, but very special. They are very, very good at what they do. Some of us have been together for a long time; some are new people out of college. We’ll probably add a few more. The most important thing is to find people passionate about what they do, excited. That leads you to places you wouldn’t ordinarily get to. Everyone has particular strengths but can reach across a broad area. We have some beautiful work on hair collision, propagation, rigging. We have Bomba [the father in Epic] run his fingers through his hair, and those calculations are all proprietary. We use some third-party software for fluid simulation, but if no one has done what we need, we do it ourselves. We always look for – not a precise scientific result, which might take forever to compute if you could do it – something that’s visually correct. It’s like the way an artist creates a painting that captures the emotional essence of what’s there. We want something that just looks perfect.
Did working with such small characters in a microworld create any particular challenges?
Two-inch-tall things behave differently – they move faster and quicker because there’s not as much mass to move around. And, if you illuminate a tiny arm, for example, you get subsurface diffusion coming through. So, we had to change the density of some things.
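The density change he mentions can be illustrated with simple Beer-Lambert attenuation, exp(-sigma_t * thickness): shrink an arm to roughly two-inch scale while keeping the same material coefficients and far more light leaks through, so the scattering density is scaled up to compensate. The numbers below are hypothetical, not production values.

```cpp
// Hedged numeric illustration of why scattering density is rescaled for tiny characters.
#include <cmath>
#include <cstdio>

double transmitted(double sigma_t, double thicknessMeters) {
    return std::exp(-sigma_t * thicknessMeters);   // fraction of light passing through
}

int main() {
    double sigmaHuman = 60.0;        // hypothetical extinction coefficient, 1/m
    double humanArm = 0.08;          // ~8 cm thick arm
    double tinyArm = humanArm / 36;  // same arm at roughly two-inch character scale

    std::printf("human-scale arm:        %.4f transmitted\n", transmitted(sigmaHuman, humanArm));
    std::printf("tiny arm, same density: %.4f transmitted (glows through)\n",
                transmitted(sigmaHuman, tinyArm));
    std::printf("tiny arm, density x36:  %.4f transmitted (matches the human look)\n",
                transmitted(sigmaHuman * 36, tinyArm));
    return 0;
}
```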
Is there something in Epic that you’re particularly proud of?
Proud? I’m proud of everyone here. The most important constituent of any effort like this is the people, and the people here made a huge difference. You develop software that’s capable of doing something, but it becomes an exquisite tool in the hands of passionate artists. We have an animator who does something incredible in a scene lit beautifully with exquisite models. It’s exciting. When you’re doing that, it’s no longer work. It’s more like you’re having fun. And, you know, we really encourage people to take risks and have fun here.
Read part II of this series of Q&As on Epic.