Super Showdown
Marc Loftus
May 20, 2016

I first had a chance to speak with MPC VFX supervisor Guillaume Rocheron two summers ago, not long after he had finished work on the feature The Secret Life of Walter Mitty. Rocheron recently headed up the MPC team that created 450 shots for the new Warner Bros. film Batman v Superman: Dawn of Justice, which was the topic of much discussion, especially among comic book fans, who have no doubt debated for years over who would emerge victorious if these two superheroes squared off.

MPC served as the lead visual effects house on the film, directed by Zack Snyder, and was responsible for the fight sequence in Gotham and the action featured in the film's third act. Much of the fighting relied on digital doubles of Batman (played by Ben Affleck) and Superman (portrayed by Henry Cavill).

Rocheron recently Skyped me from New Zealand, where he's supervising the VFX for the 2017 film Ghost in the Shell. He took some time to reflect on MPC's Batman v Superman work and how the team achieved such realistic results at 4K/IMAX resolution.

What was MPC responsible for on Batman v Superman: Dawn of Justice?

Most of the work we did was around the actual sequence where Batman and Superman fight in Gotham City, and then the big third-act battle – the three heroes and Doomsday in Gotham City, as well.

How many shots does that represent?

That’s probably around 450.

You are a VFX supervisor for MPC. Who else made contributions?

Scanline VFX, Double Negative, and Weta Digital. We were the lead vendor on the film. We had the majority of the work, and I would say the bigger sequences, even though the other facilities had some big sequences of their own. Everything is big in the movie. The Batman/Superman fights and the big end scene all combined represented the biggest chunk of the work.

MPC has multiple locations. How was this film broken up?

We did most of the work in the Vancouver facility. As usual, there is always work going through London for building assets. Bangalore, India, handled some matchmove, roto animation, and paint work. We relied quite a lot on the Bangalore facility on this movie because we did a lot of roto animation, which is re-creating an actor's movements in 3D so we can add a cape or a piece of armor to him.

It was a challenging project for Bangalore because the Batman/Superman sequence was done in the 4K IMAX format, which is unusual for us, and it involved creating very high-resolution digital doubles, as well as adding the armor to Ben Affleck – the big armored Batman suits. In many shots where Batman and Superman are fighting, they have no capes, so we add them in and choreograph them. The capes are digital elements, except when the characters are just standing or walking.

What were some of the challenges?

The big challenge for us on that scene was creating a digital version of Batman and a digital version of Superman so we could really get them in mid-flight, fighting at the same time. The IMAX format was a challenge because it carries eight times more detail than a standard 2K film. It was very demanding on our end and led us to build the most complex, highest-resolution digital doubles MPC has created to date.
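For a rough sense of that jump in detail, here is a back-of-the-envelope pixel count. The frame dimensions below are illustrative assumptions rather than MPC's actual delivery specs, and Rocheron's "eight times" figure likely refers to the total image information captured by the large IMAX negative.

```python
# Back-of-the-envelope pixel budget: a 4K IMAX-ratio frame vs. a standard
# 2K scope frame. Dimensions are illustrative assumptions, not MPC's specs.
imax_w, imax_h = 4096, 2864    # 4K at IMAX's ~1.43:1 aspect ratio
scope_w, scope_h = 2048, 858   # 2K at a 2.39:1 scope aspect ratio

imax_px = imax_w * imax_h      # ~11.7 million pixels per frame
scope_px = scope_w * scope_h   # ~1.8 million pixels per frame

# Roughly 6.7x more pixels, in the same ballpark as "eight times more detail."
print(f"{imax_px / scope_px:.1f}x more pixels per IMAX-resolution frame")
```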

Did MPC’s work focus on the characters, or did the studio do other VFX too?

It was a bit of everything – mostly characters, capes, and costumes. We had a few set extensions for when they fight on the roof. When they fight on the ground, it is a location in Detroit – a real train station. Then they go fight on the rooftop, and the rooftop is an interior greenscreen stage, for which we created Gotham City all around.

Then they go down through the floors, and the floors are partial sets. The first floor they go to is a set, and then they go down one more floor, where the bathroom stalls are, and that's a partial set with a big staircase that goes all the way down to the train station. That staircase doesn't exist – Batman throws Superman through it – so we created all of that digitally. Finally, down in the lobby, the lobby is an existing location in Detroit.


What scenes are you most proud of?

In terms of complexity, the end scene where Doomsday, Wonder Woman, Batman, and Superman fight in Gotham City. Those are very technically complex shots. There are a lot of digital doubles. Obviously, Doomsday is all-digital. Gotham City doesn't exist. It's a lot of work technically. But artistically, I really like the Batman/Superman fights, because it's not only an iconic fight – there is something very graphic and photographic about it all. I think the visual effects work is very interesting because of the IMAX format. There are all the capes and costumes and digital actors, and it's pretty seamless. That's something that stands out from the bulk of the work.

What tools does MPC rely on?

It's [Autodesk] Maya mostly as a 3D platform. We use [Pixar] RenderMan for rendering. We used the new RenderMan RIS, which is the latest upgrade in RenderMan 19. It makes RenderMan a full-on raytracer, a bit like [Solid Angle] Arnold and [Chaos Group] V-Ray. It was one of those things that became available just at the right time, because it played a key role in allowing us to render everything we had to render.

It allowed us to push the quality of our digital doubles, which really helped make the skin and the fabric more realistic. There's a level of detail that we couldn't reach before. But it also mattered for Gotham City: every shot has two square kilometers of buildings and streets and street props. When they destroy it, there's the ground, which we call the lava ground – kind of like the dry lava fields that you see – and you are populating whole shots. You have hundreds of fires and smoke plumes and crushed props. The volume of that render was absolutely incredible. And we were able to render it all together in a fairly coherent way because we updated to this new way of rendering.
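A bit of background on what "full-on raytracer" means here: a path tracer such as RIS, Arnold, or V-Ray renders an image by Monte Carlo-sampling light paths to estimate the rendering equation. The formulation below is standard graphics theory, not anything specific to MPC's pipeline:

$$
L_o(\mathbf{x},\omega_o) = L_e(\mathbf{x},\omega_o) + \int_{\Omega} f_r(\mathbf{x},\omega_i,\omega_o)\, L_i(\mathbf{x},\omega_i)\,(\mathbf{n}\cdot\omega_i)\, \mathrm{d}\omega_i
$$

Here $L_o$ is the outgoing radiance at surface point $\mathbf{x}$, $L_e$ is emitted light, $f_r$ is the material's BRDF, and the integral gathers incoming light $L_i$ over the hemisphere $\Omega$. Because skin, fabric, fire, and smoke are all estimated through this one sampling machinery, scaling up quality becomes largely a matter of ray budget rather than per-effect tricks.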

Do you render locally or in the cloud?

We do all the rendering locally within the facility. We have multiple renderfarms – one each in London, Montreal, and Vancouver. We rendered the movie mostly on the Vancouver renderfarm. I can't remember its exact size, but it's probably around 10,000 processors. You need a lot of those.

Has the challenge been solved for creating digital doubles at this point?

I think there is still some work to do in terms of creating digital actors. We are able to create photorealistic digital humans, but I think now the key is how to make them digital actors. It's always a complicated thing. You can make skin look realistic and make eyes look realistic, but it's all about the movements and the subtleties. I don't think we've cracked that 100 percent yet.

I think in this movie we made a step forward in being able to do that in the IMAX format at 4K resolution. We have some digital actors that are almost full frame at that resolution, and I think we were able to get them rendered and moving realistically. I don't know if they'd be able to sit down and read a book for two hours and do that realistically. It's still going to take another few years. There have been very good attempts at doing it, and I think every time we are getting closer. I was pretty pleased with this one.

We got quite a bit closer in terms of working at very high resolution on very large formats – things are holding up. We managed to maintain the movements and the likeness of the actors, but I think there is always going to be a little bit of a challenge in pushing it to the next stage.

So the next challenge is solving the performance hurdle?

It’s all about the performance. A good performance can only be given by a good actor. How you translate that into the visual effects world, I think is the key.

How did you animate the digital characters? Was it motion capture or keyframe animation?

It's a bit of a combination. We did a lot of keyframe animation. And we did what we call 'stunt capture': we get stunt doubles in performance-capture suits and shoot them with multiple cameras so we can re-create their body movements. We did that for the digital doubles. We did that for Doomsday, as well.

For the faces, very often we have helmet-mounted rigs with a little HD camera on a rod that films the face, and we put tracking markers on the actors so we can film just the facial performance. Then we take that and solve the facial performance onto our rigs to re-create a copy of that performance on the digital character.
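To make "solving" a little more concrete: one common, generic approach is to fit the tracked marker positions with the rig's controls – for example, non-negative blendshape weights – via least squares, frame by frame. The sketch below is a toy illustration of that idea; MPC's actual solver and facial rigs are proprietary and far more sophisticated.

```python
import numpy as np
from scipy.optimize import nnls

# Toy facial solve: fit per-frame blendshape weights to tracked markers.
# A generic illustration only; not MPC's production solver.
#
# neutral: (M, 3) marker positions on the neutral face
# deltas:  (K, M, 3) per-blendshape marker offsets from neutral
# tracked: (M, 3) marker positions observed by the head-mounted camera

def solve_frame(neutral, deltas, tracked):
    K, M, _ = deltas.shape
    A = deltas.reshape(K, M * 3).T          # (3M, K) basis matrix
    b = (tracked - neutral).reshape(M * 3)  # observed marker offsets
    weights, _residual = nnls(A, b)         # non-negative least squares
    return weights                          # one weight per blendshape

# Example with random stand-in data (2 shapes, 4 markers):
rng = np.random.default_rng(0)
neutral = rng.normal(size=(4, 3))
deltas = rng.normal(size=(2, 4, 3))
true_w = np.array([0.3, 0.7])
tracked = neutral + np.einsum("k,kmc->mc", true_w, deltas)
print(solve_frame(neutral, deltas, tracked))  # recovers ~[0.3, 0.7]
```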

What’s next?

I am working as VFX supervisor for Ghost in the Shell. I am overseeing the VFX for the whole film, like I did on Walter Mitty. I've been on it since I wrapped up on Batman v Superman. I flew to New Zealand to start prep; we started shooting two months ago, and we have another two months of shooting.

Will much of the VFX go through MPC?

We don’t have a full breakdown yet. We know MPC will do the majority, and the rest of the work hasn’t been broken down yet. It’s a very exciting project and has been long awaited. There’s something very interesting to do there.

Marc Loftus is the senior editor and director of Web content at Post Magazine, CGW’s sister publication.