Summer Effects
Issue: Volume 38, Issue 4 (Jul/Aug 2015)

The 2015 summer blockbuster season is one of the hottest in recent memory. It started off on shaky ground – though for no reason other than the “devastating” visual effects in San Andreas. Other films sent shock waves of their own with their CG work. But the one to take the biggest bite out of the box office was Jurassic World, which set a number of records.

In one movie, digital artists did the impossible, putting Ethan Hunt (actor Tom Cruise) in very dangerous situations for Mission: Impossible – Rogue Nation. Double Negative handled the majority of the work, completing 1,000 of the film’s 1,200 VFX shots, including a complicated sequence set at the Vienna Opera House, another involving an underwater heist, and one unique scenario with a masked man.

Whereas Rogue Nation is about action and intrigue, San Andreas is about havoc and devastation at the hands of Mother Nature in the form of California earthquakes. A number of facilities helped create the destruction, including Method Studios and Cinesite. Cinesite started the film off, sending a woman’s car flying off a cliff during the first quake. Method, meanwhile, got to destroy downtown Los Angeles before moving on to San Francisco.

It’s all about the future in Terminator Genisys. Double Negative was the major VFX provider, creating the T-1000, T-5000, and T-3000 Terminators and more. Naturally, the focus of this latest film in the franchise was the return of a younger version of the Guardian, played by Arnold Schwarzenegger. That challenge fell to MPC.

What would a summer be without an alien-invades-Earth film? This year, though, Pixels puts a unique spin on things. Here, the enemy has a familiar look: characters from video games. A number of studios contributed to the feature, bringing these characters to 3D life while still retaining their iconic 2D game look. 

Indeed, these feature films show off a range of visual effects – from realistic to stylistic; from in-your-face to hidden – but they all have one thing in common: They make movies magical.



Staying Real

by Barbara Robertson

With a little over 1,200 visual effects shots in a movie, you might expect a good portion to be all-digital shots. Not so in Director Christopher McQuarrie’s Mission: Impossible – Rogue Nation. Actor Tom Cruise reprises his role as Ethan Hunt in the Paramount Pictures release. To help make Hunt’s mission possible, approximately 600 artists at Double Negative (DNeg) in London, Singapore, and Mumbai, India, worked on 1,000 of the 1,200 visual effects shots. An in-house team composited approximately 200 shots, One of Us contributed a small number of shots, and SPOV provided graphic content for computer monitors. The Third Floor did previs.

“The brief for this film, and I suppose for all the films in the Mission: Impossible franchise, was to keep within the realm of reality,” says Visual Effects Supervisor David Vickery. “Maybe not today’s reality, but reality in the next 10 or 20 years.”

As a result, the crew tried to shoot as many effects as possible in camera, including the stunts. Tom Cruise did all his own stunts in all his shots.

“When he’s driving a car and the camera focuses on his foot, it could have been anyone’s foot, but that’s Tom’s foot,” Vickery says. “He does everything. The dedication to his trade and his commitment is incredible.”

Take a sequence in the Vienna Opera House, for example. In the movie, a production of Puccini’s “Turandot” is on stage, and thousands of people are in the audience. Backstage, Cruise fights a would-be assassin on a series of lighting trestles that move up and down.

“No one wanted to make a digital environment or a digital Tom Cruise,” Vickery says. “So, we filmed Tom Cruise fighting on a trestle 40 feet in the air. He was actually up there, with wire rigs for safety. We painted out the rigs and did set extensions.”

The challenge for the visual effects artists was to composite footage filmed in the Vienna Opera House with greenscreen shots taken in two other locations, and to seamlessly combine the audience with the backstage action.

“We didn’t want to go to complete-CG environments,” Vickery says. “We captured multiple crowd plates and multiple tiled background plates, and we then projected this photography onto LIDAR-scanned geometry.”
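Conceptually, that projection step pushes every scanned point through the camera that shot the plate and samples the color it lands on. Below is a minimal sketch of the idea – the single camera, nearest-neighbor sampling, and lack of occlusion handling are simplifying assumptions for illustration, not DNeg’s actual pipeline.

```python
# Minimal sketch: project LIDAR-scanned points into a photographed plate and
# sample a per-point color. Single camera, nearest-neighbor sampling, and no
# occlusion handling - simplifying assumptions for illustration only.
import numpy as np

def project_plate_onto_scan(points_world, plate_rgb, K, R, t):
    """points_world: (N, 3) scan points; plate_rgb: (H, W, 3) plate image;
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    h, w = plate_rgb.shape[:2]
    cam = points_world @ R.T + t              # world space -> camera space
    visible = cam[:, 2] > 0                   # keep points in front of the lens
    pix = cam @ K.T                           # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]            # perspective divide -> pixel coords
    u = np.clip(np.round(pix[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pix[:, 1]).astype(int), 0, h - 1)
    colors = plate_rgb[v, u].astype(float)    # nearest-neighbor color sample
    colors[~visible] = 0.0                    # points behind the camera stay black
    return colors                             # per-point RGB, e.g., for vertex baking
```

In production, multiple tiled plates would be blended and occlusions resolved, but the core transform is the same.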

In Vienna, an ongoing theatrical production unrelated to the film meant the crew could shoot the front of the opera house and the auditorium only at night and for a short time during the day. And, of course, they couldn’t film the audience; instead, they used an audience from a London location.

“We had 50 extras,” Vickery says. “We shot them from seven or eight camera positions once the main unit finished their work. We wanted a real location with real people in real seats.”

Separately, at a studio in West London, the crew filmed a full operatic production of “Turandot.” Then, they put the jigsaw puzzle together – the Vienna Opera House, the opera that would appear in the film, the audience, the action backstage that they shot in Leavesden Studios, and the set extensions. 

Underwater Heist

The most complex visual effects work takes place during a sequence in which Tom Cruise swims inside a completely sealed torus filled with water. In the film, the torus serves as an underwater secure computer facility.

“The director envisioned a series of single shots, ‘oners,’” Vickery says. “He had me look at the ‘Spielberg Oner’ on YouTube that describes the incredible elegance of Steven Spielberg’s one-takes. You see entire scenes in a single take. The beauty of those shots and what Chris [McQuarrie] wanted to achieve is that you can replace an entire scene that would traditionally have many cuts with a single shot and never be conscious of the camera.”


Double Negative added chaos around Tom Cruise to amp up the action.

But in this film, that seamless shoot needed to happen in a hostile environment for the actor. “The only place more hostile and technically challenging that we could have found to shoot would have been outer space,” Vickery says. The visual effects crew needed to figure out how to make the shots work: Cruise would not be able to have any kind of breathing apparatus.

“Tom [Cruise] went into deep, deep training with a professional freediver,” Vickery says. “In training he actually held his breath for over six minutes.”

The crew filmed the sequence in Leavesden Studios, where they had built a partial, but large-scale, set for Cruise to perform in. The camera, an Arri Alexa 65, was mounted on a motorized circular track.

“It’s a brand-new, digital, large-format camera,” Vickery says. “It shoots 6.5K, which gave us a super-high-resolution, large-format image. In post, we could re-frame and push in to help tell the story. Because the sequence was one shot, we picked a 24mm lens to make sure we could always keep Tom in frame. We knew we’d have the flexibility to push in and reframe, move around, and add additional layers of texture on top.”
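The flexibility Vickery describes comes down to simple arithmetic: the ratio of capture width to delivery width is how far you can punch in before the crop has to be scaled up. A quick sketch, using a rough 6,500-pixel stand-in for “6.5K” and common delivery widths (illustrative numbers only, not the production’s actual framing):

```python
# Rough arithmetic behind re-framing headroom: how far a 6.5K capture can be
# "pushed in" before the crop falls below the delivery resolution.
# The pixel widths are illustrative stand-ins, not the production's specs.
def max_push_in(capture_width, delivery_width):
    return capture_width / delivery_width

for delivery in (4096, 2048):                  # typical 4K and 2K delivery widths
    zoom = max_push_in(6500, delivery)         # "6.5K" capture, roughly
    print(f"{delivery}px delivery: up to ~{zoom:.1f}x punch-in without upscaling")
```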

A second motorized track helped move Cruise through the water for a couple of moments that required very specific direction.

“There are moments in the scene in which he has to fight against an incredible current,” Vickery says. “[On set] he’s swimming and he’s tethered, so you can really see him forcing himself against the current. He wanted water pushed into his face, so at times he had to be tethered with safety harnesses to help him swim and stay in one spot. The water movers are beneath him.”

The water was as clear and clean as possible. 

“Once we had water moving, we didn’t want any particulate so we could capture an incredibly crisp and clear picture,” Vickery says. “We added the particulate and layers of bubbles in post.”

The camera was always in the water with Cruise, moving in a way that would give the sense that he was swimming inside a circular interior.

 “Sometimes the camera needed to be stationary,” Vickery says. “Sometimes it moved behind him and came around. Sometimes it focused on his face. The real challenge in creating the incredibly long, single shots was that Tom had to perform them over and over for 18 days. It was a punishing shoot – especially for him.”

In post, DNeg artists added robotic arms, extended the digital environment, added the particulate, and graded the images to help match the digital footage to a 35mm film look, adding color bleeding, diffusion, and lens aberrations. 

“The whole scene is about four and a half minutes long,” Vickery says, “with about three shots making the four-and-a-half-minute piece. It’s a multithreaded story, so we needed to cut away from Tom because a concurrent story is happening.”

Other action scenes take place on dry land, as chase scenes throughout the film amp up the action. As with the other sequences, Cruise did his own driving and bike riding, but for safety, the production unit thinned the traffic.

“He’s riding motorbikes at 100 miles per hour,” Vickery says. “When it got to the point where that was dangerous, we used visual effects to enhance the practical work. The biggest part of our process was rig removal, but we also knew that in post, we could put a car two feet away from him and it would be much safer!”

Masked Man

The torus was the most difficult, the chase scenes and opera house compositing the most traditional. Another sequence, the mask reveal, was the most fun.

Traditionally in Mission: Impossible films, one character removes a mask to reveal he’s actually someone else. For this film, McQuarrie wanted the visual effects crew to do the reveal without cheating.

“I came up with several options for Chris that we filmed on our iPhones,” Vickery says. “But they always involved one person passing behind another at the right moment to hide the transition between characters. That was cheating, and we weren’t allowed to do that. Chris [McQuarrie] wanted to clearly see the person in camera throughout the mask reveal.”

 For example, in the scene they were working on, one of the film’s antagonists takes off his mask to reveal it is actually Cruise.

“We should lock off the camera to shoot the multiple passes we need,” Vickery had said to McQuarrie.

“No,” McQuarrie had answered. “The camera needs to be moving.”

After puzzling over that for a while, the crew decided to use motion control to repeat the same camera move across multiple passes.

“First, we had Tom stand still and pretend to take the mask off,” Vickery says. “That was our reference pass, so we could see Tom’s facial expressions without the prosthetic mask getting in the way. Then we had our antagonist stand in the same place and repeat the same action so we could shoot a clean pass of that. Next, we shot Tom taking off a mask that was an intricately detailed likeness of the antagonist. The last pass was Tom without a mask on, but with a neck prosthetic partially glued in place. We filmed the moment when he tore the neck part of the mask away because it gave us the physical interaction between the prosthetic and Tom’s skin. Then through a combination of 2D and 3D re-projections, we combined the live-action plate of the antagonist’s face back onto the prosthetic mask that Tom tears off.

“The prosthetic artists make wonderfully detailed masks, but they don’t look 100 percent real when you get close up,” Vickery explains. “If you’re full frame on a head, there’s no way in the world you could make a mask that looks like a real human. They don’t fool anyone unless you’re 10 feet away.” 
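The layering behind the reveal can be pictured as a stack of mattes: the antagonist’s filmed face is re-projected onto the mask Cruise tears off, and the torn-prosthetic pass shows through where the practical interaction happens. The toy composite below is only a rough analogy – the plate names, mattes, and the simple 2D “over” are stand-ins for the 2D/3D re-projections the DNeg team actually used.

```python
# Toy analogy for the mask-reveal composite: layer the re-projected antagonist
# face inside the mask matte over the clean pass of Tom, then let the practical
# torn-prosthetic pass show through its own matte. All plates and mattes are
# hypothetical placeholders; a simple 2D "over" stands in for the real
# 2D/3D re-projections.
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Blend foreground over background using a 0-1 matte."""
    a = fg_alpha[..., None]
    return fg_rgb * a + bg_rgb * (1.0 - a)

def mask_reveal_comp(tom_pass, antagonist_face, mask_matte, tear_matte):
    comp = over(antagonist_face, mask_matte, tom_pass)   # face lives on the mask
    comp = over(tom_pass, tear_matte, comp)              # real skin where it tears away
    return comp

# Example with dummy plates (H x W x 3 floats) and mattes (H x W floats).
h, w = 270, 480
tom = np.full((h, w, 3), 0.6)
villain = np.full((h, w, 3), 0.3)
mask = np.zeros((h, w)); mask[:, :240] = 1.0
tear = np.zeros((h, w)); tear[100:140, 200:260] = 1.0
frame = mask_reveal_comp(tom, villain, mask, tear)
```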

And yet, McQuarrie still wanted one shot with no cheating.


Some VFX, especially those concerning the masks, are traditional. Most CG effects are invisible.

“Chris said, ‘I want to do one shot in camera. I don’t want to cheat in any way. No visual effects,’” Vickery says. “In the end, he suggested we build two sets – one a mirror image of the other. Between them was a wall with an empty frame where the mirror should be. Every department did an incredible job. In the reverse set, the actors’ costumes had to be backward, even the books and CDs on the shelf in the background. Simon [Pegg, who plays Benji] sat on one side and the actor he was impersonating sat on the other. There were body doubles for Tom and Rebecca [Ferguson, who plays Ilsa]. It then became a simple matter of where to place everyone and, of course, the camera.”

The trick was getting everyone to move in perfect sync with the camera.

“The illusion was quite effective,” Vickery says. “When Chris was looking at the monitor behind the set, he shouted directions at Tom when he meant to be directing his double. He said: ‘Well, if I’m this confused, it must be working.’”

For audiences curious enough to wonder how the scene was possible, the filmmakers left one small detail as an intentional “tell.” 

“It’s a visual effect,” Vickery says, “no doubt about it. But it’s old-school smoke and mirrors – without the smoke and, actually, without the mirror. It took 20 takes to make sure Tom, Simon, Rebecca, and their doubles were doing the same actions at the same time. The finished shot is really elegant. It was one of the most fun things we did on set, and it was all done in camera. It’s a very beautiful thing.”

Mask Machine

In the film, the Mission: Impossible team makes the masks with a futuristic RPT (rapid-prototyping) machine. On set, Cruise interacted with a prop, but the DNeg crew replaced that with a CG machine.

“We looked into cutting-edge rapid-prototyping techniques and tried to project what we might have in the next 15 years,” Vickery says. “We wanted to suggest [the IMF] would stay just ahead of the game. So they also have a new round of Mission: Impossible graphics and gadgets: fingerprint re-coding, a gadget that shatters glass into fine dust, and a fantastic computer that looks like an opera program but opens up to reveal a Kindle-esque laptop. In modern films, there is often a tendency to go one step too far, but for Rogue Nation, we always strived to stay within the realms of believability.”

Helping create that plausible reality were artists at SPOV, who created the graphics.

“I never appreciated before how difficult the graphic storytelling is,” Vickery says. “They did a fantastic job of getting into the director’s head.”  

It’s all part of supporting a story that suggests many futuristic technologies, but one that is grounded in reality. Can effects such as these be considered state of the art?

“Absolutely,” Vickery says. “It’s invisible work. The audience won’t see 90 percent of the effects. They’ll assume it’s real and in camera. That’s the key thing. It’s visual effects that support the story in the truest sense. We’re trying to help the filmmakers take what they’ve already done and turn it up to 11, adding the final touches.”

It will be interesting to see how people react. Will audiences grown jaded by in-your-face digital effects appreciate the invisible visual effects in this film? Or, will they assume there are no visual effects at all? For the crew at DNeg that worked so hard on this film, having people believe they did nothing at all would be an ironic reward.


Earth-Shattering

by Linda Romanello

Earlier this summer, Warner Bros. Pictures took on the disaster film genre with Director Brad Peyton’s San Andreas, shaking up the early-summer box office.

The film, whose story line takes place during one of California’s worst earthquakes, features a rescue-chopper pilot (Dwayne Johnson) making a dangerous journey across the state to find his wife and daughter. To create realistic-looking sequences, Visual Effects Supervisors Colin and Greg Strause called on a number of VFX houses, including UK-based Cinesite and LA’s Method Studios.

Cinesite, led by VFX Supervisor Holger Voss, shared the work between two locations: London and its newly opened Montreal facility. Together, they completed the film’s opening scene involving a young woman driving along a mountainous cliff, only for her car to go over the edge during the first earthquake.

“We did 160 [shots for that scene] — it was a lot of CG,” says Voss. “Every shot had to have mist and flying debris. It was a lot of work, but once you figured out one shot, it was like another 150 of the same kind.”

Initial shots of the car leaving the road were achieved through a combination of live-action footage with a CG landslide, car panels, debris, and so forth. A digital double was used for shots where the car goes airborne.

Voss flew to the Glendora Mountains in California to capture extensive photogrammetry from various vantage points, both on the ground and by helicopter. The geometry from this was used to re-create the location digitally, complete with a CG cliff face and vegetation.
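At its core, photogrammetry like this triangulates each feature matched across two or more photo positions back into a 3D point; enough points, and the cliff face can be rebuilt. Below is a minimal two-view triangulation sketch using the standard direct linear transform – the projection matrices and pixel coordinates are placeholders, not the actual survey data.

```python
# Minimal two-view triangulation (direct linear transform), the core operation
# behind rebuilding a location from survey stills: a feature matched in two
# photos becomes one 3D point. Cameras and pixel coordinates are placeholders.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 camera projection matrices; uv1, uv2: matched pixel coords."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # least-squares solution of A @ X = 0
    X = vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean 3D point

# Example with two toy cameras, the second shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, uv1=(0.5, 0.25), uv2=(0.25, 0.25)))  # ~[2, 1, 4]
```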

“We did some aerial shots, the helicopter flying through the canyon, but there were a lot of scenes where the set piece was shot in Australia (on Arri Alexa and Red Dragon cameras), and we were basically shooting the scenery for it,” says Voss. Because it was hard to mesh all the shots, Cinesite captured “tons of stills out of the helicopter and also from the ground up so we could actually re-create the whole environment in case the plates wouldn’t line up.”

That’s actually what the crew used most of the time. “The scene depicts one side of this canyon, where the actual car crash was, and then the other side of the canyon, which didn’t even exist,” Voss continues. “We had to find a cliff we were building in CG anyway, so it turned out in the end that it was so much easier to do both sides digitally and just put the greenscreen element on it – and that was it.”

Half the shots of the helicopter are entirely CG, and for others, Voss’s team added CG rotors to a hydraulic rig filmed with the actors in Australia.

Cinesite’s pipeline included various software such as Autodesk’s Maya, Chaos Group’s V-Ray, The Foundry’s Nuke, and Agisoft PhotoScan for photogrammetry processing.

Downtown Destruction

Creating nearly 250 VFX shots for the entire downtown Los Angeles destruction sequence, as well as contributing to the film’s San Francisco sequence, was LA-based Method Studios, led by VFX Supervisor Nordin Rahhali.

On set for several months during the shoot, Rahhali oversaw certain shots in Los Angeles and Brisbane, Australia, to ensure proper integration between CG elements and environmental shots in post. He did so working closely with Bruce Woloshyn, who was overseeing Method’s Vancouver team.

At the center of the Los Angeles sequence is a continuous, three-minute shot following actress Carla Gugino through the chaos as she attempts a rooftop helicopter escape (previs was done by The Third Floor). 

According to Rahhali, “Brad wanted the audience to feel like participants with what was going on with the main characters — kind of like being involved and inside the earthquake. One of the larger shots we worked on, the three-minute epic shot of us traveling along with Carla, evolved from that idea, which is make the audience feel what it’s like to be inside a 9.0 or 9.6 earthquake. They weren’t going for anything other than trying to make it gritty and real.”




Method Studios created the downtown destruction.

As Rahhali explains, the LA earthquake is the “predecessor to the main quake that ends up happening in San Francisco, and it sets the tone for the entire film. That’s what they wanted us to do. They wanted it to start off with a massive bang that just never let up. They said we would be setting the bar for the look and feel of this earthquake throughout the entire film.”

Using a combination of tools, Method’s pipeline comprised Maya, Side Effects Software’s Houdini and Mantra, Nuke, Massive Software’s Massive, and Autodesk’s Shotgun and RV. 

“The Los Angeles sequence was a huge challenge in terms of the scale and the complexity of what was needed,” explains Rahhali. “We had full-CG environments where everything from high-rises to trees are collapsing, and everything needed to look photoreal and behave realistically, even down to the type and behavior of the smoke clouds.”

To reconstruct downtown LA in photoreal CG, Rahhali and the team captured extensive LIDAR scans of the area from both street level and rooftops, and collected aerial shots to use as photography and lighting references. Artists used the data to build CG environments, which were stitched with the live-action plates shot in Australia, then added atmospheric effects, such as smoke and pyroclastic clouds, to bring everything together. 

“The film itself looks like a $200 million production, but it was shot and executed for half that,” says Rahhali. “That’s a testament to the artists and their work.”

Indeed, the effects in San Andreas are earth-shattering. And in this context, that is a good thing.  


He’s Back

by Marc Loftus

MPC spent a year producing visual effects for the new Terminator Genisys film, but the studio’s involvement dates back even further, to the start of filming.

MPC and its VFX supervisor, Gary Brozenich, had worked with Terminator Genisys VFX Producer Shari Hanson on The Lone Ranger, and she once again turned to the effects studio to help fulfill a vision for the big screen, approaching Brozenich in the early stages of filming. “Gary introduced me to the ideas. He explained from the start the potential sequences,” says MPC VFX Supervisor Sheldon Stopsack, who oversaw the studio’s work on the new film.

According to Stopsack, MPC was responsible for 250 shots in the film, directed by Alan Taylor. Double Negative, however, was the film’s lead VFX provider, and both Lola and ILM made contributions, as well. MPC’s biggest challenge was the much-discussed sequence featuring a much younger version of the Guardian (the character played once again by Arnold Schwarzenegger), created to support the film’s story line, which returns at points to 1984.

“The prospect of re-creating an iconic figure is both appealing and scary,” says Stopsack of the studio’s work on the digital character, a much younger version of Schwarzenegger. “First, I thought it was crazy to take on. Not only is it difficult to create a human digitally, but that multiplies by a hundred if you start applying it to an iconic person like Arnold. We realized we can’t leave any stone unturned, and right from the start, we had to put in all the energy to take it to the next level.”

The MPC team referenced footage from the original Terminator film, as well as from the 1977 bodybuilding documentary Pumping Iron.

“We used any material we could get our hands on: photos, footage, the original movie,” says Stopsack. The material would provide guidance for modeling and texturing, and was constantly cross-referenced as the CG character was being developed.

During production, a bodybuilder was shot in front of a bluescreen, but Stopsack says very little of that material was used in the final visual effect. “Arnold’s appearance is so unique,” he explains, “and a stunt guy would not give you that.”

Ultimately, MPC approached the sequence with the intent of 100 percent replacement of the live actor, but the final effect involved closer to 80 or 90 percent. The screen time of the digital character represents nearly 2,800 frames, consisting of over-the-shoulder, wide, close-up, and dialog-driven shots.

MPC’s Montreal studio handled the bulk of the work. Its London and Bangalore (India) studios also pitched in, and the Los Angeles location hosted meetings and presentations.

“Time was our only enemy,” says Stopsack. “The model was in flux until the last day for corrections and changes. There were changes to the face mesh — it was always being questioned. We never really called it ‘finished.’”


Double Negative wrangled with the various Terminators.

MPC relied on a combination of tools to accomplish the shots. Autodesk’s Maya was used to create the 3D, and the rigging was also Maya-based. For texturing, The Foundry’s Mari 3D texture painting tool was used quite a lot, says Stopsack, adding that the texture maps alone represent 18GB.

More Termination

Double Negative (DNeg) in London handled approximately 900 shots for Terminator Genisys, including the T-1000, T-5000, and T-3000 Terminators, the helicopter and bus chase sequences, and the explosion at the Cyberdyne headquarters.

Peter Bebb, DNeg’s in-house visual effects supervisor, says the studio got involved in the film back in January 2014.

“They wanted to split up the major ticket items,” Bebb says of the filmmakers. “MPC doing the 1984 Arnold stuff was a major undertaking. They wanted another vendor to do the major T-3000 and John Connor [work], and that was us. And along with that, because of the interaction with him and all the other characters, I think it came as a package.”

DNeg’s team grew as large as 500 when in full swing. 

The studio relies on a combination of proprietary tools for rigging and animation, but starts the work in Maya. On this film, because of the complexity of the visual effects, the studio employed Side Effects’ Houdini. Compositing was done in The Foundry’s Nuke.

One of the challenges DNeg faced was designing the new Terminator, the T-3000. “We knew [the filmmakers] really wanted something unique for the new Terminator,” says Bebb. “We did not want to go down the classic robot/metal [road]. It had to be something that was individual, and we knew it was going to be a huge design undertaking.”

The T-3000 is driven by design and battle efficiency. “The form follows function,” says Bebb. “It’s how a computer designs something. It’s got to have a reason it’s that shape. We started with something that looked good, but then said, ‘This doesn’t make sense. Why would it look that way? Would a computer actually do that?’ The main design behind this thing is that Cyberdyne is basically infecting John Connor and trying to kill him.”

John Connor’s cells are being replaced, so the character is a combination of his human form and Skynet’s nanotech. “We had to blend those two together, which is why it doesn’t look like the T-800 or T-1000, because those are pure, bespoke robots. Obviously, the T-1000 replicates what it forms. The T-800 is a pure robot that sits under a flesh structure. The [T-3000] is pure design and functionality for combat effectiveness. Everything else is tossed aside.”

Will the Terminator return once again in a sixth installment to alter the course of history? If so, you can bet that next-gen CG technology will be ready for the next-gen Terminator.  


Pixelated

by Linda Romanello

In Columbia Pictures’ new comedy Pixels, directed by Chris Columbus (his first major feature in five years), aliens misinterpret US video feeds of 1980s classic arcade games as the Earth’s declaration of war. In response, they attack the planet, in the 3D form of iconic game characters, such as Pac-Man, Centipede, Galaga fighters, Q*bert, Donkey Kong, Tetris tetrominos, and Frogger frogs. The president (played by Kevin James) calls on his old school chums and game masters (led by Adam Sandler) to head off the attack. Game on!

Shot on location in Ontario, Canada, with Arri Alexas, Pixels required heavy-duty CG work for the characters, environments, and support elements. Visual Effects Supervisor Matthew Butler looked to several VFX studios to help conceptualize and complete the work, including Sony Pictures Imageworks and Digital Domain. 

Imageworks’ VFX Supervisor Daniel Kramer oversaw approximately 245 VFX shots, with somewhere between 20 and 30 CG characters (Q*bert, Froggers, Galaga spaceships, and so forth). Columbus was specific in his direction that he didn’t want the characters “to look plastic, like building blocks or LEGOs. That was a big mandate,” Kramer says. “He wanted them to look like nothing we’ve ever seen before.”

The feature is based on a short film also called Pixels, which Kramer says included “very simple renderings of some cubed characters, and that’s sort of where we started.” 

For the big screen, the group added a lot more detail, including light emission, which also gave the characters more scale. “Our characters actually emit light. If you think of it like a CRT for video games, it’s a lit screen on which [the characters] appear. So we wanted to bring that emissive light quality to our characters in the real world,” explains Kramer.
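In shading terms, that emissive quality is just an extra term added on top of whatever light the environment already puts onto each cube, so the character glows without ignoring its surroundings. A simplified per-cube version follows; the function and parameters are illustrative, not Imageworks’ actual shader.

```python
# Simplified per-cube shading: environment lighting plus a self-emission term,
# which is what makes the game characters read as "lit screens" in live action.
# Parameter names and values are illustrative, not the production shader.
import numpy as np

def shade_cube(albedo, irradiance, emission_color, emission_strength):
    """albedo, emission_color: RGB in [0, 1]; irradiance: RGB light arriving
    from the environment; emission_strength: scalar glow intensity."""
    lit = np.asarray(albedo) * np.asarray(irradiance)        # diffuse response
    glow = np.asarray(emission_color) * emission_strength    # light the cube adds
    return lit + glow                                        # final cube color

# Example: a yellow cube under dim evening light still reads as glowing.
print(shade_cube([1.0, 0.9, 0.1], [0.2, 0.2, 0.3], [1.0, 0.9, 0.1], 1.5))
```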

Imageworks partnered closely with other studios, including Digital Domain, where VFX Supervisor Marten Larsson’s team completed CG work on the film’s Pac-Man, Donkey Kong, and Centipede characters and scenes. Using a combination of Side Effects’ Houdini, Autodesk’s Maya, and The Foundry’s Nuke, Digital Domain completed more than 360 shots. Kramer agrees that the characters themselves probably presented the biggest VFX challenge.

Character Creations

“The characters were definitely the trickiest — they had to look like they did in the game because they’re the iconic characters, right?” says Larsson. “At the same time, you want to have a balance so they look real enough so they are believable in the environment.”

Larsson uses Pac-Man as an example. “He’s a sphere. If you make him out of boxes, you’re actually looking at a flat surface,” he says. “So the first thing we ran into was that he looked like a sphere but was reflecting like a mirror — giant reflections running across him that didn’t really show the shape of the character.”

Another design challenge was the fact that all the characters were emitting light. “So we had to figure out how to make them emit light without completely flattening them out,” says Larsson.

Easier said than done, however. After all, many of these are classic 2D characters. “If you make Donkey Kong look like Donkey Kong, well, we’ve really only seen what he looks like from the front. So if you nailed him from the front and he looks perfect, he still might look a little weird if you look at him three-quarters. That required a bit of back and forth design-wise,” Larsson points out.

Larsson, along with Kramer, was involved with the film in the early stages, working closely with Butler and helping out with tests, “trying to figure out what these very low pixel, flat, 2D characters would look like in 3D,” says Larsson.

An additional challenge was that many of the sequences took place during daylight hours. “We had a more difficult time conveying to the audience that these were light-emissive characters because the amount of light they are actually emitting is overpowered by the sun, so we had to figure out how to play that light energy in ways that would show up and would read in broad daylight,” says Kramer. “If we were to light up our characters where every single cube on the character is emitting light, it just looks flat and shapeless, and it’s very difficult to make that character feel like it’s living within the environment.”




Dozens of CG game characters invade Earth in Pixels.

As a result, the crew ended up lighting only selective cubes. Kramer explains: “A cube would be bright right next to a dark cube, and it would emit light onto that cube and would turn off, then another cube would turn on by sort of dancing and moving the light around and lighting selective cubes and allowing most of the cubes to catch the natural light of the environment.” 
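One way to picture that “dance” is as a sparse mask over the character’s cubes that changes every frame: only a small subset emits at any moment, while the rest keep catching natural daylight. The sketch below is a rough illustration; the emitting fraction and per-frame seeding are arbitrary choices, not the studio’s setup.

```python
# Sketch of "selective cube" emission: each frame, only a small, shifting subset
# of a character's cubes emits light, so the rest stay shaped by daylight.
# The emitting fraction and random seeding are illustrative choices.
import numpy as np

def emitting_cubes(num_cubes, frame, fraction=0.15):
    """Return a boolean mask of which cubes glow on this frame."""
    rng = np.random.default_rng(frame)            # vary the pattern per frame
    return rng.random(num_cubes) < fraction       # ~15% of cubes emit at once

mask = emitting_cubes(num_cubes=5000, frame=101)
print(f"{mask.sum()} of {mask.size} cubes emitting this frame")
```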

A large part of the work Imageworks completed – using a combination of Solid Angle’s Arnold, Maya, and Houdini – was the DC Chaos scene, the final attack on Washington, DC, involving all the characters.

“We developed pipelines to destroy the city, which was fun because the destruction was different than you might see in other [disaster] films,” says Kramer. “Everything we would do would pixelate or, what we would say, would ‘voxelate’ the environment. It’s a term I use a lot because these characters are built out of these cubes, or voxels, which are basically 3D pixels. To attack something, they basically turn it into voxels.”

So, for example, when a Galaga drops a bomb on a building, some of it will be destroyed in a practical manner, but big sections of it will just kind of cubify into voxels, and those cubes will then just fall apart and collapse. 
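In practice, “cubifying” a section like that can be thought of as seeding a regular grid of cube centers through the impacted volume and keeping only those within the blast radius; the destruction sim then takes the cubes from there. A rough sketch of that seeding step (grid spacing, bounds, and radius are arbitrary illustration values):

```python
# Rough sketch of "voxelating" a blast region: sample a regular grid inside the
# impacted volume and keep the cube centers within the blast radius. Those cubes
# would then be handed to the destruction sim as rigid pieces.
# Grid spacing, bounds, and blast radius are arbitrary illustration values.
import numpy as np

def voxelate_blast(bbox_min, bbox_max, impact_point, radius, cube_size=0.5):
    axes = [np.arange(lo + cube_size / 2, hi, cube_size)
            for lo, hi in zip(bbox_min, bbox_max)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    dist = np.linalg.norm(centers - np.asarray(impact_point), axis=1)
    return centers[dist < radius]          # cube centers to convert into voxels

cubes = voxelate_blast([0, 0, 0], [20, 60, 20], impact_point=[10, 45, 5], radius=8)
print(f"{len(cubes)} cubes spawned in the impact region")
```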

“We have another shot where Tetris comes down and sort of locks into a building, and once a line is complete, it destroys that section of the building and collapses on itself,” Kramer says. “We had to develop a whole language for what that looked like.” 
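The logic driving that shot is the game’s own line-clear rule: a completely filled row of the building grid is destroyed, and everything above drops down. A toy version of the check, treating row 0 as the top of the building (the grid is a stand-in, not the shot’s actual setup):

```python
# Toy version of the Tetris rule driving that shot: when a row of the building
# grid is completely filled, it is destroyed and the rows above drop down.
# Row 0 is the top of the building; the grid is a stand-in for the real setup.
import numpy as np

def clear_complete_rows(grid):
    """grid: 2D bool array, True where a cell is occupied. Returns the new grid
    plus the indices of the rows that were destroyed."""
    full = grid.all(axis=1)                        # rows with every cell filled
    kept = grid[~full]                             # rows that survive
    empty = np.zeros((full.sum(), grid.shape[1]), dtype=bool)
    return np.vstack([empty, kept]), np.flatnonzero(full)

grid = np.ones((4, 6), dtype=bool)
grid[1, 2] = False                                 # one incomplete row
new_grid, destroyed = clear_complete_rows(grid)
print(f"destroyed rows: {destroyed.tolist()}")
```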

While Pixels is indeed another summer “destroy the Earth” film, it adds a unique spin with your not-so-average alien characters. Alas, eventually for the characters, it was game over.