You could, as some artists who worked on Ready Player One do, think of this movie as three films in one in terms of the work involved. There's the real-time game engine version used by director Steven Spielberg for previs and production, the 90-minute photoreal animated feature created by ILM in which actors appear as avatars, and, of course, the final film, which includes that virtual world called the OASIS plus live action with visual effects set in the real world.
The Warner Bros. action/adventure/sci-fi feature tells the story of one young man's quest to save the virtual world he thrives in, and to gain a fortune in doing so. Wade Watts (Tye Sheridan) lives in dreary Columbus, Ohio, circa 2045, in one of many metal trailers stacked one above the other, seemingly at random. It's a dystopian future filled with people effectively drugged by virtual reality. But in the OASIS, Wade can participate in an exciting virtual world filled with pop culture references from the 1980s. His avatar in the OASIS is Parzival, who looks like an anime hero with a mop of silver hair and drives a DeLorean from Back to the Future. As the camera pans past one trailer after another, we see that Wade is not alone in his goggle-eyed pursuit of a more interesting life.
The film's conceit is that Halliday (Mark Rylance), the OASIS creator, has died, leaving behind a quest: Anyone who meets three challenges gets three keys, the last of which leads to an Easter egg. Find the Easter egg and you win control of the OASIS and millions of dollars. It is a quest Wade can't resist.
To organize the production that made it possible to film in a virtual world and to create the visual effects, Spielberg relied largely on two studios: Digital Domain and ILM. Artists at Digital Domain managed the virtual production under Gary Roberts' supervision and created effects in the real world, with Matthew Butler supervising and Scott Meadows managing the previs. Artists in all four ILM studios created the OASIS, with Roger Guyett as overall supervisor, Grady Cofer supervising artists in the London studio, Dave Dally supervising the Singapore studio, and David Shirk as overall animation supervisor. In addition, ILM's Alex Jaeger was the virtual production concept design supervisor, working with production designer Adam Stockhausen and artists from Digital Domain and ILM. The Third Floor provided initial previs, and freelance artists, along with artists from Framestore and the ILM art department, helped Stockhausen with concept art.
Jaeger was among the first to begin working on the project, creating concept art for Stockhausen in 2014: vehicles, spaceships that didn't end up in the movie, and environments.
"We wanted to get some of the key scenes rolling," Jaeger says.
Soon, Stockhausen asked Jaeger to stay on as virtual production art director.
"Adam wanted someone to see things through the whole show, from beginning to end," Jaeger says. "He hadn't done a movie with as many visual effects."
At that point, the concept art needed to move into virtual production and become digital environments with CG props and rough versions of characters. The goal was that when Spielberg walked onto the stage and put on a pair of VR goggles, he'd be in the OASIS.
"All the sets, the main props, the characters, and the environments had to be figured out early on, so Steven could walk around in the virtual environment and preplan the movie," Jaeger says. "So, we had a big front-load of design work. Most of the work for virtual production was for shots where the main characters interact with things, like Aech's garage, the Distracted Globe nightclub, and the starting line of the race in New York with big metal cages. For the race, we had to build representations of cars and Aech's bigfoot truck. The fountain. The sets in The Shining. The places in Halliday's 'journals.' And then, the final battle was the big one."
Jaeger would start by designing the sets, props, and characters in Foundry's Modo, render them from various angles, paint over them with Adobe's Photoshop, and give them to Stockhausen. When Stockhausen was satisfied, he'd show them to Spielberg, and once Spielberg gave his blessing, Jaeger would send 3D models and artwork to Digital Domain. The props department would build matching wireframe proxies for the actors to have on set.
"Before we got to London, we would do test runs with Digital Domain to be sure the sets were ready for Adam to walk around," Jaeger says. "He'd put his goggles on. When he said it was OK, we'd lock that and move to the next one. That went on for a good year and a half, getting everything set up and ready for shooting in London. Then, we were in London for five months. Whenever Steven [Spielberg] wanted something, we had an art crew there ready to bang it out and make it happen."
OASIS DESIGNER HALLIDAY'S AVATAR ANORAK (MARK RYLANCE) LIVES ON IN VR.
Virtual Production
Roberts began working with Warner Bros. to prepare the virtual production for Ready Player One soon after finishing The Jungle Book.
"The first thing we did was to set up a virtual art department at Digital Domain with eight artists and a supervisor," Roberts says. "They worked directly with Adam [Stockhausen] and Alex [Jaeger]."
Once they received approved designs, the artists used Autodesk's Maya, Allegorithmic's Substance, and proprietary tools to create virtual sets, props, vehicles, characters, and environments that could run in a modified version of Unity's real-time game engine. Three of the artists moved on to London, the better to get immediate feedback, and given the time difference, the LA-based artists could make changes overnight.
"One of the game changers on Jungle Book was that we could scout the virtual sets in VR," Roberts says. "It was even more so for Ready Player One. Adam [Stockhausen] and Steven [Spielberg] had HTC Vive headsets, and there was one in London, too. Being able to walk around in a virtual set was super useful; it gave everyone a real-world sensibility for lighting and placing things."
To help Spielberg, the crew built a set of custom tools. While in VR, Spielberg could maneuver his avatar around the world by pointing to where he wanted to go and teleporting it there. A scaling tool allowed him to change his avatar's size to, for example, grow 200 or 300 feet tall for a bird's-eye view. With an annotation tool, he could point, draw, write notes, create arrows, and make targets.
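The scouting tools described above can be sketched in miniature. This is a speculative outline, not the production code; all class and method names here are hypothetical illustrations of the teleport, scale, and annotation features the article describes.

```python
from dataclasses import dataclass, field

# A minimal sketch (hypothetical names) of the scouting-tool ideas:
# teleporting an avatar, rescaling it for a bird's-eye view, and
# dropping annotations into the virtual set.

@dataclass
class ScoutingAvatar:
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0                       # 1.0 = human height
    notes: list = field(default_factory=list)

    def teleport(self, target):
        """Jump directly to a pointed-at location."""
        self.position = target

    def set_scale(self, factor):
        """Grow or shrink the avatar, e.g. 40x for a ~250-foot viewpoint."""
        self.scale = factor

    def annotate(self, text, at):
        """Leave a note, arrow, or target anchored in the virtual set."""
        self.notes.append({"text": text, "at": at})

avatar = ScoutingAvatar()
avatar.teleport((120.0, 0.0, -45.0))             # point and jump
avatar.set_scale(40.0)                           # bird's-eye view
avatar.annotate("reframe this corner", at=(118.0, 0.0, -44.0))
```

The point of the sketch is the workflow: every scouting action is a small, recordable state change, which is what made it possible to capture all of the director's comments and marks during a session.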
"We recorded all that during a scouting session," Roberts says, "all his comments and marks."
While in VR, Spielberg's avatar could carry a virtual Alexa camera, and Spielberg could look through the viewfinder of this virtual Vcam, a virtual virtual camera, if you will.
"It was like shooting in the real world," Roberts says. "He could look through the viewfinder and still see the virtual world with his peripheral vision."
The low resolution of the headset meant, though, that when Spielberg was on set, he would use a real handheld Vcam instead.
"We created a custom Vcam for him," Roberts says. "It was like the traditional iPad-based virtual cameras. It was more advanced and had higher resolution than the virtual virtual camera in the VR headset. When he used the Vcam, he was still connected to the physical world."
On Set
At Warner Bros. studio in England, the crew built two soundstages and four capture volumes. On one soundstage, there was a volume for performance capture of the actors who would be avatars in the OASIS, a calibration volume, and a "Vcam lounge" where Spielberg would do virtual camerawork. The second soundstage was used for bluescreen shots that would take place in the film's real world. The crew on the performance-capture stage used Oculus headsets. The Vcam lounge had HTC's Vives and Microsoft's HoloLens.
PARZIVAL AND ART3MIS VISIT A VR DANCE CLUB.
THE CG IRON GIANT BACKS UP PARZIVAL.
"We used the Oculus on the main stage because it was easier to use their onboard sensors along with the mocap system to generate the ability to walk 100 feet and be immersed," Roberts says. "In the Vcam lounge, in addition to the Vive, we used the HoloLens to see the CG characters with real people. Steven could get an idea of the eyeline for an eight- or nine-foot character."
Actors on the motion-capture stage wore traditional body mocap suits and custom head-mounted cameras.
"We worked with ILM on a custom helmet, on the makeup, the camera positions, and the lighting on the faces," Roberts says.
Each actor had four cameras attached to his or her helmet running at 48 fps (47.972, precisely). In addition, eight witness cameras trained on the actors' faces provided reference for the ILM animators, who would translate the actors' performances to their avatars.
"It all paid off," Roberts says. "It was amazing to see how well the facial performances went straight through."
Before directing the actors, Spielberg would look at the virtual set through a VR headset and check that the physical world - the practical elements on the motion-capture stage - matched the virtual world.
"Sometimes he would change the blocking because he could see the space in VR as if he were really there," Roberts says. "That happened especially in Aech's garage. As the actors were going through their lines, he was changing the blocking and repositioning the Iron Giant to make the framing more interesting. He couldn't have done that without putting a VR headset on."
Then, Spielberg would take the headset off, pick up the Vcam, and shoot the motion-capture performances.
"One of our guys would stand next to Steven and could pull focus, adjust the lighting, and change the controls in the virtual camera," Roberts says. "It was easy to follow him around and react quickly. We recorded all this in real time."
Multiple people on set could also view the virtual world simultaneously, and the multiple views would all be in sync.
"Steven would have his own view into the virtual world, and another workstation might be rendering another view that Janusz [Kaminski, cinematographer] would be lighting," Roberts says. "As Janusz made changes, they were propagated to everyone else in real time. A set designer could be working on the set from a set deck view. And anyone could pick up a VR headset and walk the entire capture. You could see the characters one to one, and you could see where Steven was in the world. If you wanted to see what he was looking at, you could walk to him in virtual space and look over his shoulder. If Janusz was lighting, you could see his view and see his lighting changes in your world."
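The synchronized-views setup Roberts describes is, at heart, one shared scene state with many observers, every change propagated to all of them at once. Here is a hedged sketch of that pattern; the class and method names are illustrative, not Digital Domain's actual system.

```python
# One shared scene, many live views: a change made by any department
# (lighting, set dressing, camera) is pushed to every other view
# immediately, which is the "propagated in real time" behavior described.

class SharedScene:
    def __init__(self):
        self.state = {}
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def set(self, key, value):
        self.state[key] = value
        for view in self.views:      # propagate to everyone at once
            view.refresh(key, value)

class View:
    def __init__(self, name):
        self.name = name
        self.seen = {}

    def refresh(self, key, value):
        self.seen[key] = value

scene = SharedScene()
director = View("director")
dp = View("cinematographer")
scene.attach(director)
scene.attach(dp)

scene.set("key_light_intensity", 0.8)   # the DP lights the set...
```

With this shape, "walking over and looking over someone's shoulder" is just reading the same shared state from a different viewpoint; no view ever drifts out of sync.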
AVATAR FRIENDS SHO, AECH, PARZIVAL, ART3MIS, AND DAITO FACE THE AVATAR FOR HALLIDAY'S BUSINESS PARTNER OGDEN MORROW.
Because it would be easy to have a situation in which two or three people wanted to change something simultaneously, the crew developed a "conch shell" system. The person with the conch shell, the last person who selected it, was the one who could make a change; no one else could.
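The conch-shell rule is a simple exclusive-edit token: the last person to select it holds the right to change the scene, and everyone else's edits are refused. A minimal sketch, with hypothetical names:

```python
# "Conch shell" edit token: the last person who selected it is the only
# one whose changes are accepted.

class ConchShell:
    def __init__(self):
        self.holder = None

    def take(self, user):
        self.holder = user            # last to select it wins

    def try_edit(self, user, scene, key, value):
        if user != self.holder:
            return False              # not holding the conch: refused
        scene[key] = value
        return True

scene = {}
conch = ConchShell()
conch.take("Adam")
assert conch.try_edit("Adam", scene, "prop", "truck") is True
conch.take("Alex")                    # conch changes hands
assert conch.try_edit("Adam", scene, "prop", "car") is False
```

It is deliberately simpler than a classic lock: there is no queue and no blocking, just a single token that silently invalidates everyone else's edit rights.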
"One of my philosophies is that the only way for filmmakers to really come into this virtual world and use their tools to do their work is to give everyone their own view into the world, just like in the real world," Roberts says.
Video from the eight witness cameras, sound, the four HD camera feeds of each actor's face, and the real-time view of the Vcam Spielberg held all traveled through video assist to editorial for Spielberg to make his "selects." The actors' motion-captured performances were recorded in Autodesk's MotionBuilder and Unity along with lighting and camera information.
Then, a team called "The Lab," comprising 36 artists, would record the real-time 3D version for each select and generate a master scene running in Unity.
"Steven's selects might include multiple performance takes from the stage," Roberts says. "For example, in the Distracted Globe we might have one set of performances for one dancer and another for another dancer. The team would combine all those performances into one scene file so Steven could put a virtual camera on it. The Lab would sometimes be prepping until midnight to get ready for the next day."
In the Vcam lounge, Spielberg would view the master scene in VR through the Vive headset, perhaps make small changes, then switch to the Vcam to drive the master scene.
"Steven would roll onto stage between 6 and 6:30 in the morning," Roberts says. "He'd go straight into the Vcam lounge and start shooting. The stage would be ready between 9 and 10. He'd come out and start directing and shooting the performances. Then during a setup to build the next scene, he'd be back in the Vcam lounge. When we wrapped the main stage, at 5 or 6, he'd be back in the Vcam lounge. He had more energy than anyone. He shot and rendered to editorial almost 7,000 camera takes in the Vcam lounge."
Once Spielberg approved the shots, the virtual production team sent files for each shot to ILM for the OASIS and to Digital Domain for the real-world scenes, as well as motion-capture data - 50,000 seconds of character animation shot on stage - to ILM.
"It's like an animated storyboard with exactly the film Steven wants and all the data for creating it," Roberts says. "We tracked every asset all the way through. Every time you see a cup of coffee, it's the same asset."
Creating the OASIS
The "animated storyboard" provided the directions for creating the OASIS. It was up to artists at ILM to produce the fully detailed, high-resolution final environments and believable avatars.
PARZIVAL IS WADE WATTS' (TYE SHERIDAN) CG AVATAR IN THE OASIS.
"The thing that makes working in this business interesting is the challenges you get involved with and overcoming those, hopefully successfully," Guyett says. "Like all good challenges, we understated just how much of a challenge this would be. We did something crazy, like 90 minutes of the movie at ILM. We had close to 1,000 key character facial performances. It was a massive undertaking, a massive design exercise, and we were fortunate to have an incredible team working on it."
In the OASIS, characters from the real world could become any type of avatar they wanted. Thus, ILM animators needed to translate motion captured from actors playing those real-world characters onto characters that might look similar or not.
"As Steven turned over shots, Dave Shirk [animation supervisor] and I would put together shots and come up with options for him," Guyett says. "We'd still be assembling scenes in postproduction, trying to marry the performances with the big action. There was a tremendous amount of animation involved; Dave Shirk's contribution to the movie was enormous."
And, those characters had to perform in believable, yet fantastic, fully digital worlds.
"We designed and created an alternate world, not just a gaming world, that felt rich and textured enough that people would want to lead their lives there," Guyett says. "Because, if it's just a game, what are the stakes? Steven wanted to be sure the stakes are high."
In addition to Guyett and Shirk working largely from San Francisco, ILM had supervisors at each of the studio's facilities - in Singapore, Vancouver, and London - leading teams that worked on the show.
"The London studio did the biggest chunk - close to half of the OASIS. They had some of the most complex scenes, including most of the third act," Guyett says. "Grady Cofer led that effort with assistance from Daniele Bigi."
ILM created the CG New York skyline and sent the vehicles on a wild race.
Cofer had been a fan of Ernest Cline's book long before he started working on the film. In fact, after reading "Ready Player One" in 2012, he campaigned to be on any ILM crew that might ever work on a film based on the book.
"I'm a product of the '80s," he says. "I loved all the '80s references, and it references so many movies that ILM helped create. I said that ILM should be part of it; I knew we could create something unique."
During pre-production, Cofer spent time at Spielberg's Amblin Entertainment helping develop storyboards and previs with first, The Third Floor, and then Scott Meadows at Digital Domain and Shirk at ILM. Once shots started, he moved to ILM's London studio.
"The OASIS as imagined by Ernie Cline is a completely virtual utopian escape from a dystopian reality," Cofer says. "A key aspect of translating the book to the screen is world building. What does it look like? Feel like? ILM created the entire OASIS and its inhabitants, so every aspect had to be designed and built in the computer. We created 63 fully-dressed, distinct environments. We had to create and use proprietary tools to be able to work with these massive environments quickly."
ILM's Metropolis tools, first deployed on the animated feature Rango, helped with set dressing.
"We used Metropolis to populate Aech's garage with a massive amount of detail," Cofer says. "Imagine the biggest mechanic's garage you've ever seen, on steroids. One of the fun parts of this show is the Easter eggs, and we have tons all over the garage. Tools you might have in a workshop. Machines. All sorts of stuff. We wanted to give audiences the same thrill as they would have reading the book."
ILM artists had created every location needed for performance capture to a level that would represent that world well in real time, and sent the environments to Roberts' virtual production team at Digital Domain, where they were turned into real-time assets. Then, after filming, the data came back for shots Spielberg approved.
"That was helpful from a layout standpoint and when the characters were interacting with something," says Barry Williams, global environment supervisor. "We would match what was directly around the characters. But otherwise, everything was redone after the shoot. People change their minds. Often, what might have been intended on the day of the shoot needed to change after the fact."
Other environments needed to be built as well. For the New York race - the first challenge Wade enters - Spielberg shot Sheridan (Parzival/Wade) in a motion-capture suit "driving" a DeLorean set piece. Otherwise, there was little motion capture for the race. Similarly, the introduction to the OASIS happened largely in the digital world with little motion capture.
A visual development team spread over ILM's four studios moved 2D concept art into the 3D world as quickly as possible for Spielberg's initial approval before modelers worked on final models. To help modelers work with the massive environments, the ILM tools group developed a new proprietary "nesting" tool. The tool nested layers with increasing levels of details inside sets so that, for example, an artist could zoom into a street and neighborhood in New York City, turn a corner, see another nested set, and dive into its increasing levels of detail.
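The nesting idea can be sketched as a tree of set nodes, each carrying its own layer of detail, expanded only as deep as the artist dives in, so an entire city never has to be resident at full detail at once. This is a speculative illustration of the concept, not ILM's proprietary tool; names and costs are invented.

```python
# Nested sets: each node holds coarse geometry plus child sets that are
# only expanded when the artist dives deeper, keeping the working set small.

class SetNode:
    def __init__(self, name, detail_cost, children=None):
        self.name = name
        self.detail_cost = detail_cost   # render/memory cost of this layer
        self.children = children or []

    def load(self, depth):
        """Return (names, total cost) of everything expanded to `depth`."""
        names, cost = [self.name], self.detail_cost
        if depth > 0:
            for child in self.children:
                child_names, child_cost = child.load(depth - 1)
                names += child_names
                cost += child_cost
        return names, cost

storefront = SetNode("storefront", 8)
street = SetNode("street", 4, [storefront])
city = SetNode("nyc_skyline", 1, [street])

coarse_names, coarse_cost = city.load(depth=0)   # skyline shell only
deep_names, deep_cost = city.load(depth=2)       # dive down to a storefront
```

Turning a corner and "seeing another nested set" corresponds to calling `load` one level deeper on just that branch, while the rest of the city stays at its cheap outer layer.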
"We built over 900 assets - creatures, ships, cars, and so on," Cofer says. "Not 900 models total. I'm talking about assets we tracked. Set dressing and props were another level. I discovered that on a typical visual effects movie when I ask an artist to model a car, the artist does a diligent job. On this movie, I asked for the DeLorean from Back to the Future, or the Iron Giant, and other iconic assets. The modelers went bananas. It was a labor of love."
The environment artists worked diligently, one layer at a time, adding detail after detail, getting approvals along the way, and tracking rendering times as they worked, to know how much data they could add. ILM's set dressing and layout teams tend to use the studio's Zeno to check models in, move them around, place props, and so forth. Modelers use Maya, 3ds Max, and various third-party apps.
"Our bread-and-butter rendering goes in two places," Williams says. "We have Katana with RenderMan RIS or 3ds Max and V-Ray. We use a lot of packages to create and assemble the environments. Where it gets pushed will be into either our mainline side when there's a lot of interaction with characters; that's Katana and RIS. Or, if the shots are mostly about the environment, we might go to Max and V-Ray. We also use Maya and Arnold quite a bit as well - we have some amazing Maya Arnold artists in London."
ILM'S NEW CROWD SOFTWARE FACILITATED CG CHARACTERS ALIGNING AND WORKING IN GROUPS.
A separate team managed the set dressing, adding newspapers on the ground, trash cans, lightbulbs, billboards, and so forth to the New York City environment, for example, and Easter eggs to Aech's garage - and on through the 63 highly detailed environments.
"We use instancing as much as possible," Williams says. "We use procedural textures as much as possible. We do matte-painting tricks where we can. We try to incorporate all the cost savings possible. One of the cool things we did for the New York race was to look-dev all the buildings in Maya using procedural shaders in Arnold. Then we translated it to Katana/RenderMan to render everything else. We did that for the end battle, as well."
Avatars
In addition to creating the rich and stunning environments, the team at ILM also worked on character design and animation.
THE PARZIVAL AND ART3MIS CHARACTERS COMPLEMENT EACH OTHER.
"Parzival looked the most human, so the challenge was to have him not look like a dude in makeup or fall into that Uncanny Valley of creepiness you get when you make a digital human," Jaeger says. "We went through hundreds, if not thousands, of pieces of artwork; he was everything from a character with metal scales for skin to a T-Wolf. Eventually, we homed in on who the character was and who Steven wanted him to be: an '80s reluctant hero. We chiseled his cheeks and gave him stylized anime hair that clumps unnaturally. But we locked Art3mis's design early. Parzival and Art3mis had to look good together - if you didn't know humans were behind them, you could imagine them getting together, not being in two different movies."
In addition to the main characters, ILM artists created massive crowds of avatars and performed them with the help of a new proprietary Side Effects Houdini-based crowd system.
"The crowd system is based on mocap vignettes, on cycles," Cofer explains. "We captured people fighting, walking, running, and with various interactions. The rule-based system can instruct different crowd characters to do different things, like avoid obstacles and work cooperatively. We wanted to explore the idea that the avatars can work as a group, not just fight alone, as in other crowd systems."
Working from Stockhausen's art direction and Cline's book, the artists broke the crowds into clans in like-minded games: sports, fantasy, medieval, and so forth. Each clan had a kit of parts. The crowd tool could do random permutations of kit parts to generate thousands of different versions and map them onto different body types.
"We have shots in the film with a half million characters," Cofer says. "Steven really enjoyed watching and discovering the little vignettes happening across the battlefield. We could swap out areas and change the actions. We could add a physics layer and have the characters all ragdoll down. We embedded an effects layer. We didn't want to develop effects on a shot-level basis; we wanted to be guided by the crowd tool. It's pretty cool."
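The rule-based behavior Cofer describes - avoid obstacles first, cooperate with nearby allies, otherwise pick a cycle from the clan's kit - can be sketched as a small decision function. This is a loose illustration of the approach, not ILM's Houdini-based system; the clan names and cycle names are invented.

```python
import random

# Vignette/cycle-based crowd sketch: each agent picks a mocap cycle by
# rule rather than by hand. Rules are checked in priority order.

CYCLES = {"fantasy": ["sword_swing", "shield_up"],
          "sports":  ["sprint", "tackle"]}

def choose_cycle(agent, obstacles, allies, rng):
    # Rule 1: avoid obstacles before anything else.
    if any(abs(agent["x"] - o) < 2.0 for o in obstacles):
        return "sidestep"
    # Rule 2: cooperate - match the cycle a nearby ally is already playing.
    for ally in allies:
        if abs(agent["x"] - ally["x"]) < 5.0 and ally.get("cycle"):
            return ally["cycle"]
    # Rule 3: otherwise, pick a random cycle from the agent's clan kit.
    return rng.choice(CYCLES[agent["clan"]])

rng = random.Random(7)
a = {"x": 0.0, "clan": "fantasy"}
b = {"x": 3.0, "clan": "fantasy", "cycle": "shield_up"}

print(choose_cycle(a, obstacles=[1.0], allies=[b], rng=rng))  # sidestep
print(choose_cycle(a, obstacles=[], allies=[b], rng=rng))     # shield_up
```

Scaled up to half a million agents, the same priority ordering is what lets characters fight as coordinated groups instead of as isolated loops, and swapping a region's rules or cycles changes its action without re-animating anyone.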
DIGITAL DOMAIN MATCHED ON-SET TRAILERS AND EXTENDED THE SETS WITH MORE TRAILERS TO CREATE THE DISMAL REAL WORLD.
Real World
Filming for the real-world scenes of the dystopian world of Columbus, in 2045, took place on a large outdoor set made with stacks of trailers. Artists at Digital Domain created all the visual effects for this very real world.
"The real world slaps you in the face," says visual effects supervisor Matthew Butler. "It's gritty, impoverished, pathetic, anamorphic, desaturated. It isn't apocalyptic; it's a future where people have given up. It's the shit. That's what Digital Domain made. The shit."
To reproduce trailers on set that would later explode, the crew Lidar-scanned and photographed the existing sets, and then created models prepared for destruction. They also extended the sets by creating hundreds of additional digital trailers.
In addition to the "stacks," the Digital Domain artists extended a set in Sorrento's factory using motion capture to add crowds of workers, put miles and miles of people in the "Loyalty Center," and created other effects.
"Some of the biggest and most enjoyable and satisfying effects we did were the holograms," Butler says. "This will blow your mind. There's a cool scene where Sorrento is in the real world but thinks he's an avatar. Parzival is a digital hologram in the room. We pitched the idea of having Parzival be corpuscles of energy that inherit properties of Parzival's motion. We have the idea that the computer is trying to catch up so we get digital dropouts."
Working with artists at ILM, the crew created the heavily effects-based hologram by first filming Tye Sheridan in a motion-capture suit. ILM did the motion matching.
"We used that as a base and then broke him apart in a particle procedural system within Houdini and rebuilt him on the fly," Butler says. "We used 2D and 3D information throughout. We were rendering in Houdini and V-Ray."
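The corpuscle idea - particles that inherit the body's motion, with random dropouts suggesting a computer struggling to keep up - can be sketched per frame. This is a hedged, illustrative outline; the real work was a Houdini particle-procedural system operating on full 2D and 3D capture data.

```python
import random

# Hologram sketch: advect surviving "corpuscles" by the body's motion,
# and randomly drop a fraction of points each frame as digital dropouts.

def hologram_frame(body_points, velocity, dropout, rng):
    """Return the surviving points, each moved by the body's velocity."""
    frame = []
    for (x, y, z) in body_points:
        if rng.random() < dropout:
            continue                      # digital dropout: point vanishes
        frame.append((x + velocity[0], y + velocity[1], z + velocity[2]))
    return frame

rng = random.Random(1)
points = [(float(i), 0.0, 0.0) for i in range(1000)]
frame = hologram_frame(points, velocity=(0.1, 0.0, 0.0), dropout=0.2, rng=rng)
# roughly 80% of the corpuscles survive any given frame
```

Because the surviving points still inherit the captured motion, the figure reads as Parzival even while it flickers, which is the effect described.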
Digital Domain artists also gave Wade [Sheridan] a haptic suit that he wears while inhabiting his avatar, Parzival.
"We show off how effective it is with an interesting visualization of energy in the suit," Butler says. "For that, we had to have a full digital replacement for Tye's suit."
One difficult shot created at Digital Domain is probably the most invisible. The first time Wade pulls on his VR visor, the camera pushes in and becomes his point of view. It's the audience's introduction to the OASIS.
"It's a powerful moment," Butler says. "They filmed the first part of the shot in camera. But, to have the camera get close, we had to make a digital Wade [Sheridan]. For that, we used a high-resolution scan from ICT to get pore details on the surface of his skin. We modeled his eyelashes, gave him peach fuzz. At the point where Tye pulls on the visor, he's fully digital."
VR for VR
It is appropriate that a movie that takes place in a virtual-reality world was created in part with virtual-reality tools. At times, it must have seemed to the filmmakers that they, too, were in a kind of OASIS.
"I was very impressed with Steven's grasp on how the tools worked for him and what was important to him," Guyett says. "He's not hugely technical, but he's way more of a gamer than I am. This was his third or fourth motion-capture movie. He was acutely aware of the process."
Thanks to the efforts of hundreds of visual effects artists, the virtual production made it possible for a great filmmaker to work in a familiar way even though the world he was filming and the characters in it would be digital - and, at the end of production, to deliver a sophisticated template for the digital work to come. Then, thanks to the efforts of thousands of visual effects artists, the digital characters and their virtual world became engaging, compelling, and immersive, as did the real world the characters tried to avoid.
"Steven was focused on the story and the emotional track of the characters," Guyett says. "However complicated this film was in terms of sheer number of components at any moment and multiple motion-capture performances at the same time, our job was to make all that invisible and let Steven be the director. We just want people to enjoy the movie."
Barbara Robertson (BarbaraRR@comcast.net) is an award-winning writer and contributing editor for CGW.