Using the latest CG techniques, theme parks are taking visitors for a fun-filled virtual ride
It seems that today, every major theme park in the US—from Walt Disney World to Universal Studios to Busch Gardens—is entertaining guests with a simulation ride film. The concept is hardly new. Early attempts featured footage from a camera attached to a person or vehicle traversing an exciting course, offering a “first-person” view of the action. More modern incarnations, pairing a motion base with large-screen projection, arrived in 1987 when Disney unveiled the Star Wars-inspired Star Tours. Since then, ride films have become a major theme-park attraction—in some cases, a bigger draw than the mega roller-coasters. New Wave International (now Nwave Pictures) even created an IMAX film for Sony Pictures Classics called Thrill Ride—The Science of Fun, which chronicles the history of theme-park rides and highlights how the motion simulator has affected today’s attractions.
The technology for ride films was born from the marriage of NASA/military flight simulation and Hollywood. Flight simulators provide the basis for today’s motion bases, while high-end computer graphics enhance the movement with stimulating visuals. More recently, parks have upped the thrill level by infusing ride films with stereoscopy, and some have added a fourth dimension for a “touching” experience. Now, a new evolution is beginning, whereby rides use CG to augment a dark-ride experience that also uses set pieces to further blend reality and simulation. The next step, according to Yas Takata, director of large-format films at Blur Studio, is to use CG in a supporting role to alter the audience’s perception.
Ride films, like roller-coasters, are star theme-park attractions, and venues appear to be in an arms race to offer the biggest thrills in location-based entertainment. Often this means pushing the state of the art in terms of computer graphics technology, as was the case with (from left to right) Curse of the DarKastle, Mickey’s PhilharMagic, Escher Revisited in VR Valley, and Time Riders.
In the past decade, the number of ride films rose dramatically, only to decline recently. One reason is that they are expensive, ranging in price from $1 million to $7 million. But business has been booming in certain parts of the world—Asia and the United Arab Emirates, for instance—where new theme parks are popping up. As park-goers crave new adventures, studios will be pressured to find totally unique experiences for today’s generation, which has been raised on high-def, wide-screen TVs and immersive computer games. “We have to find a way to entertain them in a novel way,” Takata says.
Recently, Super 78 developed a unique approach to ride films with Curse of the DarKastle, a stereoscopic dark ride through several rooms of a virtual castle featuring practical set elements along with projected 3D imagery. Taking advantage of digital content delivery, Super 78 can easily make changes to the ride-film visuals, keeping the attraction fresh year after year. Disney, a master at theme-park experiences, turned up the volume when it opened Mickey’s PhilharMagic. A collaborative effort between Walt Disney Imagineering and Feature Animation, the attraction unites Disney’s classic characters with new classic characters in a stereoscopic musical adventure that, for the first time, features 2D Disney characters in stereo 3D.
In another groundbreaking attraction, the unique 2D sketches of graphic artist MC Escher serve as the inspiration for the stereo film Escher Revisited in VR Valley, which takes visitors through 3D versions of Escher’s “impossible” structures.
And, Blur Studio rolled out its latest attraction, Time Riders, a non-stereo ride that exemplifies the current state of the art in ride films with high-quality immersive imagery.
Curse of the DarKastle
When animation company Super 78 was hired to create a dark ride for Busch Gardens Williamsburg in Virginia, the crew ended up creating an attraction that was, well, befitting a king.
Unlike many theme-park attractions, Curse of the DarKastle is not based on a pre-existing story or popular character like Spider-Man or SpongeBob. Rather, it is an original story with Germany’s King Ludwig—known for his grand architectural projects, including Castle Neuschwanstein, as much as for his madness—as the central figure. In this tale, though, he is depicted as a bloodthirsty ghost bent on recruiting others—the audience—to join him in the hereafter. During the 3 minutes and 20 seconds of the attraction, the king pursues guests through the castle’s haunted chambers with cunning, ruthlessness, and diabolical zeal, while riders encounter eye-popping stereoscopic effects until they are able, just barely, to escape.
As a matter of fact, the technologically sophisticated attraction is quite a crown jewel in terms of location-based entertainment. First, it transcends the stationary motion base by incorporating state-of-the-art motion technology: a cart-and-track system that carries audiences through a number of rooms where the stereo imagery is projected onto various types of screens. Second, the attraction takes advantage of the digital nature of the media, enabling changes to be made relatively easily; therefore, the attraction can be updated yearly if the park desires. Third, a hybrid camera system blends two types of stereoscopy—parallel and convergent—for effects that are closer to viewers than ever.
A unique ride film from a technological standpoint, DarKastle is a so-called dark ride, which takes the audience through several rooms of the king’s “castle,” where rear-projected stereo imagery pops out from a number of large screens. Images © 2006 Busch Entertainment Corp. Courtesy Super 78.
A Royal Ride
“Busch Gardens wanted an experience that was world-class and would blow people away,” says Dina Benadon, the film’s executive producer. To accomplish that goal, an experienced team was assembled by Falcon’s Treehouse, which designed the ride.
In addition to Super 78, that group comprised Falcon’s Treehouse architect Cecil Magpuri and Oceaneering Entertainment Systems as project engineer. Assisting with the stereo effects was visual effects supervisor Chuck Comisky (James Cameron’s Ghosts of the Abyss and Terminator 2 3-D).
In Curse of the DarKastle, stereoscopic 3D imagery is augmented by actual set pieces, which further blurs the line between the real and the virtual.
“We have a lot of experience in what we call integration, where we see the project all the way through, from content creation to installation,” says Brent Young, creative director at Super 78. “So we aren’t just creating the media; we also become involved in how that media gets translated or projected onto the displays. You can make a great piece of software, but if it doesn’t get displayed properly, your great piece of software is now degraded and is not what you intended.”
Collaboratively, the groups created the next-generation attraction, which offers a bone-chilling journey through a frozen Bavarian castle. The audience travels to 12 rooms within the 40,000-square-foot environment while seated inside computer-controlled, motion-based “sleighs” as rear-projected 3D imagery appears all around them—swords, knives, and other objects swoop toward riders. What is so unique about DarKastle, Young points out, is the way the effects interact with and occur on all sides of the sleigh. For example, when the ghosts fire arrows at the riders, they seem to whiz right past the visitors’ heads. The sleigh is also jostled by various objects, and by the king himself. One of the scariest moments occurs when the sleigh plummets into the dark; through the magic of CG, it seems as though the sleigh is falling faster and farther than it actually is. Augmenting the overall experience are various 4D effects— wind, heat, cold, water, smoke, and more.
The vehicle has a complete range of motion—forward, side-to-side, 360-degree spin—traveling past the screens as it moves from one scene to the next. “You may be looking at one screen when something hits the vehicle. It turns you in the opposite direction, and suddenly you’re facing another screen, and it happens very fast,” explains Young. “There’s only one other attraction like it in the world, and that’s Universal Studios’ Spider-Man ride. Most stereo ride films are situated on a locked-down motion base in front of one screen. We had several different types of screens of varying sizes, including two that are 80 feet high, onto which the imagery was projected.”
Matter of Perspective
Having such a unique setup presented the group—under the direction of Super 78 animation director Mario Kamberg—with additional challenges, from the hardware to the software. “There are many hurdles to overcome to make this work, and it’s expensive—that’s why there aren’t more attractions of this type,” Young notes.
The integrated physical sets were built and lit to match seamlessly with the CG so that the edges of the projection screens are invisible to the guests.
The ride is designed to make it impossible to tell where reality ends and cinematic illusion begins. The physical sets were built and lit to match up perfectly with the CG so that the edges of the projection screens are invisible to riders. “It all blends together,” Young says. “As a result, we were able to make it appear as if some of the rooms extend 40 or 50 feet, with mountainscapes appearing in the windows in the background. When you add the 3D effects that appear to project out of the screens, you get the feeling of being completely immersed. It’s no longer a 3D film; it’s a 3D environment.”
Super 78 used a hybrid camera system that blended the two types of stereoscopy in the film—parallel and convergent.
In the opening scene, for example, the viewers happen upon the castle gates, and they can see hundreds of yards into the distance. But in actuality, they are looking at imagery projected onto a screen that is 14 feet in front of them, with strategically placed practical set pieces. Some scenes contain a number of set pieces, while other scenes have only virtual objects. These CG items, along with the characters, were modeled, animated, and lit within Autodesk’s 3ds Max, and rendered with the Chaos Group’s VRay, which accommodated Super 78’s specially scripted camera shaders. Compositing was done inside Autodesk’s Discreet Combustion, and occasionally within Eyeon Software’s Digital Fusion. “As you travel past these scenes, you see a different perspective, so we had to develop a camera system that could translate where you were in the actual environment and give you a virtual perspective that matched your actual perspective,” says Young. “That was one of the most difficult tasks for this project.”
Once the artists were given the ride profile, they developed the camera system for distortion correction and perspective correction, so that as the vehicle moves past the screen, the proper scene perspective is presented based on the velocity of the sleigh. “We developed the camera and the camera rig knowing we weren’t going to have a chance to test it out in the theater; it had to be done virtually as a simulation. We had to be dead-on, or we would have been in trouble. That’s why it was so important that we be involved with all phases of the ride build.”
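The article doesn’t disclose Super 78’s camera math, but the standard way to present a correct perspective from a moving viewpoint is to recompute an asymmetric (“off-axis”) frustum every frame from the rider’s position relative to each fixed screen. The sketch below follows the widely used generalized perspective projection formulation; Python, NumPy, the corner layout, and the coordinate conventions are illustrative assumptions, not Super 78’s pipeline.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Projection for an eye at pe looking at a fixed screen whose corners are
    pa (lower-left), pb (lower-right), and pc (upper-left), all in world space.
    Re-evaluated every frame as the ride vehicle (and thus pe) moves."""
    pa, pb, pc, pe = (np.asarray(p, float) for p in (pa, pb, pc, pe))

    # Orthonormal screen basis: right, up, and normal (pointing toward the viewer)
    vr = pb - pa; vr /= np.linalg.norm(vr)
    vu = pc - pa; vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)

    # Vectors from the eye to the screen corners, and eye-to-screen distance
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)

    # Asymmetric frustum extents on the near plane
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # OpenGL-style frustum matrix
    P = np.array([
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

    # Rotate the world into the screen's basis, then move the eye to the origin
    M = np.identity(4); M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.identity(4); T[:3, 3] = -pe
    return P @ M @ T
```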
Moving in Stereo
Typically, there are two ways to accomplish stereo: through a parallel render or a convergent render. Super 78, however, created a hybrid of the parallel and convergent cameras, which enabled the artists to pull individual objects farther out toward the audience than in other stereo rides, such as Spider-Man.
“If you run the cameras parallel, you get a stretch effect at close proximity to the guest, and if you converge them, you can get the objects farther out and closer to the vehicle. It’s also more comfortable for the eyes to converge on close objects,” says Young. The hybrid approach enabled the artists to select the specific objects to pull forward and those to push back.
Super 78 developed a camera system that translated where visitors were within the environment and provided a virtual perspective that matched the viewers’ actual perspective at each turn.
To accomplish that, the group rendered the two sets of elements. The majority of the elements, which had a precise perspective relationship, were done with the parallel cameras. The floating elements and various distant background elements were accomplished with the converging cameras. The artists then integrated the sets during compositing, and rendered each eye view again.
Because the dominant depth cue for humans is binocular disparity (a term used to describe the perceived horizontal separation between objects), a convergent camera can deliver a more accurate representation of human depth perception, Young notes. But if you animate the convergence, the perspective cues shift and degrade the experience. “It’s a trade-off,” he says.
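To make the trade-off concrete, here is a minimal sketch of the two rigs Young contrasts, assuming a simple pinhole model: parallel cameras whose convergence is set afterward by shifting the images, and toe-in (“converged”) cameras that rotate to aim at a convergence point. The interaxial and convergence values are arbitrary, and this is not Super 78’s hybrid camera code.

```python
import numpy as np

def image_x(point, cam_pos, yaw):
    """Horizontal image coordinate of a 3D point in a pinhole camera (focal = 1)
    located at cam_pos and rotated yaw radians about the vertical axis
    (yaw = 0 means the camera looks straight down +Z)."""
    c, s = np.cos(yaw), np.sin(yaw)
    p = np.asarray(point, float) - np.asarray(cam_pos, float)
    x_cam = c * p[0] - s * p[2]   # camera-right component
    z_cam = s * p[0] + c * p[2]   # camera-forward component
    return x_cam / z_cam

def screen_disparity(point, interaxial=0.065, convergence=3.0, mode="parallel"):
    """Left-minus-right disparity of a point for the two rigs.
    Positive values read as 'in front of the screen' to the viewer."""
    half = interaxial / 2.0
    if mode == "parallel":
        # Parallel cameras; convergence is applied afterward by shifting each
        # image so that a point at the convergence distance has zero disparity.
        shift = half / convergence
        xl = image_x(point, (-half, 0.0, 0.0), 0.0) - shift
        xr = image_x(point, ( half, 0.0, 0.0), 0.0) + shift
    else:
        # Toe-in ("converged") cameras rotate to aim at a point straight ahead
        # at the convergence distance; off-axis points pick up keystone error.
        theta = np.arctan2(half, convergence)
        xl = image_x(point, (-half, 0.0, 0.0),  theta)
        xr = image_x(point, ( half, 0.0, 0.0), -theta)
    return xl - xr

# A prop 1 m away versus scenery 20 m away, with the screen plane at 3 m:
for z in (1.0, 20.0):
    print(z, screen_disparity((0.0, 0.0, z), mode="parallel"),
             screen_disparity((0.0, 0.0, z), mode="converged"))
```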
Next, the group evaluated the 3D effects within a quarter-scale mock-up room containing two projectors to accomplish the dual eye effect. According to Young, every scene had at least one major 3D moment. “Stereo is the key to the ride’s realism, and gives it another level of immersion,” adds Benadon. “You need the depth to sell the fact that you are in a room and in a certain space.”
According to Young, having these stereo moments is important, but a compelling story and high-quality imagery are needed as well. “It all comes down to the mood, the pacing, the ups and downs, the drama, followed by the calm, and then the question of ‘What lies around the next corner?’” he says. “The ride is a ballet of 3D. What separates this from other rides is the role of 3D. We rely less on fast action and more on depth, richness, and quality.”
Changes for the Better
The theme park opened the 4D stereoscopic attraction last year. After conducting surveys to determine what visitors liked best about the ride, Busch Gardens asked Super 78 to revamp it. As a result, the group added more than one minute of new content, giving audiences a brand-new experience for this season, including a heart-pounding moment when a 7-foot stone wolf leaps out from nowhere and lands directly in front of visitors.
“A huge innovation for this ride is that we can amend the content or completely change scenes for a new experience. It’s revolutionary in this industry,” says Benadon, noting that the capability to do so keeps an attraction like this fresh. “In this way, we see ourselves as a software development company, and even call the updated ride DarKastle 2.0. We’ve already made major changes to the first version of the ride, including new scenes based on guest feedback. This was done right from our studio in Los Angeles, and then Busch Gardens downloaded those changes from a server.” In fact, Benadon predicts that the ability to update a ride will eventually become commonplace at theme parks everywhere.
“Ride films are expensive to make,” Benadon says, noting that the price can range from just under $1 million to several million, depending on the complexity. “Updating them yearly allows a park to bill the attraction as all-new.”
In any case, these are not cheap thrills. But, they are thrilling nonetheless.
Disney magic and music have always made a good fit—Mickey Mouse’s early debut in the black-and-white “Steamboat Willie” was the first “sound” cartoon to achieve wide recognition, and shortly thereafter, “The Jazz Fool” marked Mickey’s foray into shows where song and dance took center stage. And then there was Fantasia, a symphony for the eyes as well as the ears, which utilized a new stereo system developed at Disney that was heralded at the 1941 Academy Awards. Since then, the studio has continually reset the bar in terms of sight and sound, and recently, with the location-based entertainment presentation Mickey’s PhilharMagic, Disney has once again created a state-of-the-art animated musical production, this time using stereoscopic and 4D effects.
The 10-minute show hit a number of high notes, both artistically and technically. In a Disney first, Walt Disney Imagineering mixed some of its most beloved characters from the past 50 years: The 3D show stars Mickey Mouse, Donald Duck, and others from Disney’s earlier years alongside those from more recent 2D animated productions, including Ariel, Aladdin, and Simba. In addition, Disney animation artists used CGI to transform these hand-drawn characters from flat drawings to models with dimension for their entry into the 3D stereoscopic world. And, PhilharMagic plays out on the world’s largest seamless projection screen, representing the most immersive wraparound image Disney has ever created.
Show Time
PhilharMagic opened last year at Hong Kong Disneyland and in 2003 at Walt Disney World’s Magic Kingdom. The idea was to celebrate the early Disney classic characters by having them interactively entertain park visitors. “The big question was, how could we bring them forth along with those from the second golden age of animation 10 to 15 years ago? So we conceived a 3D film, with its principal objective to entertain families,” says Walt Disney Imagineering’s (WDI) George Scribner, animation director for Mickey’s PhilharMagic. “We wanted the film to have reach moments as opposed to flinch moments, prompting people to get out of their seats and grab at objects coming out from the screen.”
George Scribner, animation director for Mickey’s PhilharMagic, tries to discuss the animation sequences with two stars appearing in the attraction. Images © Disney.
The project, headed by WDI (responsible for the park attractions), incorporated the latest technology from Imagineering, including a number of in-theater effects—the scent of apple pie during the Be Our Guest scene, a hint of jasmine in the air during the Aladdin sequence, a squirt of water compliments of Flounder when Ariel is present. Interactive lighting, developed for the Tokyo DisneySea resort, synchronizes a series of automated and conventional theatrical lighting fixtures with film-frame accuracy. As a result, the light within the film world extends to the audience—smoke effects, for example, enable guests to see the lights, casting shadow elements that are integrated into the performance. Also, a leading-edge audio system, with nine audio clusters, produces a symphony of sound, including “traveling sound,” such as when Goofy’s footsteps move through the theater as he runs from the back to the front.
“Music is an important element. Our goal was to convey the storyline through music and not have to rely on dialog, thus making the production more universal in scope,” says Scribner. The premise of PhilharMagic is that Mickey is to perform with his orchestra, but before the show starts, Donald Duck loses the magic sorcerer’s hat. Donald chases the hat through the Disney universe, as it continues to stay just out of his reach. The show ends with an explosion, literally, as Donald is launched straight into the audience.
With the integration of so many stars from the Disney archives, WDI partnered with Walt Disney Feature Animation in what marked the first large-scale collaboration between the two groups. As a result, WDI brought onboard many of Disney’s best feature-film artists and animators, including Glen Keane and Nik Ranieri, who originated the look and movement of characters such as Ariel in The Little Mermaid and Lumiere in Beauty and the Beast, respectively. “We were fortunate to have many of the animators who worked on these films still at the studio. Their story and animation ideas were invaluable to the project,” says Scribner. “The strength of their work contributed to the strength of artistry in this show. They are extraordinary performance animators, who are essentially actors with pencils; they have a flair for mime, performance, and subtlety of movement, and they bring a level of believability, sincerity, and truth to the characters’ performances.”
A Production with Character
With so many stars to choose from, how did WDI decide which would make it into the show? According to Scribner, the team was looking for a visual mix from the two eras, as well as characters that would offer an emotional variety. Donald, who shows his emotional extremes quickly, was perfect in the role of hero/protagonist; from there, it was a matter of finding characters whose personalities would give the film more emotional range, he explains.
After WDI selected the characters and finished the storyboards, Feature Animation took the reins, and the CG artists modeled, rigged, and animated the characters, adding textures, shadows, lighting, effects, and more. This marks the first time feature-film classic characters were completely modeled and animated via the computer, let alone delivered within an extremely technical medium: stereo. “The whole world knows what these characters look like—they know them, they love them, they grew up with them,” says Marcus Hobbs, CG supervisor at Feature Animation. “The bar is very high. If there is one thing wrong with the characters in any way, the audience will know it. This is especially true with the human characters.”
PhilharMagic Facts
Rendering
+ Number of renders per final film frame: four 2K renders (two for the stereo, one for the left mono projector, and one for the right mono projector)
+ Render hours: 1.5 million (over a 2-year period) using SGI machines
+ Data rendered: 6TB (an uncommon amount at the time)
Magic Moments
+ 10 stereo reach events
Models
+ 3000 3D elements that had to be modeled, painted, and animated
Characters
+ Classic characters appearing: Mickey Mouse, Donald Duck, Peter Pan, Tinker Bell
+ New classic characters appearing: Lumiere, Ariel, Flounder, Simba, Zazu, Aladdin, Jasmine, Iago, Magic Carpet
The attraction marks the first time that Disney’s 2D characters, including Mickey Mouse, were brought to life within the stereo 3D world.
To create the models, the CG artists used a mix of commercial and custom software, including Autodesk’s Maya for modeling, rigging, and facial animation; Pixar’s RenderMan for rendering; and Apple’s Shake for compositing. Surfaces, Hobbs says, are “lovingly created by a small crew of texture painters.”
Throughout the creation process, traditional artists assisted Disney’s CG artists and reviewed all the models. “They made meticulous craftsman-like notes on every subtle nuance—‘the forearm is not curvy enough,’ or ‘the leg is too long’—all those things that make you believe that this is really Aladdin, Tinkerbell, or any of these characters,” says Hobbs. “If we didn’t get those things perfect up-front during design and modeling, they would have come out on the screen flawed. It was a struggle for us to find a balance between an art-directed look and a realistic look. It’s a delicate balance to find that middle ground where the character looks believable—after all, the audience fell in love with them as flat, shaded line drawings—yet contain enough detail so they fit within this 3D stereo world. And, you have to navigate that [boundary] on every piece of film.”
The traditional artists also drove the facial animation; each human character had an average of 100 expressions that the CGI artists modeled and plugged into their animation system. Moreover, the traditional animators incorporated a lot of squash and stretch into their characters, and most animation rigs don’t allow for it on this level, thus requiring the computer animators to build a number of custom rigs. In fact, the animation package was later used in the production of Chicken Little (see “The Sky’s the Limit,” November 2005), and with updates, it continues to be used today.
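The article doesn’t show the rigs themselves, but the basic correction such rigs add can be sketched: when a limb stretches, its cross-section thins so the apparent volume stays roughly constant, and it fattens when the limb squashes. The exponent and axis convention below are assumptions for illustration, not Disney’s animation package.

```python
import numpy as np

def squash_stretch_scale(stretch, volume_preserve=1.0):
    """Volume-preserving squash and stretch: scale the long axis by `stretch`
    while shrinking (or fattening) the cross-section so the apparent volume
    holds. volume_preserve = 0 disables the compensation."""
    cross = stretch ** (-0.5 * volume_preserve)
    return np.array([cross, stretch, cross])   # (x, y, z) scale, y = long axis

print(squash_stretch_scale(1.5))   # stretched: long and thin
print(squash_stretch_scale(0.6))   # squashed: short and fat
```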
The artists also made sure the CG characters achieved the proper silhouettes and subtle movements during animation. “You pay for those subtleties with your time; you have to build in all the exceptions,” explains Hobbs. “CG tends to be very physical and correct, and that’s not what we always want. So we have to write something special to make a rig do what we want it to do. And we had to do this on every level.” For instance, the traditional animators are extremely aware of a character’s silhouette, and in 3D, when an artist cheats a silhouette, the model becomes distorted. “If I round out Tinkerbell’s elbow for the silhouette, I may also have to pinch it flat, but when I light it, I get shadows that reveal I have distorted her,” Hobbs adds. “So we have to figure out how to make the models in the computer reflect the cheats while still having the model look good so it can be lit.”
According to Hobbs, the slightest pinch, roll, or flat area will become apparent when a model, particularly a human model, is lit. Yet, because of the stereo, the group couldn’t simply paint over the defect as they normally would; the flaw had to be fixed in the model itself. “With two rendered versions [for the stereo], each paint fix would have to be identical, and that would have been difficult,” he notes.
Multi-dimensional Characters
The models had to be perfect for another reason, too: The characters would be appearing on a 150-foot-wide by 24-foot-high canvas. The 3D experience comes alive via four 70mm projectors: two in the stereo center with a mono-cinematic projector on each side. “We’ve seamlessly blended screens before, but this was the first time we blended them with stereo in the middle,” says Scribner.
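The article doesn’t say how the projector overlaps are handled, but the usual recipe for a seamless multi-projector image is to fade each projector across the overlap region with complementary, gamma-compensated ramps so the summed brightness stays constant. A minimal sketch of one such blend weight, under those assumptions:

```python
import numpy as np

def blend_weight(t, gamma=2.2):
    """Signal-space weight for a pixel a fraction t of the way across a projector
    overlap zone (t = 0 where this projector should fade out completely, t = 1
    where it contributes full brightness). The neighboring projector is fed
    (1 - t); because smoothstep(t) + smoothstep(1 - t) = 1, the combined light
    output stays constant once each weight is gamma-compensated for the display."""
    t = np.clip(np.asarray(t, float), 0.0, 1.0)
    linear = t * t * (3.0 - 2.0 * t)       # smoothstep ramp in linear light
    return linear ** (1.0 / gamma)         # pre-compensate projector gamma

# At mid-overlap both projectors contribute half the light:
w = blend_weight(0.5)
print(w, 2 * w ** 2.2)                     # w**2.2 is the light each emits; sum = 1
```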
In the attraction, some of Disney’s early characters, such as Donald Duck, interact with traditional characters from the more recent past, such as Aladdin and Jasmine. But the real challenge for the artists was achieving the traditional 2D look within CG.
As Hobbs notes, the groups had to choreograph objects that would go in front of the screen plane so they would occur in the middle; objects like bubbles, which travel across the screen, had to be placed behind the plane. “We used the sides to make you believe you were immersed, not to drag your eyes across from one end of the screen to the other,” he adds. “Our main objective was to achieve long reach moments with the stereo.”
The teams spent the first nine months of production determining those magical 3D moments and what was required to hit those marks. To test the effects, WDI built a mock-up theater, complete with foam-core cutouts of human figures to determine how close the image could stretch outward before it dropped off from the optic plane. “How a person perceives depth differs when you have elements, such as people, in front of you,” Scribner points out.
One of the big stereo moments occurs when Ariel reaches out with her jewels. “The characters can’t break the screen plane; when Ariel throws out her jewels, her body is cut off by the screen. We animated her so her body is behind the screen plane and her arms and chest are in front, and they couldn’t go off to the sides,” explains Hobbs. “This was something unusual for us. We don’t have that limitation in cinema.”
Animating Ariel
One of the most difficult characters to bring into Mickey’s PhilharMagic 3D stereoscopic world was Ariel from The Little Mermaid. As with any human character, everything—the subtle look, skin tones, lighting—had to be spot on. But when the digital artists re-created her for the show, there was something about the character that wasn’t quite right, but the CG artists couldn’t pinpoint the problem. However, Feature Animation artist Glen Keane knew what the problem was, and he should know: It was his vision, as supervising animator, that gave life to Ariel in her original feature-film debut.
Imagineering had asked Keane to look over some of the early models for PhilharMagic, including the little mermaid, and brought him on as a consultant, mainly to make the CG Ariel look exactly like her hand-drawn counterpart. “At that time I had not transitioned into computer animation, and I was unsure whether it was something I wanted to do. But then I realized this was an opportunity to see if we could replicate [the 2D] in the computer,” says Keane, who is now creating Rapunzel in CGI for an upcoming Disney movie. “So I approached the situation the only way I knew how—by drawing.”
Under Keane’s instructions, digital artists began to transform Ariel for her new role. As Keane points out, the task was difficult, as computer animation tempts artists into becoming satisfied with a dimensional form, making them believe it is a pleasing shape because it looks volumetric and solid. But in reality, he says, it is still just a flat graphic element on the screen. In this instance, Ariel did not have that appealing and pleasing shape that Disney’s “Nine Old Men” had continually encouraged.
“There are certain shapes and rhythms you want in line and form,” Keane explains. “So I divorced myself from the shades and lights. Sure, she had more dimensionality than my drawings, but it didn’t feel like her. Then I squinted my eyes to look at the CG so I could no longer see the razzamatazz through the blur; I could only see simple forms and shapes. Then I knew: A lot of her anatomy was missing.”
Keane had several changes in store for Ariel, including a drastic reduction in the number of movements she made. “I was once told that you are not moving drawings, you are drawing to move people,” he recalls. “And she was being moved around way too much, and there were too many poses and expressions and attitudes.”
To uncover the appropriate “golden drawings” (important poses), Keane, assisted by fellow animators John Rippa and Marc Smith (Treasure Planet and Tarzan), translated the entire Ariel sequence into a traditional animation. Using this method, the group cut the poses necessary for achieving attitude and expression from 50 to about 15. “Then we focused on building those out,” Keane says.
Keane also whittled down the hundreds of facial expressions the CG artists had created for Ariel to just five. He then put Ariel back into the hands of the CG group, but still kept a watchful eye on her, like any loving father. He communicated with the digital staff via sketches and Post-its as they continued to craft features such as her mouth, eyes, and hair. “Her eyes were very difficult. It was so easy to make them doll-like, by wrapping the lashes around the eye, using hand animation. I could make one line thicker for emphasis, for example, but the computer doesn’t do that,” he says. “I wanted them bigger and bigger. We ended up with eyes that are 10 times the size that people were initially comfortable with. The eyes had to speak to people; they had to be arresting and powerful.”
To help Ariel transition to the new medium, Disney called on her original creator, Glen Keane, who worked with the CG artists.
In the end, did it look like Ariel? “It’s very close,” says Keane.
Overall, Hobbs says this project had more in common with a typical Feature Animation film than not in terms of the content creation. “For PhilharMagic, I think the art pushed the technology, more so than the other way around. The artists here are so demanding and expect perfection, and they want as many bites at the apple as they can get. So most of the technology we created for the show in Feature Animation was in service to the artists. But we did push the scale of production more so than ever for such a short-length production. And, it was a great experience bringing these characters to life in 3D using modern tools and CGI.”
Escher Revisited in VR Valley
Maurits Cornelis (MC) Escher (1898—1972), one of the world’s most famous graphic artists, is best known for his complex architectural mazes involving perspective games and the representation of impossible spaces, spatial illusions, and repeating geometric patterns (tessellations). One such example can be seen in his print Ascending and Descending, in which lines of people ascend and descend stairs in an infinite loop on a construction that is impossible to build and possible to draw only by taking advantage of quirks of perception and perspective.
Today, this artistic 2D illusionist from The Netherlands continues to fascinate viewers with his unique craftsmanship—so much so that a theme park in Germany built a stereoscopic ride film that allows viewers to explore Escher’s unique architecture in a most unusual way: by riding through it. But how does a 3D artist translate works that have been crafted so masterfully on a two-dimensional plane? That was the challenge that artist Daniel Dugour, owner of 3D computer animation boutique Anitime, had to overcome when he accepted the project.
Dugour, however, is no stranger to Escher’s works. In 1998, he, along with his former art instructor Leon Wennekes of Wennekes Multimedia, created the 8-minute VR motion ride Escher Revisited in VR Valley for the Kunsthal Museum in Rotterdam, The Netherlands, in celebration of what would have been the artist’s 100th birthday (see “Revealing Illusions,” March 1999). “I have never felt satisfied with the end result; it wasn’t up to my standard of quality,” says Dugour, noting this was mainly due to integration issues between his landscape software, Questar Productions’ World Construction Set, and his 3D content creation software, NewTek’s LightWave. “Also, landscape rendering was in its infancy, so the tools weren’t that flexible,” he adds.
The stereo ride Escher Revisited in VR Valley gives audiences a special look at the unique spatial perspectives of the graphic artist. Images © Cordon Art, The Netherlands. Courtesy Anitime.
In December 2004, Dugour, together with Wennekes (again acting as producer), continued that journey, reworking the artistic presentation, whose title remained the same, to make it state-of-the-art stereoscopy. “For seven years I had been pestering the producer (and originator) of the first ride to do a remake, but there was never an occasion nor a budget,” Dugour recalls. “Suddenly there were both, and I devoted seven days a week for four months to making this as beautiful as I could.”
Today, the attraction is a semi-permanent installation in the theme park’s HD stereoscopic theater, which plans to show other films eventually. A monoscopic version and an anaglyphic version (red/green stereo) are planned for other possible locations. “The purpose of the ride is to transport the audience into a world where the magic of Escher comes to life,” says Dugour. Visitors sit in a seemingly regular theater, but the chairs are kinetic and move in sets of four to the motions on the screen. The projection is full HD (1920x1080 pixels at 25 fps), and the stereo is achieved through polarization. Enhancing the experience is 6.1 surround sound and 4D effects (water spray).
Escher’s World
At the start of the ride, a flat version of Escher’s 1956 woodcut Smaller and Smaller appears, showing an endlessly diminishing pattern of interlocking lizards that vanish toward the object’s middle. (This is shown on a vast wall that is set far behind the projection screen.) The guide, who is heard but never seen, takes viewers to the center of that object, where a tunnel opens with Metamorphosis II on its walls, repeating seamlessly head to tail. The tunnel then leads the audience to the first act of the ride, set in early-morning light. As cliffs loom over the sea underneath, the journey halts at a 3D model of Belvedere, the first of three buildings based on Escher’s works featured in the ride.
Artist Daniel Dugour built the stereo imagery using LightWave, timing the camera paths to the score created for the original ride.
As Dugour explains, Belvedere is shown from all sides and explores the hidden inconsistencies of the original drawing. On the other side of the lushly forested island is a variation on an “impossible” triangle that marks the portal into the second act. Set at midday, the imagery depicts a river flowing through a canyon. On a hill sits Waterfall, serenely flowing and turning its waterwheel slowly—yet it is only when viewers follow the flow of the water that the real spatial structure becomes clear.
The third act features several of Escher’s “knots” floating above the desert floor and slowly rotating. The journey brings the viewers to the top of a mountain, where they see Ascending and Descending, in which people are forever traversing the stairs only to return to the starting point. But the ride takes them crashing down the stairs and into a well, to the last part of the ride. Deep in Escherian space, an enormous school of fish, from Escher’s woodcut Depth, swims past.
Perspective is key to viewing Escher’s art, so the ride-film creators had to maintain a fixed viewpoint that matches the original works.
“There’s no real story, characters, or plot. It’s more of a sightseeing tour—one that takes you smack into the middle of the magic of Escher,” Dugour says. “Escher’s works were originally meant to be seen in a flat plane, and only interpreted as a three-dimensional shape. In the interpretation lies the trick and the magic.” So, to prevent the ride-film audience from immediately seeing through his strange constructions, Dugour had to invent ways of tricking the eye once more.
Although the storyline had been preserved intact from the original, the same did not hold true for the CGI. Using the original buildings as a template, Dugour rebuilt the structures with far more detail by incorporating radiosity and better shaders and textures, using LightWave 8 running on a custom workstation with an AMD64 3800+ and an Asus Nvidia PCI-E card. As he points out, in 1998, LightWave did not support UV mapping, which limited the possibilities when it came to texturing the perspectively distorted structures. He re-created all the other objects, as well as the animations. “The animation of moving objects and the camera is completely new in the film, although I retained crucial parts of the timing to coincide with the music, which was remixed with new sound design,” Dugour says. Like the visuals, the music is not quite what it seems. “It features melodic and rhythmic tricks that mimic the way Escher fools his audience—a waltz that is not a waltz, for instance.” The landscapes are noticeably different, too.
“The landscapes, in my opinion, were the weak spot of the original. Landscape rendering software has evolved a lot in recent years, and I chose Eon Software’s Vue 4 Professional mainly because of its procedural vegetation,” Dugour explains.
Reconstructing Art
Starting with the three featured artworks, Dugour remodeled the buildings Belvedere, Waterfall, and Ascending and Descending. According to Dugour, the visually complex buildings were the most difficult models to create, though he had solved most of the nagging issues in the original version. “It was a matter of keeping a fixed viewpoint that matches the one in the image, and slowly building a template that gives the right result—from that viewpoint,” he explains. “It involved looking at each image for a long time, analyzing the visual trick, and then visualizing it in my mind. Either you can do that last part or you cannot.”
Although Escher favored black-and-white imagery, Dugour preferred bright colors and a “fresh” atmosphere for the film. Moreover, the ride had to be in color, which challenged him to find an atmosphere for each landscape that fitted well with the featured buildings. “I gave them colors that ‘felt right,’ not necessarily realistic, but fitting.”
The new textures, created in Adobe’s Photoshop, were derived from high-res photographs of actual surfaces, but some, like the water in Waterfall, were painted by hand. Dugour even acquired some surfaces from Escher’s original prints. “Texturing the buildings presented the same challenges as before, though,” he says. “For instance, Waterfall had to have matching bricks from the main viewpoint, although the visually attached parts were, in fact, quite far removed from one another. UV mapping did make that easier.”
Using LightWave and Vue 4 Professional, digital artist Daniel Dugour crafted colorful settings for Escher’s buildings as well as his geometric objects, such as Driehoek, shown here.
One of the more complicated issues involved timing the animation to the original score. When rebuilding the tunnel for the opening sequence, for instance, not only did the artist have to get the aesthetic correct, but the timing had to be perfect: The woodcut Metamorphosis II had to repeat an exact number of times through the length of the tunnel, without being cut off at the beginning or the end, or stretched one way or another. Also, all the events in the new version of the ride had to coincide with specific points from the original in terms of the music. To accomplish this, Dugour animated the entire camera path in LightWave, first staging the scene with a preliminary setup that used placeholder objects in key locations. This allowed the artist to fine-tune the speed of movement and view the angles without the landscape software.
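None of the actual figures appear in the article, but the constraint Dugour describes boils down to simple arithmetic: the traversal time is locked to the music, so the tunnel length and the woodcut’s repeat count must be chosen together so that the strip tiles a whole number of times. A sketch with entirely hypothetical numbers:

```python
# Entirely hypothetical numbers; the article gives neither the tunnel length nor
# the proportions of the Metamorphosis II strip. The constraint is what matters:
# the traversal time is fixed by the music cue, and the woodcut must tile the
# tunnel a whole number of times without being cut off or stretched.
tunnel_seconds = 12.0                                # time allotted by the score
ride_speed     = 4.0                                 # scene units per second along the path
strip_length   = 3.9                                 # length of one Metamorphosis II repeat

tunnel_length = tunnel_seconds * ride_speed          # 48.0 units, dictated by the music
repeats       = round(tunnel_length / strip_length)  # snap to a whole number of tiles (12)
adjusted_len  = repeats * strip_length               # 46.8: trim the tunnel slightly...
eased_speed   = adjusted_len / tunnel_seconds        # ...or ease the speed to 3.9 to match
print(repeats, adjusted_len, eased_speed)
```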
At the same time, Dugour had to determine how to use the stereoscopy while retaining Escher’s magic. Using Adobe After Effects, he was able to achieve a red and green anaglyph for a low-res left and right render, allowing for depth checks on a typical monitor. While the artist won’t reveal the “secret” tricks he used for the buildings, he did describe the technical details. “I used a virtual dual-camera rig, and didn’t bother with plug-ins or stereo render functions. I needed total control,” he says. “A null object (LightWave-specific non-rendering object) followed the main camera trajectory. Attached to that was the calibration camera. Two other null objects were attached to the left and right of the parent null, which supported the two mirrored stereo cameras. A target provided a focal point for both cameras. This is very similar to what is used when shooting 3D film, involving a dolly with a dual-camera rig.”
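Dugour’s rig lives inside LightWave, but the geometry he describes, a parent null riding the camera path with two mirrored cameras offset to either side and a shared target acting as the focal point, can be sketched outside the package. The interaxial distance, up vector, and function shape below are assumptions for illustration.

```python
import numpy as np

def stereo_rig(path_point, target_point, interaxial=0.1, up=(0.0, 1.0, 0.0)):
    """Stand-in for the null-based rig: a parent transform rides the camera path,
    two child cameras sit half the interaxial distance to its left and right,
    and both aim at a shared target (the 'focal point' null).
    Returns left/right positions and their forward (aim) directions."""
    p = np.asarray(path_point, float)
    t = np.asarray(target_point, float)
    up = np.asarray(up, float)

    forward = t - p
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)

    half = interaxial / 2.0
    left_pos, right_pos = p - right * half, p + right * half

    # Mirrored cameras converge on the same target
    lf = t - left_pos;  lf /= np.linalg.norm(lf)
    rf = t - right_pos; rf /= np.linalg.norm(rf)
    return left_pos, right_pos, lf, rf

# Evaluate the rig at one point along the path, aiming at a focal target ahead:
print(stereo_rig((0.0, 1.6, 0.0), (0.0, 1.6, 10.0)))
```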
Once Dugour had completed this step, he imported the camera paths into Vue, where he began sketching landscapes. “As soon as any part of the sequence was finished, I had to start rendering it,” he says, noting there was little time for rendering all 25,000 frames at HD resolution. This job was allocated to two render farms running 24/7.
“Computer graphics for years has been used to deliver extra information and emotion to make existing information more clear; it also makes invisible things visible, understandable, and, above all, more adaptive,” says Wennekes. “Escher’s pictures are small and gray, and in the early 1990s, I noticed youngsters becoming less and less interested in these works. I decided to use a new medium to create a visual catalyst, an overwhelming ‘experience’ to the work of MC Escher. I created the storyline and all the effects in such a way that it would ‘trap’ the youngsters in an emotional storyline and would create interest in the original works of this immortal artist. And we did this by creating a magic metamorphosis from genuine 2D images to 3D images, at the same time explaining its wonders.”
Indeed, Wennekes’ prediction was correct. “Young people were fascinated by the work and paid increasing attention to the original works after experiencing the MC Escher ride,” he says. “Artificial imagery is not just for creating outstanding imagery for Hollywood movies. There is still a broad, unexplored field where these techniques can be used to learn more about life, art, and cultural heritage, and the MC Escher ride is a perfect example of this.”
Time Riders
For Time Riders, a 4D simulation ride that opened last year at a theme park in Germany, Blur Studio had just 4 minutes to take viewers on an adventure through time, from prehistory to the future.
The attraction is typical of a ride film, whereby visitors are immersed within an environment created with CGI. While Time Riders is not shown in stereo, it nevertheless provides the added sense of touch through some in-theater (or in this case, in-pod) effects, one of which makes the experience particularly electrifying. “The client wanted a straightforward ride film,” says Tim Miller, Blur’s president, noting that the film could be made into a stereo version fairly easily. Blur was hired by Jon Corfino of Attraction Media and Entertainment, who served as executive producer on Time Riders and with whom Blur created a 3D SpongeBob Squarepants attraction for Paramount Parks three years ago.
The Time Riders story is based on The Time Machine by HG Wells, and features Monty Python actor John Cleese—filmed on a greenscreen set—as the narrator/scientist who sends visitors on their journey through a number of diverse environments. “Time Riders provided us the opportunity to showcase a handful of environments and scenarios throughout time. By doing a time-travel film, we had the chance to mix up the subject matter and choose some cool moments that each of us on the team would have liked to have seen ourselves,” says Tim Wallace, director of the film. “Each scene was a product of the Blur group’s imagination.”
Though it is not shown as a stereo film, Time Riders could be made into a 3D attraction rather easily, says Blur’s Tim Miller. Images © Star Parks. Courtesy Blur Studio.
After climbing aboard a pod, visitors are “transported” to the end of the ice age, where they must dodge a herd of stampeding woolly mammoths. Next, they enter the prehistoric period, barely escaping the jaws of an ocean-dwelling beast. Soon thereafter, they are deposited into the middle of a medieval siege, and then into a battle at sea between two wooden vessels. Last, the pod is thrust into the distant future, where a collision in space sends an electrical charge into the pod (an effect that is both visual and tactile).
While creating the attraction’s hyper-real CGI, the group followed Blur’s typical creative process within the studio’s pipeline: The assets were created in Autodesk’s 3ds Max, rendered out using a combination of Splutterfish’s Brazil and Max’s default scanline renderer, and composited together using Eyeon’s Digital Fusion. On the hardware side, Blur used Boxx workstations, along with some Dell systems.
“Many people like to say they invented new technology for a project like this, and the school of thought is to make it sound like rocket science every time, but I don’t subscribe to that,” says Miller. “The tools are very good today, so rarely should you have to invent new software to accomplish your goal.” As a matter of fact, the basic approach to creating a ride film and a short film is similar, he says. The differences occur in the camera path—where it goes and what it sees. “There is no room for cheats in ride films. You are moving through things, so you can’t get away with matte paintings and cards; you must use 3D geometry.”
Story Time
According to Wallace, there were two particular challenges with Time Riders, both typical of large-format ride films. One was the sheer size of the rendered frames. “Our output size was slightly smaller than IMAX resolution. Everything was rendered out at 2048x1556,” he says. “Not only does everything take longer to render at that size, but compositing is more memory-intensive, so the whole process is slowed from the beginning.” Another challenge that comes with such a high resolution is making sure the models and their textures hold up.
Furthermore, Wallace points out, ride films usually rely on one seamless first-person camera move, which reinforces the viewers’ sense that they are actually in the ride themselves. “The trick for us in Time Riders was getting from one moment in time to the next,” he says. “We used a wormhole or rift-in-time effect. Each section of the film was broken out by using this transition effect and executed so that right before the viewer was facing imminent danger, we’d zap them through the wormhole just in the nick of time and get them out of there.”
For the most part, though, Time Riders lacks a real story— a problem that plagues all location-based entertainment. “Everyone starts out with the idea to do something different, by adding a real story, and as you go down the path, you take out that element more and more until you are back to the point of being just like every other ride film you’ve seen,” explains Miller. “The reason for this is because the story takes away from the simulation experience, and at the end of the day, it is the ‘ride’ and thrills that people would rather have.”
However, Yas Takata, director of large-format films at Blur, sees a time in the near future when such a trade-off will be unnecessary. “It’s no longer good enough to strap a camera onto someone’s back and send them down a ski hill, which is how this all got started,” he says. “A story element takes the film to a second level.” Currently, pre-shows are attempting to add a story element, establishing a context for the ride.
The Future Revisited
Most recently, ride-film experiences were bolstered by pre-shows, and then by brands such as SpongeBob. Lately, stereo and 4D effects have advanced the genre. The next evolution, which is occurring now, is the use of simulation to support the experience, similar to what has been done with the Spider-Man and DarKastle rides (discussed earlier). “For Spider-Man, [the creators] took the time and really nailed it, blending other stuff—effects, smoke, real sets—so you don’t know where the set begins and the film ends, and vice versa,” Takata says.
Yet, such a production is extremely expensive—far more than what most parks are willing to shoulder, Takata notes. “That’s why there are only two of these attractions right now.” First, these rides require large footprints, and land is expensive. Second, there are moving seats, physical sets, and multiple screens, each requiring a projector. In fact, Blur is currently in pre-production on a mixed-technology simulator attraction for the Restless Planet, a dinosaur-themed park in Dubai, United Arab Emirates. “It’s no longer enough to whip out a new sim film that’s branded; you have to go further,” he adds.
Based on HG Wells’ The Time Machine, the ride film journeys through history, from the prehistoric (first) to the medieval (second) to the future (last), each with its own style of imagery.
Another new trend in location-based entertainment is using CGI to incorporate “an added layer.” Takata uses the example of a museum display of a bobcat in a 4-foot space with a painted background: “CG can make the space seem larger, adding a dimension you can’t get otherwise,” he says. “Here, the object— the bobcat—is still the main attraction, but the CG is adding to the overall perception in a supporting role.” Takata’s prediction is that in 20 years or so natural history museums will be incorporating this type of imagery, or even a hologram, into their displays to augment an attraction, such as a dinosaur.
“Theme parks are all about creating a unique experience, a special place that you don’t have at home,” says Takata. “Games are getting sophisticated and home TVs extremely large and high in resolution, so theme parks have to offer something else with large spaces and dynamic movement. Big rides give real sensations and thrills. And then beyond that, ride films need to infuse real stories into the experience. Expectations rise with every new advancement. It’s like a cold war with customers. If you can get a story in there, it will give the medium new life.”
The big question is, how can this be done? Each film is about 4 minutes long, but how much story can be inserted into such a limited time? It can be accomplished with a short film, but a short film does not have what Takata calls “jollies.” “You can’t hit major plot points and jollies at the same time. You have to stop and hit the fun mark, which takes time away from story development. Plus, during a ride film, people are in fight/flight mode, and at that point they are not receptive to a story line.” Extending the film to 10 minutes, for instance, isn’t usually a viable solution, because a larger footprint would be required to maintain the same throughput—measured here in “butts” (the number of riders per hour) rather than bits—so that parks can meet their quotas, which in turn keeps visitors happy.
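The footprint argument is easy to quantify: throughput is seats multiplied by cycles per hour, so stretching the film from 4 to 10 minutes cuts the hourly cycles by more than half, and the only way to hold the quota is a bigger theater. A sketch with hypothetical seat counts and load times:

```python
def riders_per_hour(seats, ride_minutes, load_unload_minutes=1.0):
    """Hourly throughput of a simulator theater (all numbers hypothetical)."""
    cycle = ride_minutes + load_unload_minutes
    return seats * (60.0 / cycle)

print(riders_per_hour(60, 4))    # 4-minute film:  720 riders per hour
print(riders_per_hour(60, 10))   # 10-minute film: about 327 with the same seats
print(riders_per_hour(132, 10))  # holding 720/hour needs more than twice the seats
```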
So what is the ultimate in theme-park ride films? According to Takata, it would be a version of a holodeck for a totally immersive experience. “And that’s something the industry is working toward. Computing power is increasing and the price is decreasing, and rendering is easier. Look at the quality of some Xbox 360 games,” he says. “We’re getting closer all the time.”