As any character animator will attest, one of the more tedious jobs in animation is lip sync. The task requires animators to scrub through a dialog track one frame at a time, picking out the phonemes or syllables of every word, then assigning the proper mouth shapes to match. On a dialog-heavy production, that work adds up quickly. FaceFX is OC3 Entertainment’s solution for creating facial animation and lip sync directly from audio files, and it promises to save animators valuable time.
The software runs as a stand-alone application, with plug-ins that connect to the major 3D applications (Autodesk’s Maya, 3ds Max, Softimage, and MotionBuilder). These plug-ins are essentially file-format converters: they export models to the main FaceFX application, where the character’s facial and body motions are matched to the soundtrack. Once that work is complete, the plug-ins bring the animation back into the desired package for finishing.
When preparing a character for export, the facial deformation can be set up in one of two ways: morph targets (also called blendshapes) can drive the face through shape animation, or bones can deform the surface of the face directly. Since these are the two most popular methods of rigging a face, most productions will have no problem exporting their characters to FaceFX.
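Either rig style is built with standard tools in the host package before export. As a rough, hypothetical illustration, here is a minimal Maya Python sketch of the two approaches; the node names ("faceMesh," "mouth_OH," "mouth_EE," "jaw_joint") are placeholders for this example, not anything FaceFX supplies:

    import maya.cmds as cmds

    # Placeholder scene nodes: a face mesh, two sculpted targets, one joint.

    # Blendshape route: sculpted targets drive the face via shape animation.
    blend_node = cmds.blendShape('mouth_OH', 'mouth_EE', 'faceMesh',
                                 name='faceShapes')[0]
    cmds.setAttr(blend_node + '.mouth_OH', 0.8)  # dial in 80% of the OH shape

    # Bone route: skinned joints deform the face surface directly.
    cmds.skinCluster('jaw_joint', 'faceMesh', toSelectedBones=True)
    cmds.setAttr('jaw_joint.rotateZ', -12)       # open the jaw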
The exported character is loaded into FaceFX, where the real meat of the character setup begins. The interface is tab-based, with each major task organized under its own tab. The animation is viewed in the preview tab, the audio track analyzed in the phoneme tab, and so on. Surrounding these main tabs are a time slider and scene browsers.
In order for FaceFX to work, you need to match the facial shapes in the model to the phonemes the character might speak. So, for example, an “OH” mouth shape needs to be assigned to the “OH” phoneme, and so on. The main interface for doing this is the graph tab, which facilitates the process with a node-based interface, much like Maya’s Hypergraph window. Phonemes can be mapped one-to-one to mouth positions, or a combiner node can blend multiple shapes for a single phoneme. To speed things along, FaceFX provides a number of scripts that streamline the mapping.
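Conceptually, the mapping amounts to a table of blend weights per phoneme, with a combiner node corresponding to an entry that drives more than one target. The Python sketch below is purely illustrative; the phoneme labels, target names, and weights are assumptions, not FaceFX’s actual data format:

    # One-to-one mappings: a single phoneme drives a single mouth target.
    phoneme_map = {
        'OH': {'mouth_OH': 1.0},
        'EE': {'mouth_EE': 1.0},
        # Combiner-style mapping: one phoneme blends several targets.
        'W':  {'mouth_OH': 0.6, 'lips_pucker': 0.7},
    }

    def shape_weights(phoneme):
        """Return the blend-target weights to apply for a phoneme."""
        return phoneme_map.get(phoneme, {})

    print(shape_weights('W'))   # {'mouth_OH': 0.6, 'lips_pucker': 0.7}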
In addition to facial animation, FaceFX can animate the head and body of the character. These gestures, such as head nods, blinks, and brow lifts, help add realism to a character. The events can be generated by specific triggers or in a pseudo-random fashion. Much of this requires scripting, however, so getting gestures working on a new character is probably a job for a skilled technical director rather than something the average animator can use out of the box.
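To give a sense of what such a script might involve, here is a hypothetical Python sketch of pseudo-random blink generation; the interval values and event structure are assumptions for illustration, not FaceFX’s actual event system:

    import random

    def blink_events(duration_sec, min_gap=2.0, max_gap=6.0, seed=42):
        """Scatter blink events across a clip at pseudo-random intervals."""
        rng = random.Random(seed)   # seeded so the result is repeatable
        t, events = 0.0, []
        while True:
            t += rng.uniform(min_gap, max_gap)
            if t >= duration_sec:
                return events
            events.append({'time': round(t, 2), 'event': 'blink'})

    print(blink_events(20.0))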
Once a character is set up, however, the process is relatively straightforward and can be used by almost anyone. The actual track reading is done in the phoneme editor. An audio file of the voice is loaded to start the process. To aid the voice recognition, FaceFX also requests a text transcript of the audio track. This is not required, but it does help reduce errors. (The software currently can read seven languages: English, French, German, Italian, Spanish, Korean, and Japanese.)
After that, the process goes very quickly. The audio track is analyzed, and the next thing you know, your character is talking. If the animation sync is off, the results can be tweaked in the phoneme editor, where the duration and timing of each phoneme can be adjusted.
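Conceptually, the phoneme editor exposes something like the timed list sketched below, where nudging a phoneme’s start and end retimes the corresponding mouth shape. The structure and values here are illustrative assumptions, not FaceFX’s file format:

    from dataclasses import dataclass

    @dataclass
    class Phoneme:
        label: str
        start: float   # seconds into the audio track
        end: float

    track = [
        Phoneme('H',  0.10, 0.18),
        Phoneme('EH', 0.18, 0.30),
        Phoneme('L',  0.30, 0.38),
        Phoneme('OH', 0.38, 0.55),
    ]

    # Stretch the final "OH" so the mouth holds the shape a bit longer.
    track[-1].end += 0.05
    print(track[-1])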
Animation Talk
The results do sync up fairly well, and the mouth movements reflect the audio, but FaceFX does not bring the character to life. The animation produced is fairly conservative. This isn’t a bad thing, though, because FaceFX is working mostly with the audio and phonemes to generate basic facial animation—one of several tasks required to truly bring a character to life.
The rest of the work happens outside FaceFX, and this is where the plug-ins come in handy once again. The facial animation pass is exported to a 3D package, where it can be incorporated or built upon using any number of tools. Non-linear animation editors, such as those in Maya and MotionBuilder, are especially useful here because the FaceFX animation can be treated as just another motion track.
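To illustrate the layering idea, here is a rough Python sketch that additively blends a facial curve over a body-motion curve; the additive model is an assumption for illustration, not a description of Maya’s or MotionBuilder’s actual editors:

    def layer_tracks(base_curve, face_curve, face_weight=1.0):
        """Additively blend a facial curve over a body-motion curve."""
        return [b + face_weight * f for b, f in zip(base_curve, face_curve)]

    body_jaw   = [0.0, 0.5, 1.0, 0.5]   # jaw motion from the body pass
    facefx_jaw = [0.2, 0.1, 0.0, 0.3]   # lip-sync pass from FaceFX
    print(layer_tracks(body_jaw, facefx_jaw, face_weight=0.8))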
In fact, FaceFX’s output would be a big benefit for those productions using motion capture. The FaceFX-generated lip sync should lay over a motion-captured scene fairly seamlessly and not fight the motion capture. In this situation, the motion-captured actor would provide a lot of the additional life needed to make the scene pop.
For keyframe-based productions, FaceFX also has value. While it doesn’t bring a character to life, it does a lot of the groundwork and frees the animator to focus on what’s really important: the performance. In that context, the software does about a third of the animator’s work.
Overall, FaceFX is certainly a huge timesaver for animators. The software is very good at creating accurate lip sync and basic animation. It will not animate an entire scene, but it can certainly relieve a lot of the drudge work required in animation to free animators and artists for more rewarding tasks.
George Maestri is a contributing editor for Computer Graphics World and president/CEO of RubberBug animation studio. He also teaches Maya for Lynda.com. He can be reached at maestri@rubberbug.com.