A stunning development in real-time, human-driven digital characters arrived not long ago in a demonstration featuring actor Andy Serkis.
Real-time rendering in Epic Games’ Unreal Engine, combined with the volumetric capture, reconstruction and compression technology of 3Lateral’s Meta Human Framework, brought this breakthrough digital human performance to life.
The volumetric data was generated by capturing a series of high-quality, high-frame-rate (HFR) images of Andy Serkis from multiple angles under controlled lighting. 3Lateral’s process involved various capture scenarios, some focused on geometry, some on appearance and others on motion. All of these inputs were combined to generate a digital representation of Andy Serkis and to extract universal facial semantics, representations of the muscular contractions that make the performance so lifelike.
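3Lateral’s actual solver is proprietary, but the core idea of extracting facial semantics from captured geometry can be sketched as a least-squares fit: given a basis of semantic shapes (one per modeled muscular contraction) and a captured frame of the face mesh, solve for the activation weights that best reproduce that frame. Everything below, including the array sizes, vertex counts and function names, is an illustrative assumption rather than the real pipeline.

```python
# Illustrative sketch only: fit per-frame semantic (muscle-contraction) weights
# to captured geometry by least squares. The basis shapes and dimensions are
# hypothetical stand-ins, not 3Lateral's actual data.
import numpy as np

NUM_VERTS = 5000          # assumed vertex count of the reconstructed face mesh
NUM_SEMANTICS = 150       # assumed number of universal facial semantics

rng = np.random.default_rng(0)

# Neutral face and a basis of per-semantic vertex offsets, flattened to 1-D.
neutral = rng.standard_normal(NUM_VERTS * 3)
semantic_basis = rng.standard_normal((NUM_VERTS * 3, NUM_SEMANTICS))

def extract_semantics(captured_frame: np.ndarray) -> np.ndarray:
    """Solve for the semantic activation weights that best explain one captured frame."""
    delta = captured_frame - neutral
    weights, *_ = np.linalg.lstsq(semantic_basis, delta, rcond=None)
    return weights

# A synthetic "captured" frame built from known weights shows the fit recovers them.
true_weights = rng.uniform(0.0, 1.0, NUM_SEMANTICS)
frame = neutral + semantic_basis @ true_weights
print(np.allclose(extract_semantics(frame), true_weights, atol=1e-6))
```

Because the recovered weights are semantic rather than raw vertex positions, the same curves can later drive any character rig that exposes the same set of controls.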
In the resulting real-time cinematic, a high-fidelity digital replica of Andy Serkis recites lines from “Macbeth” with video and performance quality nearly indistinguishable from his real-life acting. The “Macbeth” performance data was also used to drive 3Lateral’s fictional digital creature, Osiris Black, to demonstrate how the same capture can drive two vastly different characters.
Serkis was a willing and ideal subject for this proof-of-concept demonstration: in addition to his remarkable acting talents, he is deeply versed in digital performance processes and technology.
To make these massive data sets displayable, 3Lateral’s semantic compression reduces them while preserving their integrity, making it possible to retarget the performance onto a digital character while easily altering gaze and other subtle performance nuances. This incredibly high-fidelity capture is pre-processed offline into a data set that can be loaded into Unreal Engine for real-time volumetric performance.
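The details of 3Lateral’s semantic compression are not public, but the general idea of shrinking a dense per-frame weight stream while keeping it retargetable can be sketched with a truncated SVD: the offline pre-process factors the animation into a small set of shared basis curves plus per-frame coefficients, and runtime playback reconstructs weights that can drive either character’s rig, with individual channels such as gaze overridden after decompression. The sizes, rank, and gaze channel index below are illustrative assumptions, and real capture data is far more correlated than the random stand-in used here, so a low rank would lose far less.

```python
# Illustrative sketch: compress a stream of per-frame semantic weights with a
# truncated SVD, then reconstruct and retarget at playback time. Sizes and the
# gaze channel index are hypothetical, not the actual Meta Human Framework format.
import numpy as np

NUM_FRAMES, NUM_SEMANTICS, RANK = 2000, 150, 40
rng = np.random.default_rng(1)
performance = rng.uniform(0.0, 1.0, (NUM_FRAMES, NUM_SEMANTICS))

# Offline pre-process: factor the performance into a compact representation.
mean = performance.mean(axis=0)
u, s, vt = np.linalg.svd(performance - mean, full_matrices=False)
coeffs = u[:, :RANK] * s[:RANK]       # per-frame coefficients (NUM_FRAMES x RANK)
basis = vt[:RANK]                      # shared basis curves    (RANK x NUM_SEMANTICS)

def decompress_frame(i: int, gaze_override: float | None = None) -> np.ndarray:
    """Reconstruct one frame of semantic weights; optionally replace the gaze channel."""
    weights = mean + coeffs[i] @ basis
    if gaze_override is not None:
        weights[0] = gaze_override     # channel 0 stands in for a gaze control
    return weights

# The same decompressed weights can drive either character's rig at runtime.
serkis_rig_inputs = decompress_frame(100)
osiris_rig_inputs = decompress_frame(100, gaze_override=0.25)
print(serkis_rig_inputs.shape, osiris_rig_inputs[0])
```

Storing only the basis and coefficients, rather than every frame of every channel, is what makes it practical to ship the performance as a data set the engine can stream and evaluate in real time.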
While this is a stunning proof-of-concept achievement that will, for now, remain in the realm of professional visual effects, photorealistic digital humans will someday be used in interactive entertainment, simulations, research, non-verbal communication as an interface with machines, artificial intelligence, and mixed reality applications.