We’ve all seen digital artists’ efforts to create lifelike human faces. It is arguably the hardest task in computer graphics, and progress has come in small increments, each one inching closer to that milestone.
At GTC 2019, I saw something that sent me across that Uncanny Valley. Nvidia Research is using generative adversarial networks, or GANs, trained on photos of real people, to create photorealistic images of people who do not exist. The system blends features from many photos to produce each synthetic face. (I can see teenagers feeding in pictures of themselves and their current boyfriend to see what an offspring might look like!)
Like the GauGAN application (see story online), these GANs produce convincing results because of their structure: a cooperating pair of networks, a generator and a discriminator. The generator creates images and presents them to the discriminator. The discriminator, having been trained on real images, knows what real humans should look like, and it coaches the generator with pixel-by-pixel feedback on how to improve the realism of its synthetic images. Over many rounds, the generator learns to create a convincing imitation. In summary, it is AI teaching itself.
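To make that generator-versus-discriminator dynamic concrete, here is a minimal sketch of an adversarial training step in PyTorch. This is not Nvidia’s model, whose production networks are far larger and convolutional; the layer sizes, the 100-dimensional noise vector, and the 64x64 image size are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator (assumed)

# Generator: maps random noise to a flattened 64x64 grayscale image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 64 * 64),
    nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: scores an image as real (1) or synthetic (0)
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round; real_images is a (batch, 64*64) tensor."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real photos from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()  # freeze G on this pass
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator. The "coaching"
    #    arrives as gradients flowing back through the discriminator
    #    to every pixel the generator produced.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step tightens the loop: the discriminator gets better at spotting fakes, which forces the generator to produce ever more realistic images, the self-teaching described above.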
The possibilities for this technology in the media and entertainment industry are widespread.
But wait, the technique is not limited to faces. It can also generate landscapes and objects.