
‘Synthetic media’: How AI could make the era of ‘fake news’ far worse

November 14, 2018

Deepfake files. Image credit: Tech Talks
This article or excerpt is included in the GLP’s daily curated selection of ideologically diverse news, opinion and analysis of biotechnology innovation.

In the emerging world of “synthetic media,” the work of digital-image creation—once the domain of highly skilled programmers and Hollywood special-effects artists—could be automated by expert systems capable of producing realism on a vast scale.

In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent.

Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else.


“Why did Stalin airbrush those people out of those photographs?” he asked. “Why go to the trouble? It’s because there is something very, very powerful about the visual image. If you change the image, you change history. We’re incredibly visual beings. We rely on vision—and, historically, it’s been very reliable. And so photos and videos still have this incredible resonance.” He paused, tilting back into the sun and raising his hands. “How much longer will that be true?”

Read full, original post: In the Age of A.I., Is Seeing Still Believing?
