We live in an age of ever-growing distrust and misinformation. Whether sanctioned and spread by political parties or disseminated by independent citizens across the many platforms available, the intent is usually clear: to mislead the public. This growing activity has fueled rising distrust of the media and press, a dangerous and slippery slope frequently seen as a step in authoritarian governments' rise to power.
Most notably, this misinformation has been touched upon in Special Counsel Robert Mueller's investigation into Russian interference in the 2016 US elections, as well as in the Senate hearings with Facebook and Google executives. However, one of the lesser-mentioned aspects of manipulated information comes in the form of highly sophisticated, doctored videos known as “deep fakes.” The term stems from the online handle of a 2017 Reddit user, “Deepfakes,” who brought his creations into the internet spotlight by manipulating pornographic videos to include the faces of prominent celebrities.
The Guardian reports that the tool used by “Deepfakes” was introduced as early as 2014 by a graduate student, Ian Goodfellow, in the form of a ‘generative adversarial network,’ or GAN. This software generates new data from existing data sets; in this case, the data sets consisted of video, audio, and photo files of the celebrities. While incredibly inappropriate and demeaning, it is not, of course, the sexualized videos of celebrities that pose the greatest threat to our society.
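To make the idea concrete, the sketch below is a minimal, hypothetical GAN written in Python with PyTorch. It pits a generator, which turns random noise into fake samples, against a discriminator, which tries to tell fake from real. Here the “real” data is a toy one-dimensional distribution rather than celebrity footage, and every network size, learning rate, and step count is an illustrative assumption, not the configuration used by the Reddit user or described by The Guardian.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator: turns random noise into fake samples.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: scores how likely a sample is to be real (0 to 1).
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # "Real" data: a toy Gaussian centered at 4.0, standing in for a real data set.
        real = torch.randn(64, 1) * 0.5 + 4.0
        fake = G(torch.randn(64, 8))

        # Train the discriminator to tell real samples from generated ones.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Train the generator to fool the discriminator.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    # After training, generated samples should cluster near 4.0, like the "real" data.
    print(G(torch.randn(1000, 8)).mean().item())

The dynamic is the same at any scale: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones.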
In an interview with the Wall Street Journal, Hany Farid, a professor in the Dartmouth College Computer Science department, speculated,
“How are we going to believe anything anymore that we see? And so to me that’s a real threat to our democracy…”
A fair point when one considers the impact that viral videos now have on us as a society. Take the recent example of the Covington high schoolers' clash with Native Americans in Washington, DC. A short video of students wearing MAGA hats face to face with Omaha Nation members sparked viral outrage; while that video was genuine, it is easy to see the immediate damage a short clip can do.
As technology and research into AI advance, programs such as GANs and their successors will demand a new level of vigilance from social media users viewing content. During a speech at the University of Missouri, Lt. Col. Jarred Prier was quoted by the Columbia Tribune expressing his concerns about the implications this technology has for cyber warfare,
“The next evolution of this social media attack as a national security issue is going to be precision messaging,” Prier said. “It is going to be somebody who looks and seems a lot like your friends and starts sending you direct messaging or things that are targeted towards you.”
This is not meant to spark panic, nor to further embed distrust of the press. The New York Times, The Wall Street Journal, and similar outlets are reputable publications that thoroughly check their facts and sources before publishing, and the growing public distrust they face is a terrifying byproduct of this day and age. As social media giants such as Facebook navigate this new terrain of doctored information, it is up to the consumer of that information to stay wary. It may be Brad Pitt in pornography now, but one day it may be your loved one asking for an account number.