It has been over a year since Jordan Peele made a video warning us of the impending spike in deepfake technology. In the video, Peele uses machine learning technology and his legendary Obama impression to ventriloquize the former president, making it appear as if Obama is saying outlandish things.
Since the video’s release, we have seen a barrage of deepfake videos take the internet by storm. Some notable examples include Bill Hader morphing into Arnold Schwarzenegger and Mark Zuckerberg lusting for data. However, while the videos are amusing, the implications that deepfake videos present for our future are far from amusing.
What are Deepfakes, Exactly?
Deepfakes are a form of human image synthesis: manipulated videos that create hyper-realistic, artificial renderings of a human being. These videos are generally crafted by blending an existing video with new images, audio, and video to create the illusion of speech. The blending is performed by generative adversarial networks (GANs), a class of machine learning systems.
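At a high level, a GAN pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell fakes from real data. The toy sketch below, written in plain NumPy on one-dimensional data, is only an illustration of that adversarial loop; real deepfake pipelines use deep convolutional networks trained on thousands of video frames.

```python
import numpy as np

# Toy GAN sketch: a one-parameter-family generator learns to mimic
# samples from a Gaussian. This is an illustrative assumption, not a
# deepfake pipeline -- the point is only the adversarial training loop.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

g_w, g_b = 1.0, 0.0   # generator: g(z) = g_w * z + g_b (z is random noise)
d_u, d_c = 0.1, 0.0   # discriminator: d(x) = sigmoid(d_u * x + d_c)

lr, n = 0.01, 64
for step in range(2000):
    z = rng.normal(size=n)
    fake = g_w * z + g_b
    real = real_batch(n)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    for x, target in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_u * x + d_c)
        grad = p - target                 # gradient of BCE loss w.r.t. logit
        d_u -= lr * np.mean(grad * x)
        d_c -= lr * np.mean(grad)

    # Generator step: move g's parameters so that d(fake) rises toward 1.
    z = rng.normal(size=n)
    fake = g_w * z + g_b
    p = sigmoid(d_u * fake + d_c)
    grad_logit = (p - 1.0) * d_u          # chain rule through d into g
    g_w -= lr * np.mean(grad_logit * z)
    g_b -= lr * np.mean(grad_logit)

# After training, generated samples should drift toward the real mean (~4).
samples = g_w * rng.normal(size=1000) + g_b
print("generated mean:", round(float(samples.mean()), 2))
```

Each round, the discriminator gets slightly better at spotting fakes, which in turn gives the generator a sharper signal about how to fake better; it is this feedback loop that makes GAN output so convincing.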
Deepfakes came into the public consciousness in 2017, and it was the Reddit community that coined the term. Many redditors popularized the technique by swapping mainstream actresses’ faces onto pornographic performers’ bodies. Additionally, the practice of swapping Nicolas Cage’s face onto other movie characters’ bodies became a very popular meme.
However, the number of deepfake videos has grown considerably as deepfake software continues to be distributed. These videos can be easier to make than a convincing Photoshop forgery, because they rely largely on machine learning rather than manual design skill. The software is also usually free, making it accessible to casual users. FakeApp, for example, allows users to easily make face-swapped videos, while programs such as DeepFaceLab and FaceSwap serve as open-source alternatives.
So what’s the problem?
Deepfakes’ most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.
Granted, video manipulation is absolutely nothing new. People have been manipulating videos to trick audiences into believing something is real ever since the advent of film. However, deepfakes have introduced a new level of authenticity into the equation. It has become harder than ever to tell the difference between these doctored videos and the real thing. This gap in our ability to detect deepfakes will undoubtedly be exploited for sinister purposes aimed at undermining truth, justice, and the fabric of our society.
We live in a social climate in which a meme alone is enough to convince some people of any set of claims, real or not. In other words, many people hold confirmation bias and will seize on any facts, stats, or hot takes that confirm their previously held beliefs. The probability of falling victim to confirmation bias only increases when video is brought into the mix.
Accordingly, bad actors can exploit this confirmation bias by creating fake footage of specific political figures saying things they never said, or doing things they never did, in order to sway public opinion. Celebrities, foreign leaders, company spokespeople, presidential candidates, religious figures, and other thought leaders will be bombarded with deepfakes over the next four years. And it will fall on us, the average media consumers, to decipher what is real and what is fake.
Plausible Deniability and Alternative Threats
The ambiguity of satire versus genuine content is a conundrum that trips up many people. The age-old example, of course, is the occasional baby boomer who shares an article from the satirical news site The Onion, hysterically outraged at an obviously fake headline. However, malicious figures who spread messages of hate often use irony and satire as a shield to resist accountability: if no one can decipher your tone, you have plausible deniability.
The same principle may very well start applying to deepfake content. It is very likely that public scandals will be dismissed with “I was the victim of a targeted deepfake video”. Disgraced figures claiming a real video is fake is a very plausible scenario. With no motivation for bad actors to play by the rules, these vulnerabilities in the human ability to decipher fake content will be exploited.
The credibility of public figures is not the only thing at stake, though. Indeed, there are a wide variety of ways these videos could cause damage. Fake emergency broadcasts, election disinformation campaigns, and terrorist propaganda are just a few of the scenarios that would worsen with the involvement of deepfake technology.
How to Fight Deepfakes
At the current stage, it is thankfully still easy to tell when a video is a deepfake. Slightly unnatural mouth movements, confusing shadows, and a lack of eye blinking are common indications that a video is not real. However, GANs are getting better with each passing day. As these videos grow more and more realistic, it may be up to tech developers to create forensic identification systems: the equivalent of detecting that a picture was Photoshopped by looking at the pixels.
Thankfully, there are talks of developing deep learning classifiers that would inspect raw features of videos, alongside indicating authenticity via biometric video watermarks. On the other hand, GANs could theoretically be trained to learn how to evade such forensics, according to DARPA program manager David Gunning.
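To make the idea of a forensic classifier concrete, here is a minimal sketch that scores videos from two summary features. The feature names (blink rate and a blending-artifact score) are assumptions drawn from the telltale signs mentioned above, and the training data is synthetic; production detectors learn their features directly from raw frames rather than hand-picked statistics.

```python
import numpy as np

# Hypothetical forensic-classifier sketch. The two features (blink rate,
# artifact score) and their distributions are invented for illustration.

rng = np.random.default_rng(1)

# Synthetic training set: each "video" is summarised by two numbers.
# Real videos: normal blink rate (~0.3 blinks/sec), low artifact score.
real = np.column_stack([rng.normal(0.3, 0.05, 200), rng.normal(0.1, 0.05, 200)])
# Deepfakes: suppressed blinking, higher blending-artifact score.
fake = np.column_stack([rng.normal(0.1, 0.05, 200), rng.normal(0.4, 0.1, 200)])

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])   # 1 = deepfake

# Logistic regression fitted by plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def deepfake_probability(blink_rate, artifact_score):
    """Score a video summary; higher means more likely a deepfake."""
    z = w[0] * blink_rate + w[1] * artifact_score + b
    return 1.0 / (1.0 + np.exp(-z))

# A clip with a low blink rate and strong artifacts should score high.
print("suspicious clip flagged:", deepfake_probability(0.08, 0.45) > 0.5)
```

Of course, this is exactly the kind of fixed-feature detector a GAN could learn to fool: once the generator produces realistic blinking, the classifier’s signal disappears, which is why detection is an arms race.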
In the face of an alarming distrust in media, the very notion of truth may seemingly die in Silicon Valley. But when dealing with these types of videos day to day, it is important to stick to a set of principles; these include:
- Maintain a healthy skepticism. Be aware that manipulated content is common, and don’t spread information without looking into it first.
- Verify with multiple sources. Hold a standard of understanding who released a video and for what purpose. Content from a single source is not as verifiable as content corroborated by multiple sources.
- Educate those around you. Teaching the people close to you how to determine and process the trustworthiness of information matters for every individual.
- Advocate on a state level. Journalists should use tech countermeasures for vetting purposes, and companies and government programs should invest in deepfake awareness campaigns.
Understand that it will take a combination of tech defense and human defense to combat this new era of fake news. Deepfakes threaten the fabric of our democracy. However, understanding the risk of deepfakes in the same way that we understand the risk of ransomware or data breaches is the first step to fighting disinformation.