Deepfake technology uses artificial intelligence and machine learning to fabricate convincing, apparently real footage of an individual.
Take a real person, let’s say, Barack Obama. Mimic his voice and body movements using existing clips and images, and use that to create a video of him saying anything you like.
It sounds complicated, but creating a deepfake is now as simple as plugging in content of the individual (images, video, sound clips) and editing a transcript.
What’s next for deepfake?
To get an idea of the pace at which this technology is developing, at the beginning of 2018, Motherboard predicted it would take another year to automate deepfake software. It took a month.
According to the 2019 Future Trends Report, here are some predicted near-future scenarios (2019–2024):
- Optimistic: The platforms collaborate with governments and academics to develop and deploy a standard "nutritional label" model to describe what, exactly, is in digital content. This label is read by authentication systems. Generated or aggressively synthesized content is flagged and deprioritized in search rankings, smart speakers and social media feeds. For satirical, but not intentionally misleading content, a team of humans reviews the work and makes their decision-making process transparent.
- Pragmatic: Deepfake content is quickly commercialized, and startups prioritize speed over safety. Synthetic content becomes so popular that questions about intellectual property go unanswered until someone brings a lawsuit. With public complaints and little recourse, regulators try to get involved. Litigation ensues, costing companies lots of money and time.
- Catastrophic: Deepfake content is weaponized by many different actors, including governments, activist groups and individuals. It is treated the same as all other internet content, showing up in search results, on our smart speakers as audio content, on our connected TVs, in our inboxes, and throughout social media. Our sacred information channels – public, commercial and cable news, government agencies, even family members – are all compromised. With no way to contain its spread and no watermark to help us distinguish between what's real and what's fake, civil unrest leads to violent protests, corporate boycotts and local government shutdowns.
What are the dangers of deepfake?
Deepfakes first entered the public eye in late 2017, when an anonymous Reddit user under the name “deepfakes” began uploading videos in which the faces of celebrities like Scarlett Johansson were stitched onto the bodies of pornographic actors.
Deepfake pornography uses face-swap technology to superimpose a victim’s face onto the body of a performer in an existing pornographic photograph or film, replacing the original participant. It started with celebrities, but with the software becoming more accessible, it is happening to everyday people too.
With the 2020 US election only months away, the threat of election interference is perhaps the most urgent when it comes to deepfakes. What happens when it’s easy for anyone with a laptop and access to the internet to fake a video of a political leader declaring war on the United States, or vice versa?
“There’s a very high likelihood that somehow, someway, at some point in the 2020 election, deepfakes will play some sort of role,” says Josh Ginsberg, CEO and founder of Zignal Labs.
Earlier this month, the House Intelligence Committee held its first hearing on the issue. Intelligence Committee Chairman Adam Schiff called deepfakes ‘a nightmarish threat to the 2020 presidential election that could leave voters struggling to discern what is real and what is fake’.
Russian bots now look like child’s play in the world of misinformation.
How to spot a deepfake
It’s fair to say that it’s not just Luddites who will struggle to tell a deepfake from genuine footage; these videos are extraordinarily realistic. Now more than ever, a certain level of sophistication is needed to navigate the various sources of information on the internet.
What can you look out for?
The first is abnormal blinking. Because deepfake models are trained largely on digital photographs of the individual, and people are rarely photographed with their eyes closed, there is usually little ‘eyes-closed’ imagery to work with – so the generated face often blinks too rarely or unnaturally.
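The blinking heuristic can be made concrete. A common approach (not named in this article, but widely used in blink-detection research) computes an "eye aspect ratio" (EAR) from six eye landmarks and counts how often it drops, i.e. how often the eye closes. A minimal sketch, assuming landmark detection has already happened upstream (e.g. with a library like dlib or mediapipe) – the coordinates below are purely illustrative:

```python
from math import dist

# Eye aspect ratio (EAR): eye height over eye width, computed from six
# landmark points outlining one eye (p1..p6). It drops toward zero when
# the eye closes. Landmark coordinates here are made up for illustration.
def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, threshold=0.2):
    """Count closed-to-open transitions in a sequence of per-frame EARs."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks

# Toy clip: mostly-open eyes with one brief closure.
open_ear = eye_aspect_ratio((0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2))
closed_ear = eye_aspect_ratio((0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3))
ears = [open_ear] * 30 + [closed_ear] * 3 + [open_ear] * 30

blinks = count_blinks(ears)
# Humans blink roughly every 2-10 seconds, so a long clip with zero
# blinks would be suspicious.
print(f"EAR open={open_ear:.2f} closed={closed_ear:.2f}, blinks={blinks}")
```

In a real pipeline you would run this over every frame of the suspect video and compare the blink rate against normal human rates.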
Purposeful blurring. Some creators of deepfake videos will deliberately reduce the quality of a video to mask the giveaway imperfections of a deepfake. This is more obvious when the background quality is better than that of the person being manipulated.
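The background-versus-face quality mismatch can also be measured. One standard sharpness metric (my suggestion, not something the article specifies) is the variance of the image Laplacian: blurred regions have small second derivatives, so low variance. A minimal sketch using hand-built grayscale patches in place of real video frames:

```python
# Variance of the discrete Laplacian as a sharpness score: a face patch
# scoring much lower than the background patch hints at selective blurring.
# The 2-D "images" below are illustrative lists of pixel values, not frames.
def laplacian_variance(img):
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (x, y)
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Sharp "background": high-contrast checkerboard.
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
# Blurred "face region": smooth gradient with no fine detail.
blurry = [[16 * (x + y) for x in range(8)] for y in range(8)]

print("background sharpness:", laplacian_variance(sharp))
print("face-region sharpness:", laplacian_variance(blurry))
```

In practice you would crop the detected face region and a patch of background from the same frame and compare their scores; image libraries such as OpenCV provide the same measurement directly.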
It’s too controversial. If you doubt a video is real because what it shows is so outlandish, it may well be a deepfake. Trust your judgment.
Distortion. With a deepfake video, sudden movements or obstructions to the face such as a hand can cause distortion, which is a telltale sign of a fake.
In the absence of a technology-driven way of identifying fakes, we’ve grown savvier as a society. We have collectively accepted that photos and videos can be edited or faked, and approach them with a dose of skepticism.
Who and what is real online is becoming harder to determine, which is why authenticity is one of the most important trends going forward.
Tech organisations with power and influence, such as Google, Apple and WeChat, should focus on solving the problem rather than using the technology to demonstrate their capabilities.