Confronting the Threat of Deepfake Disinformation – in 2020 and Beyond
For as long as recorded media has existed – whether print, photographs, or videos – people have figured out ways to manipulate content to change its meaning. Sometimes changes are made for reasons that are perfectly benign: an artist modifying photos of public figures to make a satirical political point, for example, or a movie studio adding special effects to a Hollywood blockbuster. But modifications can also be made for more sinister reasons, and throughout history we have seen scammers, fraudsters and propagandists falsify media to mislead and deceive.
Today we are witnessing the emergence of new technologies that will make it easier than ever for malign actors in the United States and around the world to manipulate media to defame and harass individuals – and to target entire societies with misinformation.
New developments in machine learning, an artificial intelligence technique in which powerful computers train themselves by combing through massive amounts of data, have made it possible for computers to generate “deepfakes” – completely fake videos, pictures or audio recordings that convincingly portray things that never happened and are all but indistinguishable from the real thing to anyone watching or listening. Research and investment in deepfake technology are driving rapid improvements, and it is already easy for a savvy programmer to create a convincing fake video using open-source software and a desktop computer.
Many potential applications of artificial intelligence raise important policy and philosophical questions, but the rise of deepfakes is particularly concerning because of the visceral nature of photographs and videos. “Seeing is believing,” as the adage goes, and forged images and videos can leave a lasting impression on us, even if we later come to rationally understand that what we saw was not real.
This poses serious concerns for our upcoming 2020 elections and beyond. Imagine the risk if a foreign country creates a deepfake video showing a political candidate accepting a bribe, or if a rogue group generates deepfake audio of a private conversation between world leaders to sow unrest. A believable, well-timed deepfake of a presidential candidate created by a malign actor, foreign or domestic, could disrupt the race, change the trajectory of a candidacy – and potentially even alter the course of history.
As deepfakes become increasingly realistic, they have the potential to undermine our trust in recorded media as an accurate representation of reality. At the same time, widespread use of social media platforms means that information can be shared faster than ever before, and it is easy for sensational lies to go viral before the truth has a chance to catch up. And experts predict that it could soon become impossible for human viewers to tell whether a video or image has been captured by a camera or generated by an algorithm. When our society loses the ability to discern fact from fiction, the very integrity of our democracy and civil society will be at stake.
Of course, these hypotheticals are worst-case scenarios, and deepfake technology can also be used to create art, satirical images and movie effects, among other productive uses. But we must take steps today to understand the risks deepfakes pose to our democratic discourse so we can limit their potential disruption. That’s why the House Intelligence Committee recently held a hearing with academic and policy experts to discuss deepfakes, and why I remain committed to ensuring that we’re actively preparing to address them. If we wait to act until the first deepfake attack of the 2020 election, it will be far too late.
Ultimately, our democracy is only as robust as its voters are informed. That’s why internet platforms, media organizations and all of us – sharers and consumers of information – must do everything we can to ensure that facts prevail over falsehoods.