Deepfake: Hotter, Wetter and Faker

Photo by Charly SHELTON
The SAG panelists watch the deepfake video made by director Jordan Peele imitating President Barack Obama.

By Charly SHELTON

CV Weekly continues its investigation into the expansion of deepfake tech.

“Deepfake” videos are starting to surface in which famous people appear to do or say anything the video’s creator wishes. These deepfake videos are real threats – not only to the person who is being faked but also to the world.

Jordan Peele released a deepfake video two years ago as an educational tool to demonstrate the power and danger of the technology: Peele imitated President Obama’s voice while a video of the President was lip-synced to match Peele’s statements. This technology is real and now readily available, and its output can reach half the world’s population in a matter of minutes through sites like YouTube and Reddit. Last week, we looked at what deepfakes are in the world today. But the problem lies not in what they are now; it lies in how they will evolve in the next three months.

“[The Obama video was made with now] two-year-old technology and these things are developing very, very rapidly. We’ve seen significant advances in technology to create this type of fake about every three months,” said Hany Farid, professor of digital forensics and human perception at UC Berkeley, speaking at a SAG-AFTRA Summit held to address the deepfake problem.

Three months for a significant advance. That is astoundingly fast. Once a new type of deepfake technology is identified, Farid said, it takes about a year to take a detection tool from conception to a working defense against that type of video. But deepfake tech evolves four times faster, so by the time a detection tool is ready to go to work, that type of video is old news and a far more advanced version is already on the market.

“There’s another threat here. It used to be that the ability to create [these types of visual effects] was only [through] Hollywood studios; that’s where the technology was. Now you can go to GitHub, download some code and [run it] on your computer, and the average person can create this type of fake,” Farid said. “There are also, now, websites that are being propped up where you can send them requests for the creation of specific videos and pay $20 and they’ll create fakes for you. It’s the democratization of access to technology in some ways that’s real dangerous.”

From now on, this type of technology will have the power to sway public opinion, pinning a digital, falsely attributed scarlet letter on the chests of celebrities and political figures. The power to shape the hearts and minds of the public, it seems, is now affordable to purchase and user-friendly enough to be accessible to the layperson.

Currently the truth is subject to media approval, and cries of “fake news” are commonly heard. The courts are fighting to stem the tide of requests to teach alternative facts in schools. As foretold in many science fiction movies, these real-world events are the prelude to a period in which what looks like a human doing or saying something on video is not always genuine, and it becomes difficult – if not impossible – to distinguish a real human from software.

“It really is a new technological means to assault the truth which is so corrosive to democratic governance everywhere. It also could be incredibly incendiary if fake images and audio recordings are distributed before an election or used to create ethnic tension. The potential for mischief is going up every day and faster than our capacity to address it,” said Rep. Adam Schiff, who spoke with CV Weekly at the SAG Summit. “I think this new technology and the way we communicate now through social media is amplifying some of our worst instincts. It seems to be a medium so conducive to falsehood, traveling at light speed, and fear and anger mushrooming overnight. It’s a brave new world. I don’t think Orwell could have imagined something like this.”

But all hope is not lost. This kind of digital scarlet letter only works if people buy into it. If negativity breeds negativity, the solution is to stop breeding it and let it die out. Posting something negative online from behind a wall of anonymity is a personal choice; that choice can be amended easily if each person takes responsibility for fighting this threat at the level of his or her own experience.

“I think self-policing is important. If you see something on my [social media] stream, for instance, that you are uncomfortable with, report it,” said actress and activist Alyssa Milano at the SAG Summit. “Don’t ignore, don’t just scroll past it. If you feel that negative comments are controlling the narrative, post something positive or ‘like’ the positive things. Really just try to take back control of your own user experience.”