Sophisticated AI Will Make The Deepfake Problem Much, Much Worse


There were a couple of news stories last week that seemed to offer a little hope when it comes to dealing with deepfakes: researchers say they have developed tools that can detect more than 90% of fake videos.

That’s reassuring, but only for now. It’s only a matter of time before more sophisticated AIs learn to make more sophisticated deepfakes, and then we’re back where we started. And whilst a 90% detection rate might sound impressive, it only takes one fake slipping through for a catastrophic hack to occur.

The Arms Race

AI is already affecting cybersecurity, and the last few years have seen the development of advanced algorithms that can automatically scan for vulnerabilities. AI is also the foundational technology behind deepfakes, and understanding how it is used to create them makes it clear why new detection tools will only work for a short period.

Deepfakes are created using a GAN, or “generative adversarial network.” This kind of system uses two neural networks operating in tandem, or rather, operating against each other. One network, the generator, creates a fake video from publicly available images and footage; the other, the discriminator, tries to spot whether it is fake. The generator then uses that feedback to improve the video, and the cycle repeats until the fake can fool the discriminator.
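To make that loop concrete, here is a deliberately toy sketch of a GAN in PyTorch. It is not a deepfake pipeline, just a tiny generator and discriminator learning a one-dimensional distribution, but the take-turns structure is the same one real systems use at vastly larger scale.

```python
# Toy GAN sketch: a generator learns to mimic a simple 1-D "real" distribution
# while a discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))     # "deepfakes": generated samples

    # Discriminator step: learn to label real as 1 and generated as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the (just-updated) discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 3.0.
print("Generated sample mean:", generator(torch.randn(1000, 8)).mean().item())
```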

Sound familiar? It should. This is essentially the same dynamic as virus detection, and just like the arms race that has played out between malware authors and antivirus vendors over the past two decades, we are now entering another arms race, this time over deepfakes.

Once you understand how deepfakes are made, you can see why these new detection systems will only have short-lived success: their reports on which videos are fake, and how they were caught, can simply be fed back into the deepfake-generating AIs, which then improve their techniques.
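In code, that feedback loop is depressingly simple. The sketch below is hypothetical; the `public_detector` here is just a stand-in for any published detection model. Freeze the detector, add its verdict to the generator’s training loss, and the generator learns to evade that detector specifically.

```python
# Hypothetical sketch: fine-tuning a generator against a frozen, published detector.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
public_detector = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

# The detector is treated as a fixed oracle: its weights are never updated here.
for p in public_detector.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    fake = generator(torch.randn(64, 8))
    # Penalize the generator whenever the frozen detector flags its output as fake.
    evade_loss = loss_fn(public_detector(fake), torch.ones(64, 1))
    opt.zero_grad()
    evade_loss.backward()
    opt.step()
```

The point is that publishing a detector, or even just its verdicts, hands the other side a training signal.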

Why 90% Isn’t Good Enough

There is another problem. At the moment, the most successful deepfake-detection software we’ve seen claims an accuracy rate of 97%. That sounds pretty good, but pause a moment and think about where these deepfakes are shared. Around 350 million photos are posted to Facebook every day, and even at 97% accuracy, a 3% miss rate at that volume means a huge number of fakes slipping through undetected.
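The back-of-the-envelope math is easy to run. What fraction of uploads are actually deepfakes is anyone’s guess, so the share used below is a purely hypothetical placeholder, but even a small one leaves more than a thousand undetected fakes every single day:

```python
# Back-of-the-envelope estimate, not real platform data.
uploads_per_day = 350_000_000   # rough figure for daily Facebook photo uploads
detection_rate = 0.97           # accuracy claimed by the best current detectors
fake_share = 0.0001             # hypothetical: 1 in 10,000 uploads is a deepfake

fakes_per_day = uploads_per_day * fake_share
missed_per_day = fakes_per_day * (1 - detection_rate)

print(f"Assumed deepfakes uploaded per day: {fakes_per_day:,.0f}")
print(f"Missed by a 97%-accurate detector:  {missed_per_day:,.0f}")
# With these assumptions: 35,000 fakes a day, of which about 1,050 go undetected.
```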

For this reason, we need to radically re-think the way that we deal with deepfakes. Automatic detection systems are not going to cut it, and it’s only a matter of time before deepfakes become so good that they are completely undetectable.

This will have huge consequences, because deepfakes are not only a threat to democracy and free speech but can also undermine cybersecurity, and they are already a major source of concern for fintech, ecommerce, and other online systems. Whilst quality cybersecurity tools don’t have to be expensive, continuing to pour money into a limited solution like deepfake-detection algorithms might be a sucker’s bet.

The Future

Instead of looking for technical solutions to the deepfake problem, then, perhaps we should be looking at other ways to fix it. Some have suggested some form of legal watermarking system, in which content produced by AIs is marked and can be verified.

With this approach, the deepfake problem would be addressed not by hunting for deepfakes themselves, but by verifying the authenticity of videos that are not deepfakes. It would certainly be cumbersome for news crews to have to submit their footage for this kind of verification, but ultimately it might be the only workable answer.
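What might that verification look like in practice? Here is a very rough sketch, not any existing standard: the publisher signs a hash of its footage, and anyone holding the matching public key can later check that the file hasn’t been touched. Key management, re-encoding, and everything else that makes this hard in the real world is left out, and it assumes the third-party `cryptography` package.

```python
# Minimal content-provenance sketch: sign a hash of the footage, verify it later.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def fingerprint(video_bytes: bytes) -> bytes:
    """Hash the raw footage so the signature covers its exact contents."""
    return hashlib.sha256(video_bytes).digest()

# The news crew's signing key (in practice this would live in secure hardware).
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

footage = b"...raw video bytes..."                  # placeholder for a real file
signature = signing_key.sign(fingerprint(footage))

# A platform receiving the footage plus signature can check its provenance:
try:
    verify_key.verify(signature, fingerprint(footage))
    print("Footage matches the publisher's signature.")
except InvalidSignature:
    print("Footage has been altered or was never signed.")
```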

While this kind of system could potentially provide a solution, it is a long way off. At the moment, even the largest social media platforms, like Facebook, have not clearly defined their policies on such videos. Without an outright ban, which some have argued would itself be unwise, even identified deepfakes could continue to circulate on these platforms.

At the broadest level, it’s not clear that even a bullet-proof way to detect deepfakes would solve the problem. The business model of the internet is based on the monetization of attention, and deepfakes certainly get attention. Even when people know a video is faked, they still watch it: for fun, or to see how convincing it is. 

And that’s the biggest problem here. There have been attempts to control the type of content on the internet before, and they have all failed, because ultimately it is the market that decides what people have access to. It’s not clear that any government or industry body has the necessary reach and power to oversee all user-generated content.

The Bottom Line

All this is certainly depressing, but at least we are having the conversation. In fact, the controversy over deepfakes might act as a useful test case for the broader challenges that AI will bring in the next decade. 

At the moment, awareness of these challenges appears to be limited to a small group of people: cybersecurity practitioners. The last year has seen plenty of articles on AI’s role in cybersecurity and the way the technology is transforming the industry. Deepfakes, as the most visible product of current AI technology, might draw more people into that debate.

At the end of the day, we are all going to have to develop new ways of consuming media in the coming age of AI. Whilst detection tools and legal verification systems might help us spot fake videos, we are also going to have to get better at spotting fake news ourselves, and at resisting the temptation to share it. So while AI is certainly scary, perhaps we should stop blaming the technology and ask why we are so attracted to deepfakes in the first place.

About the Author

Gary Stevens is a front-end developer. He’s a full-time blockchain geek, a volunteer for the Ethereum Foundation, and an active GitHub contributor.

