Spotting Fake News With AI


In this special guest feature, Chris Nicholson, CEO of Skymind, discusses ways AI can identify fake news. Specifically, deep learning has the ability to identify fake news based on all sorts of tells. He also provides references to initiatives like the Fake News Challenge and Fakebox that show how AI can be applied with fairly high accuracy to identify fake news. Chris co-founded Skymind and Deeplearning4j, the most popular deep-learning framework on the JVM. He previously led communications and recruiting for the Sequoia-backed Y Combinator startup FutureAdvisor, which was acquired by BlackRock in 2016. Chris spent a decade reporting on tech and finance for The New York Times, Businessweek and Bloomberg News, among others. He attended Deep Springs College and holds a degree in economics.

Democracies live or die based on the quality of information that voters, and the leaders they elect, have access to. Bad information leads to bad decisions, and good information leads to better ones.

So fake news poses a mortal threat to democracy. By manipulating the information that voters consume, the outcome of an election can be rigged, much as Russia influenced the U.S. presidential election in 2016 by hacking the emails of the DNC and spreading falsehoods on social media that supported the candidacy of Donald Trump and disparaged his opponent.

But what is fake news? The old chestnut that the Supreme Court applied to obscenity — “I know it when I see it” — is not precise enough if we want to use AI to identify malicious propaganda.

Fake news is more than incorrect information. It is often a story that has been fabricated in order to manipulate public opinion and to spread virally via online media. While some fake news might be considered a prank, the false stories of most concern are those with a more serious, and nefarious, purpose.

These are stories that are provably untrue and often wildly popular because they play to their audience’s strongest emotions and reinforce partisan preconceptions; that is, they encourage wishful thinking and confirm biases.

There are various indicators that a piece of text may be fake news. Heavy use of ALL CAPS is one. If the headline doesn’t match the body of the story, or if the story links to sites known to spread propaganda, those features could be fed into a machine-learning algorithm that predicts whether a story is fake news or within the realm of the plausible and sincere.
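
As a rough illustration, here is a minimal Python sketch of how a few of those signals might be turned into numeric features for such an algorithm. The domain list and feature names are hypothetical; a production system would draw on curated blocklists and far richer text representations.

```python
import re

# Hypothetical list of domains known to push propaganda; in practice this
# would come from a curated, regularly updated blocklist.
KNOWN_PROPAGANDA_DOMAINS = {"example-propaganda.net", "totally-real-news.info"}

def extract_features(headline: str, body: str, linked_domains: list[str]) -> dict:
    """Turn a story into simple numeric signals a classifier can consume."""
    words = re.findall(r"[A-Za-z']+", body)
    caps_words = [w for w in words if len(w) > 2 and w.isupper()]

    headline_terms = set(re.findall(r"[a-z']+", headline.lower()))
    body_terms = set(w.lower() for w in words)
    # Rough proxy for "headline doesn't match the body": how little of the
    # headline's vocabulary actually appears in the article text.
    overlap = len(headline_terms & body_terms) / max(len(headline_terms), 1)

    return {
        "caps_ratio": len(caps_words) / max(len(words), 1),
        "headline_body_overlap": overlap,
        "links_to_known_propaganda": int(
            any(d in KNOWN_PROPAGANDA_DOMAINS for d in linked_domains)
        ),
    }
```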

AI creates filters that produce decisions about data. One type of filter is a classifier, which applies a label to each instance of data (here, each news story), categorizing it as “real” or “fake”.

There are several steps to take when building an AI solution such as a fake news detector. You need to gather the data (a collection of real and fake news stories), have human experts label each story as real or fake to the best of their ability, choose an algorithm (such as a deep neural network) and train it on the data until it can distinguish between the two, and finally deploy that model to the real world.
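
To make those steps concrete, the following sketch walks through a toy version of that pipeline in Python with scikit-learn. The stories and labels are invented placeholders, and a simple TF-IDF plus logistic regression model stands in for the deep neural network a real detector would likely use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-ins for a labeled corpus: story texts paired with human-assigned
# labels, where 1 = fake and 0 = real. A real data set would contain
# thousands of expert-labeled stories.
texts = [
    "SHOCKING: celebrity endorses miracle cure doctors hate",
    "City council approves budget for new public library",
    "Secret cabal controls the weather, insiders reveal",
    "Local team wins regional championship after overtime",
]
labels = [1, 0, 1, 0]

# Hold out part of the data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0
)

# Bag-of-words features plus a linear classifier; a deep neural network
# trained on word embeddings could be swapped in at this step.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```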

The ideal use of an AI algorithm isn’t to have it work in a vacuum, but to help humans make decisions about the stories in front of them. Human fact checkers could rely on those algorithms for decision support, using them to rank stories and build a data set of fake news, which in turn could train future algorithms to judge those stories even more intelligently.
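
Continuing the hypothetical pipeline above, a fitted model’s scores could drive that kind of decision support by sorting incoming stories so fact checkers see the most suspicious ones first. The stories below are invented, and the sketch assumes the model’s second probability column corresponds to the “fake” label.

```python
# Continuing from the training sketch above: `model` is the fitted pipeline,
# and predict_proba's column 1 is the predicted probability that a story is
# fake (an assumption of this sketch).
incoming_stories = [
    "Senate passes infrastructure bill after lengthy debate",
    "EXPOSED: vaccines contain mind-control microchips",
    "Researchers publish study on regional rainfall trends",
]

fake_probabilities = model.predict_proba(incoming_stories)[:, 1]

# Rank stories so human fact checkers review the most suspicious first.
review_queue = sorted(
    zip(fake_probabilities, incoming_stories), key=lambda pair: pair[0], reverse=True
)
for score, story in review_queue:
    print(f"{score:.2f}  {story}")
```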

Those algorithms might then be used to help social media networks decide which stories should be promoted, and which left well enough alone. Of course, if the major distributors of information — Google, social media and traditional media — don’t make serious efforts to staunch the flow of fake news, all the algorithms in the world won’t help.

AI can be applied not just at the level of the story, but to the accounts and sites that create and distribute those stories. Hackers recently created a program to identify bots on Twitter, for example.

The 2016 elections showed that technology can be used to spread fake news and influence real events. Recent revelations about how Cambridge Analytica used stolen Facebook data to target millions of U.S. citizens make it clear that we need better tools to protect our country and system of government against fake news.

Initiatives like the Fake News Challenge and Fakebox show that AI can be applied with fairly high accuracy to identify fake news. Now, the major purveyors of information need to ensure that solutions are put in place to prevent foreign adversaries from manipulating U.S. elections in the future.

