
Fake News: Could the Solution be the Poison? Why We Need AI Bots to Detect Fakes Faster Than Humans


Presented By: Inbenta


John Forrester, CMO of Inbenta

In a connected world, where a fake news publisher can make thousands of dollars a month, misinformation is a growing problem that will continue to proliferate on our digital screens. Is it our instant ability, or our seemingly insatiable desire, to share our insights globally with the swipe of a finger, before we've had time to fact-check and verify, that fuels the spread? With teenagers in Macedonia finding a way to make easy money from fake news, the issue is compounded by the basic human impulse to be the first to click the share button.

Holding Influencers Accountable

Initially, Mark Zuckerberg, chairman and chief executive of Facebook, dismissed the notion that fake news on Facebook had influenced the election as "a pretty crazy idea." The result was a mounting public outcry for these sites to recognize their responsibility for the information they distribute. In response, Google and Facebook are finally implementing a number of features designed to ferret out misinformation, including flagging stories as unverified and attaching "Fact Check" tags to snippets of articles. Google is also working with established fact-checking organizations, like PolitiFact and Snopes, and opening the system to verified news publishers, like The Washington Post and The New York Times.

But for many business executives, even these actions are not enough. Investors are now putting pressure on these platforms, seeking to hold them financially accountable for the stories that appear. When businesses began to boycott Google over ads placed alongside discriminatory content on YouTube, Pivotal Research Group downgraded shares of Alphabet, Google's holding company, from "Buy" to "Hold." Pivotal argued that Google wasn't taking the problem seriously enough, accusing it of "attempting to minimize the problem rather than eliminating it, which is the standard we think that many large brand advertisers expect." As a result, analysts at Nomura Instinet have predicted that YouTube could lose up to $750 million in advertising revenue as advertisers continue to pull their campaigns.

The High Cost of Fake News

The ramifications of the unchecked proliferation of fake news will likely extend to both the B2C and the B2B world of investor relations and brand management. Contrarian short-sellers and competitors could hire fake news publishers to generate fake articles about a company, for the single purpose of damaging the brand and affecting stock performance (at least temporarily).

These adversarial fake news generation sites could generate a significant volume of fake news articles, syndicate them across a number of channels, and promote them heavily on social media. Companies, overwhelmed by the manufactured crisis, would face a significant challenge in counteracting the "bad press," requiring a real-time emergency crisis communication plan. For example, building on the verified news video of a passenger being mishandled on a recent United Airlines flight, a fake news bot could easily create a new barrage of fabricated stories about abuses of United Airlines passengers, which would spread widely on social media and be promoted on fringe web publications.

Nomura predicts that the decline in revenue at Google will extend to other content platforms, including Twitter, Snapchat, and Instagram, as businesses seek protection from, and avoid any association with, damaging content.

AIs Can Out-Spin, Out-Shock and Out-Produce Humans

With money as the driving force behind the creation of fake news, many publishers are already trying to out-produce the detection algorithms, asking, "How can I create more sensational fake news?" Using Artificial Intelligence (AI) bots, fake news publishers can spin out content faster than reputable sites can publish and faster than humans can detect it. Testifying before the Senate Intelligence Committee, former FBI agent Clint Watts described "Russian armies of Twitter bots." Worse, these same bots could be designed and calibrated to trick the detection algorithms themselves.

By its very design, AI is the science of building intelligent machines that work and respond like humans. The Associated Press, The Big Ten Network, and The Washington Post are just a few reputable journalistic organizations that are already using bots to generate news stories.

Outsmarting AIs

But the antidote to the problem may lie in the poison. Currently, developers are creating bots that can help authenticate the veracity of content. Across the globe, a growing number of computer scientists are looking for a way to use AI to combat misinformation online. 

Bots can also assimilate, decipher, and process historical news, predictable behaviors, and events far more effectively than human reasoning. Using technologies like natural language processing and machine learning, these bots can check news articles against certified facts from credible databases faster than humans ever could.
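To make the idea concrete, here is a minimal sketch of one way a bot might score a headline against a store of certified facts. It is an illustration only, not any vendor's implementation: the `CERTIFIED_FACTS` entries and the 0.5 threshold are hypothetical placeholders, and a real system would use trained language models and a curated fact database rather than simple token overlap.

```python
# Minimal sketch: score a headline's support among certified facts
# using Jaccard (token-set) similarity. CERTIFIED_FACTS and the 0.5
# threshold are hypothetical; real systems would use NLP models.

CERTIFIED_FACTS = [
    "the senate confirmed neil gorsuch to the supreme court",
    "advertisers paused youtube campaigns over ad placement concerns",
]

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def support_score(claim, facts=CERTIFIED_FACTS):
    """Return the best Jaccard similarity between the claim and any fact."""
    claim_tokens = tokenize(claim)
    best = 0.0
    for fact in facts:
        fact_tokens = tokenize(fact)
        union = len(claim_tokens | fact_tokens)
        overlap = len(claim_tokens & fact_tokens)
        best = max(best, overlap / union if union else 0.0)
    return best

def flag_if_unsupported(claim, threshold=0.5):
    """Label a claim 'unverified' when no certified fact supports it."""
    return "supported" if support_score(claim) >= threshold else "unverified"
```

A headline closely matching a certified fact scores near 1.0 and passes, while a fabricated claim with little overlap falls below the threshold and is marked "unverified" for human review.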

For example, consider the recent fake news story claiming that all Supreme Court Justices opposed the Gorsuch nomination. An AI bot could parse the conclusion, or headline, of the article and compare it against known, published facts about the behavior of Supreme Court justices. This would have quickly flagged the article as fake news and effectively blocked all similar, syndicated articles. The article was eventually flagged and tagged manually as fake news, but not quickly enough to prevent it from appearing as a credible story on Google News.
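The "block all similar, syndicated articles" step can also be sketched simply. Assuming one story has already been flagged, the snippet below uses Python's standard-library `difflib.SequenceMatcher` to catch near-duplicate headlines; the 0.8 similarity threshold is a hypothetical choice, and production systems would compare full article bodies, not just headlines.

```python
import difflib

# Sketch: once one story is flagged as fake, block near-duplicate
# syndicated copies of it. The 0.8 threshold is a hypothetical choice.

flagged_headlines = [
    "All Supreme Court Justices Opposed the Gorsuch Nomination",
]

def is_syndicated_copy(headline, flagged=flagged_headlines, threshold=0.8):
    """Return True when the headline closely matches a flagged story."""
    for known in flagged:
        ratio = difflib.SequenceMatcher(
            None, headline.lower(), known.lower()
        ).ratio()
        if ratio >= threshold:
            return True
    return False
```

Because the comparison is case-insensitive, lightly reworded syndicated copies of a flagged story score high and are caught, while unrelated legitimate headlines pass through.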

AIs To Restore Faith In the Media

As a proactive solution, brands may need to invest in their own verification and action bots, so that they can quickly counter inaccuracies in social posts and flag fake news articles as false.

The phenomenon of fake news is fueling a general distrust of the media. More serious still is the impact misinformation is having on our democracy. An analysis of news in the three months before the election found that the top 20 fake news stories generated more engagement on Facebook than the top 20 real news stories. Technologies like AI and bots can help restore faith in the media and curb the rumors and unsubstantiated claims promoted on the internet and our social media channels. Publishers and businesses alike will have to rely on technology to help manage the fake news issue before the problem gets worse.

Unfortunately, it will likely be the fake news publishers, driven by financial, political, and ideological motives, who leverage bot technology first.