Will A.I. Solve Facebook’s Fake News Problem?

Facebook’s troubles with abuse, including unwanted content ranging from nudity to hate speech to serious violence, do not lend themselves to an easy solution. Lately, however, the most damaging and sensitive issue for the tech giant has been fake news: false information and hoaxes.

Facebook’s strategy for dealing with fake news

Facebook has hired many human moderators, now more than 7,500, to prevent the spread of fake news on its platform. Further, Zuckerberg noted in a recent interview that the company could set up an independent body to evaluate which content is genuine and which is not. There are also reports that Facebook might use artificial intelligence to weed out fake news.

In an interview with The New York Times, Zuckerberg said that the company rolled out new AI tools last year during the Alabama election to identify false news and fake accounts. In his testimony before the U.S. Congress, he also pointed to the use of AI. Similarly, the tech giant said that it had deployed machine learning to pick out suspicious behavior without assessing the actual content. Machine learning and artificial intelligence are certainly important tools when it comes to policing the social media platform.

Promises that Zuckerberg made to the Senate    

Senator John Cornyn told Zuckerberg that Congress had been told in the past that platforms such as Twitter, Facebook, and Instagram are neutral platforms. He asked the Facebook CEO whether he agreed that the social media giant and other social networking sites are, in fact, not neutral.

Zuckerberg replied by affirming Facebook’s responsibility for content on its services, and said that Facebook would use a number of tools to identify more kinds of bad content, such as revenge porn, fake news, hate speech, obscenity, and other content deemed controversial. He said that by the end of this year, Facebook would hire more than 20,000 content moderators to identify and root out most of the illegal and harmful content.

Recently, at Facebook’s annual developer conference, Zuckerberg stood before more than 5,000 developers and talked about innovation without thoughtlessness. He spoke of responsibility and idealism, and of the need to build technology that helps bring people closer together, though he noted that this is not going to happen on its own. He said that Facebook’s responsibility is to continue building while keeping people safe.

The question remains whether human eyes and artificial intelligence combined will be adequate to meet the challenge of fake news.
