A new working paper from Stanford and NYU researchers suggests that Facebook’s efforts to keep misinformation and fake news out of its users’ feeds are working. The overall magnitude of the fake news problem may have lessened – at least temporarily – and Facebook’s efforts after the 2016 election to limit the diffusion of misinformation may have had a meaningful impact, the paper says.
Facebook more successful than Twitter at preventing misinformation
The researchers, who compared the spread of stories from legitimate sites with the spread of stories from fake sites, found that interactions with legitimate sites have remained comparatively stable over time. This suggests that the decline is specific to fake news sites, which are the ones being targeted. Meanwhile, on Twitter, engagement with fake news sites has continued to increase.
The researchers wrote that from the beginning of 2015 up to the 2016 election, fake news interactions rose steadily on both platforms, but following the election, Facebook engagements fell sharply, by almost 60%, while fake news shares on Twitter continued to increase.
For the study, the authors assembled a list of around 570 sites that have been identified as publishers of false stories. Alongside these fake news sites, they measured engagements for several types of publishers, including niche business and culture sites, small mainstream sites, and big mainstream sites.
They wrote: “Both platforms show a modest upward trend for major news and small news sites, and a modest downward trend for business and culture sites.” As for the ratio of Facebook engagements to Twitter shares, the ratios have been comparatively stable for small news, major news, and business and culture sites, but for the fake news sites the ratio has declined sharply, from over “45:1 during the election to around 15:1 two years later.”
Quantity of fake news interactions still large on both Facebook and Twitter
The working paper found that the absolute quantity of fake news interactions on both social networking sites is still significant. Facebook in particular, simply because it is so much bigger, has played the larger role in the diffusion of misinformation. The study stated that Facebook engagements declined from roughly 200 million per month at the end of 2016 to around 70 million per month at the end of the study’s sample period. This is an encouraging result, and it shows that the tech giant – contrary to what many think – is actually taking steps to limit misinformation and fake news.
Since the end of 2016, fake news shares on Twitter have been in the 4–6 million per month range, compared with about 20 million per month for the major news sites.
In sum, Facebook’s efforts to reduce fake news shares appear to be showing results, but a long road remains ahead.