Do any human beings work at Facebook? I ask because its latest attempt at combating misinformation and fake news on its platform involved highlighting and promoting comments that called news stories “fake.” According to the BBC, a small subset of the site’s users were served top comments containing the word “fake” on various news articles, both misleading and legitimate. The idea here, one imagines, is to spotlight debunking or skeptical comments — but as anyone who’s listened to the U.S. president describe news he disagrees with as “fake” might have guessed, articles from “the BBC, the Economist, the New York Times and the Guardian” all found themselves with top comments that called the news sources “fake.”
The intention behind this test on a small subset of the site’s users is clear: promoting skeptical comments that called out news as fake or misleading, and placing them adjacent to the headline, might in some cases encourage skepticism and judiciousness. The problem is that in an increasingly polarized media universe, where facts — and more significantly, the relevance of certain facts — are up for interpretation, a tool that surfaces comments branding everything as fake doesn’t really help matters much.
Facebook told the BBC in a statement, “We’re always working on ways to curb the spread of misinformation on our platform, and sometimes run tests to find new ways to do this. This was a small test which has now concluded.” (A note about the word small: one percent of Facebook’s user base is 20 million people.) The company did not specify what, if anything, it had concluded from the test results.