
Can Facebook Trust Publishers to Say How Trustworthy They Are?

Photo: David Paul Morris/Bloomberg via Getty Images

Facebook really wants you to trust that the news stories you see in your News Feed are true. To that end, it announced a new initiative today: what it calls “trust indicators.”

Trust indicators aren’t a concept that originated at Facebook; they’re a system devised by the Trust Project at Santa Clara University “to make it easy for the public to identify the quality of news.” The Trust Project essentially wants to tag publishers for reliability, so that “digital platforms, such as Google, Facebook, and Bing, will be able to use machine-readable signals from the Trust Indicators to surface quality news to their users.” The test is rolling out with a “select group of publishers” at the moment, “with plans to expand more broadly over the coming months.”
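For the curious, “machine-readable signals” in this context means structured data embedded in an article’s markup, which the Trust Project distributes as schema.org-style metadata. The Python sketch below shows roughly how a platform could pull publisher-supplied trust fields out of a page; the specific field names and the sample values are illustrative assumptions, not the project’s actual vocabulary. The point it illustrates stands either way: the platform sees only what the publisher chooses to claim.

```python
# A minimal sketch of reading "machine-readable" trust signals from a page.
# Field names below are illustrative assumptions (schema.org defines similar
# properties, e.g. correctionsPolicy on NewsMediaOrganization), not the
# Trust Project's actual vocabulary.
import json
from html.parser import HTMLParser


class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""

    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.chunks = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script" and self.in_jsonld:
            self.blocks.append("".join(self.chunks))
            self.chunks = []
            self.in_jsonld = False

    def handle_data(self, data):
        if self.in_jsonld:
            self.chunks.append(data)


def trust_signals(page_html):
    """Return whatever trust-related fields a publisher embedded in its markup."""
    parser = JSONLDExtractor()
    parser.feed(page_html)
    signals = {}
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except ValueError:
            continue  # malformed markup; a platform would simply skip it
        for key in ("correctionsPolicy", "ethicsPolicy", "ownershipFundingInfo"):
            if key in data:
                signals[key] = data[key]
    return signals


# A publisher can claim anything it likes in this self-reported markup.
sample = """
<script type="application/ld+json">
{"@type": "NewsMediaOrganization",
 "correctionsPolicy": "https://example-news-site.test/corrections",
 "ownershipFundingInfo": "Owned by true patriots"}
</script>
"""
print(trust_signals(sample))
```

Nothing in that pipeline checks whether the corrections policy exists or the ownership claim is true; the markup is simply read and believed.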

But how fast can Facebook expand the program? Per a screenshot of a demo page Facebook showed off in the blog post announcing the project, some of these signals will be self-reported.

If I wanted to make a little money on the side, I could register realglobal.news for $10, throw up a WordPress blog, and write up a fun story about Planned Parenthood employees using aborted baby parts as Halloween decorations. I could then make a Real Global News Facebook page and sign up for Instant Articles in about 90 seconds, which also gives me access to the Brand and Credibility tab. Left to my own devices, if Facebook ever opened up its trust indicators to all publishers, I could conceivably rate my fact-checking as impeccable, my correction policy as top-notch, my ownership structure as being made up of “true patriots,” and my masthead as consisting of Abraham Lincoln, Sean Hannity, and Jesus Christ.

In other words, if Facebook should ever open the floodgates to all publishers, you’re going to need real humans to double-check what those publishers self-report.

To be clear: I think the Trust Project out of Santa Clara is being run in good faith, and Facebook’s attempt to use it is admirable. There’s utility in trying to quantify how trustworthy a news source is and turning that into something Facebook’s and Google’s algorithms can understand, and “trust indicators” will run off more than just what publishers self-report. But getting “trust indicators” to scale, and to have an actual, noticeable effect on the fake-news ecosystem, will require vetting thousands of publishers, which means actual living people getting involved in the process. Santa Clara University is unlikely to be able to provide that manpower, which means the tech companies themselves would need to step in. That cuts against the ethos at every major tech company, all of which have become incredibly profitable while employing relatively few people, even counting third-party contractors.

The fake-news problem on Facebook is still rooted in how easy it is to start a fake news website and then use Facebook’s incredible scale, and the tendency of its users to share inflammatory headlines, to reach many, many people (and then collect cheap ad revenue from the resulting clicks). Facebook has made efforts to combat this, including standing up a third-party team of “fact-checkers,” displaying other articles on the same topic to “provide context,” and, in one experiment, bumping comments containing the word “fake” to the top of a story’s comment section. The company is willing to expend engineering resources and contort itself in numerous ways to fight fake news. But until some vast leap forward in machine learning occurs, actual breathing humans being paid actual wages will be required if Facebook wants to keep fabricated stories from spreading across its platform.

Displaying more information about, say, the Washington Post or Vox or Fox News may help Facebook users understand a bit more about who is producing their news. But those sites aren’t the main bad actors spreading untrustworthy news; the issue is the thousands of low-quality, high-volume, churn-and-burn news sites like defense-usa.club or theviralpatriots.com. Any objective reader who gave those sites even a cursory glance would mark them as untrustworthy. If Facebook opens up its “trust indicators” to any and all comers, it’ll need someone to start looking.
