Anyone who has spent a significant amount of time trapped in arguments on Facebook has probably surmised that the News Feed is, well, bad — for productivity, for mental health, and probably for society at large. By ranking posts according to engagement, it amplifies anger, fear, and long-held biases in users, in a way that almost seems to distort reality.
“But,” you say, “how can we really know this? After all, everyone’s Facebook feed is different, and every instance of Facebook-fueled rage is anecdotal.” Fair point. Luckily, a pair of researchers at the University of Warwick have conducted a compelling study, examining more than 3,000 instances of anti-refugee violence in Germany alongside the Facebook usage of the communities in which they occurred. The conclusion is not encouraging, to say the least.
From the New York Times:
One thing stuck out. Towns where Facebook use was higher than average, like Altena, reliably experienced more attacks on refugees. That held true in virtually any sort of community — big city or small town; affluent or struggling; liberal haven or far-right stronghold — suggesting that the link applies universally.
Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent.
(It should be noted that an updated draft of the paper lowers the calculated increase in attacks to 35 percent.)
In addition, the increase in violence did not correlate with higher-than-average general web usage. It was Facebook-specific, regardless of the qualities of the community in which violence occurred. In a statement to the Times, Facebook said, “We’re working on it” (I’m paraphrasing).
The other wrinkle — the one that makes Facebook’s job of moderation pretty much impossible — is this: “Experts believe that much of the link to violence doesn’t come through overt hate speech, but rather through subtler and more pervasive ways that the platform distorts users’ picture of reality and social norms.”
In other words, the Facebook posts that over time achieve a critical mass that results in violence are implicit, not explicit. They are not users saying, “I hate refugees,” or using racial slurs, but users making comments about things like “slowing massive demographic change.”
The conclusion to draw from this is that Facebook has two options, neither of which seems particularly palatable for the company. It can do more to limit user speech in posts that are not explicitly hateful but are couched in the rhetoric of civil discussion — the types of posts that seem to fuel anti-refugee violence. Or it can tweak its distribution mechanisms to minimize overall user engagement with Facebook, which would also reduce the amount of ad money it collects.
Or there’s a third option: It can just not do anything, even as more and more evidence piles up that Facebook’s social network reliably fuels violence. I guess we’ll just have to wait and see.
Update: Following the Times article, numerous questions about the report have come up. For one, the paper has not yet been peer-reviewed. For another, the method of simulating “regular” Facebook usage was to examine nearly 22,000 posts from German users on the Nutella Facebook page. Whether that’s a sufficient amount of activity, and whether Nutella’s brand page serves as a useful proxy for normal usage, are open questions. What all of the questions about methodology really highlight is this: researchers have to resort to scraping data from a brand page because Facebook refuses to allow independent academic researchers to examine the effects of its platform.