Facebook is cracking down. This morning, as part of a broader operation, Facebook announced a cleanup of some of its policies around what gets posted to its site. The social network laid out new rules for what users can and cannot monetize (i.e., what they can and cannot place ads in). To summarize the guidelines: anything that might make people unhappy or uncomfortable cannot be monetized. This includes things like violent content, sexually provocative content, excessive consumption of drugs or alcohol, excessive use of inappropriate language, and the “misappropriation of children’s characters.”
Sonic the Hedgehog jokes aside, a lot of these categories are exactly the sorts of things that news-reporting operations trade in (as Facebook itself directly acknowledges). Weirder still, the guidelines contain explicit statements that newsworthiness is not a justification: “Content that focuses on real world tragedies, including but not limited to depictions of death, casualties, physical injuries, even if the intention is to promote awareness or education,” states one guideline. That also includes natural disasters, such as the recent pair of hurricanes that pummeled the southern United States.
“Content that features or promotes attacks on people or groups is generally not eligible for ads, even if in the context of news or awareness purposes,” states another guideline, noting that “debated social issues” cannot be monetized. So if you were hoping to get some revenue from talking about Black Lives Matter or make something like Vice’s recent, heavily shared documentary on Nazis and white supremacists in Charlottesville, Facebook won’t let that happen.
There are a lot of contradictory ideas butting heads in these guidelines. Facebook wants to minimize the financial incentive for users who glibly (and sometimes profitably) share provocative, graphic content. That’s good! But in doing so, Facebook is also minimizing the impact that thorough, responsible reporting on these issues can have on the broader debate that Mark Zuckerberg has spent the past year waxing philosophical about. It’s hard to have an open dialogue about tough issues when posting about them is discouraged.
The larger point — and this has been clear for a while — is that at 2 billion users and climbing, Facebook is not equipped to effectively moderate its platform. Different cultures and geographic regions have different norms for what counts as offensive or violent or overly sexual. At the same time, Facebook is actively trying to reduce the human part of the moderation process, offloading as much as it can to nebulous AI functions. The moderation process continues to be opaque and inconsistent.
The network is too large for a one-size-fits-all approach, and yet Facebook keeps trying to come up with one and kick the can down the road. In the current iteration, news organizations that live and die by Facebook’s algorithms, which are steering less and less traffic to outbound links, have been kneecapped by these guidelines. Oh, and also, Facebook continues to claim it’s not a media company.