
Leaked Rule Book Details Facebook’s Quixotic Plan for a Global Community

Photo: Justin Sullivan/Getty Images

Over the weekend, The Guardian published Facebook’s extensive internal guidelines for graphic and violent posts on the social network. The rule book has, until now, been kept secret for two primary reasons. The first is that once users know the rules, they become easier to skirt. The second is that Facebook is, and has always been, a company reluctant to admit that it needs thousands of human eyeballs to monitor the billions of posts made on the platform every day.

Spelled out in explicit detail, the guidelines can seem downright callous. They state what to do — or not do — when moderators are confronted with death threats, violent videos, posts celebrating other violent content, and attempts to self-harm (among many other possible scenarios). As summarized by The Guardian:

Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.

Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.

Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.

Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.

All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.

Videos of abortions are allowed, as long as there is no nudity.

Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”.

In succinct form, the guidelines can sound galling. At the same time, Facebook knows what any regular internet user quickly picks up on: “People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.” It’s easier to fire off threats from behind the veil of anonymity or pseudonymity than it is to say the same thing directly to someone’s face. Context matters.

And yet, what is Facebook supposed to do? For months, Facebook has been struggling with how to determine context. Last October, it was taken to task by a Norwegian newspaper, which had been penalized for uploading the infamous “napalm girl” photo from the Vietnam War. As the company admitted in a blog post at the time, “Whether an image is newsworthy or historically significant is highly subjective. Images of nudity or violence that are acceptable in one part of the world may be offensive — or even illegal — in another. Respecting local norms and upholding global practices often come into conflict.”

Facebook still seems determined to construct a single global community of nearly 2 billion people with differing cultures and standards. In February, Mark Zuckerberg published a meandering, 6,000-word “manifesto” about the issue, stating that he wants to make Facebook the global communication layer between vastly differing cultures and governments.

When presented with Facebook’s ultimate goal of uniting the world, and looking at the rules as impartially as one of the company’s many content moderators might, what other choice does Facebook have except to be as permissive as is legally possible? (It should be noted that Twitter and Reddit have had similarly laissez-faire models for this stuff.) The company is famously hesitant to acknowledge its need for human moderators to exert even minimal editorial control over the platform, and in his manifesto, Zuckerberg wrote of replacing them with artificial intelligence. That won’t happen anytime soon, and in the meantime, the company is hiring another 3,000 people to monitor the network for video of murders and suicides.

At the heart of Facebook’s moderation dilemma is the News Feed, the algorithmically ranked bundle of posts that is newly calculated every time you load it up. News Feed is architected in a way that encourages cross-pollination between different communities — just one user clicking “Like” on a video may cause it to appear in the feeds of dozens of their friends. Sharing is meant to be as frictionless as possible, because more sharing means more data to put to use for advertising purposes. That ease of sharing, even indirectly, is what makes Facebook’s social network so powerful, and so perilous.

As users continue to migrate to more private forms of sharing (ephemeral sharing like Snapchat or group DMs), and as the News Feed continues to wither on the vine, Facebook’s moderation problem could gradually fix itself. In his manifesto, Zuckerberg wrote that one of the keys to building his globalist megacommunity was enhancing the Groups product — in other words, his big community is actually a conglomeration of millions of tiny, semi-private ones. Many of these closed groups, even the largest ones, have posting guidelines stricter than Facebook’s baseline, enabling them to effectively self-police in private (and conversely, groups focused on objectionable content cannot spread their posts so easily). As fully public sharing decreases, and as users migrate to more intimate online spaces, the need to enforce strict global guidelines lessens.

But, because of how powerful News Feed distribution is, we’re still years away from a world in which you don’t see graphic content and violent threats go viral across the network. The good news is that their eradication is not out of the realm of possibility.
