The Answer to Online Harassment Might Be Several Hundred Years Old

Livestreaming service Periscope announced Tuesday that it’s going to radically change the way it handles harassment. Not with machine-learning neural-net AI or finely tuned algorithms or anything else currently all the rage in Silicon Valley, but with an ancient and familiar system: trial by jury.

Periscope is a broadcasting app that makes it easy for people to film live video and share it with an audience of friends or, often, strangers. One of the best (or worst) features of the app is the live feed of comments visible to both viewers and the broadcaster, almost like an old episode of TRL. Like most (okay, all) comments sections online, the Periscope feed can easily get jammed with stupidity, spam, or, worse, harassment and abuse.

Periscope’s entire existence would be very difficult to explain to a Viking or citizen of ancient Athens. (You’d have to start with electricity.) But its new method of comment moderation would be immediately familiar to them: Whenever a comment is reported, randomly selected users — a jury of the user’s peers — will be asked to vote on whether the comment is spammy, abusive, or kosher. If the majority of voters believe that the comment is spam or abuse, “the commenter will be notified that their ability to chat in the broadcast has been temporarily disabled,” the company explained in a blog post. “Repeat offenses will result in chat being disabled for that commenter for the remainder of the broadcast.” All in, the company says the process should only take a few seconds.
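Periscope hasn’t published the code behind this flow, but the blog post describes it concretely enough to sketch. Below is a minimal, hypothetical illustration in Python; the names (handle_report, select_jury), the jury size, the majority threshold, and the 60-second first-offense mute are all assumptions made for illustration, not details Periscope has confirmed.

```python
# A rough, hypothetical sketch of the flow Periscope describes; none of
# these names or numbers come from Periscope itself.
import random
from dataclasses import dataclass, field

TEMP_MUTE_SECONDS = 60  # assumed; Periscope says only that the mute is temporary

@dataclass
class Broadcast:
    viewers: list                                # usernames currently watching
    strikes: dict = field(default_factory=dict)  # commenter -> guilty verdicts

def select_jury(broadcast, reporter, commenter, size=5):
    """A jury of peers: a few random viewers, excluding the two parties."""
    pool = [v for v in broadcast.viewers if v not in (reporter, commenter)]
    return random.sample(pool, min(size, len(pool)))

def handle_report(broadcast, commenter, reporter, ask_juror):
    """ask_juror(juror) returns one of 'abuse', 'spam', or 'looks ok'."""
    jury = select_jury(broadcast, reporter, commenter)
    votes = [ask_juror(juror) for juror in jury]
    if sum(v in ("abuse", "spam") for v in votes) > len(votes) / 2:
        broadcast.strikes[commenter] = broadcast.strikes.get(commenter, 0) + 1
        if broadcast.strikes[commenter] == 1:
            return f"chat disabled for {TEMP_MUTE_SECONDS} seconds"  # first offense
        return "chat disabled for remainder of broadcast"            # repeat offense
    return "no action"
```

The parts Periscope has actually confirmed are the random selection of viewers, the vote, and the escalation from a temporary mute to one lasting the rest of the broadcast.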

“It’s important to offer a path to rehabilitation,” Aaron Wasserman, a Periscope engineer, told Fast Company about the new jury system. “That person may not have intended for it to be as harmful as it really was. We’re inviting you to stick around and do better next time.”

Moderating abuse and harassment at the enormous scale of a successful social network is a huge problem, and one that tech companies have generally approached the same way they approach most problems: by outsourcing the job to underpaid, overworked contract workers (there are huge numbers of social-media moderators in India and the Philippines) while also trying to code solutions that would eliminate the need for humans entirely (Facebook just announced that more abusive images on the social network are reported by AI than by humans).

In this light, Periscope’s new method, even though it’s a small twist on a millennia-old system for obtaining justice, is refreshing. Periscope’s parent company, Twitter, has faced a persistent and seemingly intractable harassment problem (spurred on in part by Twitter’s own inconsistent moderation, which ranges from heavy-handed to weak to just plain baffling), and it’s hard not to think that a similar jury system could lead to quicker and more satisfying outcomes in cases of abuse. As Periscope acknowledges, there’s no substitute for human perception in determining the causes and faults of culturally complicated interactions, an insight for which we can thank the ancient Greeks.
