
Do We Want Giant Tech Companies to Be Our Anti-Harassment Overlords?

Today, Wired published an article by Andy Greenberg that profiles Jigsaw, a small but ambitious new Google subsidiary. In part, Jigsaw focuses on making the internet better and safer for people living under oppressive regimes — delivering them anti-censorship workarounds, for example, and making it easier for them to mask their real-life identities.

Notably, and admirably, Jigsaw makes a point of getting its employees in touch with actual human beings who live in affected parts of the world, and plans on flying employees out to these regions to learn more about the needs of their populations. That’s in large part due to the company’s founder and president, Jared Cohen. During a Rhodes scholarship in the early 2000s, Cohen spent a great deal of time traveling around Iran, Syria, Lebanon, and Iraq, developing a deep attachment to the Middle East and a strong belief that technology can help improve people’s lives there. He went on to work for the State Department, carving out an influential role as a tech-oriented wunderkind before leaving to work at Google.

In addition to its antiauthoritarianism work, Jigsaw has also taken an interest in fighting online harassment, and that’s where things get a little sticky.

As Greenberg writes, this part of Jigsaw’s agenda rests on some exciting-seeming new technology:

Jigsaw is about to release … a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says [Cohen]. “To do everything we can to level the playing field.”

Google says Conversation AI has been delivering some impressive results. “Feed a string of text into its Wikipedia harassment-detection engine,” writes Greenberg, “and it can, with what Google describes as more than 92 percent certainty and a 10 percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack.”

Whether or not that sentence is true in some very strict sense, it is a bullshit claim, as any moderately seasoned tech reporter can attest — we are nowhere near the point where this sort of sentiment analysis is ready for prime time. Luckily, Greenberg isn’t a rube, and he puts the tech through its paces rather than accept Google’s claims at face value. The results are troubling. When he types “I shit you not,” the algorithm spits out an offensiveness rating of 98 out of 100; “you suck all the fun out of life” gets a 98, and “you are a troll” a 93. Calling someone a moron gets you a 99; the explicit threat Motherboard reporter Sarah Jeong received during a prolonged harassment episode detailed by Greenberg, “I’m going to rip each one of her hairs out and twist her tits clear off,” gets a 10.

But let’s go ahead and assume the technology will, in the long run, work as advertised — Google says it’s relying on machine-learning algorithms that are designed to slowly improve the software’s accuracy. Do we want this? Do we want a Google or a Twitter or a Facebook to have access to such powerful tools? Greenberg raises one potent potential objection: “Throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect.” In other words, let’s say I’m furious at my repressive government, express that sentiment, and get censored by an algorithm trying to keep everything civil. That wouldn’t be good.

It isn’t hard to come up with a hundred other potential problems with letting software run by a powerful organization — whether Google or someone else who adapts the technology for their own use — make these sorts of decisions. It’s worth pointing out that the two harassment victims highlighted in the article, Jeong and Sady Doyle, both express serious qualms to Greenberg about Jigsaw’s approach, despite having been the recipients of countless rape and death threats. “These are human interactions,” says Jeong, meaning you wouldn’t want to give an algorithm like this too much power. But while Jigsaw’s employees do pay lip service to the idea of a human moderator always being in the loop, you don’t develop something like Conversation AI because you’re hoping to keep humans around in the long run — you develop it to reduce human labor costs and involvement in the herculean task of overseeing online cacophonies.

Obviously, if a repressive government or, say, the Koch brothers were working on this sort of technology, it would attract a lot of arched eyebrows; people would find it creepy and would harp on the potential for abuse and endless false positives. Will Conversation AI attract this much scrutiny? Probably not. And the reason is nicely summed up by one of Cohen’s quotes. “It’s hard for me to imagine a world where there’s not a continued cat-and-mouse game,” he tells Greenberg, referring to the broader good-vs.-evil fight in the online world. “But over time, the mouse might just become bigger than the cat.”

You’re not the mouse, though! In any meaningful feline/rodent schematic of humanity, you’re one of the cats, because you work for one of the most powerful companies in the world. The problem is that because of where we are in the conversation about online harassment, cats can claim to be mice. That is, because online harassment is so explicitly awful and such a searing topic at the moment, it’s very hard to critique individual solutions, even though history is replete with “solutions” to urgent problems that end up having a net-negative effect on the world by introducing so many new problems. When we’re panicking about something — and it can simultaneously be true that online harassment is serious and that we’re in a panic about it — the critical parts of our brain tend to get temporarily switched off. A huge juggernaut of a company developing powerful new censorship technology? Hey, at least they’re fighting harassment.

And maybe that’s all this is for now — maybe Google is simply trying to make money (no one involved denies profit is the goal) by developing anti-harassment services. But given the company’s power and profits — given the fact that it can change how hundreds of millions of people view the world by ticking some random variable up from X = .00000001 to X = .00000002 — everyone should hold the company to a very, very high burden of proof when it claims to be trying to do good. Being on the right side of the harassment fight shouldn’t be enough.
