Vague Questions and Vaguer Answers from Twitter and Facebook on Capitol Hill

Photo: Drew Angerer/Getty Images

Last year, representatives of Silicon Valley attended the first of what would become semi-regular meetings with Congress, open hearings in which the companies were criticized and prodded over a number of distinct issues: foreign election meddling, hate speech, censorship and privacy. Following those initial hearings and the lines of questioning on display, I thought Congress was on the cusp of an important realization: that these companies are simply too broad in scope, too large in scale, to address any of these issues effectively. Our representatives haven’t gotten there yet.

Almost a year later, nothing has changed. After other hearings of varying seriousness this year — witnesses ran the gamut from Facebook CEO Mark Zuckerberg to YouTube stars Diamond and Silk — Congress and Big Tech now perpetually recycle the same dialogue in each meeting. The tech companies say they are aware of the problems, could’ve done better, and will do more. The congressional representatives respond with veiled threats of regulation: moves like breaking up the companies, or revisiting the shield laws that limit platform liability for user-submitted content. And then little happens and everything resets on the next go-round. This is because both the interrogators and those being interrogated like to speak of these issues in vague terms, either because they don’t entirely grasp the issue or because they want to avoid saying anything too definitive.

Today in front of the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey fielded questions ostensibly about foreign election meddling but, as the hearings always tend to go, were actually about a kludge of messy issues (Google was invited but didn’t appear, earning condemnation from the committee). For Sandberg, the hearing was about expressing Facebook’s commitment to stopping — and being transparent about — “inauthentic behavior.” For Dorsey, it was about emphasizing Twitter’s commitment to bolstering the “health of conversation” on his platform.

What do these terms even mean? It’s unclear. “Inauthentic behavior,” according to Sandberg’s opening statement, means that the accounts “misled others about who they were and what they were doing.” That is, by design, an unspecific definition. It’s what lets Facebook try to ferret out sock-puppet accounts from Russia without also shutting down the page for The Onion. Trying to judge whether behavior is “inauthentic” is a lot like trying to determine whether one of Trump’s falsehoods is a “lie”: it’s difficult to determine intent. If the only difference between an American and a foreign user posting the same Black Lives Matter meme is intent, then maybe the metric needs to be reconsidered.

Dorsey’s endorsement of “healthy conversation” is similarly vague. A healthy conversation could mean exposing democratic socialists to alt-right Twitter users and vice versa, or it could mean siloing each group entirely. Does showing users more factual news, even if personally distressing, count as healthy or harmful? Dorsey didn’t really say, and the senators didn’t push him on it.

One of the many problems with regulating Big Tech is that nobody seems sure about what aspect to regulate (speech? algorithmic fairness? competition?), and neither side wants to say. Politicians don’t want to push for regulating speech, and tech companies don’t want to openly state that they can arbitrarily limit speech at any time, for any reason they want. Similarly, neither party wants to admit that the commitment to freedom of speech is what allows toxicity, foreign meddling, malevolent bots, and so-called “inauthentic behavior” to proliferate.

Eventually something has to break. Maybe the tech companies eventually assert their rights as private businesses to self-govern their platforms. Maybe regulators get tired of waiting and act without the cooperation of these companies. But trying to walk the line between these options only sheds more light on how enormous these platforms are, how difficult online interaction can be to parse, and how tough fair enforcement can be to execute. In other words, the longer this stalemate drags on, the more likely Congress is to arrive at the conclusion that the platforms are simply too big to exist.

Compounding the nebulously defined problems from Sandberg and Dorsey are the solutions that Facebook and Twitter are emphasizing to combat them: artificial intelligence and strictly enforced sets of rules. They are trying to judge the fuzziness of human interaction in terms of 1s and 0s. Even the humans making moderation calls must do so at a machine-like pace, spending just seconds on each deliberation. The amorphous problem and the cold, technical solution don’t exactly align.

By deploying this rhetoric in the Senate, Sandberg and Dorsey put regulators in a tight spot. Nobody wants to seem like they are endorsing “inauthentic behavior,” and nobody wants to be against “healthy conversation and debate.” And so we are left with a stalemate — one in which complex problems are boiled down to metrics, then analyzed by machines and machine-like humans.

Maybe we are using the wrong metrics to evaluate social media and its many problems. The next time these companies appear before Congress, it’s worth asking whether “inauthentic behavior” is the right problem to tackle, or whether whatever defines “healthy conversation” is the right outcome to work toward.