This week, representatives from Facebook, Google, and Twitter will visit Capitol Hill and try to explain what they think happened in American politics in 2016 — and what role their large and influential platforms played in it.
If you read the news — or really, if you use any of these services on a regular basis — you likely know why these tech companies have been assembled in D.C. It’s now public knowledge that the Internet Research Agency, a group linked to the Russian government, used social-media accounts that appeared to belong to Americans to push hyperpartisan (and often factually incorrect) stories, memes, and positions about issues like police brutality, gun control, and immigration, in a semi-coordinated effort to influence the presidential election. In some cases they organized political rallies. Most significantly, in a minority of instances, the Russian accounts paid to boost certain messages into the view of other users. On Facebook alone, posts from these accounts may have been seen by 126 million people.
Virginia Democrat Mark Warner, Vice Chairman of the Senate Intelligence Committee, has set out a list of questions he plans to ask during his committee’s hearing tomorrow.
These are good questions, worth asking and important to answer. But they’re a bit limited. For the most part, we know how Russia used internet platforms to influence the election: They used them the way Americans do. What the Internet Research Agency did to rile people up is no different than what Americans do to rile people up. And the answer to the question of how susceptible these platforms are to misinformation campaigns is, well, “very.” For all the deserved focus on political ads paid for by Russian operatives, what most worries experts about the role of internet platforms in 2016 is the “organic” stuff — the mis- and disinformation created and shared for free, not just by Russians but by cynical or misguided Americans, that can shape and direct political debate.
Put another way, the problem of foreign interference is a subcategory of a larger and more existential question about the effects of mass digital media on politics and democracy — a question that both the government and the tech companies they’ll be interrogating seem unprepared to handle. It’s one thing to identify and protect against a single, outside bad actor like “Russia.” It’s quite another to figure out what to do when your own citizens are enthusiastic participants, from start to finish, in an attention economy that rewards misinformation and hyperpartisanship. If these congressional hearings are to be effective in really forcing the tech industry and the public to grapple with the wide-ranging effects of internet platforms on civic society, they need to ask bigger questions than the ones proposed by Senator Warner. What structural and technical factors made these platforms such a ripe target for exploitation? How can the government better regulate platforms to ensure that they act in the broad public interest? Maybe, most importantly, they need to ask: Are Facebook and Google too big?
The good news is that senators seem to recognize, at least in part, the nature and the scale of the problem that currently confronts us. At today’s hearing, a separate session before the Senate Judiciary Committee, Senator Dick Durbin asked a pointed question of Facebook general counsel Colin Stretch: “How are you gonna sort this out consistent with the basic values of this country when it comes to freedom of expression?”
Even if “vile” content can’t run as an ad, Louisiana senator John Kennedy was rightly skeptical that Facebook operates at a scale where even paid ads can be meaningfully vetted. “You have 5 million [monthly] advertisers and you’re gonna tell me that you’re able to trace the origin of all of those advertisements?” Stretch began talking about Facebook making an effort to do its best, but that wasn’t the point of Kennedy’s question. His point was to highlight that Facebook’s stated commitment to solving the problem of deceptive advertisers doesn’t match its current ability to do so.
Kennedy’s right: Facebook is so big that it’s not clear if self-regulation is enough to solve the problems it poses. Or, really, any kind of regulation at all. Facebook’s user base is nearly a third of the world’s population, and ideas can go viral within its ecosystem cheaply, easily, and quickly: Trolls imitating American affectations, using the same tools everyone else uses, were seen by 126 million people on the network. Google is the single most important institution for obtaining information about the world — and, for hours after the shooting in Las Vegas, promoted as a top result an obvious hoax. It’s worth considering the idea that these sites are simply too large to operate as independent, for-profit companies — a heavy existential question, but one worth pressing these companies on.
Constant human moderation is one solution to Facebook’s problems, but if moderation is off the table — too expensive, too slow, too bad for business — then the only alternative is to move back toward a decentralized web. If we make it more difficult to spread misinformation, put up more barriers against nefarious infiltration, guarantee consumers a legal right to export their data, and enable them to leave Facebook and form new homes and discussion hubs elsewhere online, we might stand a chance of producing a less misinformed electorate. But unless Congress is willing to ask these hard questions, we’re going to go in circles around Russia and the election for a long time.