life in pixels

Facebook Stopped Russia. Is That Enough?


Despite the lack of an election campaign, there was probably a small victory party in Menlo Park on Tuesday night: Finally, a major news event had passed in which no one was blaming Facebook. For all the social network’s elaborate preparation — and for all the close fact-checking attention of news organizations — the election had come and gone, and no one seemed prepared to suggest that Facebook, or social media more generally, had played a decisive role, for good or for ill. So far, the close races have come down to the kind of standard modern political campaigns we’re much more used to, not networks of state-sponsored Russian bots or Macedonian teenagers spreading conspiracy theories for money. The idea of anyone writing an article claiming that “Republicans Control the Senate Because of Facebook” seems far-fetched.

This is, needless to say, a striking change from 2016, when Facebook was blamed, sometimes entirely, for electing Donald Trump. It seems worth asking: Did the much-publicized Facebook “War Room,” dedicated to stopping misinformation and coordinated influence campaigns — or, at least, dedicated to convincing reporters that Facebook was taking those problems seriously — actually work? Is social media safe again? Has Facebook finally beaten fake news?

As with so many things in our varied new world, the answer is both a heartening “yes” and a terrifying “lol, no, are you kidding me?” I think we can say, cautiously, that Facebook and Twitter have managed to mitigate (if not fix) some of their most glaring and well-known problems — the sexy stuff like foreign influence operations (Russian bots!) and for-profit “fake news” sites — and, in so doing, have escaped the kind of particular and discrete blame they were assigned in 2016. It’s just that there’s a better question to ask than, “Did Facebook stop Russia?” — “Is that enough?”

Two years ago, in the aftermath of Trump’s unexpected victory, Facebook, and to a lesser extent Twitter, found themselves cast as some of the election’s biggest villains. What the big new social networks had done, in a word, was “misinform”: they’d created and encouraged platforms that were now being used to spread not just hyperpartisan “news” but out-and-out misinformation, wild conspiracy theories, and divisive propaganda — some of it, we would learn, created and distributed by trolls and operatives at the direction of the Russian government. Under some pressure, and after some resistance, both companies vowed to address and fix their misinformation problems and become good citizens of the public sphere.

Have they succeeded? The trouble with assessing what Facebook has managed to fix in the last two years is that it is an enormous platform and that “misinformation” is a vague category. What, exactly, was the “misinformation” at issue, and how had Facebook or Twitter abetted it? In the coverage that followed the 2016 election, two particular examples came to the forefront. The first, of course, was “fake news” — the phrase that, before it became a presidential incantation, referred to the Facebook-specific phenomenon of websites built to look and feel like legitimate news sources, but which published mostly fear-mongering fictions or wish-fulfillment under the guise of “satire” for the purpose of enticing large audiences and selling advertising. The second was Russian influence operations — the now well-documented practice of Kremlin-sponsored trolls creating and posting to Facebook and Twitter pages with the intent of sowing discord, confusion, and mistrust.

As far as these two significant issues are concerned, Facebook has done well. The problem of for-profit “fake news” (as distinct from hyperpartisan, but “merely” misleading, news sites like Breitbart) hasn’t been fully fixed on the platform, as even a brief perusal of the site’s top stories will tell you, but there is some evidence that the company’s efforts are working. In the months since the presidential election, Stanford researchers found that interactions with “fake news” have dropped by as much as 50 percent, despite having climbed dramatically in the period leading up to that election. And while dealing with troll farms is a bit like playing Whac-a-Mole, Facebook has been vastly more aggressive with its mallet than it was through most of 2016: as recently as the day before the election, it suspended 30 Facebook and 85 Instagram accounts it had connected to Russia’s infamous Internet Research Agency. None of this means, obviously, that Facebook has decisively eliminated state-sponsored trolls — or huckster fabulists — from its platform. But its vigilance, and the public’s greater awareness of the problem, have mitigated the worst of them.

Is that sufficient evidence to state that Facebook has “fixed” its misinformation problem? Well, it depends on what the meaning of “its” is. The problems of misinformation that are unique to the Facebook platform itself — like fake-news entrepreneurs leveraging Facebook’s ability to generate traffic to sell ads on their websites — are being addressed fairly aggressively. But the funny thing about removing hundreds of Russian trolls who were creating and distributing misleading, offensive, and highly partisan memes and posts from your site is that you’re left with millions of American citizens creating and distributing misleading, offensive, and highly partisan memes and posts. Facebook’s doing a good job dealing with Russia. Unfortunately, the bigger problem now is with America.

Pushers of fake news and Russian trolls represent, essentially, an engineering problem — they’re bad actors whose badness is predefined in specific and identifiable ways — and Facebook is very good at solving engineering problems. But Americans, exercising our American prerogative to distribute material accusing the Democratic presidential candidate of masterminding coded satanic sex rings, are an everything problem. Facebook can’t stanch the free flow of our bullshit without dramatically changing its operating philosophy (by making truth judgments about the content its users post), its business practices (by hiring a vast army of new employees to make those judgments), and, arguably, its entire design (by leaving freely available attention on the table). You can’t put 80 percent of the country on a communications platform, reward them for posting outrageous content, and expect everyone to rigorously fact-check their status updates.

Cast in this way, the Facebook “misinformation problem” is less a discrete problem to be solved, and more a dynamic to be managed. And a familiar one, at that. America has been a nation of conspiracy theorists and prurient gossips since its founding; for decades now, at least since the rise of cable news, we’ve been enjoying news formats that mix emotional charge, outrage, fear-mongering, interpretation, and actual information. It should not have been surprising when researchers studying American media during the 2016 election found Fox News to be a more influential vector for misinformation than any Russian trolls (or than Facebook itself). Facebook is showing up to a party that’s been going on for years.

That’s not to absolve Mark Zuckerberg or Sheryl Sandberg from responsibility for their platform, or to suggest that the misinformation circulating on Facebook isn’t worth worrying about. It’s just to note that the problem is much larger and more complex than “fake news,” and any serious effort to stop it will require more than just a “war room,” and more institutions than just Facebook. When Facebook and Twitter and Fox News and dozens of right-wing bottom feeders — not to mention political candidates — are all reinforcing each other’s misleading information about, say, a migrant caravan in Mexico, we’re well beyond a problem of “not enough moderators” or “the AI isn’t advanced enough.” That might be one reason why Facebook has escaped criticism for the midterms: It’s a bit simpler to congratulate it for whacking Russian trolls than it would be to ask it to redesign itself, reform the media industry, and reeducate every uncle and grandmother with a fondness for memes.

Because ultimately, what social media’s “misinformation problem” comes down to is really the same old misinformation problem we’ve had for decades, scaled up and accelerated for our new era. Which points to the other reason Facebook has avoided blame: We regard that problem as more or less normal. It’s not as though there aren’t wild conspiracy theories and outright lies still circulating on Facebook — it’s just that they’re mostly being created and shared by Americans and their favorite cable news channels instead of by Russian trolls and fly-by-night blogspots. It’s not optimal, but it’s what we’re used to.
