Advertisers have a dozen reasons to purchase ads on Facebook, but the foundational pitch, the one that’s allowed Facebook to outpace all newspapers combined in advertising revenue, is this: The Facebook News Feed is influential. That is, as Facebook tells its potential clients, the average user, who spends 20 minutes a day scrolling through a custom-tailored and exactingly targeted News Feed, is likely to buy things that appear in that News Feed. There’s some evidence that this is true; further, Facebook itself has done some (extremely shady and manipulative) research into how the composition of a given user’s News Feed affects that user’s mood. It also makes intuitive sense. The truth is that Facebook, as a business, simply doesn’t make sense unless you believe that its most profitable product, the News Feed, is influential.
So it was odd to hear Mark Zuckerberg essentially say the opposite last night at the Techonomy conference, as Will Oremus reported in Slate:
False news stories that were shared hundreds of thousands of times on the network […] “surely had no impact” on the election, [Zuckerberg] said, speaking at the Techonomy conference.
“Voters make decisions based on their lived experience,” Zuckerberg went on. The notion that fake news stories on Facebook “influenced the election in any way,” he added, “is a pretty crazy idea.”
In an extended onstage interview with David Kirkpatrick, author of The Facebook Effect, Zuckerberg noted that fabricated stories made up a small fraction of all the content shared on Facebook. And he suggested that the criticism Facebook has received for fueling such falsehoods was rooted in condescension on the part of people who failed to understand Donald Trump’s appeal. “I think there is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news,” Zuckerberg said. “If you believe that, then I don’t think you internalized the message that Trump voters are trying to send in this election.”
Zuckerberg is arguing against a straw man here; no one I’ve seen has argued that “the only reason someone could have voted the way they did is because they saw fake news.” It was a close election — a few hundred thousand votes would have swung it to Clinton — and close elections tend to be the product of multiple and difficult-to-unravel factors. It seems hard to argue that Facebook wasn’t one of those factors. Fake news wouldn’t even need to do anything as challenging or specific as transform Clinton voters into Trump voters to have changed the outcome of the election: Emotionally charged fake news could have galvanized Trump voters and ensured high turnout; or its scale and volume could have drowned out (or offered false counterweight to) the rigorously reported mainstream-media stories about Trump’s greed, corruption, and mendacity.
In the wake of Trump’s election there’s been a lot of discussion of “bubbles.” Maybe the media is in a bubble that prevented it from seeing the depth of support Trump could maintain in the Rust Belt. Maybe Midwesterners are in a bubble that stokes racism and xenophobia and fear of difference. One particularly opaque bubble seems to be that from which the titans of social media are observing their platforms. Here’s Jack Dorsey, founder and CEO of Twitter, in the days following Trump’s election:
“Unacceptable,” Dorsey writes about reports of politically motivated harassment and abuse, on the network he founded, so memorably described by a former employee on BuzzFeed as “a honeypot for assholes.” If Dorsey believes such abuse is unacceptable, where has he been for the last ten years? Twitter’s marauding bands of misogynist, racist, and anti-Semitic trolls have become so notorious — and so much worse since the launch of the Trump campaign last year — that the harassment problem caused at least two suitors to walk away from an opportunity to buy the company.
The tech industry, and especially its social-media sector, has always been excited about its potential for revolutionary change — in some cases, quite literally. But its brightest lights — the founders and CEOs — seem reluctant to acknowledge the consequences of those changes. Do they even use the services they’ve created? The fact is that Zuckerberg and Dorsey are both too famous, in real life and especially on their own platforms, to ever have a regular user experience. When you lead a company and have millions of followers, you’re unlikely to be having enjoyable semi-private conversations that get crashed by abusive strangers, or to spend enough time scrolling through your News Feed that you start coming across clear lies shared by friends and family.
Now, you can’t blame social-media CEOs for denying that their products are being used for (or are, in fact, incentivizing!) misinformation and harassment campaigns, any more than you can blame arms-manufacturer CEOs for slapping “guns don’t kill people, people do” stickers on the buttstocks of their AR-15s. And it’s true that there isn’t really an easy solution for either Twitter or Facebook that doesn’t also create concerns about speech and press protections.
But if you lead a revolution, at some point you’re going to have to govern. Zuckerberg and Dorsey have created tools that legitimately changed the world, and not always for the worse. But in so doing they’ve also eliminated the physical and cultural structures — from establishment-newspaper dominance to the primacy of face-to-face interaction — that helped keep harassment, abuse, and misinformation to a minimum. Now that the world is changed, it’s up to them to ensure that the best values of the past endure. They could start by at least acknowledging the problem.