What happened to the internet over the past decade? As online activity became centered on just a handful of websites, opportunistic extremists, hucksters, and misanthropes took advantage of lax oversight to move once-unthinkable ideas into the mainstream. At the same time, the platforms that turned a blind eye are still hesitant to cop to their own role in the rise of the alt-right and the resurgence of internet Nazis (who turned out to be real Nazis). Some of the most prominent examples of the online right are chronicled in Andrew Marantz’s new book Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation. He spoke to Intelligencer earlier this week about his reporting process and the current state of online discourse.
On a broad level, how do you approach interviewing the bad characters in this book?
Very, very carefully. I do not at all take lightly the ethical concerns that are intrinsic in broaching or not broaching the subject matter that I’m interested in. I see a lot of glib dismissals of these questions. I see a lot of journalists say, “Well, as long as you write the truth your hands are clean.” And first of all, that raises all kinds of thorny questions about “the truth,” and also it’s not always the case that if you don’t have any factual errors in your piece that means that you’re ethically in the clear, or even journalistically in the clear.
You can write a piece with all true facts in it and completely miss the larger story. If I just wrote a piece that said, “Gavin McInnes and Richard Spencer are two men with cleanly pressed dress shirts who hold some pretty controversial views and who deny that they are white supremacists” — all those facts would be true but I would be completely missing the story. So I always try to approach these things with caution and try to contextualize things as much as possible. And often when a story doesn’t reach the threshold of various forms of newsworthiness or informativeness or it doesn’t seem like something that I can do justice to, then I don’t do the story.
You’re obviously critical of all of these people in your book, and point out their contradictions. I guess part of the concern is that they seem to think that all press is good press, even if you are calling them racist or anti-Semitic or whatever. That helps them in a way, so does that complicate things at all?
Sometimes it helps them. I think the maxim that all press is good press is about as useful as any other maxim. It’s largely true but it’s not entirely true. There’s definitely a transactional nature to this stuff. I’m not naïve enough to think that these guys are talking to me out of the goodness of their heart. Obviously they’re making some kind of calculation that they think they might get something out of it, but it’s obviously a gamble for them. And I’ve been interested to note that since the book came out, most of the people in the book have sort of freaked out about it in one way or another, either by trying to chew me out privately or trying to chew me out publicly, or trying to disavow the book or trying to swarm it with negative one-star Amazon reviews. So I think I would have been a little bit disturbed if everyone in the book looked at what I did and said, “Thank you very much, nice doing business with you,” and seemed entirely pleased by it.
That said, obviously there’s something transactional about all journalistic endeavors. I mean, anyone who’s read Janet Malcolm knows that. And I think one of the ways to get at this is a thing I try to think through in the book about how trolls set an ingenious trap. If you engage with them in any way, including by mocking them or casting aspersions at them, they are, in a sense, getting what they want, which is attention or oxygen. But if you never rebuke them, then you are ceding ground to them. You are making it look as if their views are going unchallenged. You are allowing them to run whatever online space you’re talking about. When that online space is the bulk of American discourse, that’s not a good solution either.
So I don’t think it’s as easy as saying, well, these people want attention, therefore we can never pay attention to them, because not paying attention to them presents a huge set of different problems.
Sure. I’m curious, though: you refer to them as trolls. Do you think troll is an adequate term for these types of reactionaries and extremists?
Well, I don’t refer to them as trolls. I refer to some people as trolls and I refer to other people as reactionaries, and I refer to other people as Nazis. If they could all be lumped into the same category, it wouldn’t have taken me so much time to embed with them to find out what they were really about, and it wouldn’t have taken me so many pages to write the book. There is a whole phylum of shitheads on the internet, and some of them are exactly what they seem to be and others are not exactly what they seem to be. Some of them are trolls, some of them are not. Some of them are a whole grab bag of things at once. You can be a troll and a misogynist and a racist and an anti-Semite and a liar and a Russian bot. I mean, you can be a lot of things at once. Again, I’m familiar with the critique that it’s somehow soft-pedaling to call someone a troll, but that’s not an exclusive category, and I think anyone who reads the book will see that I’m not using it in any kind of soft-pedal way.
Just out of curiosity, how would you define trolling?
I think it’s evolved like a lot of things, like almost any internet term has evolved. This was what made it kind of hard to write a glossary for this book, much less to use terms accurately in the book. Because as I get into in the book, terminology evolves quickly in life and especially on the internet, and it gets pushed in directions by propagandists that it wouldn’t have gone in otherwise. So the term “fake news” means something very different after Trump gets his hands on it than it did even three months prior. And there’s a similar thing with trolling or with any internet-adjacent term.
In the good old halcyon days, back when these problems already were manifestly in existence, but before most people began to think about them, trolling just meant trying to get a reaction out of someone. Pranking them, trying to incite them into caring more than they were supposed to care, because the aesthetic of the internet is to be cool, is to not care about anything. And trolling was designed to incite a reaction out of people. Nowadays it has all kinds of other ancillary meanings baked into it.
I spent a lot of time with the founders of Reddit and embedded in their headquarters in San Francisco for many, many hours. Those guys, when they were growing up on the internet and acting as starry-eyed techno-utopians, they told me that they considered themselves trolls, and then obviously they tried to put an asterisk on that and say, “Of course, now that trolling has all these connotations of abuse and vicious misogyny and all the rest of it, now of course we don’t consider ourselves that.” That was only one tiny corner of the way that those guys evolved. They also evolved from free-speech absolutists into reluctant gatekeepers who let me sit in the room as they decided which Nazi subreddits to ban and which Nazi subreddits to not ban. So that was an act of internet curation and internet gatekeeping, that they, I don’t think, would ever have foreseen themselves undertaking in the early days of Reddit.
My personal definition of trolling is that the person doing the trolling has to not actually believe what they’re saying. Like, if a Nazi says all the Jews should die, that’s not really trolling, that’s just arguing.
It depends. I mean, maybe. You could define it that way. I think it’s a little more complicated than that. It’s like if you watch satire, sometimes satirists say the opposite of what they believe, sometimes they say exactly what they believe, but they say it in a context that is either through a persona or through a mask or … but whatever. I’m not comparing Nazi trolls to performance artists or anything, but I think it’s a little bit too simplistic to say some people are real Nazis and some people are fake Nazis. Obviously that’s true. There’s a spectrum of genuine belief just like there’s a spectrum of everything else. But I think it’s a little more murky and interpenetrated than that.
In your interactions with all of your subjects, a topic that comes up a lot is how their stances change on a whim based on what’s getting results online. How much of this is strategy for someone like Mike Cernovich or Lauren Southern or whoever? How much of that do you think is strategic and deliberate and how much of it seems a little bit like throwing anything at the wall and seeing what sticks?
Well, it’s both. I mean, how much of Facebook’s strategy was deliberate and how much of it was trial and error or minimum viable product or moving fast and breaking things. I think that there’s a certain amount of trial and error that could itself be a strategy, which is part of what makes these people so dangerous, because they are not constrained by consistency or truth-telling or ethical boundaries, so they are going to try everything and see what sticks. Unfortunately I think that is a really dangerously viable strategy.
To the point of the gatekeepers that you mention a lot in the book, there are a lot of examples of traditional media taking the bait on stuff. Like when Chris Cillizza takes the bait on Cernovich’s rant about Hillary Clinton’s health. Do you think that the alt-right or people within that vague spectrum would have the influence they had if mainstream media did not cover them?
Well, yeah. Of course they wouldn’t have the influence they have if they were never covered in the media. That’s also true of Donald Trump, that’s also true of a New York Times reporter who puts out a story in the New York Times and then gets to amplify it on CNN. Everyone who gets amplified on CNN gets additional power because of that. But I don’t think we can be so complacent as to think that if CNN never covered Mike Cernovich he wouldn’t have any influence because he’s still on Twitter and he still has a Facebook page. And if he didn’t have those platforms, there would be other platforms.
The reason that this book is a really deep kaleidoscopic critique of social media rather than just of one cast of characters or one set of platforms is that we’ve built an entire information ecosystem on the basis of emotional engagement. And emotional engagement has pro-social elements and also antisocial elements. So it’s not as easy as saying who should be banned from Twitter today, or is banning someone from Twitter a violation of their free-speech rights. I mean, all of those conversations are, in a way, so narrow as to miss the point. The problem is really, really fundamental and structural.
And similarly, I definitely think that there are times when the mainstream media gets duped and hoodwinked into covering stuff they shouldn’t, and I refer to certain mainstream media reporters as astoundingly frictionless weather vanes or some phrase like that. The critique there is, again, it’s not a one-size-fits-all critique, where you just say you must never cover X range of topics. That’s not flexible enough to be realistic. I think the underlying critique there is you can’t just be a weather vane and point wherever the winds of the sort of daily conversational gusts are pointing you. You have to know what you stand for. You have to understand the country you live in. You have to understand where your morals lie, and you have to not be so blown around by little winds and trends and the immediate social approbation of your peers that you don’t stand for anything.
I guess this gets sort of back to my earlier concern that if covering them at face value doesn’t work, and even covering them critically doesn’t really ding their influence at all, then the third option seems to be not covering them. Like you said, there’s no one-size-fits-all thing, but I’m trying to figure out a method of deliberation.
Sure, that’s an option. It’s an option to not cover them, but I just don’t think that gets you very far. I mean, look, we can all just pretend that the bad stuff on the internet doesn’t exist, but we’re just making ourselves sitting ducks. We’re just allowing the bad stuff on the internet to continue to fester, to continue to grow stronger, to continue to have greater and greater influence over greater and greater sectors of the country. So I think you have to cover it carefully. I think you have to cover it well. But I think to say that you can’t cover it at all means that we just want to live in a fantasy world where we only cover things we like. There’s a danger when you cover ISIS that you are perpetuating their propaganda. Does that mean that we should just pretend that ISIS doesn’t exist?
Fair enough. A lot of this book takes place in 2017 and 2018, and I was wondering if you think things have changed since then. What does covering online extremists, or even just general troll activity, look like in late 2019?
Well, to be clear, my purpose with this whole project was not just covering online extremists. I mean, for one thing, the subtitle goes on from there. It’s “Antisocial: Online Extremists, Techno-Utopians and the Hijacking of the American Conversation.” Online extremists is just one part of that. And the way I view the job of a reporter is to find things in the world that are representative of your set of concerns and to try to examine them in enough detail and stick around and observe them with enough fly-on-the-wall detail that you can report on those things convincingly enough that they stand in for an entire universe, an entire set of concerns.
So it wasn’t as if I set out to find all the bad people on the internet, meet them all, shake hands with all of them, get them to say a few salacious things, and then fill my notebook with their quotes and then write them all down in a really thrilling order and then my job was done. I would be mortified if that were my job. I think that’s not what journalism is supposed to be. There were many goals of the project, but one of them was to try to use the bad guys on the internet as a kind of reductio ad absurdum, meaning if we had a good and reasonable and ethical and functional informational ecosystem, it wouldn’t look like the one we have.
And so I think we need a really careful, detailed look at how wrong we’ve gone in order to set things on a better course because, again, it’s really easy to sit back and look at bad guys from a distance and have a take. And that take would be these guys are bad, and that take would be correct. But I don’t think it’s all that useful. I think it’s useful to really see up close in detail what they’re about, how they actually act when their mask slips off, how they interact with each other, and then to properly contextualize that and use that as an example of how far we’ve gone from the idealistic utopian vision of what the internet was supposed to be. What was the question? I’m sorry. I don’t know if I strayed from the question.
No, that was helpful. My question was less focused on this book specifically and more on how it feels like people generally have gotten smarter about this stuff. I feel like I’m seeing less Cernovich. Granted, obviously I’m in a filter bubble and everything’s anecdotal. I’m seeing less Bill Mitchell. Milo has completely disappeared. Do you have any insight into what has happened over the last two years that led to that reduction?
Well, just as a cautionary note, you and I might be wiser now to the tricks of the people you named. I know you and I certainly are, but there was some conference at a Trump resort yesterday where they played some hilarious meme of Trump committing a mass shooting all in the name of so-called good fun and memeing. So I don’t think we can really safely say that the influence of shitty memes on the internet has subsided. If Carpe Donktum is still the president’s best Twitter buddy, we don’t live in a world where these things don’t have influence.
I will grant you that the individuals in my book did not become as powerful as they hoped to become. To the extent that there’s a hopeful arc to this book, one of the hopeful arcs is the extent to which a lot of the people I chronicle have a rise-and-fall narrative, and also, just along the way, they’re often more pathetic and bumbling and kind of darkly comical than I expected. So it’s not as if I’m just trying to expose people to 400 pages of pain and misery. But one of the more, I think, heartening things is that a lot of the people that I chronicle, they don’t end up members of Congress or with their own prime-time show on Fox News or something. They end up pretty dejected and cast out and pathetic. That said, again, I don’t want people to be complacent and think that because Richard Spencer doesn’t have a show on prime time that means his ideas haven’t permeated into the American discourse.
So I think some of the worst ideas around have permeated shockingly far into the center of American discourse, and I think we ignore that at our peril. And I think that is the doing of a small set of very willful propagandists, many of whom I spent a lot of time with. So I guess the way I look at it is that they were kind of the front line of soldiers who had to die off in order for the lines of people behind them to breach the … I don’t know my military metaphors all that well. They themselves didn’t make it into the promised land but the next generation of shitty meme-makers just might. See, I’m more comfortable with biblical references than military references.
No, that makes sense. There are obviously a lot of people who are active in making memes or pushing rumors and narratives and whatnot. Who is the passive consumer of this stuff? I’m just guessing the passive consumers comprise a much larger group than the people who are making memes or whatnot.
Oh, definitely. I mean the passive consumer is anyone. It’s just the internet is for anyone. And I think a lot of people consume stuff so passively that they don’t really know where it’s coming from. I think that’s most people. I think most people who consume the things that you and I make don’t really know where it’s coming from.
Why do you think they don’t know where it’s coming from? What’s causing that confusion?
The way that the social media platforms are designed. They’re designed to cause that confusion. It’s not an accident. They think it’s really fusty and old-fashioned that people like us read bylines or that we even care about what publication something comes from. It’s supposed to be one of the unspoken premises of the new social internet, new historically, that you’re not supposed to care where something comes from. Information is supposed to be fungible, it’s supposed to just be about making your information diet as widely varied as possible. And I should say, even though it goes without saying, that idea is really nice, and people should have wide and varied information diets, but like anything, there are limits. If this makes me fusty and elitist then, whatever, I’ll take it. But I just think that something I read in the New York Times is more likely to be accurate and well-told than something I read on OMGFacts. Or something I read on some random Facebook page of someone that I met once and forgot I was Facebook friends with.
So if that sounds anti-democratic, I think that’s just a sign of how far we’ve strayed from the basic notions of truth. I honestly think it redounds to the benefit of social media companies to pretend that boundaries don’t exist and truth is fungible and any piece of content from anywhere can just be sluiced into one giant content swamp. I mean, that obviously helps them, but I don’t think it helps us as consumers or as citizens.
To that end, how do you feel about Twitter and Facebook over the course of writing this book? Because Reddit’s the main social site that you spend time with, but how do those other two factor in? And I guess YouTube.
They’re all huge and I’m pretty fundamentally skeptical of all of them. So last I checked, YouTube and Google and Facebook were the top-three most-trafficked sites in the country, and Reddit was number four or five. So Reddit is up there whether we know many people who use it or not. It’s way more popular than Twitter, for example, even though journalists live on Twitter. But I think they all have a role to play, and I think they’ve all been pretty derelict in playing that role with a robust conscience.
Whether we like to think about it or not, it’s happening. We like to tell ourselves that the internet is just like some force of nature or it’s just some emanation of the popular will of the thoughts of millions of people, and if you don’t like what’s on it, you just don’t like what’s in the hearts and minds of your fellow Americans. But the fact is, it’s a product of many, many human decisions. And it was very much brought home to me as I was sitting there, trying to recede into the background, while these Reddit engineers went, “Well, this is a page full of swastikas, so I guess we’ll ban that,” and “Well, this page has some swastikas on it but they seem to be in a kind of historical newsworthy context so I think we should leave it up.” And, “We banned a subreddit called dog sex but we forgot to ban a subreddit called sex with dogs.” You can’t really ignore the messiness and human subjectivity of the internet after you’ve seen something like that.
My stance is that the reason that Facebook and Twitter and YouTube and those other sites get so much flak is that they’re sort of unwilling to acknowledge the messiness of it. For me personally, I can deal with a human moderator making a bad call, but the idea that these AI systems can automatically figure out what is and isn’t breaking so-called neutral rules is like, “Give me a break.”
It’s bullshit. It’s bullshit. And I made exactly that point to all of the companies you mentioned. This was in 2014, 2015, 2016, before the dam really broke and before everybody was really onto them. I think some journalists and some people who were paying attention were onto them, and some critics from within the tech industry, but I would say that, broadly speaking, the general public was still giving these industries a pass and generally treating these entrepreneurs as if they were bold innovators instead of robber barons, or at least deeply flawed individuals.
So I made that pitch to them. I said, “You should let me see this stuff because it’s happening and people are going to figure out it’s happening, and frankly, I think your average concerned citizen would rather see you trying than just see you denying that you are even in the room, that there’s anyone really behind the curtain.” And the only company that really went for that pitch was Reddit. And honestly, I have critiques of them in the book and I think they look foolish in some ways, but I think on the whole they look like people who are trying to acknowledge their responsibility as new gatekeepers and to try to live up to that responsibility. And some of the other companies are just still incredibly implausibly pretending that they’re not curators, that they’re not stewards, that they’re not gatekeepers and it just strains credulity.
I generally think Facebook is too big, it operates at too big a scale to effectively moderate on a human level. I buy that. And the bind they get into is once you acknowledge that it can’t be moderated effectively without human input, then you’re sort of essentially saying that Facebook is too big to be safely run.
It’s too big not to fail. But obviously that’s not in the company’s interest to have that perception of itself, or for the public, or God forbid, regulators to have that perception of it. So they fight it. That’s why they consider Elizabeth Warren an existential threat.
What are you looking at for the coming election? What are things you’re either worried about or feel good about?
Very few things I feel good about. Well, I guess that’s a little bit glib. I mean, one thing that I do feel good about is, if you had asked me five years ago, pre-2016, whether any of this would truly be addressed at any level, I probably would have said no. Even though I was the annoying guy who was making bets that Trump was going to win, I probably would have forced myself to imagine a future in which Hillary wins and the Brexit referendum fails and maybe we don’t have Duterte, maybe we don’t have Bolsonaro, maybe we don’t have Salvini, and we just kind of keep skating by, barely dodging bullets. How many metaphors can I throw at this? We just keep going on pretending that these deep structural problems don’t exist because, after all, when have these things really led us into crisis? Now that they have led us into crisis, I do think it’s just impossible to ignore, and the reforms haven’t been sufficient, like not even close to sufficient. But we are way more prepared to at least talk about the problem than we were when I first embarked on this project.
What sorts of reforms would you like to see?
I don’t spend a lot of time in the book talking about specific governmental reforms, and that’s for a few reasons. I don’t want people to get too hung up on thinking that if we just pass this or that bill, or this or that administrative tweak, or this or that reform of the terms of service of one or all of these companies, that that will make the problem go away. I do think the problem is more structural and systemic than that. I think the informational crisis we’re facing is akin to the climate crisis or the health crisis or something. I think it’s really deep.
But I think government regulation could play a role. I think some of the data privacy stuff in Europe is interesting, although I don’t know that it would fly here. I have alluded to how I think we ought to have a really sort of elemental rethinking of what we consider the First Amendment to mean or not mean. I think that could help. I think antitrust stuff could certainly help if it were done in the right way. But it’s not entirely my area, and also I just don’t think that would, even if all of it happened, which it won’t, I don’t think it would be enough.
Are you at all apprehensive about Facebook’s big push to encrypt everything on their messaging services?
Yes, very. The analogy I heard people use [in interviews] was toppling Saddam but creating a power vacuum that leads to ISIS. And when I say interviews, I mean people who actually were smart and decent and knew what they were talking about, not interviews with Milo Yiannopoulos. But it scares me because it seems like what Facebook is doing, in response to widespread critique of the way they’ve handled public discourse, is just retreating from it and saying, “If you don’t like the way we’ve handled public discourse, what if we just shove all this stuff into the dark and make it private and make it encrypted so that even we can’t control or corral or curate it even if we wanted to?” That’s not a solution, that’s just a retreat. I mean, that just sounds like them trying to get out of trouble by forcing dark stuff ever farther into the darkness.
I’m conflicted. I don’t like the amount of power Facebook has, but I also know that there’s a ton of bad stuff happening on Facebook, so I don’t have a good answer.
I agree, but I don’t think that the encryption will make the bad stuff go away. I think it will just … In fairness, it might make it less viral. So that is a big step. So there’s a huge trade-off in forcing it ever further into the shadows: that’s scary, but it might have the upside of making it less infectious.
Do you have any advice for the average normie social media user on how to properly digest social media and interpret it?
Well, I think a lot of us are passive in the way we consume information, and I think that isn’t a knock on individuals. It’s not to call people dumb or something. It’s just these systems are designed to make us passive. We have feeds that dump things on us and that filter into our brains. It doesn’t feel like a very active choice a lot of times, and they’re acting on our brains in ways that feel kind of twitchy and lizard-brainy and not very full of deliberation and forethought. So one thing people can do is just try to think a little more and try to take a breath and, before hate-sharing something or love-sharing something or envy-sharing something, maybe just think, “Do I actually need to share this?” or “Why am I sharing this?”
But I am not necessarily one of those people who thinks everyone should just entirely log off. I understand those arguments, but I do worry about all the good people leaving these platforms to the not-good people, and I worry that people will feel that once they have deleted Facebook they have done their civic duty. But they will still live in a world where presidential elections are determined largely by Facebook, and whether we address our climate crisis is a product largely of the discussion that happens on Facebook. Again, I don’t think it’s enough.
Do you think there is a sort of age gap that will eventually mitigate some of these problems? Are younger internet users, do you think, just better at sorting all this out than older users?
Some of it, but not all of it. I think there are some memes that are designed to prey on the particular weaknesses and particular gullibilities of older users who might not be initiated into the ways that the web works, or not know how to be skeptical in the right ways or whatever. But there are lots of ways in which young people are particularly susceptible to being radicalized into just overt white supremacy, for example. Not that there aren’t plenty of old white supremacists, but it’s just that they have their ideology a bit more firmly in place, and young people can be radicalized on YouTube and are being radicalized on YouTube every day. So I think there are different vulnerabilities, if you had to generalize, that can be put into generational buckets. But I don’t think we can just sort of wait until all the old people get off Facebook or die off and then politics will be saved.