The concept of the “shadowban” has existed practically since the inception of the web forum. The idea is simple: Instead of banning a user outright and informing them of the punishment, a shadowban makes a user’s posts visible only to themselves. In other words, they think that they’re still part of a community, but nobody is able to see or interact with them. The shadowban is useful because, rather than being locked out immediately and possibly retaliating by, say, creating a new account, the user simply fades away as other users stop interacting with them.
As forums have evolved into social-media platforms, the shadowban and its descendants have come along with them. Facebook argued that the best way to dispel misinformation from places like Infowars, which argues that the 2012 Sandy Hook shooting was a staged hoax, was to secretly minimize their distribution in the site’s News Feed — a similar but not identical tactic. I myself have previously argued — tongue in cheek — that Twitter should shadowban (or “hellban”) President Donald Trump, who most recently threatened a cataclysmic war against Iran — the idea being that he gets the feeling of communicating with his followers while his actual posts are hidden from the (volatile) world.
More than anywhere else, though, the shadowban has persisted among the conspiracy-minded as a supposed weapon of censorship, generally thought to be wielded by liberal Silicon Valley operatives at the expense of conservatives. That right-wing voices are being “shadow banned” by Twitter and Facebook has become all but an article of faith among internet-native conservative activists and publishers, despite little evidence to support the claim.
It’s no surprise, then, that a Vice News article claiming that Twitter is “shadow banning” Republicans has already taken hold in the minds of the most online right-wingers. At issue, though, is not “banning” but Twitter’s search mechanism. Usually, Twitter will automatically complete a search query and suggest an account when a user begins typing in Twitter’s search bar. If you type in “Donald,” for example, Trump’s account is displayed as an easily accessible hyperlink, so that you don’t have to click through to the results page; if you type in “Hillary,” you’ll get a similar autocomplete suggestion of Hillary Clinton.
And what happens if you type in, as one so often does, “Andrew Surabian,” Donald Trump Jr.’s spokesperson and, in Vice’s words, a “prominent Republican”? According to Vice, Surabian and other “prominent Republicans” — like Republican Party chair Ronna McDaniel and a number of GOP lawmakers — are not being automatically suggested when you type in their names. All of these accounts, importantly, are still shown atop the actual search results page. The only problem is that you have to click through to it, instead of being given an easy autocomplete link. (Vice’s report is essentially a more partisan-focused repackaging of an article published by Gizmodo on Sunday, which covered how alt-right figures like Richard Spencer and Jason Kessler were also not appearing in auto-populated fields on Twitter. In some instances, parody accounts were featured in their stead.)
To start with, and to state the most obvious, this sort of moderation isn’t shadow banning. Users following the affected accounts will still see their tweets; those accounts still appear in search (just not in the search-bar auto-population). “Shadow banning,” as generally imagined and described by the activists who claim they’ve been affected, would actively suppress user content even to followers, not just make accounts one click more difficult to find.
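The distinction the article draws can be illustrated with a toy sketch: an account demoted from autocomplete suggestions still appears in full search results, whereas a true shadowban would hide it everywhere. Everything below (the account names, the `DEMOTED` set, the matching logic) is hypothetical, invented purely for illustration, not a description of Twitter’s actual system.

```python
# Toy model contrasting autocomplete demotion with a true shadowban.
# All names and data structures are hypothetical.

ACCOUNTS = ["realDonaldTrump", "HillaryClinton", "Surabian"]
DEMOTED = {"Surabian"}   # excluded from autocomplete suggestions only
SHADOWBANNED = set()     # a true shadowban would hide content everywhere

def autocomplete(prefix):
    """Suggest matching accounts as the user types, skipping demoted ones."""
    return [a for a in ACCOUNTS
            if prefix.lower() in a.lower()
            and a not in DEMOTED and a not in SHADOWBANNED]

def search(query):
    """Full search results still surface demoted accounts."""
    return [a for a in ACCOUNTS
            if query.lower() in a.lower() and a not in SHADOWBANNED]

print(autocomplete("donald"))  # ['realDonaldTrump']
print(autocomplete("sura"))    # []  (demoted: no suggestion offered)
print(search("sura"))          # ['Surabian']  (still one click away)
```

In this toy version, the demoted account is exactly “one click more difficult to find”: it vanishes from the suggestion dropdown but remains atop the results page, which is what the Vice and Gizmodo reports actually describe.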
Which is why, to the extent that this is even a problem, it’s pretty easy to come up with an alternate explanation: This is a side effect of a minimal measure designed to make sure that people aren’t preemptively encouraged to consume bad information from dubious sources. In May, Twitter announced that it would do what it could to ensure “people contributing to the healthy conversation will be more visible in conversations and search”; it seems eminently likely that an algorithmic strategy to make trolls and extremists less visible on the platform semi-accidentally ensnared some Republicans. (And can you blame it?) As HuffPost’s Ashley Feinberg pointed out on Twitter, well-known left-wing podcast hosts (surely, in the grand scheme of things, about as “prominent” as Don Jr.’s spokesperson) are suffering from the same “problem” of being marginally harder to find via search.
Vice quotes a Twitter spokesperson saying that the company is “shipping a change,” but that the “technology is based on account *behavior* not the content of Tweets.” New York Law School professor Ari Ezra Waldman told Vice that, “This isn’t evidence of a pattern of anti-conservative bias, since some Republicans still appear and some don’t. This just appears to be a cluster of conservatives who have been affected.” He added, “If anything, it appears that Twitter’s technology for minimizing accounts instead of banning them just isn’t very good.”
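The spokesperson’s claim that the “technology is based on account *behavior* not the content of Tweets” can be sketched as a simple scoring rule. This is a hedged illustration under invented assumptions (the flagged-account set, the score, and the threshold are all made up), not Twitter’s actual algorithm.

```python
# Hedged sketch of behavior-based demotion: visibility depends on whom an
# account interacts with, never on what it tweets. The signal names and
# the 0.5 threshold are invented for illustration.

FLAGGED = {"known_troll_1", "known_troll_2"}  # hypothetical troll/harasser set

def behavior_score(recent_interactions):
    """Fraction of recent interactions that involve flagged accounts."""
    if not recent_interactions:
        return 0.0
    hits = sum(1 for other in recent_interactions if other in FLAGGED)
    return hits / len(recent_interactions)

def eligible_for_autocomplete(recent_interactions, threshold=0.5):
    # Tweet text is never examined here, only interaction patterns.
    return behavior_score(recent_interactions) < threshold

print(eligible_for_autocomplete(["friend_a", "friend_b"]))            # True
print(eligible_for_autocomplete(["known_troll_1", "known_troll_2"]))  # False
```

A rule like this would explain the reported pattern: it would “semi-accidentally ensnare” any account, of any politics, whose interaction graph overlaps heavily with flagged accounts, which is consistent with Waldman’s reading that this is a clumsy tool rather than partisan targeting.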
That’s a more likely scenario than a cabal of secretive Twitter employees trying to suppress the speech of … Donald Trump Jr.’s spokesperson. Is it bad? Sure, the way a hangnail is bad. But it’s not censorship, and it’s certainly not “shadow-banning.”
It’s well-established that right-wing media and personalities hew further to the right, and are consequently more likely than their left-wing counterparts to interact with fringe accounts (the kind of signal that would indicate to Twitter’s algorithm that an account is a troll or harasser), if not actually spread falsehoods and sensationalized outrage. But by framing this issue as “Republicans being treated differently than Democrats” instead of “minimizing falsehoods and bad faith,” Vice has given conservatives even more ammo with which to claim oppression.
Vice’s story feels an awful lot like one reported two years ago by Gizmodo, which claimed that Facebook was “suppressing conservatives.” In reality, Facebook’s editors were making the editorial judgment call that manufactured, misleading, and hyperpartisan stories from conservative outlets — such as ones about Benghazi, in 2016 — were less relevant to Facebook’s users than breaking stories from mainstream outlets. Similarly, Twitter’s new system seems less about suppressing conservative viewpoints and more about minimizing controversial figures whose primary tactic is to stoke outrage through heavy slant or outright distortion.