Twitter announced on Friday that it will make sweeping, albeit temporary, changes to how its platform works in an effort to prevent the amplification of misinformation about the election — including making it harder for tweets to go viral. The measures, which follow high-profile efforts to prevent President Trump and others from abusing its platform, are likely to have an even greater impact on the feed of its most powerful and prolific user.
Arguably the biggest change that will apply “globally” on the platform will be a new prompt, when users go to retweet something, aimed at encouraging them to add their own commentary with a quote tweet instead of a standard commentless retweet. “Though this adds some extra friction for those who simply want to Retweet,” Twitter executives said in a blog post, “we hope it will encourage everyone to not only consider why they are amplifying a Tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation.” If a user doesn’t add any commentary, the retweet will appear as normal.
There will be two other platformwide changes aimed at promoting more “thoughtful consideration” of what users share and amplify. The company will disable “liked by” and “followed by” recommendations from people whom users don’t follow. Furthermore, Twitter is making a change to its “trending topics” feature, which highlights widely discussed content and has been rightfully criticized for promoting misinformation. While the company isn’t disabling the feature, as many critics want, it says it will stop listing trends in a U.S. user’s personalized “For You” tab unless they include additional context explaining what the trend is and why it’s trending. That means, in theory, that no trending topics will appear for U.S. users unless they have been vetted, fact-checked, and contextualized by Twitter’s curators. The company hopes the added context will “help people more quickly gain an informed understanding of the high volume public conversation in the U.S. and also help reduce the potential for misleading information to spread.” It is also an attempt to prevent organized misinformation campaigns from exploiting or hijacking trending topics in order to artificially gain credence and visibility on the platform.
The retweeting “friction” and the other two changes will go into effect on October 20 and run through at least the “end of election week,” at which point the company will determine whether they should remain in effect longer.
Meanwhile, Twitter reiterated its pledge to prevent candidates from declaring victory before the election outcome is “authoritatively called” via “an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls.” Those tweets, as well as any that incite violence or election interference, or promote COVID-19 misinformation, will be labeled by the company. Beginning next week, anyone who tries to retweet one of those labeled tweets will first see a warning directing them to credible information before they are allowed to share it.
Twitter also says it is going to add “additional warnings and restrictions on Tweets with a misleading information label from U.S. political figures (including candidates and campaign accounts), U.S.-based accounts with more than 100,000 followers, or that obtain significant engagement.” Under the policy, users will only be able to quote tweet those labeled tweets, not retweet, like, or reply to them, and the labeled tweets won’t be algorithmically recommended to users, either. The company has also reserved the right to remove those misleading tweets entirely if necessary.
Twitter’s moves follow Facebook’s announcement that it will temporarily ban political advertising starting on Election Night, while Google says it will ban political and issue advertising starting a week earlier. (Twitter banned political ads, but not issue ads, last year.)
Regardless of how effective Twitter’s countermeasures are, they will undoubtedly prompt more backlash from Trump, who has on multiple occasions sent or retweeted tweets that Twitter has flagged as misinformation and slapped with warning labels — or even removed. “Make no mistake,” a Trump campaign spokesperson said in response to the new policies on Friday, “this corporation is attempting to silence voters and elected officials to influence our election, and this is extremely dangerous for our democracy.”
On the other hand, it’s not clear how quickly — or uniformly — Twitter’s curators will be able to stamp out misinformation before it spreads. In the past, that process has taken hours, allowing plenty of time for the offending messages to reverberate. And as Recode’s Shirin Ghaffary notes, critics have praised Twitter’s new policies but wondered how vigilant and efficient the company would actually be at implementing them:
“As always, the big question for both [Twitter and Facebook] is around enforcement,” wrote Evelyn Douek, a researcher at Harvard Law School studying the regulation of online speech, in a message to Recode. “Will they be able to work quickly enough on November 3 and in the days following? So far, signs aren’t promising” …
Douek said that platforms “need to be moving much quicker and more comprehensively on actually applying their rules.” But, she added, if “introducing more friction is the only way to keep up with the content, then that’s what they should do.”
In a Wired op-ed, communication scholars Mike Ananny and Daniel Kreiss applauded Twitter’s new changes as well, but argued that Twitter and Facebook should also add a time-delay for posts sent by Trump and other political elites:
In those hours [before the social media companies act], as recent research from Harvard shows, Trump is a one-man source of disinformation that travels quickly and broadly across Twitter and Facebook. And we know that the mainstream media often picks up on and amplifies Trump’s posts before platforms moderate them. Journalists report on platforms’ treatments of Trump’s tweets, making that and them the story, and giving life to false claims.
What if we never let Trump’s disinformation breathe to begin with, cutting it off from the social media and mainstream journalism oxygen it craves?
We suggest Twitter and Facebook immediately institute the following process for all of Trump’s social media posts, and for those of other political elites: Any time the president taps “Tweet” or “Post,” his content is not displayed immediately but sent to a small 24/7 team of elite content moderators who can evaluate whether the content accords with these platforms’ well-established policies. This is especially important in the context of electoral and health disinformation, which all the major platforms have singled out as being of utmost importance. Within, say, three minutes, such a team would decide whether to (a) let the post through, (b) let the post through with restrictions, (c) place a public notice on Trump’s account saying that the platform is evaluating a post and needs more time, or (d) block the post entirely because it breaks the company’s policies. The platforms would publicly announce that such a system was in place, they would provide weekly metrics on how many posts the review system had considered and categorized, they would allow those impacted to appeal any decisions, and they would revisit these systems after an experimental period to evaluate their effectiveness.
Many other critics have challenged Twitter and Facebook to make the changes permanent, while others have pointed out that the platforms’ efforts to slow down social media amount to a kind of self-repudiation:
Or as Charlie Warzel noted at the New York Times on Thursday, in response to Facebook’s new efforts to remove extremist groups and conspiracy theorists like QAnon followers:
Facebook’s pre-election actions underscore a damning truth: With every bit of friction Facebook introduces to its platform, our information ecosystem becomes a bit less unstable. Flip that logic around and the conclusion is unsettling. Facebook, when it’s working as designed, is a natural accelerating force in the erosion of our shared reality and, with it, our democratic norms.