Elon Musk Can’t Solve Twitter’s ‘Shadowbanning’ Problem

For several years, social-media users have expressed anxiety about algorithmic suppression. Now they’re getting some unexpected clarity.


Since Elon Musk took over at Twitter, he has apparently spent a considerable amount of time “looking into” the personal complaints of individual users who suspect that they are not as visible on the platform as they should be.

Chaya Raichik, the woman behind the fearmongering account Libs of TikTok, pointed out that she is on a “trends blacklist” and asked, “When will this be fixed @elonmusk?” A popular MAGA shitposter who goes by Catturd ™ wrote that he was “Shadowbanned, ghostbanned, searchbanned.” The far-right personality Jack Posobiec said that “a lot of people” had told him that they couldn’t see his tweets for some reason. And Musk replied to each of them with the same assurance: He would get to the bottom of it.

“Shadowbanning,” in its current usage, refers to a content-moderation tactic that reduces the visibility of a piece of borderline content rather than removing it entirely. It originally referred to something much more dramatic: quieting annoying personalities on message boards by making their posts totally invisible to everyone else. Platforms such as Twitter and Facebook have denied doing anything that extreme, but they do limit content’s reach in various ways—it’s frequently unclear how or why, which makes people suspicious. Shadowbanning can mean that posts aren’t promoted to a wide audience, or it can mean something more severe, such as hiding accounts from search results (platforms tend to blame this on bugs).
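To make the distinction concrete, here is a minimal, hypothetical sketch of binary moderation versus visibility reduction. Nothing here reflects Twitter’s actual systems; the risk score, the thresholds, and the “demote” action are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    LEAVE_UP = "leave up"
    DEMOTE = "demote"  # stays viewable, but excluded from trends, search, recommendations
    REMOVE = "remove"


@dataclass
class Post:
    text: str
    policy_risk: float  # hypothetical score: 0.0 = clearly fine, 1.0 = clear violation


def binary_moderation(post: Post) -> Action:
    # The old model: a post is either up or down.
    return Action.REMOVE if post.policy_risk > 0.5 else Action.LEAVE_UP


def graduated_moderation(post: Post) -> Action:
    # The "shadowban"-adjacent model: borderline content stays up
    # but is excluded from amplification.
    if post.policy_risk > 0.9:
        return Action.REMOVE
    if post.policy_risk > 0.5:
        return Action.DEMOTE
    return Action.LEAVE_UP


borderline = Post(text="inflammatory but not rule-breaking", policy_risk=0.7)
print(binary_moderation(borderline))     # Action.REMOVE
print(graduated_moderation(borderline))  # Action.DEMOTE
```

The same borderline post that a binary system would have to delete outright is merely demoted under the graduated scheme, which is exactly what makes the practice both gentler and harder to detect.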

In general, the practices that slow a post’s spread or limit an account’s reach are intended as consolations or compromises—they allow for more nuanced moderation than a system in which a post is either left up or taken down, and a person is either banned or not. Regardless, shadowbanning has been a recurring grievance among Republicans since 2018, when Donald Trump called it “discriminatory and illegal.” Controversy was renewed in December with the temporary uproar over “the Twitter Files,” a batch of pre-acquisition documents and internal communications about content moderation (including some practices that could be called shadowbanning) that Musk gave to hand-selected reporters.

Although Musk wants to be the hero who ends shadowbanning forever, he’s unlikely to fully assuage paranoia about it. After more than a decade of widespread social-media use, many people have deeply held pet theories about how algorithms work, and about how they affect them personally. So far, the Musk era of Twitter has been a shadowban Rorschach test, with different users seeing a different reality based on the stories they’re already telling themselves about their experiences on the platform. “Thank you @ElonMusk for lifting the #shadowban on controversial views,” an #exvegan who advocates for all-meat diets posted earlier this month. Meanwhile, Catturd ™ tweeted on Friday that he believes “all conservatives accounts are being throttled and hidden again just like before @elonmusk took over ownership.” Other users have also complained that they’re still being persecuted:

  • “It’s so dull & frustrating STILL being under a shadowban”
  • “@Twitter busily shadowbanning folks again, including me”
  • “Hi @elonmusk, can you stop hiding my cleavage from the world?”

Musk recently added “View” counts to the bottom of tweets, presumably to equip users with data: greater insight into whether others actually are seeing their tweets and simply not liking them. The effort appeared mostly to anger people: The view counts were smaller than expected, which was taken as further evidence of shadowbanning.

Effective or not, Musk’s efforts indicate that moderation policy on major social-media platforms is moving into an anti-shadowban era. Users have been loudly agitated by shadowbanning for so long that platforms are finally acquiescing. Instagram introduced an “Account Status” tool in October 2021, which gives creators and business owners limited but meaningful insight into whether a professional account’s content has been marked as ineligible for recommendation (meaning that it won’t be promoted in the app’s Explore section or in other users’ feeds). In December, Musk announced, “Twitter is working on a software update that will show your true account status, so you know clearly if you’ve been shadowbanned, the reason why and how to appeal.” This update has yet to materialize (Musk says it’s coming “no later than next month”), but it’s sure to be popular when it does.

“Sometimes, it feels like everyone on the internet thinks they’ve been shadowbanned,” Gabriel Nicholas, a research fellow at the Center for Democracy and Technology, wrote in The Atlantic last year. In a survey he helped run, 9.2 percent of social-media users said they believed they’d been shadowbanned at some point in the past year.

But of course, these people had no firm proof. Those who believe themselves to be shadowbanned can only swap stories, share data they’ve collected, make arguments, and suggest conspiracies. This is the subject of recent work by Laura Savolainen, a doctoral student in sociology at the University of Helsinki. For a paper published last year, she used a tool called 4CAT to collect thousands of comments about shadowbanning posted in popular Reddit forums about Instagram, YouTube, and TikTok. Sorting through the comments, she saw social-media users sharing bits of what she calls “algorithmic folklore.” They would describe a fluctuation in the engagement on their accounts and then tell a story about what they imagined was causing it. Or they would listen to someone else describe their suspicions and help build on them.
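For a rough sense of what that collection step involves, here is a minimal sketch using the PRAW Reddit library rather than 4CAT itself (4CAT is a web-based research toolkit, not a Python package). The subreddit names, search term, and credentials are placeholders, not details from Savolainen’s study.

```python
import praw  # Reddit API wrapper; requires registered app credentials

# Placeholder credentials -- you would supply your own registered app's values.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="shadowban-folklore-study/0.1",
)

collected = []
for name in ["Instagram", "youtube", "TikTok"]:  # illustrative subreddits
    for submission in reddit.subreddit(name).search("shadowban", limit=50):
        submission.comments.replace_more(limit=0)  # flatten "load more" stubs
        for comment in submission.comments.list():
            if "shadowban" in comment.body.lower():
                collected.append(
                    {"subreddit": name, "post": submission.title, "body": comment.body}
                )

print(f"Collected {len(collected)} comments that mention shadowbanning.")
```

The analysis Savolainen describes then happens on top of a corpus like this: reading how posters narrate their engagement dips and the causes they imagine for them.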

These people invoke data and cite analytics tools that monitor account performance, demonstrating their “heightened awareness” of “ubiquitous numbers,” according to Savolainen. But the way many of them use these numbers is arbitrary: They fill in the gaps with speculation and personal grievance.

“Algorithms are very conducive to folklore because the systems are so opaque,” Savolainen told me. “These wider technological networks connect us to people on the other side of the world, and we don’t know who they are or why they made this decision or that decision.” Obviously, we’re going to have fraught relationships with something that undergirds our social lives and, for many, our financial stability. (In the survey that Nicholas ran, 20 percent of respondents who believed they’d been shadowbanned said it “affected their ability to make a living.”)

Here is where the shadowbanning debate becomes sort of a tragic misunderstanding. People who use social platforms think of themselves, naturally, as people. And they think of the algorithm as one all-powerful thing assessing them and passing judgment. In reality, the people who use these platforms are collections of data. Savolainen explains in her paper that the algorithms behind something like TikTok or Instagram regard their users as “composites of individual features—clusters continuously formed and reformed as the data traces users emit are processed and correlated.”
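A toy illustration of that framing, with invented numbers: represent each user as a vector of behavioral features and cluster the vectors, so that what the system “sees” is cluster membership rather than a person. This is a conceptual sketch, not any platform’s real pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented behavioral features per user: [posts per day, avg. watch time,
# reply rate, report rate]. Real systems use vastly more signals.
rng = np.random.default_rng(seed=0)
users = rng.random((200, 4))

# The system doesn't judge "a person"; it groups feature vectors.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(users)

# As behavior shifts, the same account's data trace can land in a
# different cluster -- "continuously formed and reformed," in
# Savolainen's phrase.
before = kmeans.predict(users[:1])[0]
after = kmeans.predict(users[:1] + 0.3)[0]  # hypothetical behavior change
print(f"user 0: cluster {before} -> cluster {after}")
```

Nothing in that loop ever evaluates “user 0” as a person; the account is just a point that drifts between groups as its data changes.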

In the Reddit comments Savolainen cataloged, there were many people who took their shadowbanning “very, very personally.” They felt persecuted by the algorithm; sometimes, they felt self-doubt. “Am I shadowbanned, or is it just not good-quality content? Maybe I’m shadowbanned, or maybe I’m not that good of a singer after all. I’m not sure which would actually have been worse for people,” she told me.

To her mind, platforms owe us transparency not because it’s fair or because we are all entitled to a certain amount of visibility, but because they have created a fake emotional and mental conundrum for us, and they should resolve it. “Everything that goes on on a platform is already always artificial,” she said. There’s no control against which to compare any post’s performance, because post performance isn’t a concept that exists without social media. The distinction between “now the algorithm is working normally” and “now the algorithm is shadowbanning me” is all in the brain of the beholder. It makes no sense. It’s not reality. (It’s hurting my head.)

The people in charge of most of these platforms would argue that they can offer answers only within limits. If they start revealing every single consideration that goes into every single recommendation decision, people will begin to game the system in ways that nobody will like. Or, if they start providing a ton of context to users about the way their accounts are being treated by various algorithms, there’s no telling what people would actually make of the information. Some may only be further confused by it.

And what’s worse? You may find that you’re not shadowbanned. You may find that ignorance was bliss.