You’ve probably seen the headlines or felt it yourself while scrolling. A Black creator’s video about systemic reform gets "shadowbanned" into oblivion while a video featuring a "low-threat" aesthetic—basically, a specific, Eurocentric version of beauty—races to a million likes. It feels personal. It feels like the app has a mind of its own, and that mind isn't exactly a fan of diversity.
So, why is Instagram so racist? Honestly, it’s not just one guy in a room pulling levers to hide certain people. It’s a messy, layered soup of human bias, lazy coding, and a business model that prioritizes "engagement" over actual equity.
We need to talk about what’s actually happening under the hood.
The Engagement Trap: Why the Algorithm "Prefers" Certain Faces
A massive study from the University of Nevada, Reno, published in July 2025, really pulled the curtain back on this. Researchers looked at over 70,000 Instagram posts. They found that social media users—that's us, the audience—exhibit "subtle signs of racial bias" that the algorithm then amplifies.
Basically, the study found that Black individuals on the platform get more engagement only when they display "low-threat" features. Think big smiles, being older, or appearing more feminine. If a Black creator is "front and center" in a post, engagement often drops compared to when they are in "supporting roles."
The algorithm doesn't "hate" anyone. It just learns. If the collective "we" pauses longer on a white creator’s travel reel than a Black creator’s travel reel, the machine thinks: "Okay, people like this more." It starts feeding that preference back to everyone. It’s a feedback loop of our own implicit biases, and the tech just makes it 10x worse.
Automated Moderation is Kinda Broken
Instagram uses AI to keep the "community safe." But AI is notoriously bad at understanding context.
Take the term "algospeak." You’ve seen it: people writing "rac!st" or "segreg@tion." Why? Because the automated moderation tools are trained to match patterns, not read intent; they often can't tell the difference between someone using a slur and someone talking about their experience with that slur (the sketch after the list below shows this failure in miniature).
- The Intent Gap: A 2025 report from the HKS Misinformation Review highlighted that detection models lack the ability to capture "human intent."
- The "Safe" Filter: Instagram’s "Sensitive Content Control" often flags topics like racial justice or LGBTQ+ rights as "sensitive." This pushes that content out of the Explore page and Reels tab.
- Data Bias: If the AI was trained on "professional" photos that are 90% white, it starts to see anything else as an anomaly or "lower quality."
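Here's what that intent gap looks like in a deliberately naive Python filter. Real moderation models are neural networks, not keyword lists, but the failure mode rhymes: they key on *what* was said rather than *why* it was said.

```python
# A deliberately naive keyword filter, to show the "intent gap" in miniature.
FLAGGED_TERMS = {"racist", "segregation"}  # stand-ins for a real blocklist

def naive_moderator(post: str) -> bool:
    """Returns True if the post should be suppressed."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

# Someone describing their own experience gets the same verdict
# as someone being abusive, because the filter only sees the token:
print(naive_moderator("My landlord was openly racist to me today."))  # True
print(naive_moderator("You are racist trash."))                       # True
# And "algospeak" sails right past the same filter:
print(naive_moderator("My landlord was openly rac!st to me today."))  # False
```

Both the abuse and the lived-experience post get suppressed, while the obfuscated version slips through. That's exactly why people type "rac!st" in the first place.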
There was a case with Nyome Nicholas-Williams, a plus-size Black model. Her photos were repeatedly flagged and removed for "nudity" while similar photos of thin, white women stayed up. This isn't just a glitch; it's a pattern where the AI treats marginalized bodies as inherently "more provocative" or "riskier" than others.
Shadowbanning and the "Whitewashed" Feed
There’s a lot of "algorithmic gossip" out there about shadowbanning. But for creators of color, it’s a lived reality. A 2025 study from City Research Online looked at how women of color in the fitness space have to "train" their own algorithms just to see people who look like them.
The fitness and "wellness" niches on Instagram are notoriously whitewashed. When creators speak up about this, their reach often mysteriously tanks. It’s a "veiled form of content moderation," as the Reynolds Journalism Institute puts it. By downranking accounts that talk about "heavy" topics like racism, Instagram keeps the vibe "advertiser-friendly."
Brands want to sell leggings next to a sunset, not next to a 10-slide carousel about the history of redlining. Because of that, the money—and the algorithm—steers the platform toward "sanitized" content.
Breaking the Cycle: What Can You Actually Do?
If the system is rigged, how do you fix it? You can’t rewrite the code yourself, but you can mess with the machine’s data.
Stop just scrolling past. The algorithm tracks "watch time" and "saves" more than likes. If you see a creator from a marginalized group making great content, save the post. That tells the AI, "This is high-value stuff," which forces it to show it to more people.
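For a sense of why one save can beat a pile of likes, here's a toy scoring function. The real signal weights are not public, so these numbers are pure assumption; only the shape of the math is the point.

```python
# The weights Instagram actually uses are a trade secret; these values
# are purely illustrative. The shape is what matters: a save or a long
# watch moves the needle far more than a like does.
ASSUMED_WEIGHTS = {"like": 1.0, "comment": 3.0, "watch_seconds": 0.5, "save": 8.0}

def engagement_score(signals: dict) -> float:
    """Hypothetical ranking score: a weighted sum of engagement signals."""
    return sum(ASSUMED_WEIGHTS[name] * count for name, count in signals.items())

# One save outweighs five likes under these assumed weights:
print(engagement_score({"like": 5}))                       # 5.0
print(engagement_score({"save": 1}))                       # 8.0
print(engagement_score({"save": 1, "watch_seconds": 20}))  # 18.0
```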
Diversify your "Following" list aggressively. If your feed is a monolith, the algorithm thinks you only want a monolith. By following and engaging with a wider variety of voices, you’re essentially "teaching" your personal algorithm to stop being so biased.
Use the "Not Interested" button on the fluff. If your Explore page is nothing but the same three "aesthetic" influencers, long-press and hit "Not Interested." It clears the path for more diverse content to actually reach you.
Don't rely on the "Home" feed. Use the "Following" tab (tap the Instagram logo at the top left) to see posts in chronological order. This bypasses the algorithmic ranking entirely and shows you what the people you actually chose to follow are saying—no filters, no "sensitivity" downranking.
Instagram is a tool. Right now, it’s a tool that reflects the loudest and most biased parts of society. But by changing how we interact with it, we can at least start to tilt the scale back toward something that looks a little more like the real world.
Next Steps for You:
Check your "Account Status" in settings to see if your content is actually eligible for recommendations. If you notice a sudden drop in reach, try removing hashtags that might be "flagged" and focus on engaging directly with your community through Stories, which are less strictly filtered by the main feed algorithm.