Why videos of people committing suicide keep surfacing and how to actually stop the cycle

The internet has a dark side that doesn't just stay in the corners of the deep web. It's right there, on your TikTok feed, your X (formerly Twitter) timeline, or even shared in a WhatsApp group by someone who didn't know what they were clicking on. We’re talking about videos of people committing suicide, a phenomenon that tech companies claim to be fighting but never quite seem to win against. It’s heavy. It’s traumatic. And frankly, the way these clips go viral says a lot more about our digital infrastructure than it does about the people watching them.

You’ve probably seen the headlines when a specific video "breaks" the internet. Remember the 2020 Facebook Live death of Ronnie McNutt? That's the case most people point to because it was uniquely horrific in its spread. It wasn’t just that the livestream happened; it was that clips migrated to TikTok, where the recommendation algorithm actively pushed them onto the "For You" pages of children who were just looking for dance trends or gaming clips. That’s the real danger.

The Viral Architecture of Graphic Content

Algorithms don't have a moral compass. They prioritize "engagement." When a video—even a tragic one like videos of people committing suicide—starts getting clicks, shares, or long watch times because people are in shock, the system sees that as a "high-value" piece of content. It starts serving it to more people. This is basically a feedback loop of trauma.
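To make that loop concrete, here's a toy ranking function. The weights and signal names are made up for illustration; no platform publishes its real formula, and the real ones are vastly more complex. The point is just that a score built purely from engagement has no way to tell shock apart from genuine interest.

```python
# Toy engagement-weighted ranking score. The weights and field names are
# illustrative assumptions, not any platform's actual formula.

def rank_score(video_stats: dict) -> float:
    """Score a video purely on engagement signals, with no notion of harm."""
    return (
        0.5 * video_stats["avg_watch_seconds"]  # shock keeps people watching
        + 0.3 * video_stats["shares"]           # "warning" re-shares still count as shares
        + 0.2 * video_stats["comments"]         # horrified comments boost this too
    )

# A traumatic clip that people watch in stunned silence and re-share as a warning
# can outscore ordinary content on every one of these signals.
graphic_clip = {"avg_watch_seconds": 41, "shares": 900, "comments": 1200}
dance_trend = {"avg_watch_seconds": 12, "shares": 300, "comments": 150}

print(rank_score(graphic_clip) > rank_score(dance_trend))  # True, so the loop feeds itself
```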

Platforms like Meta, ByteDance, and X use a mix of automated hash-matching, AI classifiers, and human moderators to catch this stuff. Hashing is sort of like taking a digital fingerprint of a video. Once a clip is identified as violating the terms, its hash is added to a database so the system can block it if someone tries to re-upload it. But people are clever. They’ll add a filter, change the aspect ratio, or put a border around the video, and the fingerprint no longer matches.

It’s a constant cat-and-mouse game.
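Here's a minimal sketch of that fingerprinting idea, using a simple "average hash" built with Pillow. Real systems use sturdier schemes (PhotoDNA and PDQ are the usual examples), but the principle is the same: boil a frame down to a short signature and compare signatures by how many bits differ, not by exact bytes.

```python
# Minimal perceptual-hash sketch with Pillow. Illustrative only; production
# matchers use more robust fingerprints and tuned distance thresholds.
from PIL import Image, ImageFilter, ImageOps

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to 8x8 grayscale, then set one bit per pixel brighter than the mean."""
    small = ImageOps.grayscale(img).resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count how many bits differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Stand-in "video frame" plus two common evasion edits.
frame = Image.radial_gradient("L")                      # built-in 256x256 test image
filtered = frame.filter(ImageFilter.GaussianBlur(2))    # light filter
bordered = ImageOps.expand(frame, border=48, fill=0)    # thick black border

# The heavier the edit, the further the fingerprint drifts. A strict equality
# check fails as soon as one bit flips, which is why matchers compare distances
# against a threshold, and why well-chosen edits can push a clip past it.
print(hamming(average_hash(frame), average_hash(filtered)))
print(hamming(average_hash(frame), average_hash(bordered)))
```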

Why Moderation Often Fails

Human moderators are the unsung, traumatized backbone of the internet. They sit in offices in places like the Philippines or Ireland, watching thousands of hours of the worst humanity has to offer. Studies have shown that these workers often develop secondary PTSD. Because they can only look at a clip for a few seconds before moving to the next, things slip through.

Sometimes, a video is disguised. It might start as a video of someone talking about their day, only to cut suddenly to the graphic act. This "bait-and-switch" tactic is exactly how these videos bypass initial AI scans.
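A toy sketch shows why that works against a cheap first pass. Suppose the upload scanner only samples every Nth frame to save compute (the sampling policy and the stub classifier below are assumptions for illustration, not any platform's real pipeline): a short graphic cut buried in an otherwise mundane video can fall right between the samples.

```python
# Hypothetical sketch: sparse frame sampling vs. a bait-and-switch upload.

def classify_frame(frame_label: str) -> bool:
    """Stub standing in for a real image classifier."""
    return frame_label == "graphic"

def first_pass_scan(frames: list[str], sample_every: int = 30) -> bool:
    """Cheap scan: look at every Nth frame only."""
    return any(classify_frame(f) for f in frames[::sample_every])

# 45 seconds of someone talking, a two-second graphic cut, then more talking
# (one label per second of footage).
video = ["talking"] * 45 + ["graphic"] * 2 + ["talking"] * 45

print(first_pass_scan(video, sample_every=30))  # False: the cut lands between samples
print(first_pass_scan(video, sample_every=1))   # True: dense scanning catches it, at far higher cost
```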

The Psychological Impact of "Accidental" Viewing

Most people don't go looking for this. It finds them. When you stumble upon videos of people committing suicide, your brain goes into a state of acute stress.

Psychologists refer to this as "vicarious trauma." You aren't there, but your nervous system reacts as if you are. For young people, whose prefrontal cortex is still under construction, this can be devastating. It creates a sense of "mean world syndrome," where the world feels significantly more dangerous and hopeless than it actually is.

There's also the "Werther Effect," a documented phenomenon named after Goethe's 1774 novel The Sorrows of Young Werther, which was followed by a wave of copycat suicides. When graphic videos are shared widely, it can romanticize or normalize the act for vulnerable individuals who are already struggling. It provides a "script" for something that should never be scripted.

  • Immediate physical reactions: Nausea, racing heart, or inability to sleep.
  • Long-term effects: Flashbacks, increased anxiety, or a desensitization to violence.
  • The "Rabbit Hole" effect: Once you've seen one, the algorithm might incorrectly think you want more "edgy" content.

What the Tech Giants Are Actually Doing

You'd think with billions of dollars, they'd have fixed this. Honestly, they've made progress, but it's not perfect.

The Global Internet Forum to Counter Terrorism (GIFCT) is one of the main bodies through which companies like Google and Meta share their hashing databases. This is a big deal because it means that if a video is caught on YouTube, it’s much harder to post it on Facebook. But fringe sites like 4chan and smaller, loosely moderated forums don't participate in this, and that's where the videos usually originate before migrating to the mainstream.
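The mechanics are simple to picture. The sketch below shows the hash-sharing idea in miniature; the class and method names are made up, not GIFCT's actual interface.

```python
# Illustrative sketch of a shared hash database between member platforms.

class SharedHashDatabase:
    def __init__(self) -> None:
        self._entries: dict[str, str] = {}  # content hash -> platform that reported it

    def contribute(self, content_hash: str, reported_by: str) -> None:
        """A member platform adds the fingerprint of confirmed violating content."""
        self._entries[content_hash] = reported_by

    def is_known(self, content_hash: str) -> bool:
        """Any member checks an upload's fingerprint before letting it publish."""
        return content_hash in self._entries

shared = SharedHashDatabase()
shared.contribute("a3f9c2e1", reported_by="YouTube")  # caught once...

print(shared.is_known("a3f9c2e1"))  # True: blocked at upload by every member that checks
print(shared.is_known("b7d01c44"))  # False: a re-encoded copy gets a new hash, and
                                    # non-member forums never query the database at all
```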

In 2026, we’re seeing more on-device ("edge") scanning, where AI running on your actual phone tries to flag a video before it even renders on your screen. Apple and Google have been experimenting with these "safety layers" to blur sensitive content automatically. It’s a bit controversial because of privacy concerns, but from a mental health perspective, it’s a massive shield.
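As a rough picture of what such a safety layer does, here's a hypothetical sketch: classify each decoded frame locally and blur it before it reaches the screen. The classifier is a stub standing in for a compact on-device model; nothing here reflects Apple's or Google's actual implementations.

```python
# Hypothetical on-device "safety layer": blur a flagged frame before display.
from PIL import Image, ImageFilter

def looks_sensitive(frame: Image.Image) -> bool:
    """Stub classifier: pretend the on-device model flagged this frame."""
    return True

def render_safely(frame: Image.Image) -> Image.Image:
    """Blur flagged frames client-side so the raw content is never shown."""
    if looks_sensitive(frame):
        return frame.filter(ImageFilter.GaussianBlur(radius=24))
    return frame

decoded_frame = Image.radial_gradient("L")  # stand-in for a decoded video frame
shown = render_safely(decoded_frame)        # the user sees a blur plus, in a real app, a tap-to-reveal warning
```

The privacy appeal is that the check runs entirely on the device, so no frame has to be uploaded anywhere for scanning; the controversy is over the scanning happening at all.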

Laws are finally catching up. In the UK, the Online Safety Act puts a massive burden on platforms to protect users from "priority illegal content." If they fail, they face fines of up to £18 million or 10% of global annual turnover, whichever is greater, which for the biggest platforms runs into the billions. In the US, the debate around Section 230 of the Communications Decency Act continues. Section 230 basically says platforms aren't responsible for what users post, but that shield is cracking.

If a platform's recommendation engine, the part that chooses what you see, pushes a video of a suicide to a minor, many legal experts argue the platform should be liable for the harm caused.

How to Protect Yourself and Others

If you see something, don't just scroll past. Your action matters.

  1. Report it immediately. Don't assume someone else has. Every report helps train the AI that this specific version of the clip is harmful.
  2. Do not share it. Even if you're sharing it to "raise awareness" or "warn" people, you are technically helping the algorithm spread it.
  3. Clear your history. If you've accidentally watched a graphic video, go into your app settings, clear your search and watch history, and mark similar clips as "Not interested." This helps reset the algorithm so it doesn't keep serving you similar content.
  4. Talk about it. If you know a friend or a child has seen something, don't ignore it. Ask them how they feel. Sunlight is the best disinfectant for the shame and shock that comes with these images.

Moving Toward a Safer Feed

The reality is that videos of people committing suicide will likely always exist in the dark corners of the web. But they don't have to exist in our pockets. The shift from "reactive" moderation (taking it down after it's reported) to "proactive" prevention (blocking it at the point of upload) is the only way forward.

We need to demand better. Not just better AI, but better ethics from the people who design the apps we spend our lives on. A "like" shouldn't be worth more than someone's dignity or a viewer's mental health.

If you or someone you know is struggling, help is available. You can call or text 988 in the US and Canada, or call the Samaritans free on 116 123 in the UK. These are 24/7 services staffed by people who actually care and want to listen.


Next Steps for Digital Safety:

Audit your social media settings right now. Go to the "Content Preferences" or "Privacy and Safety" section of your favorite apps. Look for "Sensitive Content" filters and ensure they are set to the most restrictive level. This won't catch everything, but it adds a crucial layer of friction.

If you are a parent, use "Family Pairing" features on apps like TikTok to manage what your kids can see. These tools aren't about spying; they're about creating a digital buffer while their brains are still developing the tools to process the world.

Finally, if you have witnessed something traumatic online, reach out to a professional counselor who specializes in digital trauma. You don't have to carry those images alone.