Why Whats Going On Images Keep Taking Over Your Feed

Ever scrolled through Facebook or X and seen a picture that looks just a little bit... off? Maybe it’s a giant shrimp sculpted out of sand, or a flight attendant with seven fingers posing in front of a plane crash that never happened. These are whats going on images, a strange breed of visual content that has basically hijacked the way we interact with the internet.

They’re weird. They’re often fake. And honestly, they’re everywhere.

If you’ve felt like you’re losing your mind trying to figure out if what you’re seeing is a real photograph or a fever dream cooked up by a server farm, you aren't alone. We are currently living through a massive shift in how digital imagery works. It’s no longer about capturing a moment; it’s about capturing an emotion, an outrage, or a "like" at any cost.

The Weird Science of Why We Click

The psychology here is actually pretty simple, even if the tech isn't. Humans are hardwired to notice things that don’t fit. It’s an evolutionary trait. If you see a lion in the grass, you notice it. If you see a "whats going on images" post showing a dog driving a bus, your brain hits the brakes.

Engagement bait is the engine.

Algorithms on platforms like Meta and TikTok don't care if a photo is "true." They only care if you stop scrolling. When you see one of these bizarre images, you might comment "is this real?" or "wow, amazing!" Even a negative comment counts as engagement. This signals the algorithm to show that image to ten more people. Before you know it, a weirdly polished AI-generated image of a "poverty-stricken child" building a mansion out of plastic bottles has 400,000 shares.

Most of these images are created with diffusion models, the technology behind tools like Midjourney and DALL-E 3, which have largely replaced the older Generative Adversarial Networks (GANs). These tools have become so accessible that anyone with a prompt can generate high-fidelity weirdness in seconds.

Spotting the Glitches in the Matrix

You’ve probably noticed that AI has a hard time with the small stuff. Hands are the classic giveaway. While the newest versions of these models have gotten much better, you’ll still see the occasional six-fingered hand or a limb that blends directly into a table.

Look at the backgrounds.

In many whats going on images, the background logic completely falls apart. Text on signs looks like an alien language—it’s "gibberish" script that mimics the shape of letters without actually saying anything. Or maybe the lighting on a person’s face doesn't match the shadows on the ground. These are "artifacts," the digital scars of a machine trying to guess what a photo should look like based on billions of other data points.

There's also a specific "sheen" to AI images. Everything looks a bit too smooth, a bit too HDR, like it was scrubbed with digital soap. Real life is grittier. Real life has dust, imperfect skin textures, and messy lighting.

The Rise of the "Slop" Economy

People are calling this "AI Slop."

It’s low-effort content designed to fill space and harvest ad revenue. Many of the pages posting these images are "bot-run" or managed by content farms looking to grow a massive following quickly. Once the page has enough followers, they pivot to selling low-quality products or scamming users with "giveaways."

It’s kinda sad, really.

There was a famous case recently where an image of an "all-ice" interior of a plane went viral. Thousands of people shared it, genuinely asking which airline offered this. It was totally fake, obviously. But the fact that it generated so much conversation shows how thin the line between reality and "what's going on" has become.

Why the Platforms Aren't Stopping It

You might wonder why Google, Meta, or X don't just ban these.

Money.

Ad revenue is tied to time spent on the platform. If these images keep people commenting and sharing—even if they’re arguing about whether the image is real—the platform is technically "winning." Meta has started labeling some images as "AI Info," but the system is far from perfect. Often, the label is small and easy to miss, or the creator finds a way to bypass the detection filters by slightly tweaking the pixels.

Beyond the Scams: The Creative Side

It’s not all bad. Honestly.

Some creators use whats going on images as a new form of surrealist art. They aren't trying to trick you; they’re trying to make you feel something. It’s the digital equivalent of a Dali painting. When an artist uses AI to create a hyper-realistic forest where the trees are made of glass, they're pushing the boundaries of imagination.

The problem isn't the technology. It’s the intent.

There is a world of difference between a digital artist creating a dreamscape and a scammer trying to trick grandma into clicking a link for a "free cruise" by showing her a photo of a sinking ship that never existed.

How to Protect Your Sanity and Your Feed

If you want to stop seeing this stuff, you have to train your algorithm.

Stop commenting. Even if you want to point out it's fake, just don't. Every interaction is a "vote" for more. Instead, use the "Hide Post" or "See Fewer Posts Like This" feature. This is the only way to tell the machine that you aren't interested in the slop.

Verify before you share.

If an image looks too crazy to be true, it probably is. You can use tools like Google Reverse Image Search or TinEye to see where an image actually came from. If it only appears on weird Facebook groups and has no news articles attached to it, it’s a "whats going on" special.
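Under the hood, reverse-image tools like TinEye compare compact fingerprints of images rather than raw pixels, so near-duplicates still match after resizing or recompression. As a rough illustration (not TinEye's actual algorithm), here is a toy "average hash" sketch that works on a plain grid of grayscale values so it needs no image library; a real pipeline would first decode and downscale the photo.

```python
# Toy "average hash" (aHash) sketch: each pixel becomes a 1 if it is brighter
# than the image's mean, yielding a bit string that survives mild edits.
# Works on a 2-D list of grayscale values (0-255); a real implementation
# would decode and downscale an actual image file first.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the grid's mean brightness."""
    flat = [value for row in pixels for value in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if value > mean else "0" for value in flat)

def hamming_distance(hash_a, hash_b):
    """Count differing bits; a small distance suggests near-duplicate images."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

if __name__ == "__main__":
    original = [[200, 200, 30, 30],
                [200, 200, 30, 30],
                [30, 30, 200, 200],
                [30, 30, 200, 200]]
    # The "same" picture after mild recompression: values shift, pattern survives.
    recompressed = [[190, 205, 40, 25],
                    [195, 210, 35, 20],
                    [25, 35, 195, 205],
                    [40, 20, 190, 210]]
    h1, h2 = average_hash(original), average_hash(recompressed)
    # The distance stays small (here 0): the fingerprint survives recompression.
    print(h1, h2, hamming_distance(h1, h2))
```

The point isn't to build your own TinEye; it's that a viral image leaves a traceable fingerprint, which is exactly why a genuine photo almost always shows up somewhere reputable when you search for it.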

What This Means for the Future of Truth

We are entering an era of "post-truth" imagery.

For well over a century, a photograph was treated as a record of fact. "Pics or it didn't happen" used to mean something. Now, pics don't prove anything. This is a massive shift in how we process information as a society. It puts the burden of proof back on the viewer.

We have to become more skeptical.

It’s not just about weird shrimp or fake plane cabins anymore. This tech is being used in politics, in war zones, and in corporate PR. Deepfakes and manipulated images are becoming tools of influence. Understanding the mechanics of whats going on images is actually a survival skill for the 21st century.

Practical Steps for Navigating the New Visual Reality

  • Check the Source: Look at the account posting the image. Is it a verified news outlet or an account called "Nature Is Awesome 12345" created last month?
  • Zoom In: Check the edges of objects. Do they blur into the background? Are there weird extra limbs or floating objects?
  • Read the Comments: Often, "community notes" or savvy users will have already debunked the image.
  • Trust Your Gut: If it feels like a dream, it’s probably a prompt.
  • Use Official Tools: Look for the "Made with AI" labels that platforms are slowly rolling out.
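One quick, imperfect signal related to the "Check the Source" step: genuine camera photos usually carry EXIF metadata, while images exported from generators or laundered through content farms often carry none. The stdlib-only sketch below simply scans a JPEG's bytes for the EXIF marker. Absence is NOT proof of AI, and metadata can be stripped or forged, so treat this as one clue among several, not a verdict.

```python
# Minimal sketch, using only the standard byte patterns from the JPEG/Exif
# formats: JPEG files start with the SOI marker 0xFFD8, and camera metadata
# lives in an APP1 segment that begins with the ASCII tag "Exif\x00\x00".

def looks_like_jpeg(data: bytes) -> bool:
    """True if the byte stream starts with the JPEG SOI marker (0xFFD8)."""
    return data[:2] == b"\xff\xd8"

def has_exif_marker(data: bytes) -> bool:
    """True if the byte stream contains a JPEG APP1 'Exif' header."""
    return b"Exif\x00\x00" in data

if __name__ == "__main__":
    # Synthetic byte strings stand in for real files here; in practice you
    # would pass the contents of open("photo.jpg", "rb").read().
    with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 16
    without_exif = b"\xff\xd8\xff\xdb" + b"\x00" * 16
    print(looks_like_jpeg(with_exif), has_exif_marker(with_exif))        # True True
    print(looks_like_jpeg(without_exif), has_exif_marker(without_exif))  # True False
```

Newer provenance standards like C2PA "Content Credentials" aim to make this kind of check far more reliable than a raw metadata scan, but platform support is still rolling out.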

The internet is changing. It's becoming more chaotic, more visual, and a lot less "real." By staying informed and keeping a critical eye, you can enjoy the weirdness without falling for the traps. Just remember that next time you see a photo of a cat the size of a skyscraper—it might just be another one of those images.

If you want to keep your digital life clean, start by being ruthless with your "unfollow" button. The more you engage with quality, human-made content, the less space the "slop" has to grow. Pay attention to the creators who actually show their process, the photographers who share their metadata, and the journalists who cite their sources. This is how we keep the internet grounded in reality.