Images of Fake People: Why You Can’t Trust Your Eyes Anymore

You’ve probably seen her. She has perfect sun-kissed skin, a slightly asymmetrical smile that feels "real," and maybe a few stray hairs catching the light. She looks like a barista from Seattle or a hiker in the Alps. But she doesn't exist. She’s never breathed, never bought a coffee, and her entire "life" is just a collection of pixels calculated by a latent diffusion model. Images of fake people have officially moved from the uncanny valley into our daily feeds, and honestly, most of us are totally unprepared for it.

It’s weird.

A few years ago, you could spot a fake because it had six fingers or teeth that looked like a solid white picket fence. Not anymore. With the explosion of tools like Midjourney, Stable Diffusion, and Flux, the barrier to creating a hyper-realistic human has basically vanished. We aren't just talking about deepfakes of celebrities anymore; we're talking about an infinite supply of "average" people used for everything from LinkedIn scams to fast-fashion advertising.

The Tech Behind the Non-Existent

How does this actually work? It isn't just "copy-pasting" eyes and noses. Most of these images of fake people are generated through Generative Adversarial Networks (GANs) or, more recently, diffusion models.

Think of a GAN like an art student and a teacher. The "student" tries to draw a human face, and the "teacher" (who has seen millions of real photos) says, "No, that looks like plastic," or "The lighting on the chin is wrong." They repeat this loop over millions of training iterations until the student gets so good that the teacher can't tell the difference. This is exactly how sites like This Person Does Not Exist—created by engineer Philip Wang in 2019—serve up a fresh, fake face with every refresh of the browser.
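The student/teacher loop can be sketched in a few lines. This toy 1-D version (all names, hyperparameters, and the target distribution are invented for illustration) trains a linear "generator" to mimic samples from a normal distribution—real GANs swap in deep networks but keep the same adversarial structure:

```python
import numpy as np

# Toy 1-D "GAN": the generator tries to produce samples that look like
# draws from N(4, 0.5); the discriminator learns to tell real from fake.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0    # generator:     g(z) = w*z + b
a, c = 0.1, 0.0    # discriminator: d(x) = sigmoid(a*x + c) = P(x is real)
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0)        # latent noise
    fake = w * z + b                # the "student's" drawing
    real = rng.normal(4.0, 0.5)     # a genuine sample

    # Teacher's turn: push d(real) toward 1 and d(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = sigmoid(a * x + c) - label   # d(BCE loss)/d(logit)
        a -= lr * grad * x
        c -= lr * grad

    # Student's turn: adjust g so the teacher scores the fake as real.
    grad_logit = sigmoid(a * fake + c) - 1.0   # generator wants label 1
    grad_fake = grad_logit * a                 # chain rule through d
    w -= lr * grad_fake * z
    b -= lr * grad_fake

# After training, the generator's mean output (b, since z has mean 0)
# has drifted from 0 toward the real data's mean.
```

The key point is that neither side sees a "recipe" for a face (or here, a number): the generator only ever learns from the discriminator's pushback.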

Diffusion models, on the other hand, work backward. They start with a field of random static—pure digital noise—and slowly "denoise" it into a sharp image based on a text prompt. It’s like carving a statue out of a block of marble, except the marble is TV static and the sculptor is an algorithm trained on the entire public internet.
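The carving-from-static loop looks roughly like this. Here an invented closed-form `denoise_step` stands in for the trained neural network that real diffusion models use, so this only shows the control flow, not the learning:

```python
import numpy as np

# Toy "denoising" loop: start from pure noise and repeatedly nudge it
# toward a clean target. A real diffusion model replaces denoise_step
# with a network trained to predict and subtract the noise at each step.
rng = np.random.default_rng(1)
target = np.array([0.2, 0.8, 0.5])   # stand-in for the "clean image"

x = rng.normal(size=3)               # step 0: pure static

def denoise_step(x, t, steps):
    # Illustrative: blend in a bit of the target, shrinking the noise.
    alpha = 1.0 / (steps - t)        # later steps make bolder corrections
    return (1 - alpha) * x + alpha * target

steps = 50
for t in range(steps):
    x = denoise_step(x, t, steps)

# After the final step, x has converged onto the target.
```

In a real model, the text prompt conditions each denoising step, which is how "a barista from Seattle" steers the static toward a particular face.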

Why Companies Love Using Fake Humans

Money. It’s almost always about money.

Hiring a human model is expensive. You have to pay for the talent, the photographer, the studio space, the lighting tech, and the hair and makeup artist. Then there’s the "usage rights"—you might only own those photos for two years before you have to pay the model again.

Now, imagine you’re a startup. You need a "diverse team" photo for your About Us page, but you’re just three guys in a garage. You can go to a site like Generated Photos and buy a pack of 10,000 unique images of fake people for a fraction of the cost of a single professional photoshoot. No contracts. No royalties. No human drama.

  • Social Proofing: Scammers use these faces to create "customer reviews" that look authentic. If a profile picture looks like a friendly grandmother, you’re more likely to trust the 5-star review next to it.
  • Safety and Anonymity: Sometimes it’s actually ethical. Journalists working on sensitive stories about whistleblowers might use an AI-generated avatar to protect a source's identity while still providing a "human" face for the audience to connect with.
  • Marketing Experiments: Brands like Levi’s have experimented with AI models to increase diversity in their displays without actually having to fly dozens of different people to a shoot. It's controversial, sure, but it's happening.

The Ethics of the "Perfect" Nobody

There is a darker side to this, and it isn't just about "fake news."

When we flood the internet with images of fake people who are all mathematically "optimized" to be attractive, we’re creating a new kind of body dysmorphia. These AI models don't have pores. They don't have scars unless the prompt specifically asks for them. They are a distillation of our collective biases—often leaning toward Western beauty standards because that’s what the majority of the training data looks like.

Professor Hany Farid, a digital forensics expert at UC Berkeley, has spent years warning about the erosion of trust in visual evidence. If we can't trust that a person in a photo is real, the "liar's dividend" kicks in: people start claiming that real evidence of their wrongdoing is actually just "AI-generated." It creates a world where the truth is whatever you want it to be.

How to Spot the Fakes (For Now)

Even though the tech is getting scary-good, there are still "tells." You just have to know where to look. AI struggles with the things humans don't even think about.

  1. The Background Blur: AI often creates a "dreamlike" or messy background. If the person is sharp but the trees behind them look like melted Impressionist paintings with nonsensical geometry, be suspicious.
  2. The Jewelry Glitch: Earrings are a classic giveaway. AI often forgets to make the left earring match the right one, or it might fuse an earring directly into the earlobe.
  3. The Eye Reflections: In a real photo, the reflection (the "catchlight") in both eyes should match the light source in the room. In images of fake people, you’ll often see different shapes or patterns in each pupil.
  4. The "Vibe" of the Skin: AI skin often looks either too airbrushed or suspiciously "gritty" in a way that doesn't follow the contours of a face.
  5. The Hair Chaos: Look at where the hair meets the forehead. AI often struggles with individual strands, making them look like they are sprouting directly out of the skin or floating just above it.
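Tell #3 can even be roughed out in code. This is a hedged toy sketch, assuming you have already cropped grayscale patches around each eye; the function name, the scoring, and any threshold you'd apply are all invented for illustration, not a production detector:

```python
import numpy as np

# Toy "catchlight" check: in a real photo lit by one source, the bright
# reflections in both eyes tend to look alike; wildly different patches
# are a (weak) red flag. Returns a 0..1 dissimilarity score.
def catchlight_mismatch(left_eye, right_eye):
    # Normalize each grayscale patch to zero mean, unit variance.
    l = (left_eye - left_eye.mean()) / (left_eye.std() + 1e-8)
    r = (right_eye - right_eye.mean()) / (right_eye.std() + 1e-8)
    corr = float(np.mean(l * r))       # normalized cross-correlation
    return (1.0 - corr) / 2.0          # 0 = identical, 1 = opposite

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
assert catchlight_mismatch(patch, patch) < 0.01      # matching highlights
assert catchlight_mismatch(patch, 1 - patch) > 0.9   # inverted highlights
```

Real forensic tools go much further (estimating the 3-D light direction from each cornea, for instance), but the intuition is the same: physics is consistent, and generators often aren't.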

Who Owns the Face of Someone Who Doesn't Exist?

Current US copyright law is pretty firm: a work created entirely by a machine, without human creative input, cannot be copyrighted. But it's messy. If a photographer uses AI to "enhance" a real person, or if a creator spends ten hours tweaking a prompt to get a specific look, the lines get blurry.

We’re also seeing a rise in "Identity Theft of the Dead." There have been cases where AI models were trained on photos of real people without their consent, creating "new" images that look suspiciously like someone’s deceased relative or a specific influencer.

Where We Go From Here

We aren't going back to a world of "real" photos only. That ship has sailed, sunk, and been replaced by a digital submarine.

The next phase is provenance.

Groups like the C2PA (Coalition for Content Provenance and Authenticity) are working on "digital nutrition labels." The idea is that every image would carry metadata—a digital fingerprint—that tells you exactly where it came from. Was it taken on an iPhone? Was it edited in Photoshop? Was it spat out by a server in a data center?
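The "nutrition label" idea can be miniaturized in code. This sketch binds an invented provenance record to the image bytes with an HMAC signature, so changing a single pixel breaks the seal; real C2PA manifests use certificate-based signatures and a far richer schema, and the key, field names, and sample bytes below are all made up:

```python
import hashlib
import hmac
import json

# Demo key only -- real provenance systems sign with certificates.
SIGNING_KEY = b"demo-key-not-secret"

def make_label(image_bytes, source, tool):
    # Build a provenance record tied to a hash of the exact pixels.
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "source": source,        # e.g. "iPhone camera"
        "tool": tool,            # e.g. "Photoshop", "diffusion model"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify_label(image_bytes, record):
    claim = {k: v for k, v in record.items() if k != "sig"}
    if claim.get("sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False             # pixels changed since labeling
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["sig"])

photo = b"fake image bytes for the demo"
label = make_label(photo, source="iPhone camera", tool="none")
assert verify_label(photo, label)              # untouched image: label holds
assert not verify_label(photo + b"x", label)   # edited image: label breaks
```

The hard part, of course, isn't the cryptography; it's getting every camera, editor, and platform to write and preserve the label.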

Until those labels are everywhere, the responsibility falls on us. We have to stop scrolling on autopilot.

Actionable Steps for Navigating the Synthetic Era

  • Reverse Image Search Everything: If you see a profile picture on a dating app or a "breaking news" tweet that feels off, throw it into Google Lens or TinEye. If it only appears on stock photo sites or AI galleries, it’s a bot.
  • Audit Your Own Content: If you’re a business owner using images of fake people for marketing, be transparent. A simple "AI-generated for illustrative purposes" tag builds more trust than getting "caught" later by a savvy customer.
  • Watch the Hands: It’s a cliché for a reason. Check the knuckles, the way fingers grip objects, and the number of joints. If the person is holding a coffee cup and their thumb is merging with the ceramic, you're looking at a ghost in the machine.
  • Verify the Source: Don’t trust a face; trust the domain. A hyper-realistic person on a website you’ve never heard of is much more likely to be synthetic than a photo on a verified news outlet with a named photojournalist.

The reality is that these images are tools. Like any tool, they can be used to build something cool or break something important. We just need to make sure we’re the ones holding the hammer, not the ones getting hit by it.