This Person Does Not Exist: Why AI Faces Are Changing Everything

You’ve seen them. Maybe you didn't realize it, but you definitely have.

They look like your neighbor, a random barista, or that person you passed on the street three years ago. Except they aren't real. They never were. They are the spectral output of a website called thispersondoesnotexist.com, a digital ghost factory that fundamentally altered how we perceive reality online. When Phillip Wang launched the site in early 2019, it felt like a parlor trick. You refresh the page, and boom: a high-resolution face of a human being appears.

But it's just math.


The site is built on a specific type of artificial intelligence architecture known as a Generative Adversarial Network, or GAN. Specifically, it uses StyleGAN2, a model developed by researchers at NVIDIA. The tech is essentially a "fight" between two neural networks: a generator tries to create an image, and a discriminator tries to guess whether it's real or fake. Over millions of iterations, the generator gets so good at fooling the judge that the judge can no longer tell the difference. Honestly, it's kind of terrifying how well it works.
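To make that "fight" concrete, here is a deliberately tiny sketch of the adversarial game. This is not StyleGAN2: the "generator" is a single number trying to imitate real data clustered at 3.0, and the "discriminator" is a one-variable logistic classifier. Every name and constant here is illustrative, but the alternating update pattern is the same idea NVIDIA's networks play out at massive scale.

```python
import math

# Toy adversarial game. Real "images" are just the value 3.0; the
# generator emits its sole parameter mu; the discriminator is a
# logistic score d(x) = sigmoid(w*x + b). All values are illustrative.

REAL_VALUE = 3.0
mu = 0.0           # generator's output ("fake data")
w, b = 0.0, 0.0    # discriminator's weights
LR = 0.1

def d(x):
    """Discriminator: estimated probability that x is real."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for _ in range(2000):
    real, fake = REAL_VALUE, mu

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    grad_w = (d(real) - 1.0) * real + d(fake) * fake
    grad_b = (d(real) - 1.0) + d(fake)
    w -= LR * grad_w
    b -= LR * grad_b

    # Generator step: nudge mu so the (just-updated) discriminator
    # scores it as real (non-saturating loss: maximize log d(fake))
    mu += LR * (1.0 - d(mu)) * w

# After the back-and-forth, mu has drifted toward the real data at 3.0
print(mu)
```

The key design point is the alternation: neither player ever sees a "correct answer," only the other player's current best guess, which is why the fakes keep improving as long as the judge does.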

The Mechanics of This Person Does Not Exist

How does it actually build a face? It doesn't just copy-paste eyes and noses.

It learns "styles."

Think of it like a recipe. The AI understands the concept of "coarse styles" like pose and face shape, "middle styles" like facial features and hair, and "fine styles" like skin pores and hair color. Each level of style corresponds to coordinates in a learned latent space. When you hit refresh on This Person Does Not Exist, the site picks a random point in that space and renders the result.
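A rough sketch of what "picking a random point" means, assuming a 512-dimensional latent space (StyleGAN2's actual dimensionality) but faking everything else: the slicing into coarse and fine styles, the `mix_styles` and `truncate` helpers, and the layer split are all simplifications of what the real network does with its style vectors.

```python
import random

random.seed(0)

LATENT_DIM = 512  # dimensionality of StyleGAN2's latent vectors

def sample_latent():
    # Hitting "refresh" amounts to drawing a fresh point ~ N(0, I)
    return [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

def mix_styles(w_a, w_b, split=256):
    # Style mixing (toy version): take coarse styles (pose, face shape)
    # from one vector and finer styles (features, color) from another
    return w_a[:split] + w_b[split:]

def truncate(vec, psi=0.7, avg=None):
    # Truncation trick: pull the vector toward the "average face" to
    # trade variety for realism (psi = 1.0 means no truncation)
    if avg is None:
        avg = [0.0] * len(vec)
    return [a + psi * (x - a) for x, a in zip(vec, avg)]

face_a, face_b = sample_latent(), sample_latent()
hybrid = mix_styles(face_a, face_b)   # face_a's pose, face_b's details
safe = truncate(face_a)               # a tamer, more "average" face_a
```

The truncation knob is why the site's faces rarely look extreme: pulling every sample toward the mean sacrifices the weird outliers where the model's mistakes live.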

It's fast. It’s free. And it’s incredibly influential.

The training data for the original model came from the Flickr-Faces-HQ (FFHQ) dataset, which consists of 70,000 high-quality images of real people crawled from Flickr. This is where the ethically murky water starts to rise. While the images were technically public, the people in them never signed a waiver saying, "Sure, use my face to teach a robot how to replace me."

We often talk about "deepfakes" in the context of political videos or celebrity scandals, but these entirely synthetic people are arguably more pervasive. They are the "stock photos" of the 21st century, minus the royalty checks.

Why We Can't Stop Looking at These Faces

There is a weird psychological pull here.

Humans are hardwired to recognize faces. It’s part of our survival instinct. When we look at a generated face from thispersondoesnotexist, our brain tries to find a story. "He looks like a teacher," or "She looks like she’s about to go on vacation." We project humanity onto a grid of pixels.

But if you look closely—really closely—the illusion breaks.

Spotting the Glitches in the Matrix

Even though StyleGAN2 is a masterpiece of engineering, it has "tells." If you’re trying to figure out if a profile picture is a product of thispersondoesnotexist, look for these specific red flags:

  • The Background Blur: The AI is great at faces but terrible at context. Often, the background looks like a psychedelic swirl of colors or a distorted architectural nightmare.
  • The "Third Person" Ear: Sometimes, the AI tries to render a second person standing next to the subject. This usually results in a terrifying, fleshy blob that looks like it belongs in a Cronenberg movie.
  • Asymmetrical Accessories: Look at the earrings. The AI almost never matches them. One ear might have a dangling gold hoop while the other has a silver stud or nothing at all.
  • The Teeth: Older versions of GANs struggled with "monoteeth"—a single, continuous white bar across the mouth. While StyleGAN2 is better, the alignment is often slightly off-center.
  • Eyeglasses: If the person is wearing glasses, the frames often melt into the skin or don't line up properly across the bridge of the nose.

It's basically a game of "Spot the Difference" where the stakes are your trust in the internet.

The Business of Being Fake

Why does this matter for business?

Efficiency.

If you are a startup and you need "team members" for a landing page to look established, you could hire a photographer. Or you could just download five faces from This Person Does Not Exist. It costs zero dollars. This has led to a massive surge in "fake" personas on LinkedIn and Twitter.

In 2020, researchers and journalists discovered networks of fake profiles using these AI faces to push political agendas or engage in corporate espionage. Because the faces are unique, a reverse image search comes up empty. You can't find the "original" because there isn't one. This makes these faces the perfect tool for bad actors.

But it’s not all doom and gloom.

In the gaming industry, developers use GANs to generate endless varieties of non-player characters (NPCs). Instead of having ten "guards" who all look like triplets, you can have a thousand unique individuals. In medical research, synthetic data—including synthetic faces—can be used to train diagnostic tools without violating patient privacy laws (HIPAA).
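One way a game might wire this up, sketched under heavy assumptions: hash each NPC's id into a seed, and use the seed to draw the latent vector a GAN would turn into a portrait. The `npc_latent` helper and the 512-dimension figure are illustrative; the generator network itself is out of scope. The payoff of the design is that every NPC gets a unique, stable face without the game ever storing an image per character.

```python
import hashlib
import random

LATENT_DIM = 512  # matches common GAN latent sizes; illustrative here

def npc_latent(npc_id: str):
    # Same id -> same seed -> same latent vector -> same face,
    # so the world stays consistent across play sessions
    digest = hashlib.sha256(npc_id.encode("utf-8")).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

guard_7 = npc_latent("guard-7")
guard_8 = npc_latent("guard-8")  # a different id yields a different face
```

Because the face is derived from the id, "a thousand unique guards" costs a thousand strings, not a thousand art assets.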

It’s a tool. Like a hammer, it can build a house or break a window.

The Ethical Crossroads

We have to talk about bias. It's the elephant in the room.

Because the training data (FFHQ) was pulled from Flickr, it reflects the biases of that platform's user base. Historically, GANs have struggled with diversity. If the dataset has more Caucasian faces, the AI becomes an "expert" at Caucasian features and a "novice" at others. This leads to digital erasure.

When we rely on thispersondoesnotexist for our "generic" human faces, we risk reinforcing a very narrow definition of what a person looks like.

Furthermore, there is the "Right of Publicity." If an AI generates a face that looks 99% like you, do you own that image? Under current law, generally not. In the U.S., works generated entirely by an algorithm can't be copyrighted, but you also can't easily stop people from using a "lookalike" that the algorithm spit out.

How to Use This Technology Responsibly

If you're a creator or a developer, you might be tempted to use these faces. It's easy. It's tempting. But you should probably follow a few "human-first" rules.

First, don't use them to deceive. If you're using an AI face for a persona, disclose it. Transparency is the only currency left in an AI-saturated world. Second, be aware of the "uncanny valley." If the face looks mostly right but slightly "off," it will trigger a disgust response in your audience.

Third, consider the alternatives. There are now "Ethical AI" companies that pay models for their likeness to create synthetic datasets. Support those. They ensure that the humans who provided the "DNA" for the AI actually get paid.


The Future of Identity

We are moving toward a world where the "default" state of a digital image is "untrustworthy."

This isn't just about thispersondoesnotexist anymore. We have This Vessel Does Not Exist, This Cat Does Not Exist, and even This Chemical Equation Does Not Exist. We are essentially building a mirror world.

The next step isn't just static images. We are already seeing the rise of real-time synthetic video. Soon, you’ll be able to video call someone who looks and sounds like a specific person, but is entirely generated by a local model on a laptop.

What does that do to our social fabric?

It means we have to lean back into "analog" trust. Face-to-face meetings, physical signatures, and shared history will become more valuable as digital "proof" evaporates.

Actionable Steps for Navigating the Synthetic Era

To stay ahead of the curve, you need to develop "AI Literacy." It's no longer an optional skill.

  1. Verify, then trust. If you see a profile that seems too perfect, check the ears and the background. Use a GAN-detection tool or browser extension if you're suspicious.
  2. Audit your own content. If you're using synthetic media, ask yourself if it adds value or just adds noise.
  3. Stay updated on NVIDIA’s releases. The site is just the tip of the iceberg. Following the research papers behind StyleGAN3 and beyond will give you a roadmap of where digital identity is headed.
  4. Support digital provenance. Keep an eye on the Content Authenticity Initiative (CAI). They are working on "nutrition labels" for images that show exactly how an image was created and edited.

The faces on thispersondoesnotexist are beautiful, haunting, and completely fake. They are a reminder that in the digital age, seeing is no longer believing. Believing is now a choice you make after checking the math.