Deepfakes Explained: What Do You Call Videos That Are Fake AI Generated?

You've probably seen that video of Tom Cruise doing magic tricks or Barack Obama saying things he’d never actually say in real life. It’s unsettling. Your brain does this weird double-take because the lighting looks right and the voice sounds spot on, but something feels... off. This brings us to the big question: what do you call videos that are fake AI generated?

Most people just call them deepfakes.

It’s a portmanteau. "Deep learning" meets "fake." Simple, right? But as the tech evolves, the terminology is getting way more nuanced than just one catch-all buzzword. We’re moving into an era where "synthetic media" is the professional term, though nobody is going to say that at a bar. If you’re trying to identify these clips or understand the tech behind them, you’re looking at a collision of neural networks and creative mischief.

The Anatomy of a Deepfake

So, how does this actually happen? It’s not just a fancy Snapchat filter. Deepfakes rely on something called Generative Adversarial Networks, or GANs. Think of it like an art forger and an art critic trapped in a room together. The "forger" (the generator) creates a fake image. The "critic" (the discriminator) looks at it and says, "Nah, that looks like a robot made it." They do this millions of times. Eventually, the forger gets so good that the critic can’t tell the difference anymore.
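The forger-and-critic loop can be caricatured in a few lines of code. This toy sketch (my own illustration, nothing like a production GAN) swaps images for single numbers: the "generator" learns one parameter, and the "discriminator" just keeps a running estimate of where real data lives.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data distribution the forger tries to imitate

def discriminator(x, estimate):
    """Critic: scores how 'real' a sample looks (closer to its estimate = higher)."""
    return 1.0 / (1.0 + abs(x - estimate))

def train_toy_gan(steps=2000, lr=0.05):
    g = 0.0  # generator's single parameter: where it centers its fakes
    d = 0.0  # discriminator's running estimate of where real data lives
    for _ in range(steps):
        real = random.gauss(REAL_MEAN, 0.1)
        fake = random.gauss(g, 0.1)
        # Critic update: nudge its estimate toward the real samples it sees
        d += lr * (real - d)
        # Forger update: move in whichever direction fools the critic more
        if discriminator(fake + lr, d) > discriminator(fake, d):
            g += lr
        else:
            g -= lr
    return g, d

g, d = train_toy_gan()
print(f"generator learned {g:.1f}, real data lives at {REAL_MEAN}")
```

After a couple thousand rounds of this tug-of-war, the generator's output lands right on top of the real distribution, which is the whole point: the critic can no longer tell them apart.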

Ian Goodfellow is the researcher who introduced GANs back in 2014. He probably didn't realize at the time that his academic breakthrough would eventually lead to people putting Nicolas Cage’s face on every character in The Avengers. But that’s the internet for you.

There are different "flavors" of these videos. Face-swapping is the most common: you take the target video and stitch the source (donor) face onto it. Then there’s lip-syncing, where you take an existing video of a politician and make their mouth move to match a completely different audio track. This is arguably more dangerous because it’s easier to make it look "good enough" for a low-res social media feed.

Honestly, the tech is moving faster than our ability to regulate it. We’re seeing "cheapfakes" now too. Those aren't even AI. They’re just videos slowed down or edited with basic software to make someone look drunk or confused. But when people ask what do you call videos that are fake AI generated, they are usually talking about the high-end stuff that requires a GPU and a lot of training data.

Why the Name Matters

Words have power. Slapping the "deepfake" label on everything blurs distinctions that actually matter.

In the legal world, experts like Danielle Citron (a law professor at UVA) often use the term "non-consensual intimate imagery" when talking about the darker side of this tech. It’s a heavy term, but it’s accurate. Most deepfakes created today aren't political parodies; they are weaponized against women. Using the right name helps shift the focus from "cool tech" to "real-world harm."

On the flip side, Hollywood is leaning into synthetic media. Think about The Mandalorian and how they brought back a young Luke Skywalker. That’s technically a deepfake. But Disney isn't going to use that word in a press release because it sounds shady. They call it "de-aging" or "digital resurrection." It’s the same underlying math, just with a better publicist.

How to Spot the Fakes (For Now)

You can’t always trust your eyes anymore. It sucks, but that’s the reality.

However, AI still struggles with the "edges" of things. If you’re watching a video and you’re suspicious, look at the blinking. Early deepfakes were notoriously bad at blinking because the AI was mostly trained on photos of people with their eyes open. It didn't "know" what a blink looked like. They’ve mostly fixed that now, but it's still a good marker.

Look at the jewelry. AI is weirdly bad at earrings. If one earring looks like it's melting into the earlobe, or if the glasses seem to merge with the person's temple, you're looking at a fake.

Lighting and shadows are also a dead giveaway. The AI might get the face right, but it often forgets how a shadow should fall across a collarbone or how light should reflect in a human eye. Look for the "specular highlight"—that little white dot of light in the pupil. In a real human, it should be consistent. In an AI video, it might jitter or disappear entirely.
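Those forensic tells can even be automated. Here’s a toy sketch of the blink heuristic: it assumes you already have one eye-openness value per frame (in a real pipeline that would come from a facial-landmark detector; the threshold and frame rate below are my own illustrative choices).

```python
def blink_count(eye_openness, threshold=0.2):
    """Count blinks: transitions from open (above threshold) to closed (below).

    eye_openness: one eye-aspect-ratio-style value per frame, 0.0 = fully shut.
    """
    blinks = 0
    was_open = True
    for v in eye_openness:
        if was_open and v < threshold:
            blinks += 1
            was_open = False
        elif v >= threshold:
            was_open = True
    return blinks

def looks_suspicious(eye_openness, fps=30):
    """Humans blink roughly 10-20 times a minute; zero blinks in a long clip is a tell."""
    seconds = len(eye_openness) / fps
    return seconds > 10 and blink_count(eye_openness) == 0

# A 15-second clip where the eyes never close once -- classic early-deepfake behavior.
no_blinks = [0.3] * (15 * 30)
print(looks_suspicious(no_blinks))  # True
```

Real detectors combine dozens of signals like this (blinks, highlight jitter, boundary artifacts), but each one is just a statistical "tell" being checked frame by frame.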

The Future of "What Do You Call Videos That Are Fake AI Generated"

We are heading toward a "post-truth" era of media. That sounds dramatic, but it’s true.

Microsoft and Adobe are working on things like Content Credentials. It’s basically a digital watermark that stays with a file to prove it’s "real." Think of it like a nutritional label for a video. It tells you who filmed it, what camera they used, and if it’s been edited by AI. This is part of the C2PA standard.
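The actual C2PA spec involves certificate chains and a detailed manifest format, but the core idea — provenance claims cryptographically bound to the file’s hash — fits in a few lines. Here’s a stripped-down illustration using only the standard library (the key, field names, and claims are all made up; real Content Credentials use X.509 certificates, not a shared secret):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # stand-in; real C2PA signs with certificates

def make_manifest(video_bytes, claims):
    """Bind provenance claims to a file by hashing it and signing the result."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    manifest = {"content_sha256": digest, **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(video_bytes, manifest):
    """Recompute both the hash and the signature; any edit breaks one of them."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and unsigned["content_sha256"] == hashlib.sha256(video_bytes).hexdigest())

video = b"\x00fake video bytes\x00"
m = make_manifest(video, {"camera": "ExampleCam", "edited_by_ai": False})
print(verify_manifest(video, m))         # True
print(verify_manifest(video + b"x", m))  # False: the file was altered
```

That last line is the nutritional label doing its job: change even one byte of the video and the stored hash no longer matches, so the credential fails.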

But here is the kicker: the people making the most convincing fakes aren't going to use those labels. They want to fool you.

We’re also seeing the rise of AI-generated audio, which is often paired with these videos. This is called "voice cloning." Companies like ElevenLabs can take a 30-second clip of your voice and recreate it convincingly. When you combine a face-swap with a cloned voice, you get a "full-body" deepfake. It’s remarkably easy to do now. You don't need a PhD; you just need a decent graphics card and a few hours of YouTube tutorials.

Misconceptions and Nuance

A lot of people think deepfakes are just for misinformation. That's a huge part of it, sure. But there’s a massive industry growing around localized advertising. Imagine a commercial where the actor’s mouth moves perfectly to speak Spanish, French, and Japanese. No more awkward dubbing. That’s a positive use of the tech.

Then there’s the "Liar’s Dividend." This is a term coined by professors Bobby Chesney and Danielle Citron. It describes a situation where a real person does something bad on camera, but they just claim the video is a deepfake to get away with it. "It wasn't me, it was AI." As the tech gets better, this excuse gets more believable. It undermines the very idea of video evidence.

Real-World Impact

In May 2023, a fake AI-generated image of an explosion near the Pentagon went viral on X (formerly Twitter). It caused a brief dip in the stock market. It was only an image, but for a few minutes, enough people believed an attack had occurred to move real money.

In 2024, New Hampshire voters received a robocall that sounded exactly like Joe Biden telling them not to vote. It was a deepfake audio clip. This isn't science fiction; it's happening during election cycles right now.

So, when we ask what do you call videos that are fake AI generated, we aren't just talking about a technical definition. We’re talking about a tool that can destabilize markets and shift the course of democracy. It sounds like hyperbole until it hits your newsfeed.

Taking Action: What You Can Do

The best defense is a healthy dose of skepticism.

If you see a video that seems too good (or too bad) to be true, don't share it immediately. Check multiple sources. If a major world leader said something insane, every major news outlet would be covering it. If it’s only on a random account with 200 followers, it’s probably a fake.

Practical Steps:

  • Reverse Image Search: Take a screenshot of the video and run it through Google Images or TinEye. Often, you’ll find the original "source" video that the AI was built on.
  • Check the Source: Look for the "original" upload. High-quality deepfakes usually start on niche forums or specific AI-generation sites before hitting the mainstream.
  • Educate Others: Tell your parents and grandparents about deepfakes. They are often the most vulnerable to these types of digital scams.
  • Use Tools: Platforms like "Deepware" or "Sensity" offer scanners that try to detect AI artifacts in videos. They aren't 100% accurate, but they help.
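That reverse-image-search step works because search engines compare perceptual hashes — fingerprints that survive re-encoding and resizing. Here’s a minimal sketch of the classic average-hash idea on a toy 3x3 grayscale grid (real tools use larger grids and more robust hashes; the pixel values here are invented):

```python
def average_hash(pixels):
    """Average hash: one bit per pixel, set if the pixel is brighter than the mean.

    `pixels` is a flat list of grayscale values; real tools first resize the
    image to a small fixed grid (e.g. 8x8) so the hash is size-invariant.
    """
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))

original     = [10, 200, 30, 220, 15, 210, 25, 205, 12]
recompressed = [12, 198, 33, 218, 14, 212, 27, 203, 10]  # same image, re-encoded
unrelated    = [100, 90, 110, 95, 105, 98, 102, 101, 99]

h1, h2, h3 = (average_hash(p) for p in (original, recompressed, unrelated))
print(hamming(h1, h2))  # 0: re-encoding didn't change the fingerprint
print(hamming(h1, h3))  # 6: different picture entirely
```

This is why a screenshot of a deepfake often leads you straight back to the original clip the AI was trained on: compression noise changes the bytes, but it rarely changes the fingerprint.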

We are in the middle of a massive shift in how we consume information. Understanding the tech is the first step toward not being fooled by it. Whether you call them deepfakes, synthetic media, or AI-generated fakes, the reality is the same: seeing is no longer believing.

Keep your eyes peeled for those melting earrings and the weirdly static lighting. The bots are getting better, but they still have "tells" if you know where to look. Verify before you vilify. Stay skeptical. Check the metadata. This isn't just about cool videos anymore; it's about the truth.