You’ve probably seen the video of Tom Cruise doing magic tricks on TikTok. Or maybe the clip of Barack Obama calling Donald Trump a "total and complete dipshit" (that one was a PSA made with comedian Jordan Peele). If you looked closely at the Cruise video, something felt... off. The skin was a little too smooth. The lighting on his chin didn't quite match the room. That’s because it wasn't Tom Cruise. It was a deepfake.
Deepfakes are basically the next evolution of Photoshop, but for video and audio. Instead of a human editor painstakingly moving pixels around, an artificial intelligence does the heavy lifting. It learns a face. It maps it. It stitches it onto someone else's body. It sounds like science fiction, but it’s sitting in your pocket right now.
The term itself is a portmanteau of "deep learning" and "fake." It first bubbled up on Reddit back in 2017, when a user going by the handle "deepfakes" started swapping celebrity faces into adult content. It was messy then. Now? It's terrifyingly good.
How Deepfake Tech Actually Works Under the Hood
Forget the "magic" for a second. The engine behind a deepfake is usually something called a Generative Adversarial Network, or a GAN. Think of it like a master art forger and a world-class detective locked in a room together.
The "Generator" (the forger) tries to create a fake image. The "Discriminator" (the detective) looks at it and says, "Nope, that looks like a robot, try again." They do this thousands of times. Every time the detective catches a flaw, the forger learns. Eventually, the forger becomes so good that the detective can't tell the difference between the fake and the real photo.
The Data Hunger
To make a high-quality deepfake, you need data. Lots of it.
If you wanted to swap my face onto a movie star, the AI would need thousands of images of my face from every possible angle. It needs to know how my mouth moves when I say "O" versus "E." It needs to see my eyes squint in the sun. This is why celebrities are the primary targets; there are millions of frames of their faces available online for free.
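To make that concrete, here's roughly how raw training material gets harvested: point a script at a video and save every detected face. This is a minimal sketch using OpenCV's bundled Haar-cascade detector; the file name interview.mp4 and the faces/ output folder are placeholders, and serious pipelines use far better detectors plus face alignment.

```python
# Sketch: harvest face crops from a video using OpenCV's bundled
# Haar-cascade detector. "interview.mp4" and "faces/" are placeholders.
import os
import cv2

os.makedirs("faces", exist_ok=True)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture("interview.mp4")

count = 0
while True:
    ok, frame = cap.read()
    if not ok:                      # end of video
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.imwrite(f"faces/face_{count:05d}.jpg", frame[y:y + h, x:x + w])
        count += 1
cap.release()
print(f"saved {count} face crops")  # a convincing model wants thousands
```

A single half-hour interview yields tens of thousands of frames. That's the uncomfortable math behind why anyone with a public video presence is a potential target.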
Ian Goodfellow, the researcher who invented GANs back in 2014, didn't set out to break reality. He was looking for a way to let machines generate realistic data. But like any tool, it got weaponized fast.
It's Not Just About Faces Anymore
When people talk about what a deepfake is, they usually picture a face-swap. That’s just the tip of the iceberg. We’re now seeing voice cloning that is so accurate it’s being used in bank heists.
Back in 2019, the CEO of a UK-based energy firm was tricked into transferring roughly $243,000 (about €220,000) to a scammer. How? He thought he was on the phone with his boss. The AI had captured the boss's German accent and specific speech patterns perfectly. The CEO didn't even hesitate. He just hit "send."
There's also liveness-detection bypass, where hackers use a deepfake to trick biometric security systems. Your bank thinks it's looking at your face via the front-facing camera, but it's actually looking at a digital mask.
- Face Swaps: Replacing one person's face with another's.
- Lip Syncing: Making a person say things they never said by altering their mouth movements.
- Puppetry: Using one person's body movements to control a digital avatar of someone else.
- Voice Synthesis: Creating a vocal blueprint from just a few seconds of recorded audio.
The Viral Reality of Deepfakes
You probably remember the "DeepTomCruise" account. Chris Ume, a visual effects artist, worked with a Cruise impersonator named Miles Fisher to create those clips. It wasn't just raw AI. It was a hybrid of a talented actor and a high-end AI model.
This highlights a huge misconception: people think you can just click a button and get a perfect deepfake. You can't. Not yet. The "perfect" ones still require hours of post-production and a "base" actor who shares the same bone structure as the target.
However, the barrier to entry is dropping. Apps like Reface or Wombo allow anyone to do "cheap fakes" in seconds. These aren't going to fool a forensic expert, but they’re good enough to spread misinformation on a fast-moving Twitter thread.
Political Fallout
Hany Farid, a professor at UC Berkeley and a leading expert in digital forensics, has been shouting from the rooftops about the "liar’s dividend."
This is a weird side effect of deepfake technology. It’s not just that people will believe fake things; it’s that they will stop believing real things. If any video can be fake, then a politician caught in a real scandal can simply say, "That’s a deepfake," and their supporters will believe them.
We saw a version of this in Gabon. President Ali Bongo had been out of the public eye due to illness. When the government released a video of him on New Year's Eve 2018 to prove he was okay, the opposition claimed it was a deepfake. It helped spark an attempted military coup in January 2019. The video was likely real, but the possibility of it being fake was enough to trigger chaos.
How to Spot a Fake (For Now)
The tech is evolving, but it's not perfect. Humans are incredibly good at sensing the "uncanny valley," that feeling of revulsion when something looks almost human but isn't quite right.
- The Blink Test: Early AI models struggled with blinking. The training data usually consisted of photos of people with their eyes open, so the AI didn't "know" how to render a blink. Modern models are better at this, but it's still worth watching for (a rough version of this check is sketched in code just after this list).
- Edge Distortion: Look at the hairline or the jawline. If the person moves their head quickly, the "mask" might lag or blur at the edges.
- The Inside of the Mouth: AI is notoriously bad at teeth and tongues. If the mouth looks like a blurry white block, or if the teeth seem to shift shape while the person talks, you're probably looking at a deepfake.
- Lighting Inconsistency: If the shadows on the nose go left but the shadows on the cheeks go right, something is wrong.
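For the curious, here's what the blink test from the list above looks like as an actual check: compute an "eye aspect ratio" (EAR) from facial landmarks and count how often it dips. This sketch uses MediaPipe's FaceMesh; the landmark indices are the ones commonly used for the left eye, but treat them, the 0.2 threshold, and the suspect_clip.mp4 filename as assumptions to verify, not gospel.

```python
# Sketch: count blinks via the "eye aspect ratio" (EAR), which collapses
# when the eye closes. Landmark indices are the commonly used MediaPipe
# FaceMesh points for the left eye -- verify against the FaceMesh docs.
from math import dist

import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # corner, top, top, corner, bottom, bottom
EAR_CLOSED = 0.2                          # assumed threshold; tune per video

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder input file
blinks, closed = 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue                            # no face detected in this frame
    lm = result.multi_face_landmarks[0].landmark
    h, w = frame.shape[:2]
    pts = [(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE]
    ear = eye_aspect_ratio(*pts)
    if ear < EAR_CLOSED and not closed:     # eye just closed
        closed = True
    elif ear >= EAR_CLOSED and closed:      # eye reopened: one full blink
        closed, blinks = False, blinks + 1

cap.release()
print(f"counted {blinks} blinks")
```

People blink around 15 to 20 times a minute, so a talking-head clip where the count stays near zero deserves a harder look. A normal blink rate doesn't prove anything, though; modern models render blinks just fine.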
Honestly, the best way to spot one isn't technical. It's logical. Ask yourself: "Would this person actually say this in this setting?" If a photo shows the Pope in a Balenciaga puffer jacket (another famous AI-generated image), it probably isn't real.
The Law is Racing to Catch Up
Currently, the legal landscape is a mess. In the United States, as of this writing, there is no federal law that specifically bans deepfakes. Most of the legal battles fall under existing categories like defamation, copyright infringement, or the "right of publicity."
California passed a law (AB 730) that prohibits the distribution of "materially deceptive" audio or video of political candidates within 60 days of an election. But enforcement is a nightmare. By the time a video is flagged and taken down, it’s already been seen by ten million people.
Then there's the issue of non-consensual imagery. A 2019 report by Deeptrace (the company now known as Sensity AI) found that roughly 96% of deepfakes online were non-consensual pornography targeting women. This isn't a "technology" problem; it's a harassment problem scaled by AI.
The Positive Side (Yes, There Is One)
It’s easy to get bogged down in the doom and gloom. But deepfake tech—or "synthetic media" as the industry calls it—has some genuinely cool uses.
In the film Top Gun: Maverick, Val Kilmer’s voice was reconstructed using AI because he had lost his speaking voice to throat cancer. A company called Sonantic (now owned by Spotify) used old recordings of his voice to create a model that could speak his lines with all his original grit and emotion.
Education is another big one. Imagine a history lesson where a "deepfaked" Abraham Lincoln reads the Gettysburg Address. Or a museum where a digital version of Salvador Dalí greets you and explains his paintings. These things already exist. The Dalí Museum in Florida has an AI Dalí that takes selfies with visitors.
Protecting Yourself in a Synthetic World
You don't have to be a tech genius to defend yourself against the downsides of this tech. It’s mostly about changing your "default" setting from trust to verification.
Start by tightening your social media privacy. Scammers need a "source" to clone your voice or face. If your Instagram is public and full of videos of you talking, you’re providing the raw materials.
If you get a suspicious call from a family member asking for money, have a "safe word." It sounds paranoid, but a simple word that only your inner circle knows can instantly debunk a voice-cloned scam.
Verify the source. If a shocking video appears on your feed, don't share it immediately. Check reputable news outlets. Look for the "original" upload. If the only place a video exists is a random "TruthSeeker2024" account on X, it’s probably garbage.
The reality is that deepfake technology is here to stay. We can't put the genie back in the bottle. The "Information Age" is over; we are now in the "Verification Age."
Immediate Action Steps
- Audit your digital footprint: Set your high-video social accounts (like TikTok or Instagram) to private if you don't use them for business.
- Establish a family "Safe Word": Pick a unique word or phrase to use during phone calls if someone is asking for urgent help or financial transfers.
- Install detection tools: Services like Reality Defender are beginning to hit the market, though they are still in early stages.
- Practice "Lateral Reading": When you see a controversial video, open a new tab and search for the event + "fact check" before reacting.