I Made Deepfakes of My Friends: What Happened When the Novelty Wore Off

It started with a video of a dancing cat that had my roommate's face. We laughed for twenty minutes. At that moment, the tech felt like a toy, a digital parlor trick that belonged in the same bucket as Snapchat filters or those "which Disney character are you" quizzes. But things changed quickly. Once I made deepfakes of my friends, the atmosphere in our group chat shifted from amused to deeply unsettled. It wasn't because the videos were bad; it was because they were too good.

We are living through a period where the barrier to entry for synthetic media has basically crumbled. You don't need a basement full of NVIDIA H100s anymore. You just need a decent mobile app or a subscription to a cloud-based face-swapper.

The reality of deepfaking people you actually know—people you see every day—is far weirder than the headlines about celebrity parodies suggest. It forces you to confront the fact that your likeness is no longer under your exclusive control. Honestly, it’s a bit terrifying.

The Technical Reality of Making Deepfakes in 2026

The "how" isn't a secret. Tools like Roop, DeepFaceLab, and even consumer-grade apps like Reface have made the process trivial. Under the hood, these systems rely on Generative Adversarial Networks (GANs) or closely related encoder-decoder models; the GAN version is the easiest to picture. Think of it as two AI models playing a high-stakes game of "catch me if you can." One model generates the face, and the other tries to spot the fake. They go back and forth until the generator is so skilled that the discriminator can't tell the difference.
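
To make that "catch me if you can" game concrete, here is a minimal sketch of a single GAN training step in PyTorch. It is a toy illustration of the general idea, not the architecture behind Roop, DeepFaceLab, or any consumer app; the layer sizes and learning rates are arbitrary placeholders.

```python
# Toy sketch of the adversarial "catch me if you can" loop.
# Not the architecture used by Roop, DeepFaceLab, or Reface.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # arbitrary toy sizes

# Generator: turns random noise into a fake image (flattened to a vector).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how likely an input image is to be real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. The discriminator learns to separate real faces from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. The generator learns to fool the discriminator into answering "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Training alternates those two steps thousands of times, and the point where the discriminator stops winning is roughly the point where human eyes stop winning too.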

When I started experimenting, I used a simple open-source script. I fed it about 50 photos of my friend, Mark. Some were from his Instagram, others were candid shots from a camping trip. The AI didn't care about the context. It just wanted the geometry of his jawline and the specific way his eyes crinkle when he’s annoyed.
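
Before the training itself runs, most workflows have a mundane prep step: detect and crop the face out of every source photo so the model trains on aligned face crops rather than whole images. A rough sketch of that step using OpenCV's stock face detector might look like the following; the folder names, crop size, and detector settings are placeholders, not the exact script I ran.

```python
# Rough sketch of the data-prep step: crop faces out of a folder of
# photos so the trainer sees aligned face crops, not whole images.
# Paths, crop size, and detector settings are placeholders.
import pathlib
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

src = pathlib.Path("photos/mark")         # the ~50 source photos
dst = pathlib.Path("dataset/mark_faces")  # what the trainer will read
dst.mkdir(parents=True, exist_ok=True)

for i, path in enumerate(sorted(src.glob("*.jpg"))):
    image = cv2.imread(str(path))
    if image is None:
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for j, (x, y, w, h) in enumerate(faces):
        crop = cv2.resize(image[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(str(dst / f"face_{i:03d}_{j}.jpg"), crop)
```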

After about three hours of processing, I had a file. It was a video of a professional Muay Thai fighter, but it was Mark’s face taking the hits and delivering the kicks. The skin textures matched. The lighting on his forehead shifted perfectly as he moved under the gym rafters. It was seamless.

But here’s what the tutorials don't tell you: the "uncanny valley" is a physical sensation. When I showed it to Mark, he didn't cheer. He stared at his own face doing things his body couldn't do, and his first reaction was to touch his own cheek, as if checking if it was still there.

Why the "Source Material" Matters So Much

The quality of a deepfake depends heavily on the variety of the input. If you only have front-facing headshots, the AI struggles with profiles. It creates a weird "smearing" effect. To make the videos of my friends look real, I had to hunt for diverse angles; a rough way to sanity-check a source folder for this is sketched after the list below.

  • Low-light photos help the AI understand shadows on the skin.
  • High-resolution video allows the model to map pores and fine lines.
  • Expressions like laughing or shouting are the hardest to get right because the mouth interior—teeth and tongue—is often "hallucinated" by the AI.
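
A quick way to sanity-check a source folder against those points is to summarize how varied the photos actually are before you waste hours training. The thresholds below are rough guesses, not values from any particular tool.

```python
# Quick dataset audit: summarize how varied the source photos are in
# resolution and lighting. Thresholds are rough guesses, not values
# from any particular tool.
import pathlib
import cv2

HIGH_RES = 1024   # min(width, height) above this keeps pores and fine lines
DARK_MEAN = 60    # mean gray value below this counts as a low-light shot

total = high_res = low_light = 0
for path in sorted(pathlib.Path("photos/mark").glob("*.jpg")):
    image = cv2.imread(str(path))
    if image is None:
        continue
    total += 1
    h, w = image.shape[:2]
    high_res += min(h, w) >= HIGH_RES
    low_light += cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).mean() < DARK_MEAN

print(f"{total} usable photos")
print(f"{high_res} high-resolution (detail for pores and fine lines)")
print(f"{low_light} low-light (teaches the model how shadows sit on skin)")
```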

The biggest issue wasn't the tech. It was the social fallout. Even though I made deepfakes of my friends with their knowledge, the "implied consent" felt flimsy.

We often talk about deepfakes in the context of misinformation or political interference. Those are massive, systemic problems. However, on a personal level, the issue is "identity autonomy." When you see a video of yourself saying something you never said, your brain experiences a brief, violent flicker of cognitive dissonance. It feels like a violation, even if the content is harmless.

I remember making a clip of my friend Sarah "singing" an opera solo. Sarah can't carry a tune to save her life. The video was funny for five seconds. Then she asked me to delete it. She said seeing herself with that level of confidence and a voice that wasn't hers felt like someone had stolen her skin. That’s the phrase she used. "Stolen my skin."

There is currently no federal law in the U.S. that broadly criminalizes the creation of non-consensual deepfakes unless they are pornographic or used for fraud. Some states, like California and Virginia, have moved faster to implement civil remedies. But in a friendship? There are no lawyers. There’s just the awkward silence in the group chat when you realize you’ve crossed a line.

The Ethics of "Just Joking"

Most people who experiment with this tech think they are in the clear because their intentions are good. "It’s just a prank, bro." But intent doesn't negate the psychological impact. Research from University College London has suggested that our brains process "self-face" stimuli differently than any other visual data. When that data is manipulated, it triggers a "threat" response in the amygdala.

Basically, we are biologically hardwired to be creeped out by deepfakes of ourselves.

Detection Is Getting Harder, Not Easier

A few years ago, you could spot a deepfake because the subject wouldn't blink. The AI models weren't trained on "closed-eye" data. That’s a solved problem now. Then, people said to look at the ears or the jewelry. AI used to struggle with the complexity of a dangling earring.

Now? The models are smarter.

If you are trying to figure out if a video of your friend is real, you have to look for "digital artifacts." Look at the boundary between the hair and the forehead. Sometimes you'll see a faint shimmering or a "halo" where the AI struggled to blend the textures. Check the shadows. Does the shadow on the wall match the movement of the person? Often, the AI swaps the face but forgets to update the environmental reflections.
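
If you want to put a rough number on that "shimmering," one crude heuristic is to track how much fine detail in a fixed region (say, the hairline) changes from frame to frame; blended regions sometimes flicker where real footage changes smoothly. The sketch below is an illustration of the idea, not a reliable detector, and the region coordinates and filename are placeholders you would set per video.

```python
# Crude heuristic for the "shimmering boundary" check: measure how the
# fine detail in one fixed region jumps between frames. The coordinates
# and video name are placeholders; this is not a real deepfake detector.
import cv2
import numpy as np

VIDEO = "suspect_clip.mp4"
X, Y, W, H = 300, 80, 200, 60  # hypothetical hairline region

cap = cv2.VideoCapture(VIDEO)
sharpness = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    region = cv2.cvtColor(frame[Y:Y + H, X:X + W], cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a cheap proxy for fine detail.
    sharpness.append(cv2.Laplacian(region, cv2.CV_64F).var())
cap.release()

if len(sharpness) > 1:
    jumps = np.abs(np.diff(sharpness))
    print(f"mean frame-to-frame detail change: {jumps.mean():.1f}")
    print(f"largest jump: {jumps.max():.1f} (big spikes can hint at blending flicker)")
```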

But honestly, the best way to detect a deepfake of a friend isn't technical. It’s behavioral. You know your friends. You know their cadence, their specific brand of sarcasm, and the way they move their hands. Deepfakes are currently "soulless" in their movement. They are puppets. They lack the micro-gestures that make a person real.

The Future of "Identity Insurance"

As I made deepfakes of my friends, I started thinking about the long-term implications. We are entering an era where "seeing is believing" is a dead concept. This has huge ramifications for everything from insurance claims to "proof of life" in emergencies.

Some tech companies are working on "Content Provenance." The idea is that your phone's camera will digitally sign every photo and video you take, creating a cryptographically verifiable trail of authenticity. If a video doesn't carry that signature, it's assumed to be synthetic.
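
The core mechanism is ordinary public-key signing: the capture device signs the file's bytes, and anyone holding the matching public key can later check that the bytes haven't changed. Real provenance standards such as C2PA embed signed manifests inside the media itself; the sketch below only shows the sign-and-verify handshake, with the key handling and filename simplified to placeholders.

```python
# Toy illustration of content provenance: sign a clip's bytes at capture
# time, verify them later. Real standards (e.g. C2PA) embed signed
# manifests in the file; this only shows the basic handshake.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The "camera" holds a private key; verifiers get the public key.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

with open("clip.mp4", "rb") as f:
    video_bytes = f.read()

signature = camera_key.sign(video_bytes)  # shipped alongside the file

# Later, anyone can check the clip hasn't been altered since capture.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: the bytes match what the camera signed.")
except InvalidSignature:
    print("Signature invalid: the file was modified or never signed.")
```

The hard part isn't the cryptography; it's getting every camera, editing app, and platform in the chain to preserve and display that signature.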

Until that becomes standard, we are in a bit of a "Wild West" phase.

What You Should Do If You Want to Experiment

If you’re curious about this tech, don't just start swapping faces. It’s a fast way to lose friends. Here is how to handle it if you really want to dive in:

  1. Ask for Explicit Consent First: Don't surprise them. Explain what you're doing and why.
  2. Use "Sandboxed" Data: Use photos they’ve specifically sent you for this purpose rather than scraping their social media.
  3. Keep it Private: Never post a deepfake of someone else to a public forum without their written permission. Even if it’s "funny."
  4. Know the Platforms: Be aware that uploading a deepfake to platforms like YouTube or TikTok can get your account flagged for "manipulated media" violations, even if the subject is okay with it.

The New Social Contract

The most important takeaway from my time messing with this tech is that we need a new social contract. We used to worry about people taking out-of-context screenshots. Now, we have to worry about people rewriting our entire physical history.

When I made deepfakes of my friends, I thought I was being the "tech-savvy" one in the group. In reality, I was just the first one to stumble into a very complicated ethical swamp. The novelty of seeing your buddy's face on a movie star wears off in about ten minutes. The realization that your digital identity is now a liquid asset? That stays with you.

If you’re going to play with these tools, do it with a high degree of empathy. The technology is accelerating, but human psychology is still stuck in the same place it’s been for thousands of years. We want to be seen for who we are, not for what an algorithm thinks we look like.

Actionable Steps for Protecting Your Likeness

  • Audit your social media: If you have high-resolution videos of yourself speaking directly to the camera (vlogs, etc.), those are goldmines for deepfake creators. Consider moving them to "friends only" or private.
  • Establish a "Safe Word": For families, having a non-digital "safe word" or phrase is becoming a genuine security necessity. If you get a video or call from a loved one asking for money or claiming to be in trouble, ask for the word.
  • Use Watermarks: If you are a creator, use subtle watermarks or "noise" filters on your videos that make it harder for AI models to scrape your facial data cleanly; a rough sketch of the noise idea follows this list.
  • Stay Informed: Follow organizations like Sensity AI or the MIT Media Lab, which track the evolution of synthetic media. Knowledge is the only real defense we have left.
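
For that "noise" filter, the bluntest version is adding faint pixel noise before posting so scrapers get slightly degraded face data. Dedicated research tools do far more targeted perturbation; the sketch below, with an arbitrary noise strength and placeholder filenames, is only meant to show the shape of the idea.

```python
# Sketch of the "noise filter" idea: add faint pixel noise before posting
# so clean, high-detail face data is a little harder to scrape.
# Noise strength and filenames are arbitrary placeholders.
import numpy as np
from PIL import Image

image = np.asarray(Image.open("profile_photo.jpg"), dtype=np.float32)

# Low-amplitude Gaussian noise: barely visible to people, but it chips
# away at the pixel-perfect detail that face models train on.
noise = np.random.normal(loc=0.0, scale=3.0, size=image.shape)
protected = np.clip(image + noise, 0, 255).astype(np.uint8)

Image.fromarray(protected).save("profile_photo_protected.jpg")
```

JPEG compression will smooth some of that noise back out, which is exactly why purpose-built cloaking tools craft their perturbations far more carefully.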

The tech isn't going away. It’s only going to get more accessible, more convincing, and more integrated into our daily lives. Whether that’s a good thing or a bad thing depends entirely on whether we value our friends' privacy as much as we value a good laugh.