You’re staring at a chat box. Someone—or something—just told a joke about a toaster. It’s a bad joke. Is it bad because a human is trying too hard to be funny, or is it bad because a Large Language Model (LLM) doesn't actually understand why toasters aren't comedians? This is the heart of Human or Not 2, the sequel to the viral social experiment that has turned the Turing Test into a global pastime.
The game is simple. You talk to a stranger for two minutes. Then you guess. Was it a person or a bot?
Honestly, it sounds easy until you’re three messages deep and the other "person" starts using weird slang from 2021. In the first iteration, millions of people were humbled by how easily they could be tricked. Now, with the second version, the stakes are higher because the AI is smarter, faster, and much more cynical.
What is Human or Not 2 exactly?
The original project was launched by Enigma, a tech company looking to see how well humans could identify AI in a "wild" environment. It was a massive hit. After that initial run ended, the demand for a comeback was loud. Human or Not 2 fills that gap, often using more advanced models like GPT-4o or Claude 3.5 Sonnet under the hood to mimic human behavior.
It’s a social deduction game. Think Among Us, but with text and existential dread.
The developers didn't just tweak the UI. They adjusted the bot personalities. In the first game, bots were often too polite. They were helpful. They sounded like a customer service rep at 2:00 AM. Now? They’ve learned to be rude. They make typos. They use lowercase letters and ignore punctuation. They might even leave you on read for ten seconds just to simulate the "typing" experience of a distracted teenager.
The Strategy of the Guess
To win at Human or Not 2, you have to stop looking for intelligence and start looking for flaws. Bots are incredibly good at being smart; they are still surprisingly bad at being "vibes-only" humans.
Most players jump in with a math question. "What is the square root of 144?" If the answer is instant, it’s a bot. Right? Not necessarily. Smart developers have programmed "thinking" delays into the interface. A bot might wait six seconds before answering to make it look like it's doing the mental math.
The Weird Stuff Works
Try asking about something incredibly specific and physical. "What does it feel like when your foot falls asleep?" A bot will give you a clinical description—pins and needles, paresthesia, a tingling sensation. A human will say, "Ugh, it feels like static electricity but in my bones."
Nuance is the enemy of the machine.
- Check for current events: Bots are often trained on datasets with a cutoff. If you ask about a meme that started yesterday, an older model might hallucinate or play it safe.
- The Typo Test: Intentionally misspell a word and see if they catch it or mirror your energy.
- Emotional baiting: Tell a sad story. Bots are programmed to be empathetic but often come across as "The Sims" characters. "I am sorry to hear that you are feeling that way." Real humans usually say, "That sucks, man."
Why we keep failing
We are suckers for a good story. That's the truth. When we play Human or Not 2, our brains are hardwired to see patterns and personify things. This is called the Eliza Effect. If a computer screen says "Hello, I'm tired," we subconsciously imagine a tired person. We don't imagine a server rack in Oregon processing tokens.
📖 Related: Schedule 1 Granddaddy Purple Not Selling: Why This Game Legend is Struggling
The success rate for players in the original game was roughly 68%. That means nearly a third of the time, we were wrong.
Interestingly, we tend to think people are bots more often than we think bots are people. We've become so cynical about the internet that we assume any weird behavior is an algorithm. If a human is being particularly boring or repetitive, we flag them as "Not Human." It’s a weirdly insulting way to lose a game.
The Role of LLMs
The tech behind Human or Not 2 isn't magic. It's usually an API call. When you send a message, it’s processed by a transformer model that predicts the most likely next word. The "magic" happens in the system prompt. The developers tell the AI: "You are a 22-year-old gamer from Ohio. You use slang like 'no cap' and 'bet.' You hate long sentences. Do not be too helpful."
That layer of persona is what makes the sequel so much harder. The "Helpful Assistant" persona is gone.
The Ethics of the Experiment
Is it just a game? Mostly. But it's also a data goldmine. Every time you play, you are effectively training the world to understand where the "Uncanny Valley" currently sits. By guessing, you’re providing feedback on which AI behaviors are most convincing.
There is a flip side. Some people find the game deeply unsettling. It highlights how easy it will be for bad actors to automate social engineering or "pig butchering" scams. If you can't tell the difference in a two-minute chat, imagine a three-week friendship over WhatsApp.
The game is a mirror. It shows us that "human-like" is now a dial that can be turned up or down.
Getting Better at the Game
If you want to climb the ranks and maintain a high accuracy score, you need a system. Don't just wing it.
First, look for "hallucinated" confidence. Bots are confident even when they’re wrong. If you ask a trick question—like "Who won the Super Bowl in 1925?" (there wasn't one)—a bot might make up a score. A human will probably just say, "Wait, was there even football back then?"
Second, watch the rhythm. Humans have an erratic "burstiness" to their typing. We might send three short messages in a row, then wait. Bots tend to send one perfectly formed block of text at a set interval.
🔗 Read more: Zombies Dark Ops BO6: What Most People Get Wrong
Third, try "internal" references. "If you were a color, which one would you taste like?" This requires a level of abstract synesthesia that current models often struggle to replicate without sounding like a Hallmark card.
Moving Beyond the Chat Box
Human or Not 2 isn't the end goal. It’s a waypoint. We are moving toward a web where "humanity" is a premium feature. Verified accounts, video proof, and "Proof of Personhood" protocols are going to become the norm because of games like this.
It's fun. It's addictive. But it's also a bit of a warning shot.
If you’re ready to jump in, here’s how to handle your next session. Start with a "vibe check" rather than an interrogation. Instead of "Are you a bot?", try "Man, the weather today is making me want to quit my job." See where they take it. If they start giving you career advice, hit the "Bot" button.
Actionable Tips for New Players
- Vary your opening. Don't just say "Hi." Use a weird opener like "I just saw a bird eat a whole slice of pizza."
- The "Wait" tactic. Don't type anything for 30 seconds. See if the other side gets impatient. Bots rarely initiate "Hey, you there?" prompts in a natural way.
- Use slang. Very current, very localized slang. "That's so fetch" (kidding, don't use that). Try something like "This game is cooked."
- Logic traps. Ask something that requires spatial reasoning. "If I put a ball in a cup and turn it upside down on a table, where is the ball?" Bots are getting better at this, but they still fail on the specifics sometimes.
- Trust your gut. Usually, if it feels like you're talking to a wall that’s been painted to look like a person, you are.
The best way to experience this is to just go play. Don't overthink it at first. Let yourself be fooled. It's the only way to learn the patterns. Once you’ve been tricked by a bot pretending to be a confused grandmother, you’ll never look at a chat window the same way again.
👉 See also: Five Nights at Freddy's and Five Nights at Candy's: How a Fan Game Defined the Horror Genre
Go to the official site, start a round, and pay attention to the silence between the words. That’s usually where the human is hiding. Or where the bot is waiting for its next token to load.