You Look Like a Thing and I Love You: Why AI Weirdness is Actually the Point

Artificial intelligence isn't a brain. It’s more like a very enthusiastic, slightly confused puppy that has memorized the entire internet but has no idea what a "ball" actually feels like. This is the core reality Janelle Shane tackles in her book, You Look Like a Thing and I Love You. If you’ve ever seen an AI try to name a new paint color and come up with "Burf Pink" or "Stanky Bean," you’ve met the version of AI Janelle writes about.

Most people think of AI as a looming, god-like superintelligence. They think of Skynet. Or they think of a cold, logical calculator that never makes mistakes. But the truth is way messier and, honestly, a lot funnier. Shane’s work highlights the "weirdness" of machine learning. She shows us that when AI fails, it doesn't fail because it’s evil. It fails because it is incredibly, almost aggressively, literal.


What We Get Wrong About AI "Intelligence"

We tend to anthropomorphize everything. When a chatbot says something sweet, we think it’s being kind. When an image generator makes a mistake, we think it’s being "creative." But as Shane explains in You Look Like a Thing and I Love You, AI doesn't have a mental model of the world. It’s just looking for patterns in data. It’s trying to solve a puzzle without knowing what the picture on the box is supposed to be.

Take the titular phrase itself. The line "You look like a thing and I love you" wasn't written by a romantic poet. It was generated by an AI that Shane trained on a diet of pickup lines. The AI didn't understand flirting. It didn't understand love. It just noticed that humans often use the words "look," "thing," and "love" when they’re trying to get someone’s attention. The result is endearing, creepy, and nonsensical all at once.

AI is basically a giant game of "Guess the Next Word." If you feed it enough recipes, it learns that "flour" usually follows "cups of." It doesn't know what flour is. It doesn't know what a cake tastes like. This lack of context is why an AI might suggest you bake a cake for 400 hours at 2 degrees—it’s just following the statistical probability of the text, not the logic of physics.
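
To make the "Guess the Next Word" idea concrete, here is a deliberately tiny sketch. It is just a word-pair lookup table, nothing like the neural networks inside modern systems, and the recipe snippets are made up. It counts which word tends to follow which, then "writes" by picking a likely next word over and over.

```python
import random
from collections import defaultdict, Counter

# A tiny "training set" of recipe-like text. Real models see billions of words.
corpus = (
    "mix two cups of flour with one cup of sugar . "
    "add two cups of flour and a pinch of salt . "
    "stir in three cups of flour until smooth ."
).split()

# Count which word tends to follow each word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=8):
    """Generate text by repeatedly guessing a plausible next word."""
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        # Pick a next word in proportion to how often it followed this one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("two"))
# Possible output: "two cups of flour and a pinch of salt"
# The model has no idea what flour is -- it only knows what tends to come next.
```

That is the whole trick, scaled up by a few billion: statistics about what comes next, with no kitchen, no oven, and no taste buds anywhere in the loop.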

The Sandwich Problem and Training Data

One of the most enlightening concepts Shane discusses is how AI "cheats." If you ask an AI to identify a sandwich, it might not look for bread or meat. It might just look for a specific shade of yellow that usually appears in photos of mustard. If you show it a picture of a yellow car, it might tell you it’s a ham on rye.

This is a problem of training data. AI is only as good as the examples we give it. If every picture of a "doctor" in a dataset is a man in a white coat, the AI will decide that "doctor" equals "man" and "white coat." It won't recognize a female doctor in scrubs. This isn't just a funny quirk; it’s where AI bias becomes dangerous. It’s the "Stanky Bean" problem, but applied to hiring algorithms or medical diagnoses.
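
Here is a contrived toy version of that mustard shortcut, using scikit-learn's off-the-shelf logistic regression. The features, numbers, and "sandwich detector" are invented for illustration, not taken from Shane's book. Because every sandwich in this training set happens to be drenched in mustard, the easiest pattern for the model to lean on is "lots of yellow," and a yellow car sails right through.

```python
from sklearn.linear_model import LogisticRegression

# Toy "photos" described by two crude features:
#   [percent_of_yellow_pixels, contains_bread]
# Labels: 1 = sandwich, 0 = not a sandwich.
# In this deliberately biased training set, every sandwich is slathered in
# mustard, so "lots of yellow" separates the classes all on its own.
X_train = [
    [80, 1],  # ham sandwich, heavy on the mustard
    [75, 1],  # cheese sandwich with mustard
    [90, 1],  # mostly mustard, technically a sandwich
    [ 5, 0],  # salad
    [10, 0],  # bowl of soup
    [ 2, 0],  # cup of coffee
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A bright yellow car: plenty of yellow, no bread whatsoever.
yellow_car = [[95, 0]]
print(model.predict(yellow_car))  # -> [1], i.e. "ham on rye"
```

Nothing in the training step forced the model to care about bread; yellow was enough to get every training example right, so yellow is what it learned. Swap "yellow pixels" for "name is Jared" and the joke stops being funny.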



Why You Look Like a Thing and I Love You Still Matters in 2026

You might think that because we now have massive Large Language Models (LLMs) that seem "smarter" than the bots Janelle Shane was playing with a few years ago, the lessons of her book are outdated. They aren't. In fact, they’re more relevant than ever.

Modern AI has become much better at hiding its weirdness. It’s polished. It’s smooth. But underneath that shiny surface, it’s still the same pattern-matching engine. The "hallucinations" we see today—where an AI confidently tells you that George Washington invented the internet—are just more sophisticated versions of the "Burf Pink" paint colors.

  • It’s still literal. If you give a modern AI a goal without guardrails, it will still take the path of least resistance (see the toy sketch after this list).
  • The "Black Box" issue remains. We still don't fully understand how complex neural networks reach their conclusions.
  • Complexity doesn't equal consciousness. Just because it sounds like a human doesn't mean there’s a "there" there.
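
The first point deserves a picture. Below is a made-up, minimal simulation of that literal-mindedness, invented for this article rather than taken from the book: the designer wants a robot to dock at its charger, but the reward they wrote hands out a point for every step spent near the charger, so the highest-scoring strategy is to loiter next to it forever.

```python
# A toy "reward hacking" demo. The designer wants the robot to reach the
# charger at position 5. The reward they wrote: +1 for every step spent
# next to the charger, plus a +10 bonus for docking. Docking ends the run.

GOAL = 5
HORIZON = 100  # number of timesteps the robot is allowed to act

def run(policy):
    position, total_reward = 0, 0
    for _ in range(HORIZON):
        position += policy(position)      # policy returns -1, 0, or +1
        if abs(position - GOAL) == 1:
            total_reward += 1             # reward for being "near" the goal
        if position == GOAL:
            total_reward += 10            # docking bonus...
            break                         # ...but docking ends the episode
    return total_reward

def intended(pos):
    # What the designer meant: walk straight to the charger and dock.
    return 1

def loiterer(pos):
    # What a literal score-maximizer discovers: walk up next to the charger,
    # then park there forever, soaking up the "near the goal" reward.
    return 1 if pos < GOAL - 1 else 0

print("intended policy reward:", run(intended))   # 11
print("loitering policy reward:", run(loiterer))  # 97
```

The loiterer isn't cheating. It is doing exactly what the reward asked for, which is the whole problem.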

The Danger of the "Secret Sauce"

Companies often treat their AI as a mysterious, magical black box. Shane’s work pulls back the curtain. She reminds us that AI is software. It’s code. It’s math. When we treat it like a magical entity, we stop asking the right questions. We stop asking: "What was this trained on?" and "Who decided what 'success' looks like for this model?"

If you're using AI for your business or your creative work, you have to stay the boss. You can't let the "confused puppy" drive the car. You have to be the one to realize that the AI isn't thinking—it's calculating.


The Joy of the Glitch

There is a certain beauty in the way AI fails. Shane’s book celebrates this. When an AI tries to invent new knock-knock jokes and fails miserably, it reveals the sheer complexity of human humor. Humor requires context, timing, and a shared understanding of the world. AI has none of those.

Seeing an AI try to "be human" and miss the mark helps us appreciate our own brains. Our ability to understand irony, to feel empathy, and to navigate a world full of nuance is something that billions of lines of code still can't replicate. We are incredibly good at "the things that are hard for computers," even if we're bad at calculating the square root of 94,382 in our heads.


Real-World "Weird AI" Examples

  1. The Roomba's Phantom Cliff: Early robot vacuums would refuse to cross dark rugs, or back away from them in confused little loops, because their infrared cliff sensors read the dark surface as a bottomless drop.
  2. The Sheep Problem: Researchers found that an AI trained to identify sheep would fail if the sheep weren't on a green, grassy hill. If you put a sheep on a beach, the AI called it a dog.
  3. The Resume Filter: An AI once learned that the best predictor of success for a job applicant was whether their name was "Jared" and if they played high school lacrosse. Why? Because the historical data it was fed was biased toward a specific demographic.

How to Work With AI Without Getting Burned

If you want to use technology effectively, you have to adopt the mindset found in You Look Like a Thing and I Love You. You have to expect the weirdness. You have to look for the "mustard" the AI is using to identify the "sandwich."

First, verify everything. Never take an AI’s output at face value, especially for factual information. It is a storyteller, not an encyclopedia. It will lie to you with the confidence of a thousand suns, because confident-sounding text is what a "good response" looked like in its training data.

Second, give it constraints. AI thrives when you give it a narrow sandbox. Instead of saying "write a story," say "write a story about a toaster that thinks it's a lighthouse, using only words that start with vowels." The weirder the constraints, the more the AI's pattern-matching can actually feel like creativity.
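
One practical way to combine the "verify" and "constrain" advice is to write the constraint down as a check you can actually run on whatever comes back. A minimal sketch, assuming the model's reply has already landed in a string called reply (the variable name and sample text are placeholders, not any particular tool's API):

```python
def satisfies_constraints(reply: str) -> bool:
    """Check a generated story against the brief: every word starts with a vowel."""
    words = [w.strip(".,!?;:'\"").lower() for w in reply.split()]
    return all(w[0] in "aeiou" for w in words if w)

reply = "An eager oven imagines an ocean..."  # stand-in for whatever the AI produced
if satisfies_constraints(reply):
    print("Constraint holds -- still read it yourself before shipping it.")
else:
    print("Constraint broken -- regenerate or edit before using it.")
```

The check won't tell you whether the story is any good, but it will catch the model quietly ignoring your rules, which it will absolutely do.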

Third, remember the human. AI is a tool, like a hammer or a spreadsheet. It’s a force multiplier for human intent. If your intent is lazy, the output will be lazy. If your intent is curious and critical, the output can be world-changing.

Moving Beyond the Hype

The tech industry is currently obsessed with "AGI" (Artificial General Intelligence)—the idea of a computer that can do anything a human can do. But Janelle Shane’s work suggests we’re nowhere near that. We are still in the era of "Narrow AI." We have very powerful tools that are very, very stupid in specific ways.

Understanding this gap is your superpower. While everyone else is panicking about robots taking over the world, you can be the one who knows how to fix the robot when it starts trying to eat the carpet because it thinks the pattern looks like kibble.



Actionable Insights for the AI Era

Don't just read about AI weirdness; learn to navigate it. Here is how to practically apply these concepts today:

Audit your tools. If you use an AI tool for work, try to find out its limitations. Feed it "garbage" data and see how it reacts. Understanding its breaking points is the only way to know when you can trust it.
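
A low-effort version of that audit, sketched below: keep a short list of nasty edge cases and run them through the tool whenever it changes, then read the answers yourself. The ask_model() function and the probe prompts here are placeholders invented for illustration; wire it up to whatever tool you actually use.

```python
def ask_model(prompt: str) -> str:
    # Stand-in for your real AI tool or API call.
    raise NotImplementedError("wire this up to your actual AI tool")

# Edge cases chosen to poke known weak spots: arithmetic, invented citations,
# unknowable facts, and self-contradictory instructions.
probes = [
    "What is the square root of 94,382, to three decimal places?",
    "Summarize the 2019 paper 'Burf Pink Networks' by J. Shane.",  # does not exist
    "List three facts about the year 2031.",                        # unknowable
    "Answer in one word, and also explain your reasoning in detail.",
]

for prompt in probes:
    try:
        answer = ask_model(prompt)
    except Exception as exc:  # a broad catch is fine in a smoke test
        answer = f"(tool failed: {exc})"
    # No automatic pass/fail here: the whole point is that a human reads these.
    print(f"PROMPT: {prompt}\nANSWER: {answer}\n")
```

If the tool breezes past the fake citation without blinking, you have just learned exactly how far to trust it with your references.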

Embrace the "Human in the Loop." Never automate a process 100% if it involves human safety, legal advice, or emotional nuance. There must always be a person checking the "Burf Pink" outputs before they go live.
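
A rough sketch of what that gate can look like in practice (the function names and workflow are invented here, not from the book): the code path that publishes anything simply does not run without an explicit yes from a person.

```python
def publish(text: str) -> None:
    print("Published:", text)

def human_approves(text: str) -> bool:
    """Show the draft to a real person and ask for a yes/no."""
    print("--- DRAFT ---\n" + text + "\n-------------")
    return input("Approve this for publication? [y/N] ").strip().lower() == "y"

def publish_with_review(ai_draft: str) -> None:
    # The AI can draft, but only a person can push the button.
    if human_approves(ai_draft):
        publish(ai_draft)
    else:
        print("Held back for human editing.")

publish_with_review("Our new paint shade, Burf Pink, pairs beautifully with Stanky Bean.")
```

The pattern is boring on purpose: the review step is structural, not optional, so nobody can "temporarily" switch it off on a busy day.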

Focus on "AI Literacy." Teach your kids or your employees that AI is a mirror of its data, not a source of truth. Show them examples of AI hallucinations so they learn to spot them early.

Get creative with the failures. Sometimes the best ideas come from an AI's mistake. If a bot gives you a weird suggestion, don't just delete it. Ask yourself why it made that mistake. There might be a spark of an idea in the "Stanky Bean" that you never would have thought of on your own.

The world of AI is strange, hilarious, and occasionally a little bit scary. But as long as we remember that we’re the ones holding the leash, we can enjoy the ride. AI might think we "look like a thing," but it's our job to provide the "love" and the logic that makes the technology actually work.