Minds, Brains, and Programs Explained: Why John Searle Still Worries AI Researchers

If you’ve ever felt a little spooked by how well ChatGPT or Claude seems to "understand" your bad jokes, you’re basically walking in the footsteps of a philosopher named John Searle. Back in 1980, long before anyone was worried about LLMs taking over the world, Searle published a paper called Minds, Brains, and Programs. It changed everything.

Honestly, it’s one of those rare academic papers that actually makes sense when you read it. He wasn't just being a buzzkill. He was trying to figure out if a machine could ever truly know what it was talking about, or if it was just a really fast calculator.

The Paper That Started the Fight

Searle’s paper, Minds, Brains, and Programs, originally appeared in the journal Behavioral and Brain Sciences. He starts by making a distinction that we still use today: Weak AI vs. Strong AI.

Most of the tools we use are Weak AI. They’re useful. They help us predict the weather or diagnose diseases. They’re just tools. But Strong AI is the big dream. It’s the idea that a properly programmed computer doesn't just simulate a mind—it actually is a mind.

The Chinese Room Experiment

This is the part everyone remembers. Imagine you're locked in a room. You don’t know a single word of Chinese. You’ve got a massive rulebook written in English. Outside the room, people slide slips of paper under the door with Chinese characters on them.

You look at the symbols.

You look at the rulebook.

The rulebook says: "If you see symbol A, write symbol B on a new piece of paper."

You follow the rules perfectly. You slide the reply back under the door. To the people outside, it looks like you’re a native Chinese speaker. They think you’re having a deep conversation about tea or philosophy. But you? You’re just moving ink around. You have no idea what "A" or "B" means.

Searle says this is exactly what a computer does. It has the syntax (the rules for moving symbols) but zero semantics (the meaning).
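
If that sounds abstract, here's roughly what the room boils down to in code: a lookup table. This is a minimal sketch in Python, with made-up phrases standing in for Searle's rulebook. The program produces fluent-looking replies while representing meaning nowhere.

```python
# The Chinese Room as a lookup table. The phrases are invented for
# illustration; the point is that replies come from shape-matching alone.

rulebook = {
    "你好": "你好！",                    # "hello" -> "hello!"
    "你会说中文吗？": "会，说得很流利。",  # "do you speak Chinese?" -> "yes, fluently"
}

def room(message: str) -> str:
    """Follow the rulebook exactly. Nothing here models meaning."""
    return rulebook.get(message, "请再说一遍。")  # fallback: "please say that again"

print(room("你会说中文吗？"))  # looks like understanding; it's table lookup
```

The person in the room doesn't understand Chinese, and neither does this dictionary. Both just match shapes to shapes.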

Why Syntax Isn't Semantics

The core of Minds, Brains, and Programs is the argument that symbol manipulation isn't the same thing as understanding. You can produce exactly the right output without having the faintest grasp of what any of it means.

Consider a calculator. When you type $2 + 2$, the calculator gives you $4$. Does the calculator know what the number "two" represents? Does it understand the concept of quantity? No. It’s just triggering electrical switches based on a program.
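
If you want to see what "just triggering switches" looks like, here's a one-bit half adder sketched in Python (an illustrative stand-in for the actual circuitry): the answer falls out of two boolean gates, and the concept of quantity appears nowhere.

```python
# Addition reduced to switch logic: a 1-bit half adder.
# XOR produces the sum bit, AND produces the carry bit.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two bits; return (sum_bit, carry_bit)."""
    return a != b, a and b

# 1 + 1 = binary 10: sum bit 0, carry bit 1
print(half_adder(True, True))  # (False, True)
```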

Searle argued that human brains are different because they have "causal powers." He’s a bit vague on what those are, but he insists that biological brains produce intentionality in a way that silicon chips running programs just can’t.

The Systems Reply

Of course, AI researchers weren't just going to take this lying down. The most famous pushback is called the Systems Reply.

Critics argued that while the person in the room doesn't understand Chinese, the whole system (the person, the room, the rulebook, the baskets of symbols) actually does. It’s like saying a single neuron in your brain doesn't understand English, but "you" do.

Searle had a pretty cheeky response to this. He said: "Okay, imagine I memorize the whole rulebook. I do all the calculations in my head. I am the whole system. I’m walking around outside, speaking Chinese flawlessly because I’ve memorized the rules. Do I understand Chinese yet?"

His answer was still a hard no. You'd just be a guy who is really good at following rules he doesn't understand.

Is Searle Still Right in 2026?

It’s been decades, and the debate is louder than ever. We now have Large Language Models (LLMs) that can write poetry and code. They look like they understand.

But are they just bigger Chinese Rooms?

Stochastic Parrots

Many modern researchers, like Emily M. Bender and Timnit Gebru, have echoed Searle's sentiment. They call these models "stochastic parrots": systems that just predict the next most likely word based on statistical patterns in a massive corpus of text. There is no "there" there.
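
To see how little machinery that claim involves, here's a toy next-word predictor in Python. It's a minimal sketch, not how a real LLM works (those use neural networks over tokens), but the core move is the same: count what tends to follow what, then predict.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": count which word follows which,
# then predict the next word purely from those frequencies.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the most frequent successor seen in training data."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- the most common word after 'the'
```

The predictions can look eerily sensible, but nothing in those counters knows what a cat is. That's the parrot part.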

On the flip side, some people think Searle missed the point. If a machine can behave perfectly like a conscious being, does the internal "feeling" of understanding even matter? This is the classic Turing Test approach. If you can't tell the difference, maybe there isn't one.

Key Takeaways from Searle’s Work

  • Syntax vs. Semantics: Just because a machine follows the rules of language doesn't mean it understands the meaning behind the words.
  • Biological Naturalism: Searle believed that consciousness is a biological process, like digestion or photosynthesis. You can't simulate digestion on a computer and expect it to actually digest a sandwich.
  • Strong AI is a Myth: For Searle, the idea that a program is a mind is a category error.

Actionable Steps for Navigating the AI World

Understanding Searle’s argument helps you look at modern technology with a more critical eye. Here is how you can apply these insights today:

  1. Don't Anthropomorphize: When your AI assistant says "I think" or "I feel," remember the Chinese Room. It’s a linguistic trick, not a sign of a soul. Use the tool for what it is—a sophisticated pattern matcher.
  2. Verify the "Logic": Because AI lacks semantics, it can easily produce "hallucinations." It might give you a perfectly grammatical sentence that is factually impossible because it doesn't "know" the facts; it only knows the patterns of how people talk about facts.
  3. Focus on Grounded Learning: If you're a developer or a student, look into "grounded" AI. This is a field that tries to connect AI symbols to real-world sensory data (like video or robotics). This is the closest we might get to bridging the gap Searle identified.
  4. Read the Original Paper: Honestly, Minds, Brains, and Programs is surprisingly readable. It’s a great exercise in logic that will make you a better thinker about the future of tech.

Whether you agree with him or not, Searle's thought experiment remains the ultimate "vibe check" for artificial intelligence. It forces us to ask: are we building minds, or just really fancy rooms?