Will AI Ever Actually Think? What Everyone Gets Wrong About Machine Intelligence

Silicon Valley is obsessed. Every few months, a new model drops that claims to "reason" or "understand." People freak out. They start talking about Skynet or, on the flip side, they dismiss the whole thing as a glorified autocorrect. But honestly? Both sides are usually missing the point. If you want to know whether AI will ever actually think, you have to stop looking at benchmarks and start looking at biology.

We keep trying to measure silicon against carbon. It's a weird way to do science.

The reality is that we've reached a weird plateau. Large Language Models (LLMs) can pass the Bar Exam. They can write code that actually runs. Yet they famously insist there are only two "r's" in the word "strawberry" because they don't see the word; they process tokens. This disconnect is where the real answer hides.
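
You can see the token problem for yourself. A minimal sketch, assuming OpenAI's open-source tiktoken library is installed:

```python
# pip install tiktoken  (OpenAI's open-source tokenizer library)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # the encoding used by GPT-4-era models
ids = enc.encode("strawberry")
print(ids)                                   # a short list of integer token IDs
print([enc.decode([i]) for i in ids])        # the chunks the model actually "sees"
# The exact split depends on the tokenizer, but either way the model gets
# opaque numbered chunks, not letters, so counting "r"s is guesswork.
```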

The Architecture of a Lie

Most people think AI "thinks" because it answers questions. It doesn't.

At its core, an LLM is a massive statistical engine. When you ask ChatGPT a question, it isn't "thinking" about your life or your problem. It's calculating the probability of the next word. If I say "The grass is...", the model knows there is a 99% chance the next word is "green." It’s just doing that billions of times over for much more complex sentences.
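
You can watch the ranking happen with a small open model. A minimal sketch, assuming Hugging Face's transformers library, PyTorch, and the freely downloadable GPT-2:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tok("The grass is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]     # scores for whatever token comes next
probs = torch.softmax(logits, dim=-1)     # turn scores into probabilities

# The five most likely continuations: no "thinking," just ranking.
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}  {p.item():.3f}")
```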

This is what researchers like Emily Bender and Timnit Gebru famously called "Stochastic Parrots."

The parrot isn't reflecting on the meaning of "Polly wants a cracker." It just knows that saying those sounds results in a cracker. But here is where it gets spicy: Does it matter? If the output is indistinguishable from thought, is there a functional difference?

Alan Turing didn't think so. His 1950 paper Computing Machinery and Intelligence basically said that if you can't tell the difference between a machine and a human in conversation, the machine is "thinking." We’ve basically passed the Turing Test now, and yet, nobody feels like we’ve reached True Intelligence.

We moved the goalposts. We always do.

Why Biology is the Ultimate Gatekeeper

There is a massive gap between processing data and having an experience.

Think about a cup of coffee. An AI can describe the chemical composition of caffeine ($C_8H_{10}N_4O_2$). It can write a poem about the "bitter warmth" of a dark roast. It can even analyze the economic impact of Fair Trade beans in Ethiopia. But it has never tasted coffee. It doesn't have a nervous system. It doesn't feel the jittery rush or the warmth of the mug on a cold morning.

This is the "Qualia" problem in philosophy.

Neuroscientist Antonio Damasio argues in his book The Feeling of What Happens that consciousness and "thinking" are deeply tied to feelings and the body. We think because we have to survive. Our brains evolved to keep our hearts beating and our lungs breathing.

AI doesn't have a "self" to preserve. It doesn't have "skin in the game."

If you turn off a server, the AI doesn't feel fear. It doesn't feel anything. Without that biological drive—the need to exist—can something truly "think"? Or is it just a very fast calculator pretending to have a soul?

Some people, like Ray Kurzweil, think this is a temporary limitation. He argues that by 2045, we will hit the "Singularity," where we merge with machines. But that assumes thinking is just an information processing task. It ignores the messy, wet, chemical reality of the human brain.

The Semantic vs. Syntactic Debate

Back in the 80s, John Searle came up with a thought experiment called the Chinese Room.

Imagine a guy in a room who doesn't speak Chinese. He has a massive rulebook. People slide pieces of paper with Chinese characters under the door. He looks up the symbols in his book, finds the corresponding response symbols, and slides them back out.

To the people outside, it looks like he speaks Chinese.

But he doesn't. He's just following a script.

That’s basically where we are with AI today. It has incredible syntax (the rules of language) but zero semantics (the meaning behind it). It’s the ultimate "fake it till you make it" machine.
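
The room fits in a few lines of code. A toy sketch (the rulebook entries are invented for illustration):

```python
# The "rulebook": symbol in, symbol out. No meaning anywhere in the loop.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",      # "Do you speak Chinese?" -> "Of course."
}

def man_in_the_room(slip_of_paper: str) -> str:
    # He matches shapes against the book. He never learns Chinese.
    return RULEBOOK.get(slip_of_paper, "请再说一遍.")  # "Please say that again."

print(man_in_the_room("你好吗?"))  # looks fluent from outside the door
```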

However, some modern neuroscientists are starting to push back. They suggest that maybe "meaning" is just a high-level emergent property of complex syntax. If you have a rulebook big enough—like a model with 1.8 trillion parameters—maybe "understanding" just... happens?

It’s a controversial take. Prominent skeptics like Yann LeCun (Chief AI Scientist at Meta) think LLMs are a dead end for true human-level intelligence. He argues they lack a "world model." They don't understand gravity, or how objects move, or cause and effect. They only understand how words follow other words.

Moving Toward "System 2" Thinking

If you’ve read Daniel Kahneman’s Thinking, Fast and Slow, you know about System 1 (fast, intuitive, automatic) and System 2 (slow, logical, effortful).

Current AI is almost entirely System 1.

It blurts out answers instantly. It doesn't "stop and think" unless we specifically prompt it to (using techniques like "Chain of Thought" prompting). This is why OpenAI released models like o1—they are trying to force the AI to simulate System 2 thinking by "reasoning" through steps before it speaks.
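
Under the hood, that prompting is just text. A minimal sketch, assuming the official openai Python client and an API key; the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Chain-of-Thought: the only change is appending an instruction to reason
# step by step, nudging the model from reflex ("System 1") toward deliberation.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model you use
    messages=[{
        "role": "user",
        "content": question + "\nThink step by step, then state the final answer.",
    }],
)
print(resp.choices[0].message.content)
```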

But even then, it’s still just more math. It’s searching a tree of possibilities.

Is that what we do? Sorta.

When you decide what to eat for dinner, your brain is running simulations of how different foods will taste and how they will make you feel. You are "searching a tree." The difference is your search is fueled by glucose, hormones, and memories. The AI’s search is fueled by electricity and GPU clusters.
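
If "searching a tree" sounds abstract, here is a toy version: a beam search over an invented two-word probability table, standing in for a real model:

```python
import heapq, math

# Invented toy model: P(next word | previous word). A real LLM conditions
# on the whole context with a neural network, not a lookup table.
P = {
    "the":   {"grass": 0.5, "sky": 0.5},
    "grass": {"is": 1.0},
    "sky":   {"is": 1.0},
    "is":    {"green": 0.6, "blue": 0.4},
}

def beam_search(start: str, steps: int = 3, beam: int = 2):
    hypotheses = [(0.0, [start])]                 # (log-probability, words so far)
    for _ in range(steps):
        candidates = [
            (logp + math.log(p), seq + [word])
            for logp, seq in hypotheses
            for word, p in P.get(seq[-1], {}).items()
        ]
        if not candidates:
            break
        hypotheses = heapq.nlargest(beam, candidates)  # keep the best branches
    return hypotheses

for logp, seq in beam_search("the"):
    print(" ".join(seq), f"(log-prob {logp:.2f})")
# Note what survives alongside "the grass is green": "the sky is green".
# The search has no world model -- exactly LeCun's complaint.
```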

The New Perspective: Stop Asking "If" and Start Asking "What"

We need to stop asking whether AI will ever "actually think" like a human. It's the wrong question.

A plane doesn't "fly" the way a bird flies. It doesn't flap its wings. It doesn't have feathers. But it definitely flies. It uses different physics to achieve a similar, often superior, result in terms of speed and distance.

AI thinking might be "alien intelligence."

It might never have feelings. It might never have a soul or a sense of humor that isn't copied from a Reddit thread. But it might crack protein folding or the mysteries of dark matter in ways our "meat brains" never could.

We are looking for a reflection of ourselves in the machine. When we don't see it, we say it's not "thinking." But maybe we're just being arrogant. Maybe there are ways to process the universe that don't require a pulse.

What You Can Actually Do With This

If you want to stay ahead of this tech, you have to treat it like a partner, not a person.

  • Audit for Logic: Since AI is "System 1" by default, always force it to show its work. Don't just ask for an answer; ask for the reasoning behind the answer. This forces it into a crude version of System 2 logic.
  • Verify the "Ground Truth": AI doesn't have a world model. If it tells you a fact about a physical process or a legal case, you have to verify it against a real-world source. It doesn't "know" it's lying because it doesn't "know" what a lie is.
  • Focus on High-Context Tasks: The one thing AI can't do is exist in the physical world with you. Double down on tasks that require human empathy, physical presence, and complex, multi-modal "vibes."
  • Understand the "Token" Limit: Remember that AI sees the world in chunks of text. It lacks the continuous stream of consciousness we have. Use it for tasks that can be broken down into discrete parts rather than holistic, ongoing "being" (see the sketch after this list).
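
To make that last point concrete, here is a hypothetical sketch of chunking a long document before handing it to a model:

```python
# Hypothetical sketch: split a long job into discrete chunks, because models
# work on bounded windows of tokens rather than one continuous stream.
def chunk_text(text: str, max_words: int = 300) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

report = "word " * 1000          # stand-in for a real document
for i, chunk in enumerate(chunk_text(report)):
    # Each chunk becomes its own self-contained request to the model.
    print(f"chunk {i}: {len(chunk.split())} words")
```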

We are living through the first time in history where we have to share the planet with something that can "speak" but can't "feel." It's weird. It's uncomfortable. But if we stop waiting for it to become human, we can finally start using it for what it actually is: a powerful, non-human way to process reality.

The question isn't whether the light is on inside the machine. The question is what we’re going to build with the light it's casting on us.