Most people treat ChatGPT like a fancy search engine or a particularly obedient intern. They're missing the point. If you’re just "using" it, you’re staying on the surface. To really get ahead, you have to move toward co-intelligence: living and working with AI as a partner, not a tool.
Ethan Mollick, a professor at Wharton who has become the de facto voice for practical AI implementation, argues that we’ve entered an era where the "jagged frontier" of technology defines our daily lives. Some tasks are now trivial for AI, while others that seem simple to us remain impossible for it. It’s weird. It’s inconsistent. And honestly, it’s a bit unsettling how much our jobs are shifting under our feet.
The Mental Shift: From Tool to Teammate
Stop thinking about software. When you use Excel, you click a button and get a predictable result. AI isn't like that. It’s probabilistic, meaning it’s basically a high-speed guessing machine based on trillions of connections.
If you want to master co-intelligence, you have to start treating the model like a person—specifically, a very smart, very fast, but occasionally delusional colleague. You wouldn't give a human intern a one-sentence prompt and expect a masterpiece. You’d talk to them. You’d provide context. You’d correct them when they go off the rails.
This isn't just about "prompt engineering." That's a term that will probably be obsolete in a year anyway. It’s about social intelligence applied to silicon.
Why the "Jagged Frontier" Matters
The frontier is jagged because AI capabilities don't expand in a straight line. Research from Harvard and BCG showed that for some high-level consulting tasks, AI boosted performance by 40%. But for other tasks that looked similar on the surface, it actually led to worse results.
People who don't understand this fall into two traps. They either trust it too much and let it hallucinate a legal brief into existence, or they dismiss it entirely because it failed at a simple math problem once. Both are wrong.
Real-World Co-intelligence: Living and Working with AI in 2026
How does this actually look in a Tuesday morning meeting? It’s not about having the AI write the email. It’s about using the AI to stress-test your strategy.
Imagine you’re a marketing manager. You have a plan. Instead of asking the AI to "write a plan," you give it your plan and say: "Act as a cynical competitor. Find the three biggest holes in this strategy and tell me why it will fail." That is co-intelligence. You are the director; the AI is the sparring partner.
- Ideation: Don't ask for "ten ideas." Ask for "ten ideas that a traditional firm would never consider because they are too risky."
- Coding: Software engineers are now "architects" who spend more time reviewing AI-generated code than typing it out themselves.
- Learning: You can take a complex white paper and tell the AI, "Explain this to me like I’m a high-schooler, then quiz me to make sure I actually got it."
It's basically a personalized tutor that never gets tired of your questions.
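The "sparring partner" and "tutor" patterns above are really just system-prompt framing. Here is a minimal sketch in Python; the helper names are ours, not part of any library, and the role/content message format is the shape most chat-style LLM APIs accept:

```python
# Sketch: framing requests as roles rather than bare tasks.
# The {"role": ..., "content": ...} list mirrors the input format of
# most chat LLM APIs. Helper names here are illustrative, not a real API.

def red_team_messages(plan: str) -> list[dict]:
    """Ask the model to attack a plan instead of writing one."""
    return [
        {"role": "system",
         "content": "Act as a cynical competitor. Find the three biggest "
                    "holes in the strategy you are given and explain why "
                    "it will fail."},
        {"role": "user", "content": plan},
    ]

def tutor_messages(document: str) -> list[dict]:
    """Explain-then-quiz framing for learning from a dense document."""
    return [
        {"role": "system",
         "content": "Explain the following document as if to a "
                    "high-schooler, then quiz me with three questions "
                    "to check that I actually understood it."},
        {"role": "user", "content": document},
    ]

# Usage: build the message list, then hand it to whatever model you use.
msgs = red_team_messages("Cut prices 20% and rely on word of mouth.")
```

The point of the sketch is the framing, not the plumbing: the system message sets a role ("cynical competitor", "tutor"), and your actual content rides along as the user message.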
The Ethics of the New Partnership
We can't talk about co-intelligence without acknowledging the elephant in the room: job displacement and the "human" element.
There is a real fear that we’re losing our edge. If the AI does the drafting, do we lose the ability to think deeply? Maybe. But we also lost the ability to do long division when calculators arrived, and we’re doing fine. The goal isn't to outsource your brain. The goal is to use the AI to handle the "drudge work" so you can focus on what actually requires a human soul—empathy, complex judgment, and genuine original thought.
Ethical use also means being honest. If you're using AI to write a performance review for an employee, you're failing at the "co" part of co-intelligence. That’s just being lazy. Humans deserve human feedback.
Dealing with the Hallucination Problem
The AI will lie to you. It won't mean to, but it will. Because it’s a prediction engine, it wants to give you a satisfying answer more than it wants to give you a correct one.
Expert users know this. They use a technique called "Chain of Thought" prompting, forcing the AI to show its work step-by-step. If you see the logic trail, you can spot the lie before it becomes a problem.
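In practice, chain-of-thought prompting can be as simple as instructing the model to number its reasoning steps and flag its final answer, so the two can be reviewed separately. A small Python sketch (the wrapper and the `ANSWER:` convention are our own illustration, not a standard):

```python
# Sketch: chain-of-thought prompting. Force the model to show numbered
# reasoning steps, then mark its final answer so a human can audit the
# logic trail before trusting the conclusion. The "ANSWER:" marker is
# a convention we invented for this example.

def chain_of_thought(question: str) -> str:
    """Wrap a question so the model must show its work step by step."""
    return (
        f"{question}\n\n"
        "Think step by step. Number each step of your reasoning, then "
        "state your final answer on a line starting with 'ANSWER:'."
    )

def split_reasoning(reply: str) -> tuple[list[str], str]:
    """Separate the numbered steps from the final answer for review."""
    lines = [ln.strip() for ln in reply.splitlines() if ln.strip()]
    answer = next(
        (ln[len("ANSWER:"):].strip() for ln in lines
         if ln.startswith("ANSWER:")),
        "",
    )
    steps = [ln for ln in lines if not ln.startswith("ANSWER:")]
    return steps, answer
```

If the steps don't support the answer, you've caught the lie before it left your desk; that check is exactly what a bare one-line reply makes impossible.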
Practical Steps for Mastering Co-intelligence
You don't need a PhD in computer science. You just need curiosity.
- Invite AI to the table. Keep a window open all day. Don't just use it for "tasks." Use it for thoughts. "Hey, I’m feeling stuck on this project, give me a new perspective."
- Verify everything. Never copy-paste without a second pair of eyes. Think of the AI as a brilliant but slightly drunk genius.
- Find the jagged edge. Test the AI on different parts of your job. Figure out where it excels and where it falls flat. That map is your most valuable asset.
- Stay "Human-in-the-Loop." This is a technical term, but it’s a life philosophy now. Never let the AI be the final decider. You are the pilot; the AI is the autopilot. The autopilot is great for the long haul, but you want a human landing the plane in a storm.
The future of co-intelligence isn't about being replaced. It's about being augmented. Those who learn to dance with the machine will find they can move much faster than those trying to fight it or ignore it.
Start by giving it a difficult task today—not something for it to do for you, but something for it to do with you. See what happens when you push back on its first answer. That's where the real magic is.