You're standing in the grocery store aisle. There are six types of peanut butter. You’ve been there for three minutes. It feels like a moral failing, but it’s actually just a computational bottleneck. We spend our lives making choices—who to date, where to eat, when to quit a project—and most of us just wing it. We use "gut feeling." We use "intuition." But honestly? Our guts are often just messy heaps of cognitive bias.
There is a better way to think.
In their seminal book, Algorithms to Live By, Brian Christian and Tom Griffiths argue that the struggles we face in daily life are remarkably similar to the problems computer scientists have been solving for decades. It’s not about turning into a robot. It’s about realizing that "rationality" has a mathematical definition that can actually lower your stress levels.
The 37% Rule and the Agony of the Search
Let’s talk about dating. Or hiring. Or finding an apartment in a city where listings disappear in twenty minutes. This is what mathematicians call the Optimal Stopping Problem.
Keep looking and you burn time while the best options you already passed are gone for good. Commit too soon and you might miss something better later. So, when do you stop?
The math is brutal and beautiful. It’s 37%.
To maximize your chances of picking the absolute best candidate in a pool, you should spend the first 37% of your search "looking" without "leaping." If you're planning to date for ten years before getting married, the first 3.7 years are for data collection. You don't commit. You just establish a baseline. After that 37% mark, you marry the very next person who is better than everyone you saw during the look phase.
Does it guarantee a soulmate? No. But it is the mathematically optimal way to play the odds.
I’ve seen people use this for something as small as a parking spot. If you know there are about 50 spots on a stretch of road, don't even think about turning into one until you've passed the first 18. Once you hit that 19th spot, take the first one that’s open. It sounds crazy. It works.
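If you want to see the odds for yourself, here's a tiny simulation sketch in Python. The candidate "scores" are just random numbers, a simplifying assumption, but it shows why 37% is the magic fraction.

```python
import random

def optimal_stopping(candidates, look_fraction=0.37):
    """Look at the first 37% without committing, then take the first
    candidate who beats everyone from the look phase."""
    cutoff = int(len(candidates) * look_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score
    return candidates[-1]  # nobody beat the baseline; stuck with the last option

# Rough check: how often does the rule land the single best candidate?
trials, wins = 10_000, 0
for _ in range(trials):
    pool = [random.random() for _ in range(100)]
    if optimal_stopping(pool) == max(pool):
        wins += 1
print(f"Picked the very best option in {wins / trials:.0%} of trials")  # roughly 37%
```

Landing the single best option about 37% of the time sounds modest, until you remember that no other strategy does better when you can't go back.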
Explore vs. Exploit: The Restaurant Dilemma
Ever find yourself going to the same taco place every Tuesday even though you know there’s a new spot down the street? That’s the Explore/Exploit Trade-off.
Exploitation is choosing what you already know is good. It gives you a guaranteed result. Exploration is trying something new. It’s risky, but it might yield a higher reward—or at least give you information you didn't have before.
The key factor here is time.
If you’re moving out of town tomorrow, you should exploit. Go to your favorite spot. There's no time left to use the information you’d gain from a new place. But if it’s your first week in a new city, you should explore like a maniac.
In computer science, this is often handled by an algorithm called the “Upper Confidence Bound.” Instead of ranking options by how good they’ve been on average, you rank them by the best they could plausibly be, which amounts to giving a “bonus” to things you haven’t tried much. The less you know about something, the more potential it has. As we get older, we naturally shift toward exploitation. We have less "time on the clock," so we stick to the friends and hobbies we know we like.
That’s not being boring; it’s being computationally efficient.
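If you're curious what that "bonus" looks like in code, here's a rough sketch of the UCB idea applied to picking a dinner spot. The ratings, visit counts, and the "curiosity" knob below are all invented for illustration.

```python
import math

# Hypothetical history: (average enjoyment out of 10, number of visits).
restaurants = {
    "usual taco place": (8.0, 40),
    "new ramen spot":   (7.0, 2),
    "untried diner":    (5.5, 1),
}

total_visits = sum(visits for _, visits in restaurants.values())

def ucb_score(avg, visits, curiosity=4.0):
    """Average payoff plus an uncertainty bonus that shrinks as you learn more."""
    return avg + curiosity * math.sqrt(math.log(total_visits) / visits)

for name, (avg, visits) in restaurants.items():
    print(f"{name:18s} UCB score: {ucb_score(avg, visits):5.2f}")
pick = max(restaurants, key=lambda name: ucb_score(*restaurants[name]))
print("Tonight's pick:", pick)  # the barely-tried places can outrank the favorite
```

The "curiosity" knob is doing the work here: turn it down as your time horizon shrinks and the algorithm drifts from exploring to exploiting, much like we do.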
Sorting, Searching, and the Mess on Your Desk
We spend a lot of time organizing things. We sort our books, our emails, our closets. But here’s a radical thought from the world of algorithms: Sorting is usually a waste of time.
Think about it. Why do we sort? We sort so that searching is faster later. But if you spend three hours filing papers that you only look at once every two years, you’ve spent more time sorting than you would have spent just rummaging through a messy pile.
Computers handle this with caching: keep the stuff you're most likely to need again in the fastest, closest place, and let everything else sink.
There's a policy called Least Recently Used (LRU): when space runs out, get rid of whatever you've gone the longest without touching. In closet terms, if you always put clean clothes back on top of the pile, the things you actually wear keep cycling near the top. The stuff at the bottom? You probably don't need it.
Instead of a complex filing system, just use a "self-organizing" pile. Put the thing you just used back on top. It's not messy; it's an LRU cache, the same trick your computer uses to keep its most-used data close at hand.
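In code, the "put it back on top" pile really is just a few lines. Here's a minimal sketch using Python's OrderedDict; the desk items are made up for illustration.

```python
from collections import OrderedDict

class MessyDesk:
    """A tiny LRU cache: the pile only keeps the last few things you touched."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.pile = OrderedDict()  # rightmost entry = top of the pile

    def use(self, item):
        if item in self.pile:
            self.pile.move_to_end(item)        # put it back on top
        else:
            self.pile[item] = True
            if len(self.pile) > self.capacity:
                self.pile.popitem(last=False)  # the bottom of the pile falls off

desk = MessyDesk()
for thing in ["stapler", "notebook", "charger", "notebook", "tax forms"]:
    desk.use(thing)
print(list(desk.pile))  # ['charger', 'notebook', 'tax forms']: the stapler fell off
```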
When to Stop Thinking: Overfitting and Complexity
We’ve all overthought a decision. We weigh 50 different variables for a new laptop purchase. This is what data scientists call Overfitting.
When a model is too complex, it starts to mistake "noise" for "signal." It fits the past data perfectly but fails to predict the future. If you try to pick a spouse based on a 100-point checklist including "likes the same obscure 90s indie band," you are overfitting. You’re focusing on details that don’t actually matter for long-term success.
The solution is Regularization. This is basically a penalty for complexity.
In the real world, this means giving yourself a time limit. Or a three-point checklist instead of a twenty-point one. If you can’t decide between two options, it’s often because they are so close in value that it doesn't actually matter which one you pick. The "cost" of the time spent deciding is higher than the difference between the two outcomes.
Just flip a coin. Seriously.
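And if you'd like to see the complexity penalty written out, here it is in miniature. The numbers are made up; the point is that every extra criterion has to earn its keep.

```python
def decision_score(fit_to_past_choices, num_criteria, penalty=0.5):
    """Regularization in one line: reward fit, but charge for complexity."""
    return fit_to_past_choices - penalty * num_criteria

# A 3-point checklist that explains your past happy choices pretty well...
simple = decision_score(fit_to_past_choices=8.0, num_criteria=3)
# ...versus a 20-point checklist that explains them almost perfectly.
elaborate = decision_score(fit_to_past_choices=9.5, num_criteria=20)

print(simple, elaborate)  # 6.5 vs -0.5: the short checklist wins
```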
Mistakes and Game Theory
Sometimes, the best algorithm is to embrace a little chaos.
In networking, if two computers try to send a message at the exact same time, they "collide." If they both try to resend immediately, they’ll collide again. The solution is Exponential Backoff. They both wait a random amount of time, then try again. If they collide again, they double the wait time.
If you’re arguing with someone and you keep hitting a wall, stop. Don't try again in five minutes. Wait an hour. Then a day. Give the system space to clear.
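Here's what the network version looks like as a sketch. The `attempt_send` function is just a stand-in for whatever flaky thing you're retrying.

```python
import random
import time

def send_with_backoff(attempt_send, max_attempts=5, base_wait=1.0):
    """Exponential backoff: after each failure, double the waiting window
    and wait a random amount of time inside it before retrying."""
    for attempt in range(max_attempts):
        if attempt_send():
            return True
        window = base_wait * (2 ** attempt)    # 1s, 2s, 4s, 8s, ...
        time.sleep(random.uniform(0, window))  # randomness keeps two senders from re-colliding
    return False  # still failing: give the system real space and come back later

# Example with a flaky "connection" that succeeds about half the time.
send_with_backoff(lambda: random.random() > 0.5, base_wait=0.1)
```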
And then there's Game Theory. We often get stuck in "Prisoner’s Dilemmas" where everyone acts in their own self-interest and everyone loses. The world is full of these. Climate change, traffic jams, even reply-all email threads.
The fix isn't usually "being a better person." It's mechanism design: changing the rules of the game so that the selfish choice is also the best choice for everyone.
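A toy version of that rule change, with made-up payoffs, looks like this. "Defect" stands in for the selfish move, like firing off the reply-all.

```python
# Toy Prisoner's Dilemma: payoff[(my_move, their_move)] = what I get.
payoff = {
    ("defect", "defect"):       1,
    ("defect", "cooperate"):    5,
    ("cooperate", "defect"):    0,
    ("cooperate", "cooperate"): 3,
}

def best_response(their_move, defect_penalty=0):
    """My best move given theirs, under whatever rules are in place."""
    return max(
        ("defect", "cooperate"),
        key=lambda mine: payoff[(mine, their_move)]
                         - (defect_penalty if mine == "defect" else 0),
    )

# Original rules: defecting wins no matter what the other player does.
print(best_response("cooperate"), best_response("defect"))        # defect defect
# Change the rules (a fine, a norm, a reply-all filter) and the math flips.
print(best_response("cooperate", 3), best_response("defect", 3))  # cooperate cooperate
```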
Practical Next Steps for the Computationally Minded
You don't need a PhD in math to start using these. Start small.
- Use the 37% rule for your next minor search. Looking for a new podcast? Commit to the first one that beats the first three you sampled.
- Stop sorting your "to-do" list. Just do the thing at the top of the pile, and if something new comes in that’s urgent, put it on top.
- Recognize the Explore/Exploit shift. If you're feeling restless, you're probably under-exploring. If you're feeling overwhelmed, you're probably exploring too much and need to exploit some "known goods."
- Embrace the Mess. If the cost of organizing a drawer is higher than the cost of searching it once a month, leave it messy. You've just saved yourself valuable CPU cycles.
Living by algorithms isn't about being cold. It's about being kind to yourself. It’s about recognizing that some problems are just fundamentally hard, and there is a "best" way to handle them that doesn't involve losing sleep.
The next time you're paralyzed by a choice, ask yourself what a computer would do. Usually, the answer is: make a decision, move on, and stop overthinking the noise.