Why the FiveThirtyEight model still matters (even when it's wrong)

Nate Silver isn't there anymore. That's the first thing you have to wrap your head around if you're looking at the FiveThirtyEight model in 2026. The site has been through the wringer: Disney layoffs, a massive leadership vacuum, and the departure of the founder who named the site after the 538 votes in the Electoral College. But the "Model" with a capital M still lives on. It's this weird, digital oracle that political junkies and sports bettors treat like a holy text, even though it basically just tells us what we already know: humans are unpredictable and math is hard.

It’s about probability, not prophecy. People forget that.

What actually makes a FiveThirtyEight model tick?

Most people think the FiveThirtyEight model is just an average of polls. It's not. If it were just an average, you could do it on a napkin. Instead, it's a massive simulation, a Monte Carlo engine, that runs the "election" or the "season" tens of thousands of times to see how often a specific outcome happens.
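To make the idea concrete, here's a minimal sketch of that kind of Monte Carlo engine. The state win probabilities and electoral-vote counts below are invented for illustration, and unlike the real model, this toy treats the states as independent coin flips:

```python
import random

def simulate_election(win_probs, evs, n_sims=20_000, seed=42):
    """Toy Monte Carlo: flip a weighted coin per state, tally the
    electoral votes won, and count how often the candidate clears 270."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        ev = sum(evs[s] for s in win_probs if rng.random() < win_probs[s])
        if ev >= 270:
            wins += 1
    return wins / n_sims

# Invented toy map: 260 safe electoral votes plus two swing states
win_probs = {"safe": 1.0, "PA": 0.55, "WI": 0.52}
evs = {"safe": 260, "PA": 19, "WI": 10}
p = simulate_election(win_probs, evs)
```

Because the candidate only needs one of the two swing states here, the simulated win probability lands near 78%, even though neither state is individually better than a coin flip with a thumb on the scale.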

Think about it like this. If a weather forecaster says there is a 20% chance of rain, and it rains, was the forecaster wrong? No. They said it could happen. But in politics, if the FiveThirtyEight model gives a candidate a 20% chance and they win, the internet explodes. "The model failed!" No, the 20% event just happened to be the one we’re living in.

The secret sauce used to be Nate Silver's specific weighting system. He'd look at pollster ratings, giving more "points" to high-quality outfits like Selzer & Co. or Siena College/New York Times and fewer to "junk" polls built on automated robocalls. Now, under the guidance of G. Elliott Morris, the model has shifted. It relies more heavily on "fundamentals." We're talking about things that aren't polls: economic growth, incumbency advantage, and even how polarized a state has become over the last decade.
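A grade-weighted poll average is simple to sketch. The letter grades, weights, and poll numbers below are all invented; FiveThirtyEight's actual pollster ratings and weighting formula are far more granular:

```python
# Invented letter-grade weights; the real ratings are more granular.
GRADE_WEIGHT = {"A+": 1.0, "A": 0.9, "B": 0.6, "C": 0.3, "D": 0.1}

def weighted_poll_average(polls):
    """polls: list of (candidate_pct, pollster_grade).  High-grade
    pollsters pull the average hard; junk polls barely nudge it."""
    total = sum(GRADE_WEIGHT[g] for _, g in polls)
    return sum(pct * GRADE_WEIGHT[g] for pct, g in polls) / total

polls = [(52.0, "A+"), (48.0, "C"), (47.0, "D")]
naive = sum(pct for pct, _ in polls) / len(polls)   # plain mean
weighted = weighted_poll_average(polls)             # pulled toward the A+ poll
```

The plain mean of these three polls is 49.0, but the weighted version sits near 50.8, because the two low-grade robocall polls barely count against the one gold-standard survey.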

The 2016 trauma and the 2020 "correction"

We have to talk about 2016. It's the elephant in the room.

Every other data outlet had Hillary Clinton at a 99% or 95% chance of winning. The FiveThirtyEight model was the outlier; it gave Donald Trump a roughly 29% chance on election night. To the casual observer, that looked like a "safe" win for Clinton. To a statistician, 29% is worse than the odds of losing a round of Russian roulette, which are only one in six. It's high. Very high.

The model caught something others didn't: correlated error. This is the fancy term for "if the polls are wrong in Pennsylvania, they are probably also wrong in Michigan and Wisconsin." Most models at the time treated those states as independent events. FiveThirtyEight’s math realized that voters in the Rust Belt tend to move in packs. If one state tilted, they all would.
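A toy simulation shows why correlated error matters so much. Assume, hypothetically, a 2-point polling lead in three Rust Belt states and a polling error with a 3-point standard deviation, and compare a shared regional miss against three independent misses:

```python
import random

def sweep_loss_prob(leads, correlated, n=20_000, sd=3.0, seed=7):
    """Chance the candidate loses ALL the listed states, given polling
    leads of `leads` points and a polling error of std dev `sd`."""
    rng = random.Random(seed)
    sweeps = 0
    for _ in range(n):
        if correlated:
            shared = rng.gauss(0, sd)                   # one miss hits every state
            errs = [shared] * len(leads)
        else:
            errs = [rng.gauss(0, sd) for _ in leads]    # each state misses on its own
        if all(lead + e < 0 for lead, e in zip(leads, errs)):
            sweeps += 1
    return sweeps / n

leads = [2.0, 2.0, 2.0]   # hypothetical 2-point leads in PA, MI, WI
p_indep = sweep_loss_prob(leads, correlated=False)
p_corr = sweep_loss_prob(leads, correlated=True)
```

With independent errors, losing all three states is a roughly 1.6% event. With one shared miss, it jumps to about 25%, more than ten times as likely. That gap is essentially the difference between the models that laughed off a Trump win in 2016 and the one that didn't.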

By 2020, the model got even more cautious. It factored in the "uncertainty" of a global pandemic and the massive shift toward mail-in voting. Even when Joe Biden was up by double digits in some national polls, the FiveThirtyEight model kept a wider range of possibilities open. It predicted a Biden win, sure, but it also signaled that the "Blue Wave" might be more of a "Blue Ripple" in the House and Senate. It was right.

Why you can't just trust the "Poll of Polls"

Polls are getting worse. Honestly, they're kind of a mess right now. Nobody answers their phone anymore. If you see a random number on your screen, do you pick up? Of course not. You let it go to voicemail. This creates a "non-response bias." The only people answering polls are those who are super engaged or, frankly, have a lot of free time.

This is where the FiveThirtyEight model tries to save itself. It uses a technique called MRP (Multilevel Regression and Poststratification). It’s a mouthful. Basically, they take a small group of people and use demographic data to "fill in the blanks" for the rest of the population. If they talk to five 30-year-old Black men in Georgia, they use that data to estimate how all 30-year-old Black men in Georgia might vote, adjusted for education and income.
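The poststratification half of MRP is easy to illustrate; the multilevel-regression half, which smooths estimates for tiny demographic groups, is omitted here. Every number below is invented. Suppose a sample over-represents college graduates relative to the (made-up) census shares:

```python
# Toy poststratification (the "P" in MRP): reweight a sample that
# over-represents college graduates to match invented census shares.
sample = {                      # group: (respondents, support rate in sample)
    "college": (700, 0.60),
    "no_college": (300, 0.45),
}
census_share = {"college": 0.40, "no_college": 0.60}

n = sum(count for count, _ in sample.values())
raw = sum(count * rate for count, rate in sample.values()) / n             # sample-weighted
adjusted = sum(census_share[g] * rate for g, (_, rate) in sample.items())  # census-weighted
```

The raw sample says 55.5% support, but once each group counts according to its real share of the population rather than its share of the respondents, the estimate drops to 51%. Same interviews, very different headline.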

It’s clever. Is it perfect? No way.

The "Fundamentals" vs. The Polls

There’s a constant tug-of-war inside the code.

  1. The Polls: What people say they will do today.
  2. The Fundamentals: What history says should happen based on the price of gas and who is in the White House.

Early in a campaign, the FiveThirtyEight model leans heavily on the fundamentals. It’s too early for polls to be accurate; people aren't paying attention yet. As we get closer to Election Day, the "weight" shifts. The polls take over the driver's seat. If the polls and the fundamentals are saying two different things, the model gets "uncertain." This is usually when you see the win probability sit near 50/50, even if one candidate is slightly ahead.
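One way to sketch that shifting weight. The linear ramp and the 60-day window below are illustrative assumptions, not the model's actual schedule:

```python
def blended_margin(poll_margin, fundamentals_margin, days_out, window=60):
    """Linearly shift trust from fundamentals to polls as Election Day
    approaches.  The linear ramp and 60-day window are invented."""
    poll_weight = max(0.0, min(1.0, 1 - days_out / window))
    return poll_weight * poll_margin + (1 - poll_weight) * fundamentals_margin

# Polls say +4, fundamentals say -1 (both numbers invented)
early = blended_margin(4.0, -1.0, days_out=120)  # pure fundamentals
mid = blended_margin(4.0, -1.0, days_out=30)     # 50/50 blend
late = blended_margin(4.0, -1.0, days_out=0)     # pure polls
```

Four months out, the forecast ignores the rosy polling entirely and shows -1. A month out it splits the difference at +1.5, and on Election Day the polls own the number. That mid-campaign disagreement zone is exactly where the headline probability tends to hover near 50/50.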

Sports and the "Elo" obsession

While everyone screams about politics, the FiveThirtyEight model for sports—specifically the Elo ratings—is actually where the most consistent math happens. Elo is a system originally designed for chess. You gain points for beating a tough opponent and lose points for losing to a "scrub."

In the NFL or NBA models, this creates a rolling power ranking. It doesn't care about "momentum" or "who wants it more." It only cares about the score and the quality of the opponent. If the Kansas City Chiefs beat a winless team by 3 points, the model might actually drop their rating because they didn't win by as much as a "top-tier" team should have. It’s cold. It’s calculated. And it’s often better than the "experts" on TV.
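The core Elo update fits in a few lines. This is the standard chess formula; FiveThirtyEight's sports versions layer on wrinkles like home-field advantage and margin-of-victory multipliers, which are omitted here:

```python
def elo_update(rating_a, rating_b, score_a, k=20):
    """Standard Elo: compute A's expected score from the rating gap,
    then move both ratings by k times the surprise."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)   # score_a: 1 win, 0 loss, 0.5 tie
    return rating_a + delta, rating_b - delta

# Favorite (1700) beats underdog (1500): a small, expected gain.
fav, dog = elo_update(1700, 1500, score_a=1)
# Underdog pulls the upset instead: a much larger swing.
upset_dog, upset_fav = elo_update(1500, 1700, score_a=1)
```

The favorite gains only about 5 points for doing what it was supposed to do, while the underdog gains about 15 for the upset. That asymmetry is the whole system: beating a scrub tells the math almost nothing.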

The criticism: Is it just a "vibe" check?

Critics like Nassim Taleb have famously attacked the FiveThirtyEight model as statistically unsound. The argument is that by constantly changing the odds, the model isn't actually predicting anything—it's just tracking the current mood. If the odds go from 60% to 70% and back to 60%, is that meaningful information or just noise?

There’s also the "herding" problem. Pollsters don’t want to be the outlier. If everyone else says the race is tied, a pollster might be tempted to "adjust" their data so they don't look crazy. If the pollsters are herding, the FiveThirtyEight model is just aggregating a bunch of people who are all afraid of being wrong.

How to read the model like an expert

Stop looking at the percentage. Seriously.

The "Win Probability" is the least interesting part of the FiveThirtyEight model. If you want to actually understand what’s happening, look at the "Snake Chart" or the "Tipping Point State."

The tipping point state is the one that gives the winner their 270th electoral vote. In recent presidential cycles, this has consistently been Pennsylvania or Wisconsin. If a candidate is winning the national popular vote by 5 points but losing the tipping point state, the model will tell you they are in deep trouble. The national average is a vanity metric. The tipping point is the reality.
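Finding the tipping point for a single simulated map is straightforward: sort states by the winner's margin, best first, and walk down the list until the electoral votes cross 270 (the real model does this once per simulation and reports which state tips most often). The margins and vote counts below are invented:

```python
def tipping_point(margins, evs):
    """Sort states best-margin-first and accumulate electoral votes;
    the state whose votes cross 270 is the tipping point."""
    total = 0
    for state in sorted(margins, key=margins.get, reverse=True):
        total += evs[state]
        if total >= 270:
            return state

# Invented margins (winner's edge in points) and electoral votes
margins = {"safe blue bloc": 20.0, "PA": 0.8, "WI": 0.4, "NC": -1.2}
evs = {"safe blue bloc": 251, "PA": 19, "WI": 10, "NC": 16}
tp = tipping_point(margins, evs)
```

Here the safe states deliver 251 votes and Pennsylvania's 19 push the total to exactly 270, so Pennsylvania is the tipping point, even though Wisconsin is the closer race.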

Also, check the "Fat Tails." In statistics, a fat tail means there’s a higher-than-expected chance of an extreme event. If the model shows a wide distribution of outcomes, it means a landslide is just as possible as a razor-thin margin. When the distribution is narrow, the model is confident. Most people ignore the "shading" on the graphs, but that’s where the truth is hidden.

The future of data journalism

With Nate Silver launching his own "Silver Bulletin" on Substack and many of the original crew gone, the FiveThirtyEight model is in a transitional era. It’s more institutional now. It’s less about one man’s "gut" and more about a standardized set of algorithms maintained by ABC News.

Does that make it better? Maybe. It’s certainly more transparent. They’ve moved toward open-sourcing more of their logic, allowing other nerds to poke holes in the code. In a world of "fake news" and "alternative facts," having a benchmark—even a flawed one—is better than flying blind.

Actionable takeaways for following the data

If you’re going to use these models to inform your world view (or your bank account), follow these rules:

  • Ignore early-season models. Until about 60 days before an election or halfway through a sports season, the noise-to-signal ratio is too high.
  • Look for "Correlated Shifts." If five different models (538, Cook Political Report, Decision Desk HQ) all move in the same direction at once, something real is happening.
  • Check the "Pollster Grade." If a new poll drops that shows a wild result, see how FiveThirtyEight grades that pollster. If it's a "C-" or "D," ignore the headline.
  • Watch the "Deluxe" vs. "Lite" versions. The "Lite" version is just polls. The "Deluxe" includes the economy. If they are far apart, the economy hasn't "baked in" to voter sentiment yet.

The FiveThirtyEight model isn't a crystal ball. It’s a sophisticated weather map for the chaos of human behavior. Treat it as a guide, not a guarantee, and you’ll be ahead of 90% of the people on social media.