You’ve probably seen the "barometer" dials. Those colorful, semi-circular graphs that look like a speedometer on a 1990s Honda Civic, showing you exactly which teaching tricks are "hot" and which are "not." If you work in a school, John Hattie is a name that carries the weight of a thousand faculty meetings. His work, specifically the John Hattie effect size rankings, has been called the "Holy Grail" of education.
But here’s the thing. Most people are using these numbers all wrong.
It’s not just about picking the biggest number and running with it. Honestly, if you just look at a list and say, "Collective Teacher Efficacy is a 1.57, so let's do that," you’re missing the point of the last thirty years of educational research. The data is messy. It’s complicated. And it’s kind of brilliant if you actually know how to read the fine print.
The 0.40 Hinge Point: Is It Actually Magic?
Let’s talk about the "hinge point." Hattie pooled the results of thousands of individual studies—by synthesizing meta-analyses, to be technical—and found that the average effect size for almost everything we do in schools is about 0.40.
Think about that.
Almost every intervention "works" to some degree. You could stand in front of a class and recite the phone book, and the kids would probably learn something. Hattie’s argument is that we shouldn't be satisfied with just "making a difference." We should be looking for things that work better than the average.
In his world, an effect size of 0.40 represents roughly one year of growth for one year of schooling. If a strategy has an effect size of 0.80, you’re theoretically seeing two years of growth in a single calendar year. That's the dream, right? But it’s not a law of physics. It’s a statistical average.
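If you want to see the arithmetic behind those barometers, the core calculation is Cohen's d: the gain in mean scores divided by the pooled standard deviation. Here's a minimal Python sketch with invented scores. Keep in mind this is an illustration of the formula, not Hattie's method; his published figures come from aggregating meta-analyses, not raw classroom data like this.

```python
from statistics import mean, stdev

def effect_size(pre, post):
    """Cohen's d: mean gain divided by the pooled standard deviation."""
    pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical start-of-year and end-of-year test scores (out of 100):
pre  = [52, 61, 48, 70, 55, 63, 58, 66, 50, 59]
post = [55, 65, 52, 74, 58, 67, 61, 70, 53, 63]

d = effect_size(pre, post)
print(f"d = {d:.2f}")                      # d = 0.50
print(f"~{d / 0.40:.1f} years of growth")  # using Hattie's 0.40-per-year heuristic
```

One thing the sketch makes obvious: d depends on the spread of whatever test you used, which is exactly why the critics below worry about mixing apples and oranges.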
Some researchers, like Robert Slavin, have been pretty vocal about the flaws here. They argue that Hattie mixes "apples and oranges" by combining studies with different types of tests and different groups of kids. A massive effect size in a tiny, controlled experiment might look great on paper, but it doesn't always translate to a chaotic classroom with 30 middle schoolers on a Friday afternoon.
What the Numbers Actually Say
You want the hits? Here are the heavy hitters from the most recent updates, including insights from Visible Learning: The Sequel.
Collective Teacher Efficacy (1.57)
This is the big one. It’s not just about being a "good teacher." It’s about the staff as a whole truly believing they can impact student outcomes. When a team of teachers stops saying "these kids can't learn because of their home life" and starts saying "we can figure out how to teach them regardless," the impact is massive. It’s a mindset shift, not a worksheet.
Self-Reported Grades (1.33)
Hattie renamed this "Student Expectations." If you ask a student, "What grade do you think you’ll get on this test?" before they take it, they are usually right. The goal of the teacher is to help the student exceed their own expectations. Once a kid realizes they can do better than they thought they could, their whole trajectory changes.
Teacher Estimates of Achievement (1.29)
This one is slightly spooky. It turns out that a teacher’s internal "prediction" of how a student will perform is one of the strongest predictors of how they actually perform. If you think a kid is a genius, they’ll likely rise to it. If you’ve written them off? Well, the data says they’ll probably meet those low expectations.
Cognitive Task Analysis (1.29)
Basically, this is the art of breaking down a complex skill into tiny, logical steps. It’s why a master chef can teach you to make a soufflé even if you can’t boil an egg. You’re making the "hidden" thinking processes visible.
Why "Class Size" Doesn't Rank Higher
This is where the John Hattie effect size research usually causes a fight in the teacher's lounge. Reducing class size has an effect size of about 0.21.
That’s below the 0.40 hinge point.
Does that mean small classes don't matter? Of course not. It just means that if you take a teacher who is used to lecturing 30 kids and give them 15 kids, they usually still just lecture. The class size changed, but the teaching didn't. To get the benefit of a smaller group, you have to change the way you interact with the students. If you don't change the "how," the "how many" won't save you.
The Dark Side of the Rankings
It’s easy to get obsessed with the "top 10" list, but the bottom of the list is just as telling.
- Retention (Repeating a grade): -0.32. It’s one of the few things we do that actually hurts learning.
- Summer Vacation: -0.02. Yeah, the "summer slide" is real, especially for kids who don't have access to books or enrichment during the break.
- Corporal Punishment: -0.33. No surprises here. Fear is a terrible teacher.
Moving Beyond the "What" to the "How"
If you’re a school leader or a teacher, don't just print out the list and start checking boxes. That’s "checklist teaching," and it’s a recipe for burnout.
Instead, look for the "story" behind the data. Hattie’s big takeaway isn't that feedback is good (though it is, at 0.62). It’s that teaching needs to be visible. Teachers need to see the impact they are having, and students need to see what they are supposed to be learning.
When you use the John Hattie effect size as a mirror rather than a manual, things get interesting. You start asking, "Is my feedback actually helping, or am I just writing 'Good Job' in red pen?" (Spoiler: 'Good Job' has almost zero effect).
Actionable Steps for Your Classroom
Stop treating the rankings like a grocery list. Start using them as a diagnostic tool.
First, evaluate your own impact. Use a simple tool to measure the growth of your students over a semester. If they aren't hitting that 0.40 equivalent, ask why. Is it the strategy, or is it the implementation?
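Here's what that semester check might look like in practice, reusing the same Cohen's d calculation sketched earlier. The scores are invented; the only part taken from Hattie is the 0.40 threshold.

```python
from statistics import mean, stdev

HINGE = 0.40  # Hattie's hinge point: roughly one year of growth per year of schooling

def effect_size(pre, post):
    # Same Cohen's d as in the earlier sketch.
    pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical fall and spring scores for one class:
fall   = [52, 61, 48, 70, 55]
spring = [54, 62, 50, 73, 56]

d = effect_size(fall, spring)
status = "above" if d >= HINGE else "below"
print(f"d = {d:.2f}, {status} the hinge")  # d = 0.21, below the hinge
```

A result like that doesn't tell you whether the strategy failed or the implementation did; it just tells you the question is worth asking.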
Second, prioritize "Visible" strategies. Make sure your students know exactly what the "Success Criteria" are. If they don't know what a "good" essay looks like, they can't write one. Show them examples. Let them grade themselves using a rubric.
Third, build collective efficacy. Stop working in a silo. Talk to your colleagues about what’s actually working. Share the wins, but more importantly, share the failures. That's where the real learning—for both you and the kids—happens.
Hattie's research isn't perfect. No meta-meta-analysis ever could be. But it’s a way better starting point than "this is how we've always done it." Focus on the high-impact stuff, be skeptical of the fads, and always, always look for the evidence that your kids are actually growing.