Top-Up Tuesday: 🧠 The Secret to Making Assessment Data Actually Useful
How to Make Assessments Work for You, Not Against You
"The data doesn't match what I'm seeing in the classroom."
If you've ever said that—or even thought it—you’re not alone.
But here’s the hard truth Dr. Matthew Burns shared with me:
The mismatch isn’t the data. It’s how we’re using it.
Welcome to this week’s Top-Up Tuesday, where I distill powerful insights from episodes of the Knowledge for Teachers podcast to help you sharpen your teaching toolkit. This week, my chat with Dr. Matthew Burns—renowned school psychologist and assessment expert—might just change how you see assessment altogether.
As educators, we're constantly navigating a sea of assessments. Screeners, diagnostics, progress monitoring – the terms alone can be overwhelming. Add to that the pressure to use data effectively to support our students, and it's no wonder many of us feel like we're drowning in numbers rather than using them to chart a clear course for instruction.
🧩 Why Assessments Feel Broken
You’re giving assessments. You’re looking at the data. But you're still unsure what to do next.
Dr. Burns argues that most assessment confusion comes from using tools for the wrong purpose.
A STAR test is a screener—it tells you who needs help. But if you’re using it to figure out what help to give… you’re flying blind.
Takeaway:
Don’t ask a thermometer to diagnose a broken bone.
📚 The Three Types of Assessment Teachers Actually Need
Forget the clutter. You only need to focus on three assessment types:
Screening – Who’s at risk?
Diagnostic – What specifically do they need?
Progress Monitoring – Is what we’re doing actually working?
🎯 The mistake? Many schools are doubling or tripling up on screeners, thinking that’s “triangulation.” It’s not. Dr. Burns puts it bluntly:
“Triangulation doesn’t mean giving the same test three times.”
🧠 The Framework Teachers Deserve (But Rarely Get)
Teachers aren’t short on data—they’re short on clarity.
Dr. Burns shared that when teachers are given specific questions to guide data analysis, their team meetings become faster and more effective. Instead of wandering through spreadsheets, you’re answering:
Is there a class-wide need?
Which students need targeted support?
What support do they need?
How will we monitor progress?
It’s not magic. It’s just having the right map.
📈 Fluency ≠ Speed. It Means This.
A standout moment? Burns’ take on fluency:
“Speed doesn’t matter. Automaticity matters.
But how do we measure automaticity? Speed.”
He breaks down the instructional hierarchy—acquisition, fluency, generalisation, and application—and shows how the phase a student is in should drive what you teach and how you teach it.
For example:
If a student is inaccurate? Focus on modeling.
If they’re accurate but slow? Practice, practice, practice.
If they’re fast but inflexible? Time to generalise.
🧠 Knowing where a student is lets you stop guessing and start targeting.
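The three rules above can be pictured as a simple decision procedure. A minimal sketch, assuming illustrative thresholds: the phase names come from the episode, but the numeric cut-off and the function name are my own, not Dr. Burns' published benchmarks.

```python
# Illustrative sketch of the instructional hierarchy as a decision rule.
# Phase names are from the episode; the 90% cut-off and helper name are
# assumptions for illustration, not official benchmarks.

def instructional_focus(accuracy: float, automatic: bool, flexible: bool) -> str:
    """Suggest a teaching focus from where a student sits in the hierarchy."""
    if accuracy < 0.90:             # acquisition: still making errors
        return "modeling"           # show them how, with immediate feedback
    if not automatic:               # fluency: accurate but slow
        return "practice"           # repeated, timed practice
    if not flexible:                # generalisation: fast but rigid
        return "vary the contexts"  # new materials, settings, formats
    return "application"            # extend to real problem-solving

print(instructional_focus(0.80, automatic=False, flexible=False))  # -> modeling
print(instructional_focus(0.95, automatic=False, flexible=False))  # -> practice
print(instructional_focus(0.98, automatic=True, flexible=False))   # -> vary the contexts
```

The point isn't the code—it's that the hierarchy turns "what should I teach next?" from a guess into a lookup.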
🔧 CBM vs CBA: The Acronyms You’re Probably Mixing Up
Think you know the difference between Curriculum-Based Measurement and Curriculum-Based Assessment?
Many don’t. Dr. Burns once ran a live demo where half the room misidentified which was which.
The big difference?
CBM tracks whether progress is happening. It focuses on fluency (like words read correctly per minute) and is primarily for progress monitoring. It tells you if a student is getting better.
CBA uncovers what to teach next. It focuses on accuracy (like percentage of words read correctly) using actual instructional material, and is for diagnosis. It helps determine what a student needs and the right instructional level (e.g., 93–97% accuracy for connected-text reading).
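Here's the contrast in miniature. A hedged sketch: the 93–97% instructional band for connected-text reading is from the episode, but the function names and the example numbers are mine.

```python
# Sketch contrasting a CBM-style metric (rate) with a CBA-style one (accuracy).
# The 93-97% instructional band is from the episode; helper names and sample
# numbers are illustrative assumptions.

def wcpm(words_correct: int, seconds: int) -> float:
    """CBM-style fluency metric: words read correctly per minute."""
    return words_correct / (seconds / 60)

def reading_level(words_correct: int, words_attempted: int) -> str:
    """CBA-style placement: classify a text by the student's accuracy on it."""
    accuracy = words_correct / words_attempted
    if accuracy < 0.93:
        return "frustration"     # too hard to practice in
    if accuracy <= 0.97:
        return "instructional"   # the right level for teaching
    return "independent"         # suitable for solo practice

print(wcpm(words_correct=104, seconds=60))                   # -> 104.0
print(reading_level(words_correct=95, words_attempted=100))  # -> instructional
```

Same student, same passage—two different questions. The rate answers "is it working?"; the accuracy answers "what should they be working in?"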
Using one when you need the other? That’s how assessment efforts stall.
🧪 Bottom Line: Data Isn’t the Problem
The problem isn’t data. It’s that teachers haven’t been taught how to use data for instructional decisions, only for grading.
“Don’t let kids practice anything unless they can do it with at least 90% accuracy.”
That’s one of the practical benchmarks Burns offers—and there are many more.
If you're a teacher, coach, or leader, this conversation is a wake-up call and a roadmap all in one.
✏️ Action Steps for This Week
Audit Your Assessments – Are you over-screening? Are you clear on the purpose of each test?
Choose Purpose-Aligned Tools – Use screeners only for screening. Use diagnostics only to guide what to teach.
Give Your PLCs a Framework – Don’t let teams flounder in data. Hand them a simple, structured protocol.
Rethink Practice – Repetition matters, but only if it’s built on accuracy.
Ditch the Intuition – Don’t let gut feel or unreliable metrics (like processing speed from IQ tests) drive intervention decisions; rely on evidence-based practices tied directly to data from appropriate assessments.
Want to make assessment less overwhelming and more instructional?
🎧 Listen to the full conversation with Dr. Matt Burns here 👇
https://www.learnwithlee.net/kft-matthewburns/