
Score Reporting Isn’t Enough: What Comes Next Matters
Imagining Supports That Can Make Score Results More Useful
“This test is just one measure of how well a student is performing academically. Other information, such as classroom assignments and scores on other assessments, should be considered to determine how to best support students.” Have you ever seen text like this on a score report?
This kind of language is ubiquitous in end-of-year state summative assessment score reporting, and it is meaningful; it rightly encourages users to look at other sources of information and triangulate across them to support good decisions. Language like this appears not only on score reports themselves but also in the variety of interpretive materials states provide, such as interpretive guides, one-pagers for parents, or recorded professional-development modules.
But imagine if we did more.
How much more useful could score reports be if we built a constellation of supports and resources that helped users see their next steps: supports and resources that go well beyond what we currently provide? Our field—those of us who develop, administer, and facilitate the use of statewide summative assessments—can help promote good use of test score data by better supporting policymakers, practitioners, and families after score reports are published.
It’s not that the field isn’t providing supports for score reporting; it’s just that the need is so great and diverse that even the best examples leave a great deal of room for improvement. When developing score reports and related supports—interpretive guides, professional development, and the like—we can go further and make three things more explicit:
- What uses are supported by the assessment information,
- What other information is needed and helpful for those uses, and
- How that information can be used alongside assessment information.
Doing so addresses the “so what do I do with this?” problem, helping demystify what productive next steps can look like. This clarity can be particularly helpful for those early in their careers or who have limited access to professional mentorship or support.
Next, I provide two examples of what going further might look like.
Supporting Parent-Teacher Conferences
What if, in partnership with like-minded schools and districts, state leaders and their partners developed a protocol to help teachers structure conversations with parents about statewide summative assessment during parent-teacher conferences? Often, these late-fall conferences are the first chance parents have to talk through their children’s state assessment results—if they have a chance to do so at all. And all too often, teachers lack the support they need to have these conversations.
An optional, well-developed process would go a long way toward empowering teachers to have productive conversations with parents and, ideally, would alleviate the pressure teachers face when explaining statewide summative assessment results. This process would touch on the three points above, helping teachers contextualize the results alongside their current instruction and consider the possible next steps parents can support.
Supporting Curricular Evaluation
Consider the evaluation of curricular materials by school or district personnel. This kind of evaluative use is directly in line with the intended uses of statewide summative assessment, yet those in the field are often left to go it alone in conducting this kind of evaluation.
In well-funded districts, this kind of evaluation is often conducted by a regional office or an external contractor. In other cases, it’s done by teachers or district leaders with limited time, resources, and experience.
Many of these educators would likely appreciate non-prescriptive guidance on high-quality evaluative practices. This could take the form of a short guide that outlines major steps for evaluation involving state assessment data. “A Six-Step Process for Using Test Scores and Other Evidence to Evaluate Curriculum,” perhaps? That’s just one example. Resources like this could take many forms.
The state should provide this kind of resource not as a requirement but as an optional tool the field might find useful. It could offer guidance on what other information to include in an evaluation and what next steps might follow. Ideally, it would be vetted with and used by partner districts, who could then help share their successes with it.
Better Supporting Summative Assessment Use
The above examples and supporting framing imagine a world where state educational agencies and their partners do more to support the use of statewide summative assessment results. Admittedly, for state educational agencies already facing the Herculean task of running a statewide summative assessment program with limited time and resources, this kind of idea may seem far-fetched.
But this kind of work shouldn’t be seen as all-or-nothing. For example, a state educational agency could choose to develop just one piece of supporting guidance along the lines I’ve outlined. Building piece by piece could lead to early successes and, eventually, broader support from the field.
For example, in a recent program designed around end-of-unit assessments, one of our first steps was to develop guidance on how the results connect to resources. We then built on that guidance to provide a process for reasoning through the results. This work is far from perfect, and a number of teachers told us the second resource was much too long. But it shows how materials can be built up over time. And ideally, these kinds of resources can be shared across states, decreasing cost and effort while expanding the supports.
This kind of sharing is, in part, why I have focused on the supports surrounding score reports, rather than on the reports themselves. In addition, score reports can be difficult to change year-to-year, as they live in a vendor platform and require investment, and potentially scope changes, to revise. Frequent changes to score reports can also frustrate users, leaving them feeling like they have to relearn the reporting with each new release.
Regardless of the starting point, here are a few key recommendations for those considering this kind of work:
- Provide non-prescriptive examples of use. Offer clear, concrete examples of use—such as a protocol for a parent-teacher conference—without making them mandatory. The goal is to support the field while also respecting professional autonomy.
- Explicitly define additional evidence. Specify the kinds of evidence that complement statewide results (e.g., “compare against a recent end-of-unit quiz”). Clearer, narrower guidance can reduce cognitive load and increase usefulness.
- Engage with the field. Build supports in partnership with local leaders, practitioners, families, and if possible, students, to ensure they meet real needs. Engagement also fosters a sense of ownership and encourages sharing of successful practices.
- Connect to content, if possible. When feasible, link guidance to curriculum or content. If doing so directly isn’t possible, consider supporting the field in making those connections on their own.
Building these kinds of supports can help change the dynamics around score reporting, ideally transforming “so what do I do with this?” into a question with the potential to help improve learning: “When, where, and how can I best act on this guidance?”