
Towards Coherence in Assessment Systems 

Making Sense of Diverging Narratives

At the tail end of the COVID-19 pandemic, many stakeholders are trying to understand the state of students’ learning across the nation. Interim assessment providers and other vendors, such as Curriculum Associates, NWEA, and Renaissance, published reports showing significant declines in student learning. The most recent NAEP results echoed those findings, showing notable academic declines in core literacy areas across the board, with relatively few exceptions. 

Many education professionals continue to use terms like “unfinished learning,” “pandemic recovery,” and “learning acceleration” to describe the dire challenges that students face as a result of the pandemic. Locally, teachers and school leaders witnessed the socioeconomically disparate impacts of the pandemic that affected students’ emotional well-being and their learning, such as access to technology at home or expanded roles in family support due to parents’ job losses.

Many colleagues in our community noticed something confounding, however: locally developed academic assessments and formative feedback from teachers sometimes sent a message that learning losses were not as problematic as large-scale data may have suggested. 

And so we find ourselves with a variety of diverging narratives about the state of learning in the nation. The divergences are particularly pronounced between the different levels of analysis: Reviewing one school’s data, for instance, might tell you quite a different story than the one told by a district’s data. And results from one state might show a distinctly different pattern from the one that emerges from national data. 

Different Assessments, Different Stories

To get some clarity on these diverging narratives and the ways insights are being fragmented, several organizations and independent research teams reviewed what was known at the time and provided strategic support (see reports by the Center for Assessment, KnowledgeWorks, and The Evidence Project). 

It was against this challenging backdrop that we proposed a session for the most recent National Conference on Student Assessment (NCSA) on how to attain greater coherence in local systems of assessments, accountability, and school quality improvement, and thereby reduce the conflicting messages of these diverging narratives. 

The session included the two of us and our esteemed colleagues Phyllis Lynch (Rhode Island Department of Education), Lynn Schemel (Indiana Department of Education), Miranda McLaren (Gwinnett County Public Schools in Georgia), and Kristen Huff (Curriculum Associates). You can find the materials from our session (“Making Sense of Diverging Narratives”), along with materials from many other sessions, in the online catalog for the conference. A few themes emerged that we want to briefly review.

Theme 1: Coherence as a Concept

Stakeholders—particularly local and state leaders—routinely use the term “coherence” in strategic discussions of balanced assessment systems. You can think of coherence as the unifying glue that binds together the other core properties of balanced assessment systems: comprehensiveness, continuity, efficiency, and utility. 

For this session, we proposed thinking about coherence at three levels, representing increasingly broader circles of inspection and inquiry: within-, across-, and beyond-assessment. Within-assessment coherence focuses on an individual assessment, across-assessment coherence focuses on the relationships among different assessments in a system, and beyond-assessment coherence focuses on the relationship of assessments to other aspects of pedagogical practice and school improvement work (see our slide deck in the online portal for more details).

Coherence remains a rather elusive concept, however. While it can be scientifically approximated through principled analysis in different disciplines—we noted some examples from design science, linguistics, and medicine—it also requires sustained socio-cultural sense-making efforts. In an assessment system context, not only do data analyses tell different stories about the impact of the pandemic, but people may also interpret or react to these stories differently. This presents a challenge to achieving coherence.

Theme 2: Too Much Testing, Too Little Meaningful Action on Data

Presenters generally agreed that we live in a time when there is a wealth of data available from many assessments, but that this proliferation of data has not led to a proliferation of more coordinated actions at the district or state level overall. Teachers, students, and parents are bombarded with information from many assessments; amid the deluge it’s easy to overlook that each test is designed to evaluate a different aspect of learning for different decision-making purposes. 

The good news is that there is a way to counteract this: ensuring coherent approaches that tightly connect instruction, learning, and assessment, and presenting information in well-designed interfaces that support meaningful engagement with the data. Assessment-specific approaches like through-year assessments are a more narrowly conceived alternative. While they seek to reduce the overall testing burden and increase the instructional utility of information derived from statewide assessments—aims not everyone agrees are realistic—they represent, at best, a simplified version of a coherent system.  

Theme 3: Assessment Literacy Challenges Belong to Everyone

Any discussion of assessment coherence requires some consideration of assessment literacy. Unfortunately, the framing of this issue is too often one-sided and presumes, often implicitly, that the main issue is that professionals are not educating themselves adequately. It also assumes that professionals have easy access to high-quality resources that can help impart the knowledge they need to take action on assessment information. 

Framing it this way, however, puts the blame for not understanding, and the responsibility for learning, largely on those who use these materials and resources. As one presenter noted sharply, we must challenge this assumption. 

Granted, all professionals should have a foundational understanding of assessment, but once these conceptual building blocks are in place, assessment literacy becomes a decidedly more systemic issue. In fact, it is the responsibility of the technical specialists, vendors, and support organizations to present assessment information and data in clear, accessible ways and to find ways to engage stakeholders with these resources.

To borrow an urban roadways metaphor, we need to stop thinking of assessment literacy as a one-way street. Effective assessment communication is, in reality, a two-way street, as is all educational work. And deepening that shared literacy is a freeway interchange, involving many players that cross paths, from testing vendors and school district superintendents to state communications officials and U.S. education leaders with access to wide-reaching platforms. 

Theme 4: Multi-pronged Communication is Key

All presenters noted that it’s important to give users of assessment data a variety of ways to make sense of what they’re seeing. One-on-one conversations with parents, teachers, or district leaders can be helpful, and so can customized templates, flyers, handbooks, and interactive displays, structured in various ways to reflect users’ decision-making needs. This translation of assessment information, in varied forms, is key. It can promote sustained engagement that not only builds accurate understanding and supports well-informed action, but also builds relationships and trust in the system. 

These four themes remind us of conversations we recently had during the Center for Assessment’s annual internal conference, the Brian Gong Colloquium. This year’s topic was innovations in accountability from an equity-centered mindset. We saw that colleagues who were successful in transformational change efforts worked incredibly hard, and in a sustained manner, to build relationships, common understandings, and mutual trust with people closest to the problem. As one colleague noted, the work is not sustainable if it is based on a model of ongoing “heroic action by heroic people.” This is also true for the sustainability of assessment literacy efforts and coherent balanced assessment systems.

We hope that this brief synthesis inspires you to look at the presentations in this session and, importantly, connect with those who delivered them to learn more and share experiences and insights. Building coherence in evidence-based narratives is a complex issue and requires sustained commitment from all of us. We would love to hear from you about the issues you are wrestling with and what approaches you might recommend to others. Please connect with us!