It Might Just Be a Pile of Bricks!

The Challenges of Creating Balanced Assessment Systems

“…a collection of assessments does not entail a system any more than a pile of bricks entails a house” (Coladarci, 2002). In the first 19 years following the 2001 publication of Knowing What Students Know, balanced systems of assessment were considered the unicorns of educational measurement: many were seen in dreams and hopes, but few, if any, were seen in the wild. Something must have happened in 2019 and 2020, because all of a sudden it seemed like balanced assessment systems were as common as old horses at a dude ranch. What’s causing this rapid increase, and why do I think we’re still searching for unicorns?

Certainly part of the reason for the increase is that educators want more useful information to support teaching and learning, and they recognize that state accountability tests are not designed to support these aims. Supporting teaching and learning was one of the motivating goals of the call for balanced systems of assessment in Knowing What Students Know. To be fair, many states are offering a variety of assessments beyond the state summative test. For example, the Smarter Balanced Assessment Consortium has been a leader in this area with its suite of interim assessments and formative assessment resources. Several states, such as Wyoming and Utah, have adopted a similar framework. I referred to these systems as “loosely coupled” in 2018: such tools can increase the coherence of the various assessments in the system, but that does not mean the system is balanced in the ways envisioned in Knowing What Students Know.

Perhaps the major reason I’m seeing so many reports of balanced assessment systems is the co-opting of the term by interim assessment companies. Recent documents from NWEA, Illuminate, and ETS are just a few of many examples touting that balanced assessment systems must be composed of formative, interim, and summative assessments. That’s simply not true. Interim assessments do not have a de facto place in balanced assessment systems. In fact, my colleagues and I argued in our Tricky Balance paper and policy brief that interim assessments are more likely to create unbalanced systems.

My colleagues and I coined the term “interim assessment” back in 2009 in an attempt to keep interim assessment providers (then marketed as benchmark, diagnostic, and similar assessments) from claiming the research literature supporting formative assessment (Perie, Marion, & Gong, 2009). Now, however, it seems that commercial interim assessment providers have shifted to promoting balanced assessment systems to capitalize on the growing interest and literature base.

What Do I Mean by Balanced Assessment Systems?

The call for balanced systems of assessment in Knowing What Students Know and many subsequent writings (e.g., NRC, 2006; NRC, 2014) was born from a recognition that most assessments were not very helpful for improving learning and instruction. Educators understand that large-scale summative tests are far too distal from instruction, at the wrong grain size, and administered at the wrong time of year to make a difference in their daily practice. Balanced assessment systems were motivated by the desire to enhance assessments’ utility for improving learning and teaching while still serving other stated purposes such as accountability and evaluation.

The authors of Knowing What Students Know outlined three criteria for evaluating the balance of assessment systems: coherence, comprehensiveness, and continuity. Coherence refers to the shared model of learning that connects the various assessments in the system to one another and to curriculum and instruction. A system meets the comprehensiveness criterion when its assessments provide multiple views of what students know and can do and support multiple users and uses. Continuity describes how the system documents student progress over time. Together, these properties create a powerful image of a high-quality system of assessments. Raj Chattergoon and I later added utility and efficiency to address more pragmatic aspects of assessment systems.

The Importance of Curriculum

Using a common model of learning to closely connect instruction and assessment, generally instantiated through a high-quality curriculum or learning progressions, is how we move students to deeper learning. Knowing What Students Know referred to a common model or vision of learning as the anchor for coherence. Like others, I think learning models and learning progressions, while critical, are hard for most practitioners to grasp; teachers understand curriculum. My colleagues and I noted the importance of curriculum as the foundation for district and classroom assessment systems in our Tricky Balance paper.

Where are the Unicorns?

Unfortunately, our search will still not lead us to state-level balanced assessment systems because essentially all states leave the choice of curriculum to local school districts. Standards, as end-of-year learning targets, are not sufficient to guide day-to-day instruction and assessment. Therefore, as I (2018) and Shepard et al. (2018) suggested, school districts are the most sensible locus of control for developing curriculum-embedded balanced assessment systems, because states generally cannot meet the coherence criterion. Chicago Public Schools’ Curriculum Equity Initiative is one of the most promising large-district efforts in this regard.

Where Do Interim Assessments Fit?

Given my focus on coherence and the need for a link between curriculum and assessment, I do not see a place for interim assessments in balanced assessment systems unless they can somehow make this critical connection. Widely used commercial interim assessments, in particular, generally are not tied to any specific curriculum and are not necessarily coherent with instruction and the other assessments in the system. I am not saying that interim assessments cannot play a role in assessment systems; they just do not do so by default. I devoted a 2019 CenterLine post to this very concern, where I concluded: “Therefore, I return to where I began; commercial interim assessments have a limited role, at best, in balanced systems of assessment, and any role must be supported by positive evidence that outweighs negative consequences.”

So What Are the Necessary Components of an Assessment System?

In case you missed my post about “diagnostic assessment” last fall, I’ve grown tired of fighting over labels. Given the ambiguity of assessment names in our field, I urge district, school, and state leaders to focus as specifically as possible on use cases, something we’ve been arguing for at least since 2009. A focus on use cases moves us away from the idea that the components of a balanced assessment system can be conveyed simply by their titles, without any discussion of how they will be used.

Lorrie Shepard and I talked about this issue in a recent webinar offered in partnership with the California Collaborative for Educational Excellence (CCEE). Additionally, my colleagues have been developing tools to help leaders use theories of action to clarify intended uses and to guide their selection of the assessments in a system. In fact, on May 11th, Nathan Dadey and Erika Landl will offer a webinar on this topic in partnership with CCEE.

Relying on a well-articulated theory of action to ensure that assessments can support their intended uses is a first step toward balanced systems of assessment that are more equitable and useful for students and educators.