[Image: Arrows point left toward a sailboat while red arrows along the bottom of the image point right. The contrast of directions suggests the tension in taking a different approach when things have traditionally been done one way, and in a certain direction.]

Consider Backward Assessment Design 

The Key to Building and Implementing Instructionally Useful Assessments

When we plan a road trip, we decide on a final destination, then map out how to get there. Similarly, when classroom teachers design a unit of instruction, they should start by identifying the final learning goals for their students, using their state’s content standards, and then backward-design their activities to meet the final goals. 

In the world of curriculum, this process is known as backward design, an idea originated by Grant Wiggins and Jay McTighe in their 2005 book, Understanding by Design. It has three phases: 

  • Phase 1: Identify desired results
  • Phase 2: Determine acceptable evidence
  • Phase 3: Plan learning experiences and instruction

Similarly, if we want an assessment to fulfill instructional purposes, we must start with the end in mind: I’ll call this backward assessment design. Yet we don’t do this in the assessment world. True, many test developers use a principled assessment design approach such as evidence-centered design, but those approaches are agnostic about whether the resulting information can be used to guide and shape daily instructional decisions.

Most testing companies design a test, then design a score report, and then put it in front of teachers and expect them to do something instructionally useful with the results. It still amazes me that we are surprised when this type of design process rarely provides teachers with actionable assessment information. And by actionable, I mean that the assessment information produces insights teachers can use to change what they are teaching, how they are teaching it, and for whom (i.e., all students or some students).

Designing Tests With the End in Mind

Now imagine a scenario where assessment developers start with the end in mind. They start with the intended purpose of the assessment and then map out how to get there. 

Let’s say the test’s purpose is to provide instructionally useful information that can be used to adjust the daily interactions of teachers, students, and content. If that’s the case, then wouldn’t it make more sense to see if it’s possible to design a score report that teachers could use to inform their instruction? 

Once there is a viable proof of concept—where teachers accurately interpret the assessment information and explain what they will do differently instructionally as a result—then, and only then, would test developers backward-engineer an assessment to supply that information. 

It is not a new thought to focus more effort on score report design from the very beginning of the assessment development process. My colleague Chris Domaleski wrote about this in his blog post, “What I Learned about Creating Effective Test Score Reports from the Great Ron Hambleton.” 

I’m going a step further here and saying this: If instructional utility is a key claim we want to make about a test’s results, then we should do everything Prof. Hambleton suggested as best practices in score report development before any assessment blueprints or items are created. 

Focusing on score reports isn’t the only way to improve the instructional utility of assessments, but it is an important way.

Let Teachers’ Feedback Shape Test Design

We shouldn’t design the assessment first when instructional utility will be the primary purpose—or even one of the purposes—of the assessment information. Instead, we should design instructionally useful score reports first and spend a bunch of time, effort, and money trying them out with real classroom educators using think-alouds, focus groups, qualitative interviews, observations, and so on. 

The purpose of trying the score reports out with classroom educators is to test our claims and gather evidence that those claims can be supported. Once we see teachers doing what we hope they will do with the assessment information, then we figure out a way to backward-design an assessment (or assessment system) that will deliver that information to teachers at the right time, when it’s useful for guiding and directing their next instructional actions.

Backward-designing assessments requires a strong theory that guides what we believe teachers should do instructionally using assessment information. And we already have that theory in the literature on formative assessment processes. 

At a minimum, teachers need assessment information at an instructible grain size that provides qualitative insights into student thinking or makes student thinking on the lesson or unit learning targets visible. And this information needs to be related to the enacted curriculum (that is, what teachers are actually teaching) so they can make sense of the information within the context of the teaching and learning cycle. 

I’m sure you can already imagine how starting with this notion of qualitative, content-referenced, and curriculum-embedded assessment results would shape test score reporting in new and better ways.