Matching Instructional Uses with Interim Assessment Designs

The Importance of Ensuring that an Assessment is Designed to Support its Intended Uses

Assessments are most powerful and useful when they are designed intentionally for particular purposes, and this is especially true of interim assessments.

In preparation for the recent NCME Classroom Assessment Conference and the 2019 Reidy Interactive Lecture Series (RILS) conference (which was focused on improving the selection, use, and evaluation of interim assessments), I revisited a set of potential blueprints for interim assessments that I first discussed at the 2010 RILS conference. Through those blueprints, I illustrated how the design of individual interim assessments and the set of assessments administered across a school year could, and should, vary dramatically based on the particular ways the results are intended to be used.

Even when the general purpose of the assessment is to improve instruction, consider the need to precisely specify “assessment use” in order to inform assessment design and validation for each of these use cases: 

a) predicting end-of-year performance on a state assessment; 

b) declaring “mastery” under a competency-based education model; 

c) eliciting “pre-assessment” information about student knowledge prior to instruction; 

d) using assessment information to inform instructional decisions of “going forward” and “going back”; and

e) evaluating instruction and curriculum to determine improvements needed for the next instructional cycle.  

Making Interim Assessment More Useful

There is a great desire among educators to make assessments more useful and effective in supporting increased student learning and better school and district functioning. To achieve more useful and effective assessments, I offer the following suppositions:

  • Assessments must be more diversified to satisfy different uses and situations, particularly to support better instruction.
  • Assessments must be more diversified because specific information is needed for particular purposes and circumstances. A key factor in determining what information is needed is the claim being made about the domain or the student’s content/skills, and whether the instructional path moves forward into new content or back into content that was previously instructed.
  • To inform curricular and instructional decisions, contextual information (especially about the curriculum, the instruction, and the student’s learning history) must be considered in order to interpret and use assessment information.

Because of these points, assessment designs must be matched with instructional uses; no single design can effectively provide information for even the main instructional uses. This is true for all assessments, but it is especially pertinent for interim assessments.

Some important implications of these assertions for interim assessments are:

  1. No single interim assessment can provide all the needed information to “inform instruction.”
  2. Because instructional actions differ from each other and require specific information, evaluating the assessment’s match to the instructional need requires a specific claim. Those designing, using, or evaluating an interim assessment should pay special attention to the assessment’s claim.
  3. Detailed information about the test blueprint (what specific content is included, how, and when) must be available so that users can evaluate whether the test design is adequate to support the claim and the intended interpretations and uses. Those designing, using, or evaluating an interim assessment should pay close attention to this type of blueprint information because, in contrast with more psychometric test information, it can be checked quickly and understood by educators; a simple sketch of such a check follows this list.
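To make that third point concrete, here is a minimal sketch, in Python, of what such a quick blueprint check could look like. Everything in it is hypothetical: the `Blueprint` record, the `coverage_gap` helper, and the standard codes are my own illustration, not part of any real assessment product or framework. The point is only that comparing a claim's required content against a blueprint's covered content is a simple set comparison that educators can perform without psychometric machinery.

```python
from dataclasses import dataclass


@dataclass
class Blueprint:
    """Hypothetical summary of a test blueprint: the standards its
    items cover and when the test is administered."""
    name: str
    standards_covered: set[str]
    administered: str  # e.g., "end of unit 1" or "end of year"


def coverage_gap(blueprint: Blueprint, standards_required: set[str]) -> set[str]:
    """Return the standards a claim requires that the blueprint does not cover.

    An empty result means the blueprint's content matches the claim;
    anything left over is a gap to investigate before adopting the test.
    """
    return standards_required - blueprint.standards_covered


# Made-up standard codes, purely for illustration:
unit1_interim = Blueprint(
    name="Unit 1 interim",
    standards_covered={"NBT.1", "NBT.2", "OA.1"},
    administered="end of unit 1",
)
required_for_claim = {"NBT.1", "NBT.2", "OA.1", "OA.2"}  # content taught in unit 1

print(coverage_gap(unit1_interim, required_for_claim))  # -> {'OA.2'}
```

A non-empty result is an immediate signal that the assessment, as designed, cannot fully support the claim.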

A Closer Look at the Relationship Among Claims, Context, and Assessment Design

Diverse uses and situations require different assessments. An influential paper identified three broad classes of assessment with different purposes and characteristics: summative, interim, and formative (Perie et al., 2009). This division reflected the common wisdom that “Assessments must be designed to fulfill a purpose; it is difficult to get a single assessment to fulfill multiple purposes well.”

This statement is more than an aphorism; it is a design truth. For any tool, the more specifically it is designed for a particular task, the less suitable it is for a different task. The same logic applies to interim assessments, because an assessment is a tool designed to collect evidence that supports a particular claim or inference and its intended use. Thus, different claims lead to different assessment designs. For example:

Claim 1 (summative: end-of-year): The student has achieved a general level of proficiency over the body of content (knowledge and skills identified in the state’s content standards) at the end of the year.

Design: Assessment blueprint includes assessment items representing the full body of content; assessment performance is interpreted in terms of levels of proficiency; the evidence is collected near the end of the year.

Claim 2 (interim: end of the first instructional unit): Of the content knowledge and skills instructed in the first unit, the student’s abilities are strong enough in some areas to indicate preparedness to move on to the next unit, but are not strong enough in other areas.

Design: The assessment blueprint identifies the knowledge and skills to be instructed in the first instructional unit; includes assessment items representing that body of content, which is narrower than the body of content for the whole year; assessment performance is interpreted in terms of a student’s strengths or weaknesses on particular knowledge and skills; evidence is collected near the end of the first instructional unit.

Note: The design specifications for this interim assessment include:

  • Relation to past instruction: “taught in the first instructional unit”.
  • Domain content/skills: “body of content included in the first instructional unit”.
  • Assessment information and interpretation: “particular domain knowledge and skills that an individual student is strong or weak on in relation to the instructional decision of whether to move on in the curriculum”.
  • Timing: “near the end of the first instructional unit”.
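To make the comparison with Claim 1 explicit, the sketch below encodes these design specifications as a simple record. The `AssessmentDesign` structure and its field names are my own hypothetical encoding of the four specification bullets above, not a standard notation.

```python
from dataclasses import dataclass


@dataclass
class AssessmentDesign:
    """Hypothetical record of the design specifications named above."""
    claim: str           # what the assessment is meant to support
    domain: str          # content/skills the blueprint samples from
    interpretation: str  # how performance is read and used
    timing: str          # when the evidence is collected


claim1_summative = AssessmentDesign(
    claim="general proficiency over the year's content standards",
    domain="full body of content for the year",
    interpretation="overall level of proficiency",
    timing="near the end of the year",
)

claim2_interim = AssessmentDesign(
    claim="strengths and weaknesses within unit 1; readiness to move on",
    domain="content taught in the first instructional unit only",
    interpretation="strength or weakness on particular knowledge and skills",
    timing="near the end of the first instructional unit",
)
```

Every field differs between the two records, which is precisely the mismatch the next paragraphs describe.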

It is evident that the interim assessment designed to provide focused information about what was learned during the first instructional unit cannot inform the claim for the summative end-of-year assessment.  

It is also evident that while the summative end-of-year assessment might cover the content of the first instructional unit, the assessment would need to be designed very differently to inform the claim of the interim assessment for the first instructional unit.  

The particular design specifications for the assessments needed to inform Claim 1 and Claim 2 are quite different.

Where Do We Go From Here?

The need for specialized, diversified assessments stands in stark contrast to the desire to get as much information as possible from a single assessment. Examples that have received the most attention and effort recently include: 

  • attempts to squeeze more informative “subscores” out of summative assessments to help inform instruction for individual students, and 
  • attempts to combine information across sets of interim assessments to support summative claims. 

In this post, I have tried to illustrate how assessments typically yield only the information they were designed to provide. It requires careful work to design and develop an assessment that provides accurate and useful information.  

I emphasized the assessment’s claim and its design, particularly the content domain addressed in the test’s blueprint, and how those claims and designs may differ across the multiple administrations of an interim assessment. Before adopting an assessment, the user should evaluate its match to the desired interpretation and use, as well as its technical quality.

Interim assessments are typically administered multiple times over the course of a year. The set of interim assessments must be considered as a whole, as well as individually, so that the set achieves the overall purpose.

If the goal is to make assessments more useful and effective in supporting increased student learning and better school and district functioning, we must thoughtfully identify the information needed from those assessments and then carefully design assessments that will provide that information.
