Stuck in the Middle With Interim Assessments

Improving the Selection, Use, and Evaluation of Interim Assessments

How do you select an interim assessment that will meet your specific goals and needs?  

What assessment characteristics are necessary to support a particular test use, and what evidence of quality should be evaluated to determine whether that use is supported? 

How do you know if educators and schools are appropriately interpreting and using assessment results? 

Well, we have answers. It’s almost that time again for the Center’s annual Reidy Interactive Lecture Series (RILS), which seeks to address these questions and more within a state and local context, and in consideration of the specific role an interim assessment is intended to play within a larger assessment system. 

Interim Assessments in the Context of Balanced Assessment Systems 

State and local education leaders are constantly working to develop balanced assessment systems that provide schools and educators with an accurate, comprehensive profile of student performance and progress. In most cases, these systems include a combination of state-developed, locally developed, or selected assessments that are intended to work together in a coherent, strategic manner to inform instruction and improve educational decision making.

To fill the information gap between state-developed summative assessments of learning and classroom-directed formative assessment for learning, many districts and states have turned to the multitude of commercially developed, off-the-shelf products broadly referred to as interim assessments.

Perie, Marion, and Gong (2009) offer the following definition of interim assessments:

Assessments administered during instruction to evaluate students’ knowledge and skills relative to a specific set of academic goals in order to inform policymaker or educator decisions at the classroom, school, or district level. The specific interim assessment designs are driven by the purpose and intended uses, but the results of any interim assessment must be aggregable for reporting across students, occasions, or concepts (p. 6).

The generality of this definition reflects the diverse range of products that exist under the umbrella of interim assessments. These assessments are designed to serve a range of purposes and uses but are often discussed as if they are interchangeable. A more productive way to think about interim assessments, especially for selection and evaluation purposes, is with regard to how they are, or are not, intended to be used.  

For the purposes of the RILS conference, we exclude from the category of interim assessments those designed to evaluate student performance at the end of a grade or course of instruction. This type of assessment includes the state assessments used to inform high-stakes school accountability determinations, as well as assessments developed to determine grades or support promotion decisions at the end of a course or unit. Interim assessments also do not include formative assessment, or the processes used by teachers and students during instruction that provide feedback to adjust ongoing teaching and learning to improve students' achievement of intended instructional outcomes (Wiley, 2008, p. 3).

Interim assessments are tools that fall between these two ends of the spectrum and are designed to inform teaching and learning throughout a course of instruction. The key distinction among different interim assessments is how the results are intended to be used in service of that goal.

Selecting, Using, and Evaluating Interim Assessments

Depending on their design, interim assessments may be used summatively, to supplement the information gleaned from high-stakes assessments for accountability; or formatively, to inform instructional practice (provided the process and practice are well designed). To select an appropriate interim assessment and fully leverage the benefits afforded by its design, we argue that states and districts must engage in the following activities, all of which we will discuss at RILS:

  1. Articulate how teaching and learning should be supported by assessment information;
  2. Clarify the role and purpose of existing assessment information;  
  3. Identify information gaps that, if filled, can support instructional decision-making and practice;
  4. Describe the characteristics of an assessment necessary to collect this information and use it as intended; and
  5. Identify what evidence is necessary to evaluate whether assessments are meeting expected needs and being used as intended.

Finding the Right Fit For Interim Assessments

Even after defining interim assessments, there is less clarity regarding whether they should play a role in a balanced assessment system. As Marion and colleagues state, "interim assessments are not required components of balanced assessment systems, but such assessments may play a productive role in balanced systems of assessment only if there is sufficient evidence of coherence and utility" (Marion, Thompson, Evans, Martineau, & Dadey, 2019, p. 4). This outcome depends not only on an assessment's intended use but also on whether it was designed to complement and extend the information provided by other elements of the assessment system. Whether as a part of a balanced assessment system or as a focused source of information to support teaching and learning, we must define the why, the how, and the what that guide the selection, design, and use of interim assessments.

The goal of RILS 2019 is to provide the context and tools to support states and districts in making thoughtful decisions about the appropriateness and utility of different interim assessment tools. Through presentations, invited panels, and small workgroups we will address the conditions that support the effective selection, evaluation, and use of interim assessments. We look forward to seeing you there. 

Save the date for the 2019 Reidy Interactive Lecture Series, September 26-27 in Portsmouth, NH. 
