The Choice is Yours on Interim Assessment


Introduction to the Interim Assessment Identification and Evaluation Process Tool

This week, the Center team gathers in Portsmouth, New Hampshire, with state and local educators, assessment specialists, industry leaders, and old and new friends for our 21st annual Reidy Interactive Lecture Series (RILS). The topic of this year's RILS, and the goal for our two days together, is to improve the selection, use, and evaluation of interim assessments by helping states and districts make better-informed decisions about the appropriateness and utility of different interim assessment options.

We will hear from assessment experts, state and district representatives, and practitioners who have experience selecting, using, and evaluating interim assessments.

We also take the Interactive part of RILS seriously. Active participation is a hallmark of RILS as each year we come together to tackle real challenges in assessment and accountability. This year, we’re offering a unique opportunity for participants to review and extend a set of tools developed specifically to help state and district leaders engage in a thoughtful, comprehensive process focused on identifying assessments that will meet their specific needs. 

Why Are These Tools Needed?

In their 2019 brief, Center staffers Marion, Thompson, Evans, Martineau, and Dadey discuss the conceptual issues, challenges, and recommendations associated with designing and implementing a balanced assessment system that supports improvements in teaching and learning. The authors note that balanced assessment systems are a series of assessments that are coherent, comprehensive, and continuous, recalling the criteria laid out in the 2001 National Research Council publication, Knowing What Students Know. In addition, they cite the need for considering utility and efficiency when conceptualizing comprehensive assessment systems (Chattergoon & Marion, 2016).

Building on this work, we believe it's important to critically evaluate the role that interim assessment should play, if any, in balanced assessment systems. This toolkit is designed to help state and district leaders engage in a process to evaluate the degree to which interim assessments are aligned with their theory of action. The toolkit presents a series of questions intended to guide policymakers and practitioners through a structured process that will help them:

  • specify a vision of teaching and learning, including valuable student knowledge, skills, and dispositions; 
  • articulate the assessment information that can support this vision and how it should be used;
  • determine the information provided and uses served by existing assessments to identify gaps, redundancies, or necessary adjustments;
  • narrow the focus to high priority needs and uses; 
  • clarify the use of assessments and identify the questions they can answer; 
  • determine key assessment design, administration, and reporting characteristics that align to intended uses; and
  • engage in an evaluation of the use and utility of these assessments. 

While no set of questions will be relevant to all stakeholders, we hope this toolkit provides a robust set of considerations that are valuable to those who are interested in selecting and examining their assessments more carefully. Additionally, we expect that by authentically addressing these questions collaboratively, system designers can get closer to identifying whether interim assessments should play a role in their balanced assessment system. 

For the purposes of this toolkit, we reference Crane’s (2008) definition of interim assessments from his Council of Chief State School Officers brief, Interim Assessment Practices and Avenues for State Involvement, which extends Perie, Gong, and Marion’s (2007, 2009) definition of interim assessments: 

 

Assessments administered multiple times during a school year, usually outside of instruction, to evaluate students’ knowledge and skills relative to a specific set of academic goals in order to inform policymaker or educator decisions at the student, classroom, school, or district level. The specific interim assessment designs are driven by the purposes and intended uses, but the results of any interim assessment must be reported in a manner allowing aggregation across students, occasions, or concepts. (p. 2).

What Does the Toolkit Look Like?

The toolkit is organized into three phases, which are described briefly below. Phases 1 and 2 are most relevant prior to administering an assessment, whereas Phase 3 would be relevant after administration.  

Figure 1. Timing and Process of the Toolkit Phases.


Phase 1: Identifying Assessment Gaps and Needs

Selecting, designing, or developing assessments that can be used to support a vision of teaching and learning requires careful planning around that vision. Phase 1 of the Interim Assessment Specifications Process is intended to help you:

  • articulate a theory of action for how stakeholders can use assessment processes and information to improve student achievement and instructional practices; 
  • evaluate the degree to which the tools currently available to stakeholders align with and support that theory; and 
  • identify potential gaps in the information believed necessary to help students, educators, and/or schools improve.  

Phase 2: Identifying and Prioritizing Assessment Characteristics & Evidence of Assessment Quality

Once there is a clear vision of how assessment information should be used, it's important to identify the assessment design, administration, and reporting characteristics necessary to support those uses and answer questions aligned with that vision. Phase 2 of the Interim Assessment Specifications Process is intended to help you articulate the characteristics and features an assessment must demonstrate in order to provide information that supports your highest-priority intended uses of the assessment results. These characteristics and features will then help users identify and evaluate the evidence and documentation provided by assessment vendors, supporting decisions about whether an assessment is appropriate for its intended uses.

Phase 3: Evaluating Impact and Utility

Once an assessment is administered, it’s necessary to evaluate whether its use elicits intended behaviors, provides planned insights, and yields anticipated impact. These expectations are based largely on those claims articulated in Phases 1 and 2 and should be part of ongoing validation activities. Phase 3 of the Interim Assessment Specifications Process is intended to raise a series of considerations aligned to your intended uses and desired characteristics and to identify possible sources of evidence for you to examine. These considerations are aligned to the three key areas for evaluation: 

  1. Alignment to the theory of action
  2. Intended uses of the assessment based on the desired characteristics and features
  3. Impact on the behaviors of the identified primary users 

We look forward to thoughtful discussion around the toolkit and feedback from state/district leaders, measurement experts, and vendors related to use of interim assessments within a balanced assessment system.
