Preparing the Future with 2019 Summer Internships

Apr 04, 2019

Center Staff and Interns Will Address Key Issues in Assessment and Accountability

While Center staff and 2018 summer interns share their work at the NCME conference, planning is well underway for our 2019 summer internship program. This summer, the Center will welcome six advanced doctoral students who will work with the Center’s professionals on projects with direct implications for state and national educational policy. Each intern will work with a Center mentor on one major project throughout the summer. At the end of the project, each intern will produce a written report suitable for conference presentation and/or publication.

The interns and their Center mentors will engage in projects addressing six pressing issues in educational assessment and accountability:

  • Evaluating Equity in States’ ESSA Accountability Systems
  • Tools for Depicting and Analyzing Achievement Gaps
  • An Analysis of Assessment Policy
  • Assessment Literacy for Policy Makers
  • The Use and Impact of Interim Assessments
  • Subscore Reporting and Use

Evaluating Equity in States’ ESSA Accountability Systems

Brian Gong and Chris Domaleski will work with Nikole Gregg from James Madison University on evaluating equity in states’ ESSA accountability systems. Because a central purpose of ESSA is to promote equity by improving outcomes for disadvantaged students, it is critical to evaluate how effectively these systems deliver on that purpose.

This project will involve a review of state accountability systems, focusing on the design features (e.g., achievement gap reduction, growth for students below Proficient) explicitly tied to equity, along with a simulation study comparing how multiple states’ accountability systems perform on equity outcomes. Ultimately, we hope to better understand which accountability policies and features are most important and effective in promoting equitable outcomes.
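To illustrate what such a simulation might look like, here is a minimal sketch in Python that generates synthetic school-level data and compares two hypothetical weighting schemes by how strongly the resulting school ratings reward smaller achievement gaps. The indicators, weights, and data are all invented for illustration and do not describe any actual state’s system.

    import numpy as np

    rng = np.random.default_rng(42)
    n_schools = 1000

    # Synthetic school-level indicators (all values hypothetical)
    achievement = rng.normal(0.60, 0.15, n_schools)  # percent proficient, all students
    growth_low = rng.normal(0.50, 0.20, n_schools)   # growth index, students below Proficient
    gap = rng.normal(0.20, 0.10, n_schools)          # proficiency gap vs. a reference group

    def rate_schools(w_achievement, w_growth, w_gap_reduction):
        """Combine indicators into one rating under a hypothetical weighting scheme."""
        return (w_achievement * achievement
                + w_growth * growth_low
                + w_gap_reduction * (1 - gap))  # smaller gaps earn more credit

    # Two hypothetical designs: status-heavy vs. equity-weighted
    schemes = {"status-heavy": (0.70, 0.20, 0.10),
               "equity-weighted": (0.40, 0.30, 0.30)}

    # One possible equity outcome: how strongly do ratings penalize large gaps?
    for label, weights in schemes.items():
        ratings = rate_schools(*weights)
        r = np.corrcoef(ratings, gap)[0, 1]
        print(f"{label}: correlation of rating with gap = {r:.2f}")

A full study would replace these synthetic indicators with simulated or real school data, mirror each state’s actual business rules, and examine a broader set of equity outcomes.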

Tools for Depicting and Analyzing Achievement Gaps

Brian Gong will also work with Tuba Gezer from the University of North Carolina at Charlotte on identifying and developing tools for effectively depicting and analyzing achievement gaps. Reducing achievement gaps has been a primary purpose of federal education policy for more than 50 years (the Elementary and Secondary Education Act of 1965, reauthorized as IASA, NCLB, and ESSA). Yet progress has, by most accounts, been agonizingly limited, with a few bright exceptions. This project has three main aims:

  1. Summarize the most interesting and useful definitions of “achievement gap” and how to measure them, and show how the various conceptualizations relate to larger policy concerns and to each other;
  2. Identify and/or devise a few powerful depictions and analyses of achievement gaps, and of progress in reducing them, that would apply to states’ current efforts; and
  3. Program some of those depictions and analyses as a demonstration of a toolkit that might be adopted by states and others interested in understanding, communicating to others, and acting to improve educational outcomes in terms of equity (a minimal sketch follows this list).
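To give a flavor of such a toolkit, here is a minimal sketch showing how two common definitions of an achievement gap, a difference in proficiency rates and a standardized mean difference (effect size), can be computed from the same hypothetical score distributions. Everything in the sketch (scores, cut score, group labels) is illustrative only.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical scale scores for two student groups (illustrative data only)
    group_a = rng.normal(250, 40, 5000)
    group_b = rng.normal(235, 45, 5000)
    cut_score = 245  # hypothetical Proficient cut

    # Definition 1: gap in proficiency rates (sensitive to where the cut falls)
    rate_gap = (group_a >= cut_score).mean() - (group_b >= cut_score).mean()

    # Definition 2: standardized mean difference (uses the full score distributions)
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    effect_size = (group_a.mean() - group_b.mean()) / pooled_sd

    print(f"Proficiency-rate gap: {rate_gap:.3f}")
    print(f"Standardized mean difference: {effect_size:.2f}")

Because the rate-based gap depends on where the cut score falls while the effect size uses the full distributions, the two metrics can tell different stories about the same data, which is precisely the kind of relationship among definitions the toolkit would need to depict.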

An Analysis of Assessment Policy

Scott Marion will work with Zack Feldberg from the University of Georgia on a systematic review and analysis of states’ large-scale assessment policies. Given the powerful influence of state assessment laws and regulations on the design and implementation of large-scale assessment systems, it is surprising how little we, as a measurement community, know about these laws and regulations. It is difficult to propose policies that support improved large-scale assessments without knowing the current policy and political contexts. 

This landscape analysis will provide the foundation for generating policy frameworks that can be used as models for state leaders who want to improve the constraints and requirements associated with large-scale testing programs.

Assessment Literacy for Policy Makers

Scott Marion will also work with Brittney Hernandez from the University of Connecticut to build on the Center’s conceptualization of assessment literacy by designing tools and modules for improving policy makers’ understanding of key assessment topics. They will then evaluate the efficacy of these tools through interviews with current state policy makers.

Much of the focus on assessment literacy has been on improving the knowledge and skills of educators. We have come to recognize that a lack of assessment literacy among state policy makers can lead to considerable instability, weak designs, and inappropriate uses of state assessments. Further, given the rapid turnover of state leaders, we need to create long-term structural supports for improving the assessment literacy of these state leaders. 

The Use and Impact of Interim Assessments

Nathan Dadey will work with Calvary Diggs from the University of Minnesota to conduct a research synthesis on interim assessment use. The synthesis will draw on a theory-of-action framing to characterize exactly how interim assessment results are used in the extant literature.

The research on interim assessments has expanded considerably over the past 20 years. There have been numerous studies dedicated to examining operational interim assessment programs and their relationships with student learning. To date, however, there has not been a systematic review of assessments explicitly identified as ‘interim’. 

This work is meant to define the specific ways in which interim assessments have been used and what uses, if any, have empirical support. The review will characterize the designs of interim assessments and will also investigate the use of the interim assessments within specific policy contexts. 

Subscore Reporting and Use

Chris Domaleski will also work with Victoria Tamar Tanaka from the University of Georgia on identifying effective practices for subscore reporting and use. A persistent source of tension between test developers and test users is the reporting and use of subscores on large-scale achievement tests. Technical advisors are typically reluctant to support reporting subscores, due to their lack of precision and the risk of inappropriate interpretation. However, many test users want subscores, citing their value for diagnostic purposes.
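One way to make the precision concern concrete is the Spearman-Brown formula, which predicts how reliability declines as a test gets shorter. The numbers below (a 50-item test with reliability 0.92, reported as 10-item subscores) are hypothetical and chosen only for illustration.

    def spearman_brown(reliability, length_factor):
        """Predicted reliability when test length is multiplied by length_factor."""
        return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

    # Hypothetical example: a 50-item test with reliability 0.92,
    # reported as five 10-item subscores
    full_items, full_rel = 50, 0.92
    sub_items = 10

    sub_rel = spearman_brown(full_rel, sub_items / full_items)
    print(f"Full test ({full_items} items): reliability = {full_rel:.2f}")
    print(f"Subscore ({sub_items} items): predicted reliability = {sub_rel:.2f}")

Under these assumed numbers, the predicted subscore reliability is roughly 0.70, well below the 0.92 of the total score, which is the kind of precision loss that makes technical advisors cautious.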

The purpose of this project is to evaluate and classify the range of practices with respect to subscore reporting on large-scale state achievement tests. This information will help inform development of guidelines to describe and promote effective practices.

We look forward to sharing more about each of these projects, the work of our interns, and what we learn as we progress through the summer.
