2020 Summer Internships: A Little More Certainty in an Uncertain Future

Jun 02, 2020

Center Staff and Interns Will Address Pressing Issues in Educational Assessment and Accountability

Although so much about the future seems uncertain, we are excited this month to bring a little normalcy into our world by addressing key questions and challenges in educational assessment and accountability through our 2020 summer internship program. Through this program, the Center welcomes four advanced doctoral students who will work with the Center’s professionals on projects with direct implications for state and national educational policy. Each intern will work with a Center mentor on one major project throughout the summer. At the end of the project, each intern will produce a written report suitable for conference presentation and/or publication.

The interns and their Center mentors will engage in projects addressing four issues that reflect the changing face of educational assessment and accountability:

  • Mitigating the Impact of Rater Inaccuracies on Test Score Scales
  • Evaluating Assessment Accommodations
  • Examining Teacher Judgments of Student Achievement
  • An Analysis of Validity Arguments for States’ Next Generation Science Standards Assessments

Mitigating the Impact of Rater Inaccuracies on Test Score Scales

Michelle Boyer and Nathan Dadey will work with Tong Wu from the University of North Carolina at Charlotte to examine the psychometric impact of rater inaccuracies on score scales and possible mitigation procedures. As large-scale assessments include growing numbers of cognitively complex tasks that ask students to respond in writing, the impact of rater inaccuracy on score scales will require heightened scrutiny because of its potential to threaten the comparability of scores within and across items and over time.

Because there are many practical limits on the accuracy of rater-produced scores, Tong, Michelle, and Nathan will explore approaches to appropriately correcting such inaccuracies, with particular emphasis on shorter tests in which raters produce a large proportion of the points.

Evaluating Assessment Accommodations

Chris Domaleski will work with Maura O’Riordan from the University of Massachusetts Amherst to better understand the impact of assessment accommodations on the meaning and interpretation of test scores. Guidance from the United States Department of Education (ED) for peer review of state assessment systems specifies that states must ensure that accommodated administrations of assessments are appropriate and effective and allow for meaningful interpretation and comparison of results for all students (peer review element 5.3).

In particular, Maura and Chris will seek to identify and document the range of practices and sources of evidence used to determine that accommodations are appropriate and effective, and to identify any gaps. This work will inform the development of guidance and procedures to support improvements in the selection and use of accommodations.

Examining Teacher Judgments of Student Achievement

Carla Evans will work with Alexandra (Allie) Stone from the University of Connecticut on a special validity study for New Hampshire’s federally approved innovative assessment system known as PACE (Performance Assessment of Competency Education). This project will use student work on complex performance tasks and summative assessments collected throughout the year from one participating district to examine how accurately student achievement is reflected in teacher judgments, PACE annual determinations of student proficiency, and state summative test results.

These analyses can add information to the validity argument about how well PACE standards represent the depth and breadth of student achievement. The study outcomes and the prior literature on teacher judgments of student achievement will inform the development of guidance and procedures to improve the accuracy of teacher judgments in the PACE system.

An Analysis of Validity Arguments for States’ Next Generation Science Standards Assessments

Brian Gong will work with Sandy Student from the University of Colorado Boulder on analyzing the validity arguments for several states’ varied Next Generation Science Standards assessments. The NGSS present a unique challenge because of the extensive and complex structure of the “three-dimensional” standards, which incorporate science and engineering practices, crosscutting concepts, and disciplinary core ideas. This project will involve analysis and comparison of state assessment programs with a focus on their claims, reporting structures, test blueprints, alignment to the NGSS, and intended uses of the assessment results, with the analysis expressed as a validation argument. We hope to better understand the critical decision points that shape practical decisions about what is assessed and how interpretations and claims can be well supported. The project will also provide valuable examples of validation arguments applied to operational testing programs.
