Making Use of Missing Data to Plan Interventions for Recovery

Part 2: The Practical Implications of COVID-19 Learning Loss Studies

It has been a year since COVID-19 made it impossible for states to administer state assessment programs last spring. Since then, multiple studies using data from commercial “interim” assessments have been put forward to help the field understand the pandemic’s impact on student learning. Through these special “learning loss” studies, assessment vendors have attempted to shed light on the effects of COVID-19 learning loss nationally. In this three-part CenterLine series, we address the practical implications of these studies for learning recovery. We believe that this series is relevant to the work of state- and district-level policymakers, assessment directors, and specialists at all levels responsible for interpreting data to inform learning recovery efforts.

The Most Useful Learning Loss Data in the Reports May Be Information on Students Not Tested

Thirty spokes share the wheel’s hub;
It is the center hole that makes it useful.
Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.

– Lao Tzu

In this second of our three-part series on the practical implications of the COVID-19 “learning loss” studies, we reiterate key points from our previous post and consider the implications for local districts as they prepare for students’ return to in-person schooling. Specifically, we highlight the utility of missing interim test data to inform local district recovery plans.

We commend the test organizations that commissioned the reports reviewed for this blog series. These reports and similar studies serve an important role in understanding the effects of COVID-related school disruptions on student learning. We are equally grateful to the study authors for describing their sample characteristics and detailing study limitations. As a colleague recently reminded me, understanding the limitations of these studies is critical for understanding what they can tell us about the impact of the pandemic on student learning. With that in mind, what did we learn from these studies, and what are the implications for policymakers and local educators?

The interim assessment reports on the impact of the pandemic on student learning reflect a best-case scenario. In our first post, we noted that data used in these studies do not reflect any national, state, or local landscape. Moreover, test attrition – the percentage of students who tested in Fall 2019 but not in Fall 2020 – was substantially higher than in previous years. And students who did not test in Fall 2020 (called attriters in one report) represented a much higher proportion of students from marginalized groups.

Across subjects and grades, the same pattern was observed: a larger fraction of attriters were ethnic/racial minority students, students with lower achievement in fall 2019, and students in schools with higher concentrations of socioeconomically-disadvantaged students (Kuhfeld et al., p. 8).

These same sample limitations were inherent in all interim study reports we reviewed.

Under-representation of marginalized groups across these studies points to an even bigger problem than how to interpret the study findings. Nationally, an estimated three million students enrolled in educational systems in 2019 were unaccounted for in Fall 2020 and may not have received any formal education since March 2020 (Korman, O’Keefe, and Repka, 2020). To put this in perspective, that number represents about 1 out of every 20 students and does not include the millions more students who are accounted for but chronically absent or consistently disengaged. The fact that so many students from marginalized groups did not test in Fall 2020 suggests that these studies significantly underestimate the impact of the pandemic on student learning. Additionally, for the millions of disengaged and unaccounted-for students, the effects of the pandemic will continue to grow in the coming months.

What are the Implications of Learning Loss for Policymakers and Local Educators? 

Although a significant number of Fall 2020 interim test scores may be missing, the pattern of missing data itself provides clues for schools and districts about who may need additional support and what kind of support they need. Below we provide suggestions for using information about students not tested to support recovery plans.

  • Use study reports to guide local analysis and decision-making. The studies can serve as a blueprint for local data analysis by districts and schools that administered those same interim tests during the 2020-2021 school year. The methods sections provide helpful information about how to disaggregate results, and the limitations described in appendices may apply to local jurisdictions. Districts and schools can refer to these studies as they consider appropriate use of these data to guide decisions.
  • Disaggregate the data to examine patterns of missingness across student groups. Initially, district and school staff can disaggregate Fall 2020 interim test results to identify patterns of missingness across relevant student groups. For example, the cited test reports found that substantially more missing students represented underserved minorities, students from low-income households, and English language learners. Often, students representing one or more of these groups have similar needs and could benefit from similar interventions. What patterns emerge when examining the rates of missing data among these groups in your district or school? How do attrition rates for specific student groups compare to overall attrition rates in Fall 2020? If these students were tested again in Winter 2021 or will be tested again in Spring 2021, what trends emerge in attrition rates for all students and relevant student groups? (The first sketch following this list illustrates one way to compute and compare these rates.)
  • Dig deeper to examine why individuals and groups are missing and what can be done about it. Once patterns of missingness are known, district and building leaders can identify the myriad potential problems preventing students from testing. Finding solutions will undoubtedly require digging into the root causes. For example, lack of technology access may be concentrated in certain neighborhoods, but the specific reasons could range widely (e.g., unreliable connection, unavailable computer). Opportunity-to-learn indicators such as parent surveys can help, but not for disengaged families or students who have been absent for months. In these cases, school personnel may need to connect with parents or caregivers via phone or home visits. Districts can help by working with schools to create data collection forms that enable more standardized and digitized data collection. Doing so enables districts to aggregate information and determine how pervasive the problem is. It also establishes a common language to facilitate cross-school sharing focused on why students are missing and what schools are doing to re-engage them. Additionally, triangulating missing test results with attendance and other indicators of engagement can be useful for understanding or confirming the nature and extent of the problem (the second sketch after this list shows one way to join these data sources).
  • Consider the full impact of the pandemic on students when planning interventions for recovery. Once a district has captured data on who is missing and why, it can develop recovery plans that target the neediest students. Lack of technology may be the tip of the iceberg in schools where students lack consistent access to food or shelter or have experienced higher levels of trauma. These schools may need to hire additional staff and/or train staff to provide trauma-informed care and social and emotional support. Spend time re-engaging students in the rhythm of school. Ensure students have access to a caring adult. Prioritize acceleration over remediation, and remember that keeping students with their peers is often more effective than holding them back a grade. Finally, vehicles like networked improvement communities can facilitate networks through which school and district staff address common problems and scale effective solutions using data sources that extend well beyond test results.
  • Avoid testing too quickly after students return. A strong case can be made for delaying administration of standardized tests until students and teachers have sufficiently adjusted to their new surroundings and routines. Poor test performance immediately after students return to school may reflect emotional trauma or continued disengagement rather than what a student knows and can do. In a previous post, Susan Lyons describes how formative assessment can be the school’s most powerful assessment lever for connecting with students and improving learning. Formative assessment can also surface social and emotional issues that may interfere with a student’s ability to engage and learn.
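
To make the disaggregation step above concrete, here is a minimal sketch in Python using pandas. Everything in it is a stand-in: the column names (student_id, student_group, tested_fall_2019, tested_fall_2020) and the values are hypothetical placeholders for whatever fields a district’s interim assessment vendor or student information system actually exports. The logic simply flags students who tested in fall 2019 but not in fall 2020 and compares attrition rates by student group.

```python
import pandas as pd

# Hypothetical export: one row per student; columns and values are made up for illustration.
roster = pd.DataFrame({
    "student_id":       [101, 102, 103, 104, 105, 106],
    "student_group":    ["ELL", "ELL", "Low income", "Low income", "General", "General"],
    "tested_fall_2019": [True, True, True, True, True, True],
    "tested_fall_2020": [False, True, False, False, True, True],
})

# Attrition: students who tested in fall 2019 but not in fall 2020.
fall_2019_testers = roster[roster["tested_fall_2019"]].copy()
fall_2019_testers["attrited"] = ~fall_2019_testers["tested_fall_2020"]

overall_rate = fall_2019_testers["attrited"].mean()
group_rates = fall_2019_testers.groupby("student_group")["attrited"].mean()

print(f"Overall fall 2020 attrition rate: {overall_rate:.1%}")
print("Attrition rate by student group:")
print(group_rates.map("{:.1%}".format))

# Groups whose attrition exceeds the overall rate warrant a closer look.
print("Groups above the overall rate:")
print(group_rates[group_rates > overall_rate])
```

Groups whose attrition rate sits well above the overall rate are natural starting points for the follow-up questions posed in the second bullet.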
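
The triangulation idea from the third bullet can be sketched the same way, as a simple join between two hypothetical exports: the list of untested students produced above and an attendance file. Again, the column names and the 90 percent attendance threshold are assumptions made for illustration, not a reference to any particular vendor’s data format or a recommended cutoff.

```python
import pandas as pd

# Hypothetical list of students with no fall 2020 interim score (e.g., from the
# disaggregation step above) and a hypothetical attendance export.
missing_testers = pd.DataFrame({
    "student_id":    [101, 103, 104],
    "student_group": ["ELL", "Low income", "Low income"],
})
attendance = pd.DataFrame({
    "student_id":       [101, 102, 103, 104, 105, 106],
    "pct_days_present": [0.35, 0.96, 0.88, 0.10, 0.93, 0.99],
})

# Join the two sources so every untested student carries an engagement indicator.
triangulated = missing_testers.merge(attendance, on="student_id", how="left")

# Flag students who are both untested and chronically absent (illustrative threshold).
triangulated["likely_disengaged"] = triangulated["pct_days_present"] < 0.90

print(triangulated)
```

Students flagged by a join like this are candidates for the direct outreach described above, such as phone calls or home visits, rather than another round of testing.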

Reports that use local interim results to examine the impact of the pandemic on student learning in our country provide essential information to jumpstart conversations about recovery; however, the essentials are not found in the executive summaries, key findings, or headlines. They are hidden in the details, within the sections of these reports that are easiest to gloss over. But if you read closely, you will find that the key to our nation’s education recovery lies in what, or rather who, is not there.

Our final post in this series will describe how districts and states can use information about students tested and students missing from testing to gain a better understanding of the recovery task ahead of them.
