Contextualizing COVID-19 “Learning Loss” and “Learning Recovery”

Jun 09, 2020

Education Reform Has Always Been About Recovering Losses in Learning

In a late April survey by EdWeek, 320 district administrators were asked to indicate the most urgent needs that assessment vendors could help with in the wake of COVID-19. Topping the list: “Assessment for the fall of 2020 to gauge students’ loss of learning during closures.” “Learning recovery” (or “instructional recovery” or “compensatory education”) has become an urgent priority; a variety of education support organizations, such as Zearn, ANet, and the Council of Chief State School Officers, have been working to fill this need.

With most school campuses closed and distance learning hastily put in place for the last part of the 2019-2020 school year, many are rightly concerned that upon “reopening” (however that looks), schools will welcome back students who have lost ground relative to prior years. Moreover, the loss may be more significant for low-income students, given what we know about disparities in technology access, attendance, and live instruction during early spring 2020 distance learning.

The Challenge of Measuring Learning Loss

What do we mean when we talk about “learning loss,” and what precisely is recovered in “learning recovery”? Learning loss is best understood not as a reduction in existing knowledge or skills, but as a difference between a current reality and some ideal, or at least normal, condition. With the COVID-19 school closures, that normal condition is spring 2020 without COVID-19, and the loss is the difference between the learning that occurred during the disruption and the learning that would have occurred in a COVID-free spring 2020.
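To make this counterfactual framing concrete, here is a minimal sketch using notation introduced here purely for illustration (the post proposes no formal model): an observed outcome for each student, and the unobservable outcome the same student would have reached in a COVID-free spring.

```latex
% Illustrative notation only, not from the original post:
%   Y_i^{obs} -- achievement student i actually reaches by the end of spring 2020
%   Y_i^{cf}  -- achievement student i would have reached in a COVID-free spring 2020
% "Learning loss" is then a counterfactual difference, not a decrement in what was already known:
\[
  \mathrm{Loss}_i \;=\; Y_i^{\mathrm{cf}} - Y_i^{\mathrm{obs}},
  \qquad
  \overline{\mathrm{Loss}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\left( Y_i^{\mathrm{cf}} - Y_i^{\mathrm{obs}} \right)
\]
% Because Y_i^{cf} is never observed, the loss can only be estimated, never measured directly.
```

Because the counterfactual term is never observed, any estimate has to stand in for it with something like typical results from prior years, which is exactly the comparison sketched below.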

Viewing learning loss in this way highlights how difficult it is to assess such a thing. Each spring, students sit for a comprehensive achievement test covering a sample of grade-level standards. One could compare typical results on this test with those obtained in spring 2020 and estimate a difference, probably a dip. This comparison could be done by grade, subject area, and student group. One could then track the carryover effects of this dip to see whether they are lasting.
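As a rough sketch of the arithmetic behind that comparison (again, the notation is mine and purely illustrative), the estimated dip for a given grade and subject would look something like the following, with the same contrast recomputed within student groups and tracked in later years for carryover.

```latex
% Illustrative notation only, not from the original post:
%   \bar{T}_{g,s}^{typ}  -- typical (prior-year average) spring test result for grade g, subject s
%   \bar{T}_{g,s}^{2020} -- the spring 2020 result for the same grade and subject
\[
  \widehat{\Delta}_{g,s} \;=\; \bar{T}_{g,s}^{\mathrm{typ}} - \bar{T}_{g,s}^{2020}
\]
% A positive \widehat{\Delta}_{g,s} is the expected dip; re-examining it in later years
% would show whether the carryover effects are lasting.
```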

This exercise would capture what most people understand as learning loss; it would tell us for which grades, subject areas, and student groups the loss makes a real difference over time; and it would answer the practical question of whether the loss will contribute to some students’ difficulties accessing new grade-level content each year.

This measurement exercise will never be carried out. It couldn’t be. One of the consequences of the spring 2020 disruption was the collapse of our ability to formally assess students on the few critical outcomes measured by the tests designed to comprehensively assess student achievement with respect to state standards. It would also be unethical to assess students this spring merely to measure COVID learning loss and its carryover effects while refraining from addressing any gaps that were found.

Learning Loss in a Broader Context

Learning loss is not limited to events like the COVID-19 school closures. Education reformers point to gaps between where students are and where they should be (at or above proficiency, variously defined by NAEP and the states), highlighting the learning loss that persists in a system performing below its full potential. Since the 1960s, schools have been called on to close gaps between groups on measures of academic achievement: conceptually, a learning loss relative to what might be if the right targeted interventions were in place and succeeded. Despite countless reform efforts, however, those gaps remain.

As my Center colleague Brian Gong pointed out in a recent personal communication, these gaps are greater than any differential “learning losses” we will find between relatively advantaged and disadvantaged groups due to the spring 2020 school disruptions. Professors Heather Hill and Susanna Loeb made the same point in a recent opinion piece in EdWeek.

Education Reform Is Learning Recovery

Neither the challenge of measuring COVID-related learning losses and their carryover effects, nor the magnitude of these losses relative to larger systemic losses, should dissuade us from addressing them, even in the absence of ideal conditions for measurement and resource allocation. Various proposals have been advanced. A New York Times editorial from mid-April cites an analysis by NWEA to argue that the COVID slide would be substantial and lasting, recommending that “any reasonable approach would include” extensive testing upon reentry and “aggressive” plans for remediation and added school days, among other things.

My Center colleague Carla Evans writes that professional development on good classroom assessment practice is far more promising for addressing COVID-related achievement gaps than an approach that first tries to solve the learning loss measurement problem within the parameters of a typical fall of schooling. In their EdWeek article, Hill and Loeb write that informal assessments by classroom teachers (such as those embedded in a high-quality curriculum) are a better choice at the beginning of the year than something more comprehensive. They argue that schools are already set up to contend with variability in student readiness that is much larger than the highest estimates of COVID-related learning losses. (Hill and Loeb also question the NWEA estimates, citing data from the Early Childhood Longitudinal Study that suggest smaller COVID-linked learning losses.)

The place of assessment among these more nuanced proposals to address the COVID gap is no different from what we would advocate as part of a plan to address the better studied, better measured, more significant, and more persistent gaps described above. The goal has always been high-quality classroom assessment systems with an emphasis on powerful formative assessment practices.

If we are to do the work of learning recovery, let us take advantage of this moment to recover not only from the losses of one season but also from those that have been with us for decades.
