Can We Stop Obsessing About the Effects of Remote Instruction on Student Learning?
It’s Time to Focus Our Energy on Strategies That Help Students Recover
The most recent NAEP results sparked yet another spate of studies and opinions about the effects of remote instruction on student achievement. As usual, the studies varied in their approaches and conclusions, adding little clarity to this complex topic. We can continue to devote our attention to isolating and quantifying the impact of remote learning on students. But at this point, I’d suggest that we move from second-guessing past decisions to focusing our energy on strategies that help kids recover.
To be clear, I believe there is value in understanding which remote learning approaches, under what conditions, helped students lose the least ground during the pandemic. But that work requires sound study methods and detailed information few can access: accurate records of the various remote schooling strategies, the unique contexts in which they were implemented, and high-quality assessment data. My colleagues—Damian Betebenner, Chris Brandt, and Jeri Thompson—are doing this valuable “close-up” work in a few states.
But most of the articles and “studies” I’ve seen about remote learning, the latest just this week, fall far short of that. They paint remote learning’s effects with too broad a brush to be useful. At best, they are academic exercises with little practical value or, at worst, attempts to score political points. These kinds of projects carry a cost: They siphon energy from the crucial work of helping kids recover.
Consider just a few of the recent discussions of remote instruction’s impact. These discussions, along with some key dynamics in how remote learning played out, show how hard it is to characterize its effects on student learning.
A Flurry of Claims and Contradictions About Remote Learning and Student Achievement
Peggy Carr, the commissioner of the National Center for Education Statistics, which administers the NAEP exam, said when the main NAEP results were released on October 24 that the scores didn’t demonstrate a relationship between remote learning and low student achievement.
“There is nothing in this data that allows us to draw a straight line from remote learning to student performance,” she said during a live-streamed discussion of the scores.
In an article in The 74 that carried the inflammatory headline “Strong Link in Big City Districts’ 4th-Grade Math Scores to School Closures,” Brown University economist Emily Oster called Carr’s conclusion “odd.” Tom Kane, Sean Reardon, and colleagues reported that score drops were associated with remote learning, but they described them in more nuanced ways, as this Stanford press release notes:
The analysis also showed that test scores declined more, on average, in school districts where students were learning remotely than where learning took place in person. But the extent to which a school district was in person or remote was a minor factor in the change in student performance, the researchers found.
Can these different perspectives all be true? Yes. But it depends on the grain size of the analyses, the quality of the information, and how people interpret the strength of the relationship between remote instruction and student learning. For example, Carr’s statement was true because she focused on state-level scores and policies. Reardon and Kane used districts as their unit of analysis, so they were able to notice differences that occurred at the district level but would not be observed when aggregated at the state level.
Even in these different unit-level analyses, however, the data are too messy to support clear causal inferences.
It’s Not Clear Which Students Learned Remotely
In many cases, when districts instituted blanket policies, we have good evidence of who learned remotely. (States certainly encouraged districts to close or stay open, but districts almost always got to decide.) To do these analyses well, however, we also need to know who learned in person so we can establish a clear contrast. That’s the challenge.
Even during the 2021-2022 school year, when more students were learning in person, school leaders told me, “My teachers have 20 students in class every day, but never the same 20.” This highlights a dynamic that makes the research tricky: Even when districts were “open,” many students still learned remotely at least some of the time. Testing operations, whether NAEP, state tests, or—especially—interim assessments, have been unable to accurately classify whether students were remote or in-person because the information collected was too blunt to show the extent to which students learned remotely (e.g., only a few days or for the whole year).
The Quality of In-Person Student Learning Varied
We have good evidence that the quality of remote learning differed dramatically across contexts. The variability of in-person instruction, on the other hand, gets little attention. In certain places, in-person schooling might have looked almost normal, but in most, students and teachers were in masks and behind plexiglass barriers. Sure, it was good to have kids and teachers in schools, but it was very tough for students, since they weren’t allowed to work together in small groups or move around the room.
My point here is that we cannot simply categorize student learning experiences into two distinct groups: those who learned remotely and those who didn’t. The reality is much more fluid and complex.
There Is No Causal Evidence of Remote Learning’s Impact on Student Achievement
The few studies that tried to quantify the relationship between remote learning and student achievement found correlations of about 0.3. For those of you who don’t spend a lot of time with scatterplots, a correlation of 0.3 is weak: it means remote learning status accounts for less than 10 percent of the variation in score changes, so there are many districts where the supposed relationship simply doesn’t hold. If we could untangle some of the difficulties with clearly categorizing remote and in-person schooling, we might see a stronger relationship. Even so, I think our efforts can be better spent focusing on systems that support student recovery than on trying to isolate any effects of remote learning.
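To make that 0.3 figure concrete, here is a small simulation of my own (hypothetical data, not drawn from any of the studies cited) that generates districts where remote learning has a deliberately weak effect on score changes, then computes the correlation and the share of variation it explains.

```python
import math
import random

random.seed(42)

# Simulate 1,000 hypothetical districts. `remote_share` is the fraction of
# the year spent remote; `score_change` mixes a small negative remote effect
# with a much larger amount of unrelated noise, tuned so |r| is near 0.3.
n = 1000
remote_share = [random.random() for _ in range(n)]
score_change = [-0.3 * share + random.gauss(0, 0.275) for share in remote_share]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(remote_share, score_change)
print(f"correlation r = {r:.2f}")              # near -0.3 by construction
print(f"variation explained r^2 = {r * r:.2f}")  # under 10 percent
```

The point of the sketch is the last line: squaring a correlation of roughly 0.3 leaves more than 90 percent of the district-to-district variation attributable to everything other than remote learning.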
The View from the Ivory Tower
I served on my local school board through the pandemic, and we were able to keep our schools open for most of 2020-2021. The year, though, was full of agonizing decisions as we worked to ensure we were doing all we could to keep educators, students, and families safe. My district was also in a privileged position: We had plenty of resources and space, so we could limit class sizes to 15 students and take all the recommended precautions.
I find it disingenuous that many of those now second-guessing decisions made during times of fear and uncertainty had the privilege of working from their remote ivory towers. It’s easy to argue that kids should have been back in school when you’re not the one deciding how much to expose students or teachers to a deadly virus.
It’s Time to Move Forward
We already know the most important thing: The pandemic hit most kids hard, not just academically. And the kids who learned remotely the longest were likely hit hardest—for many reasons, including, but not limited to, remote instruction.
The most valuable work measurement professionals can do now is to focus on creating and supporting strategies that help students recover lost academic and social-emotional ground. In my next post, I’ll discuss how we should do that: by designing and implementing assessment approaches to monitor and support accelerated learning.