The Outlook for ESSA School Accountability After COVID-19

Apr 07, 2020

A Return to the Status Quo is Unlikely and That’s Just Fine

For those hoping for minimal disruption to ESSA school accountability, I have bad news and more bad news.    

The bad news is that school accountability as we know it is entirely offline for 2020. Before the calendar even flipped to April, the U.S. Department of Education granted waivers to all 50 states, D.C., Puerto Rico, and the Bureau of Indian Education.  

And if one expects things to pick up right where they left off after the 2020 ‘skip year,’ there’s more bad news. They won’t. The disruptions are almost certain to have a significant multi-year impact.

But, just as the product of two negative numbers is positive, I believe there is reason for an optimistic outlook. Perhaps this disruption provides a rare opportunity to explore some much-needed improvements to accountability that almost certainly would not have occurred otherwise.

Why is a Quick Return to the Status Quo Unattainable?  

There are two primary reasons the disruptions in 2020 are so consequential for 2021 and beyond. First, state ESSA accountability systems rely on a bundle of multi-year data. For example, almost every school accountability system includes estimates of academic growth. These growth measures require at least one prior measure (i.e., last year’s state test) and in some cases multiple priors. Moreover, growth isn’t the only indicator that requires data across years. Many systems rely on measures of improvement (or trend data), multi-year averaging, and lagged data (e.g., graduation rates in 2020 may be based on 2019 data).

Many states also ‘bank’ test scores, sometimes for up to three years. Score banking occurs when test results from students who tested in previous years are used later. For instance, if a student takes an Algebra test as an eighth-grader in 2020, the score may be banked until it is included in the state’s high school math results in 2022. If there are no eighth-grade scores to withdraw from the bank in 2022, the school’s mathematics results will change in a way that prohibits meaningful comparison to prior years.
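To make that banking dependency concrete, here is a minimal, hypothetical sketch in Python. The data, the subject, and the two-year banking lag are illustrative assumptions, not any state’s actual business rules; the point is simply that a 2022 calculation quietly depends on scores that were supposed to be deposited in 2020.

```python
# Hypothetical sketch of score banking (illustrative data and rules only,
# not any state's actual policy). Banked scores are results saved in an
# earlier year and withdrawn into a later year's accountability calculation.

banked_scores = {
    # (student_id, subject): (year_tested, scale_score)
    ("S001", "algebra"): (2019, 742),
    # No 2020 entries exist: spring 2020 testing was cancelled.
}

def high_school_math_results(year, current_scores, bank, banking_lag=2):
    """Combine current-year test takers with banked middle school scores."""
    results = list(current_scores.get(year, []))
    results += [
        score
        for (student_id, subject), (year_tested, score) in bank.items()
        if subject == "algebra" and year_tested == year - banking_lag
    ]
    return results

# In 2022 there is nothing from 2020 to withdraw from the bank, so the
# aggregate shifts for reasons unrelated to school performance.
print(high_school_math_results(2021, {2021: [655, 688]}, banked_scores))  # includes the 2019 banked score
print(high_school_math_results(2022, {2022: [650, 702]}, banked_scores))  # the banked cohort is simply absent
```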

Are there ways around these data holes? Sure. My colleagues at the Center for Assessment and I are pretty good at the statistical gymnastics needed to address these tricky problems. For example, it’s possible to reach back more than one year for the prior test score needed to compute a growth score. There are also approaches to impute values or create correspondence tables (e.g., via equipercentile linking) to help compensate for fully or partially missing data. And we can tweak business rules to deal with issues related to banked or lagged data. But that doesn’t mean these approaches are always a good idea. In fact, there are often good reasons – technical, practical, and policy-based – to proceed with great caution when considering them.
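As one illustration of the ‘reach back’ approach, the sketch below is again hypothetical: it uses a crude gain score rather than any real growth model (such as student growth percentiles) and invented scores. It runs, but the resulting ‘growth’ now spans two years of instruction, which is exactly the kind of shift in meaning discussed next.

```python
# Hypothetical sketch (not the Center's or any state's growth model): reach
# back beyond the immediately preceding year for a prior score when the 2020
# administration is missing. A real system would use a proper growth model,
# not a raw gain score.

def most_recent_prior(score_history, current_year, max_lookback=2):
    """Return the most recent prior score within the lookback window, plus its lag."""
    for lag in range(1, max_lookback + 1):
        prior = score_history.get(current_year - lag)
        if prior is not None:
            return prior, lag
    return None, None

# One student's scale scores by year; the 2020 test never happened.
history = {2019: 480, 2021: 510}

prior, lag = most_recent_prior(history, current_year=2021)
if prior is not None:
    gain = history[2021] - prior
    # A two-year gain is not interchangeable with a one-year gain, which is
    # one reason 'repaired' results carry a different interpretation.
    print(f"Gain of {gain} points over {lag} year(s)")  # Gain of 30 points over 2 year(s)
```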

But that scenario brings us to the second reason the disruptions are so significant. While there are many ways to ‘repair’ systems to stand up accountability in 2021 and beyond, these approaches will substantially change the meaning and interpretation of the results. This is true for individual indicators and especially for aggregated indicators, and it applies to results for schools and student groups both within and across years. Consider, too, that the impact of these changes is almost never uniform across schools and student groups. For example, changes to the way academic growth is estimated are likely to differentially influence outcomes for students who are English language learners compared to those who are not.

Therefore, state accountability leaders will need to re-examine many foundational assumptions about their models in 2021, such as:

  • Should the indicators be combined the same way (e.g. weights)?
  • Are the legacy goals and performance expectations still appropriate or should they be reset?
  • How do changes impact criteria for exiting improvement status?
  • Are changes to the consequences and supports appropriate?

It is no trivial effort to tackle these challenges. Even states that want to move aggressively will likely need significant time after the 2020-2021 academic year to analyze data and confer with policymakers and technical advisors to inform their decisions. The point is, asking states to quickly return their systems to the status quo is not a reasonable goal.

Every Challenge Presents an Opportunity 

Given the scale and duration of disruption due to COVID-19, I argue there has never been a better time to invest in accountability innovations. To be clear, even the most earnest and aggressive attempts to restore legacy accountability systems will not produce results that can be meaningfully compared to those prior to 2020. Change is inevitable. Let’s make the right changes.

What could these changes look like? I’ll share a few initial suggestions below, but I hope to address this topic more fully in subsequent posts.

  • Improvements that promote coherence and balance among federal, state, and local roles. See, for example, my paper with Damian Betebenner and Susan Lyons.
  • More explicit connections to school improvement as my colleagues Scott Marion, Carla Evans, and Juan D’Brot have written about.    
  • Inclusion of broader and more authentic measures of college and career readiness. For example, Erika Landl provides suggestions for more coherent systems across ESSA and Perkins V.  
  • Systems that produce classifications that more credibly reflect policy priorities and performance expectations.
  • Finally, my colleague Brian Gong recently imagined some more foundational shifts in accountability, emphasizing factors such as individualization of goals and evidence, differentiation of responsibility, and a focus on more proximal indicators of performance improvement in lieu of distal outcomes.  

There are countless ideas for large- and small-scale improvements to accountability. However, the intent of this post isn’t to argue for a particular vision of accountability. Rather, it is to advocate for a restoration process that unfolds in a manner friendly to improvement and innovation. For example, the restoration of accountability after the COVID-19 disruptions may proceed in phases. Perhaps states can roll out ‘beta’ systems and targeted pilots in 2021 as they seek to improve on the best aspects of their legacy models while investigating innovative new approaches.

To make this process possible, federal and state policies must continue to be relaxed for the 2020-2021 school year. I’m not suggesting another full ‘skip year.’ For example, there is likely no good reason to assume assessment reporting can’t resume in 2021. But sensible, targeted waivers will help states rebuild and improve accountability systems over time.

Indeed, I’ve been impressed by how quickly and effectively federal and state leaders responded to states in the midst of the current crisis. I hope that broad goodwill, patience, and support continue throughout the next year in order to pursue ideas that transcend the status quo.
