States can better support school improvement by building stronger connections between accountability design and improvement efforts.

Reframing Accountability: Build Closer Connections Between School Identification and School Improvement

Mar 15, 2023

With a Few Key Changes, Accountability Systems Can More Effectively Support School Improvement 

Last fall, after a COVID-driven pause, state education agencies returned to a longstanding routine: They released accountability reports that identify schools in need of extra support. But while this routine has long been required by federal law, it hasn’t brought about the improvements necessary to close achievement gaps and help all schools lay the right groundwork for their students’ success. In short, accountability isn’t working.

In this post, and in more to follow, I’ll explore reasons why accountability systems have fallen short of their promise, and propose key ways to make them work better.

Because, yes, they can work better. I’ve seen firsthand the promise of accountability, both as Wisconsin’s accountability director and in my current work advising states on their accountability systems. I am not naïve; I see the differences between the intended and observed effects of accountability systems required by the federal Elementary and Secondary Education Act (ESEA). But I also see their tremendous promise. 

ESEA is, after all, civil rights legislation, requiring states to direct resources and supports to schools serving historically marginalized populations of students. 

We shouldn’t abandon that original intent—to improve educational conditions and outcomes for all students, especially those who have been marginalized—because we haven’t yet seen the results we want. We must think differently about the work we do and how we do it. 

Building Connections Between Accountability Design and School Improvement 

One way we can help accountability realize its promise is by designing a system that more intentionally and effectively connects accountability outcomes—identifications and related data—to school improvement activities. It’s about connection-making within a state education agency. This isn’t a new idea, but it is worth revisiting in light of the renewed attention to federal accountability systems following the pandemic.

Before we explore how to build closer connections, let’s pause for a moment to note some of the differing theories about how accountability mechanisms actually work to improve schools. Some would say that school improvement happens when schools are subjected to the pressure of publicly reported results. Others argue that improvement happens because of the supports and resources states direct to schools. The last two reauthorizations of the ESEA—No Child Left Behind (NCLB) in 2001, and the Every Student Succeeds Act (ESSA) in 2015—each reflect aspects of these theories.

There are major and meaningful differences between NCLB and ESSA when it comes to measuring school performance, but that’s a topic for a different post. I propose that the most important difference is the way each law envisions the levers that bring about school improvement. Where NCLB imposed one-size-“fits”-all consequences for failure to meet established annual targets, ESSA requires schools to conduct needs assessments, select evidence-based strategies as part of their improvement plans, and use those strategies to address identified needs. 

Disconnections Mean Missed Opportunities

And yet, too often, accountability design and school improvement are completely disconnected. History matters, and this approach may be a hangover from NCLB days. But ESSA shouldn’t constrain state efforts to better connect accountability and school improvement systems. There are opportunities now to connect schools’ needs with states’ support strategies, and we can do a better job taking advantage of them. 

Here’s what happens. Accountability systems and their accompanying reports are typically designed to include resources that help people understand their data and calculations. Once those reports are out and schools are identified, however, the action moves to a different team: school improvement. 

This distinction between teams—accountability support and school-improvement support—means that accountability staff are focused on helping people understand why a school was identified, but stop short of a critical question: What next? When there’s a handoff between those who design accountability systems and reports and those who help schools respond to those reports, there’s also a missed opportunity: the opportunity to support schools better by using state data to inform local inquiry and needs assessment.

I’m arguing for improving the way we use accountability data, not just improving our accountability reports. School improvement planning and support should be enriched by the data that sparked the identifications. Too often, improvement planning moves ahead without much attention to the data that identified schools for help to begin with. When those who support improvement work have not been actively engaged in learning about the accountability system and its data, they are disconnected from important information that can guide them as they support schools. And this leads to improvement plans that are disconnected from what schools really need.

With these missed opportunities in mind, I offer some principles to support “connected” accountability design and improvement planning. 

Principle 1: Accountability and school improvement staff should understand each other’s systems. This might go without saying, but it shouldn’t be taken for granted. Those who design and execute the systems that identify schools must understand the implications of those identifications in their state. Likewise, school improvement staff should know what contributed to a school’s federal identification and be able to navigate the information in the accountability report. 

Principle 2: Accountability and school improvement staff should have a hand in each other’s systems. Accountability staff should contribute their expertise to defining how data from the identification system can inform required improvement planning, including which evidence and evidence-based supports are appropriate for a given school’s plan. Accountability staff can advise when additional (local) data may be needed to inform the selection of improvement activities. School improvement staff, who understand the resources available to support schools and the processes to select, implement, and monitor those supports, can contribute their expertise to design decisions, particularly those involving federal identification and exit criteria and report design.

Principle 3: All resources should explicitly connect the accountability reporting and school improvement planning systems they are developed to support. When accountability reports, with their calculations and identifications, are posted on websites or dashboards without any reference to how they can be used for school improvement, it reinforces the idea that state data matter only to penalize schools through identifications. What if the guides, reports, and other resources created for accountability systems clearly signaled how they can help schools improve? And what if the resources created for school improvement planning signaled the role accountability data can play in that process? These explicit links would reinforce the connection that’s too often missing: Accountability exists to improve schools, and schools can improve by getting insight from the data it generates.

Strong Connections Build Insight, Support School Improvement 

I want to be very clear about the role accountability data should play in school improvement. This quote, from a Council of Chief State School Officers report that I re-read often when I was an accountability director, captures that role well:

It is also important to understand that the information collected through the accountability process should inform the school improvement efforts, but accountability results and identification alone will not improve outcomes for kids. Supporting the understanding and use of the accountability outcomes, and the additional data points not included in the system, is critical to drive change.

Relationships between state education agencies and their districts differ by state, but these principles should hold firm in any state-district structure. I do see shifts in some state agencies over the last two years—one positive outcome of the pandemic-driven pause of ESSA identifications—toward more connected decision-making that includes school improvement staff in accountability design. In future posts, I’ll offer some practical suggestions for how to make these connections. 
