Accountability System Design Tensions

Dec 10, 2025

Lessons from Subtract: The Untapped Science of Less

I just finished reading Leidy Klotz’s terrific book, Subtract: The Untapped Science of Less. I can’t stop thinking about its relevance to assessment and accountability design. We continually add new assessments, accountability indicators, and tasks for leaders and educators to complete. In this blog, I’ll explore how we can adopt more streamlined accountability systems.

In his book, Klotz traces our evolutionary and cultural history to explain why we are predisposed to add. Through engaging examples, he illustrates how subtraction is an overlooked but powerful design alternative.

Earlier this year, my colleague Will Lorié eloquently summarized Klotz’s ideas and provided several examples where “less” produced better accountability and assessment systems. In this blog, I’ll focus only on accountability and dig into why designers tend to add measures to these systems. I’ll explore when adding might be good—and when it might not—and I’ll offer a way we can get to “less” in our statewide systems.

School Accountability Tensions

I’ve been involved in accountability design and redesign efforts dating back to the early years of the No Child Left Behind Act (NCLB). The Every Student Succeeds Act (ESSA) provided states with more flexibility, but also required them to add additional indicators. Many added more than the law required (see this Education Commission of the States report).

In some of my projects, the design team wanted to include even more indicators than what ultimately made it into the final design. Concerns about capacity, comparability, and data quality were typically what kept additional indicators out of existing systems.

Why We Add Measures to Accountability Systems

I’ve observed two general reasons for adding more indicators. The first is that many believe school quality is multifaceted, and we need more than reading and math test scores to better understand how well schools are serving their students.

The second involves adding what are often referred to as leading or proximal indicators. In accountability, such indicators are part of a theory of action intended to support longer-term outcomes.

For example, 9th grade credit accumulation has become a common leading indicator in state school accountability systems. The number of credits students accumulate in 9th grade has little inherent value, but it has tremendous value as a predictor of students’ increased risk of dropping out of high school. By collecting these data, schools can help students get back on track before it is too late.

Adding leading indicators to an accountability system, grounded in a strong theory of action, can reflect an improvement mindset. This is good. Klotz is not against adding. He just wants us to consider subtraction far more often than we typically do. In this case, adding research-based leading indicators can help continuous improvement efforts.

Adding indicators to better reflect school quality presents a different set of considerations. Such additions are often based on a fairness mindset. Schools do many good things for students that are not captured by reading and mathematics test scores. If schools are to be identified as “low performing,” they want to be judged fairly. However, this approach can lead to adding so many indicators that district and school leaders lose sight of what they need to focus on to improve their school. In this case, subtraction may be warranted.

The Case for Subtracting in School Accountability Design

Most school accountability system designers want schools to enhance their ability to provide meaningful educational opportunities for all their students. I understand the need for a broad array of measures to more fairly characterize school quality, but if we are awash in so much information, how can school leaders focus their improvement efforts?

Consider this analogy: I used to teach golf. I’ve seen new golfers get easily overwhelmed trying to focus on their grip, stance, left arm, turn (not sway!), head position, and follow-through, twisting themselves into pretzels. Good golf instructors know how to help new golfers focus on what Klotz terms the “essence”—the most high-leverage practices—that will bring the other elements of the swing along.

Improvement science tells us that focusing on fewer, high-leverage indicators is more effective than spreading our attention across too many places. This is a key tension in this design work. We want to avoid the unintended consequences of NCLB’s narrow framing, when students were denied opportunities like art and recess because their school leaders wanted to focus solely on reading and math. How can we escape this conundrum?

Our push for comparability—required by federal law—leads us into addition. Think about it: If we were holding only a single school accountable, we could work with that school to develop an accountability system based on a limited number of indicators important to that school. But trying to be fair to hundreds of schools leads us to add indicators, in the hope that the multiple indicators will balance the diverse needs of the state’s schools.

Blending State and Local Indicators to Get to “Less”

What if we designed a hybrid form of accountability system, where school and district leaders engaged their local community to identify a limited number of needs or issues (e.g., improve inquiry-based science learning)?

Local leaders would need to ensure that the plan to address the identified needs is grounded in research. They would also need to identify measures and indicators to track their progress. Finally, school and district constituents would need to establish criteria to determine whether the desired improvements are occurring at the intended rate. The state would oversee and support this process to ensure that districts can credibly enact such systems.

I recognize this is likely too ambitious for current capacity levels. However, this approach could provide districts with the space to build the necessary infrastructure and culture to support local accountability efforts.

In this case, the state system could include a very limited number of indicators (e.g., achievement and growth in reading and math), with the overall rating of schools based equally on the state and local indicators. If a school’s story differs significantly depending on whether it is told through state or local indicators, that might signal a need for the state to support school and district leaders in improving their local continuous improvement practices.

Reducing the state footprint while allowing local leaders to focus on a limited number of high-priority needs helps us get to “less.” It also helps make the system improvement-oriented by supporting local education leaders in focusing on the things that matter to them and their community.

My point here is that we need to find a way to get to less so local leaders and educators can focus their limited energy on improving things important to their school communities. I encourage others to think about subtracting before adding.
