Making the Case for a New Approach to Federal School Accountability
The U.S. Department of Education followed an all-too-familiar pattern of publishing important guidance just before the Christmas holiday, dropping a lump of coal in the stockings of state accountability directors. The gift, or Christmas miracle, that states needed was an acknowledgment that federal school accountability requirements need to be revised, not simply restarted. Unfortunately, the draft requirements simply direct how states must restart their accountability systems in 2021-2022 after what has essentially been a two-year accountability hiatus.
The guidance used a frequently asked questions (FAQ) approach to communicate the rules for school accountability in 2022. My colleagues, Laura Pinsonneault and Chris Domaleski, along with Katie Carroll, just published a terrific summary of the FAQ and provided advice to state leaders for how to approach these requirements. My purpose here is not to expand on their work. Rather, I contend the FAQ makes clear that the limited approaches to school accountability permitted under the Every Student Succeeds Act (ESSA) are altogether untenable. We urgently need a new approach to federal school accountability.
Concerns With the Current Accountability Guidance
I offer one example from the FAQ to highlight my concerns with the current accountability laws and rules. The parts of the guidance addressing the “other academic indicator” made my validity senses tingle. Recall that ESSA requires each state’s accountability system to include several specific indicators, including the “other academic indicator,” which in 49 states is some version of student academic growth. By definition, growth requires at least one prior score. The few states with essentially full test participation (i.e., greater than 90%) in 2020-2021 should be able to calculate student growth, assuming nearly full participation on the 2021-2022 test as well. For the majority of states, however, missing data will make it challenging to calculate growth validly for all schools and districts.
Perhaps recognizing this challenge, sections B-6 and B-7 of the FAQ appear to offer flexibility to switch out the other academic indicator or otherwise modify it. However, substituting something else (e.g., achievement gaps) for growth will change the meaning of the accountability system. When asked about this concern during a recent webinar with state assessment and accountability leaders, officials from the U.S. Department of Education (USED) insisted that including the other academic indicator is non-negotiable.
I understand the professionals at USED can only act within the provisions of the law, especially with congressional education leaders maintaining a firm line. Unfortunately, this stance indicates that the validity of accountability determinations takes a back seat to simply producing a list of schools that can be designated “low performing,” even if that label may be somewhat arbitrary. If states just trade out indicators, the result could be an overall determination in 2022 with a noticeably different meaning than it had in 2019 or might have in 2023. In other words, it appears “the list” is what’s important, not the validity of the list.
The Current System Hasn’t Worked
We’ve been at this iron-handed approach to accountability for 20 years, and it seems quite clear that the reality of test-based accountability has far underperformed the promise. The general lack of progress on NAEP scores and longstanding achievement gaps are just two indications of this shortcoming. So if our current system is broken, what do I suggest?
The Center has been offering suggestions for reforming accountability for several years. Chris Domaleski, Chris Brandt, and I provided a number of assessment and accountability proposals that we would like to see in the next federal education law. Additionally, I raised several concerns about the ways in which current accountability requirements hinder assessment innovation.
Accountability systems require a thoughtful, systematic, and equitable approach to design. The first steps involve clearly articulating the goals of the educational system and specifying how the accountability system is supposed to support and incentivize these important educational goals. Specifying goals is the first step in an extensive design process explained in many Center for Assessment publications.
It’s one thing to write about accountability design, but we always learn so much more when we have to help states and districts enact these ideas. The Center is currently working with three states and two large school districts to reform their accountability systems. While each context is unique, a common theme involves trying to support deeper learning and more equitable outcomes for students. Leaders in each context want a next-generation system designed from the perspective of supporting continuous improvement for schools and districts.
I briefly illustrate how trying to support rich learning goals is starting to play out in one of these contexts, where the focus is on supporting personalized and competency-based learning tied to an ambitious portrait of a graduate. While the work is still in the early stages, three broad areas appear to have the potential for reforming accountability in ways that support ambitious learning initiatives.
- Accountability systems should include district and perhaps state indicators to recognize the responsibility of these levels in supporting school quality.
- It is important to expand the indicators beyond the very narrow focus of reading and math test scores to incentivize schools to provide students with rich opportunities to learn in a broad array of academic and non-academic areas.
- My most controversial proposal, likely, is to allow a limited number of locally-generated indicators to count meaningfully in accountability determinations.
Domaleski, Brandt, and I previously provided a rationale for and some examples of why and how the focus of accountability should be expanded to school districts. The National Academies of Sciences, Engineering, and Medicine authored an important synthesis of the research and a guidebook for state leaders on an expansive set of potential indicators to evaluate and improve the equitable distribution of opportunities and resources. Therefore, I devote my remaining space to making a case for the use of locally-generated indicators.
For personalized and competency-based learning systems to work well, educators and others need to support students in tailoring their own learning to meet ambitious goals such as those identified in a portrait of a graduate. Further, if states and school districts aim to support schools in moving toward personalized learning for students, isn’t it then a contradiction to require all students, and then all schools, to meet the same targets using the same indicators at the same time? Beyond the contradictory messages, we have to question the theory of change that requires all entities to shoot for the same targets measured in exactly the same ways. When do people or organizations truly improve performance simply because some external entity directed them to do so? The impetus must be internal, or at least substantially internal, to support sustained change.
Yes, I know the devil lurks in the details. We first have to address the cultural and capacity shifts necessary to support local agency. We have lived under top-down accountability systems for so long that a generation of educators and leaders suffer from an unfortunate version of Stockholm Syndrome: “NCLB-Syndrome.” Therefore, states interested in such an orientation must attend to both local and state capacity to support this work. Other issues to be addressed include:
- What could count as a local indicator?
- How would the criteria for success be determined (and reviewed) on such indicators?
- What processes (e.g., support, state reviews) need to be in place to include such indicators?
- How should the results of the local indicators “count” in the overall results?
- How do we explain/justify the designed threat to comparability?
I know there is considerable conceptual, empirical, and practical work to do before such approaches are ready for prime time. I will continue sharing ideas and progress here and through other outlets. While this work will be challenging, I see no reason to settle for the current system. We’ve had a 20-year experiment with top-down accountability. It hasn’t worked. Policymakers must be willing to allow next-generation approaches based on the science of human and organizational learning before we subject another generation to mediocre rates of improvement.