The Lack of Consistency in Accountability Results Across the Country Provokes This Fundamental Question
States will soon publish (or have already published) the first results of their federally mandated accountability systems.
Under these accountability systems, schools will be identified for “Comprehensive,” “Targeted,” and/or “Additional Targeted Support and Improvement” (CSI, TSI, ATSI). Across states, the percentage of all schools identified as CSI appears to range from about 2% to 9% (the federal law specifies “at least 5% of Title I schools”), and the percentage of schools identified as TSI and/or ATSI appears to range from fewer than 2% of all schools in a state to more than 45%.
What will state policymakers and the public make of the considerable variation in school accountability results from state to state?
Why Is There a Lack of Consistency in Accountability Results?
It appears there is great variability in states’ definitions of what constitutes a low-performing school, regardless of, or perhaps because of, the requirements of the federal ESSA law (Every Student Succeeds Act, 2015). Federal law requires states to identify low-performing schools, according to a variety of criteria, as in need of “Comprehensive,” “Targeted,” or “Additional Targeted Support and Improvement.” The federal law allows states some flexibility in definition but has the basic requirements shown in the table below.
It is far from clear whether states will respond to variation in accountability results the way they did to variation in assessment proficiency. A preliminary scan of several states that have released their CSI/TSI/ATSI school identifications shows a 5X range for CSI: the state that identified the highest percentage of its schools identified about five times as many as the state that identified the lowest percentage. For TSI/ATSI, the range was almost 20X: the state that identified the smallest percentage identified about 2% of its schools, while another state identified more than 40%. (See Fig. 2.) (Note that in this first year of implementation, most states identified CSI and ATSI, and not TSI.)
A Movement Toward Consistency in “Proficiency” and Content Standards
States have been required by federal law since 1994 to define what students should know and be able to do in reading and mathematics in the form of “content standards,” and to assess students against them. This feature of the Improving America’s Schools Act, a reauthorization of the Elementary and Secondary Education Act that includes Title I, was credited by then Secretary of Education Richard Riley with being fundamental to supporting state educational reform efforts:
“Title I will ensure greater accountability through the use of state assessments that measure students’ progress toward new state standards. The same standards and assessments developed by a state for all children will apply to children participating in Title I. These two fundamental changes in Title I — the role of high academic standards and the use of state assessments — will help ensure that Title I is an integral part of state reform efforts…” (Riley, 1995)
In addition, states have been required to assess all students’ knowledge of those state content standards through state tests. States were required to define levels of proficiency in terms of performance on those tests. In this way, a student’s parents could be told whether the student scored “Proficient,” and the percentage of students scoring Proficient could be reported at the school, district, or state level.
The Discrepancy in Proficiency Among States
After just a few years of reporting state assessment results, it became apparent that states had defined “Proficient” quite differently from each other.
This conclusion was made possible by comparing the states to a common measure: the National Assessment of Educational Progress (NAEP). Although NAEP did not purport to measure exactly the same things as the state assessments, it did provide a means of comparing the “rigor” of the proficiency standard across states. When the analyses were done, it was clear that states varied enormously in what they considered proficient. In addition, there was no simple relationship between a state’s definition of proficiency and its performance on NAEP; some states that scored relatively high on NAEP reported around 60% of their students scoring Proficient on the state assessment, while other states with relatively lower NAEP performance reported more than 80% of their students as Proficient. (See Fig. 1.)
Policymakers and the public were puzzled by the variation and the lack of correspondence between performance on NAEP and the apparent rigor of the states’ definitions of proficiency. Some went so far as to call the mismatch an “honesty gap.”
States had two main responses. First, over time, states narrowed the variation in their definitions of proficiency, bringing them more in line with one another and with NAEP. Second, states undertook the more fundamental task of developing more comparable content standards.
The need for more comparable content standards was discussed by state chief school officers in 2007; by 2009, the Council of Chief State School Officers and the National Governors Association had launched a project to develop high-quality content standards in English language arts and mathematics. The resulting “Common Core State Standards” (CCSS) were issued in 2010. At one point, more than 40 states and territories had adopted the CCSS.
Although many states now avoid reference to or formal acknowledgment of the CCSS, the CCSS continue to form the basis for the large majority of states’ content standards in ELA/Reading and mathematics today. It is clear that state policymakers felt it important to develop the same or very similar content standards, along with increasingly similar performance-standard definitions of proficiency. Those elements were to provide a foundation for comparable evaluations of student proficiency and readiness for college and careers.
It will be interesting to see how policymakers, educators, and the public respond to these results. I think there are enough differences in state context that some variation in accountability results should be expected. It will be informative, though, for states and others to ask such questions as:
- Why is there such variation in the percentages of schools identified across the states?
- Did the state system identify as many schools as was expected? If not, why not? If so, were the right schools identified?
- Why does the state think the number of schools identified is appropriate?
- What does the state expect will happen next in terms of support and improvement?
- If support and improvement do not occur, what is the state’s “Plan B”?
While the accountability results are interesting, the more important question now is, “What happens next in state accountability?”