Improving Equity: Understanding State Identification Systems under ESSA
Creating a Framework to Help Describe Differences in How States Identify Schools for Support and Improvement
This is the second in a series of CenterLine posts by our 2019 summer interns and their Center mentors based on their project and the assessment and accountability issues they addressed this summer. Nikole Gregg from James Madison University worked with Brian Gong on how states have attempted to promote equity through the design of their ESSA accountability systems.
As of the 2017-2018 school year, the Every Student Succeeds Act (ESSA) replaced the previous reauthorization of the Elementary and Secondary Education Act (ESEA), known as the No Child Left Behind Act (NCLB). As a result of this change, state leaders must design identification systems that meet the U.S. Department of Education's requirements while also reflecting their own resources and values.
Their task is complicated by the considerable flexibility and room for interpretation ESSA allows each state in implementing its identification requirements.
Potential Challenges of School Designations Under the Every Student Succeeds Act
Under ESSA, each state must use a set of indicators to identify its lowest-performing schools for support and improvement. Per ESSA identification requirements, low-performing schools and student groups are designated for Comprehensive Support and Improvement (CSI), Targeted Support and Improvement (TSI), or Additional Targeted Support and Improvement (ATSI). (See Lyons, D'Brot, and Landl, 2017, for more information about the three levels of support and intervention.)
These identified schools are then to be provided resources and evidence-based interventions in an effort to improve equity in student achievement. Thus, the way states identify underperforming schools has implications for which schools receive resources, funding, and interventions to aid student performance. However, each state has a different method of identifying schools, meaning each state targets schools using different indicators, business rules, and timelines.
Given the flexibility in ESSA, and the differences in identification processes and student populations across states, it is not surprising that identification rates differ by state. This variability can be seen in a report by the Center on Education Policy (CEP). Readers should note that these numbers may not be the exact identification rates for each state, but the resource generally demonstrates the variability in identification rates across states (Figure 1). The Center, along with the Council of Chief State School Officers (CCSSO), is developing a report that supplements the CEP data.
Recently, at the CCSSO State Collaborative on Assessment and Student Standards (SCASS) Accountability Systems and Reporting meeting, it became clear that state leaders knew relatively little about other states' identification systems. Given that ESSA identification only began in fall 2018, this knowledge gap is not surprising. Even so, state accountability leaders expressed clear interest in the decisions other states were making about their identification systems.
Developing an Identification Framework to Understand Differences Across States
Given this interest among state accountability leaders in understanding cross-state systems and the differences among them, we created a framework to describe variation in state identification systems. The framework highlights the design characteristics that contribute to differences in identification rates across states.
Ideally this framework will help state accountability, assessment, and education leaders to understand the decisions states are making in their identification processes. We expect that as time progresses, and states see how their systems, as well as others’ systems, function, they will make adjustments to better accommodate the resources, values, and needs of their schools.
For this identification framework, we identified four dimensions, or categories of design decisions: inclusion, empirical performance, frequency/rigor, and identification paradigm.
- Inclusion pertains to how states decide to include schools in the categorization or accountability process (e.g. subgroup formation).
- Empirical performance pertains to student performance on accountability indicators (e.g. variability of student performance on individual indicators).
- Frequency/rigor pertains to the timing of identification and exit criteria, as well as the rigor of these two processes.
- Identification paradigm pertains to the overall decision-making process.

All of these pieces come together and interact with one another to produce the variability we see in state identification systems.
Inclusion incorporates four considerations:
- the inclusion of non-Title I schools,
- a minimum n-size requirement for inclusion into the accountability score,
- the calculation of subgroups, and
- the explicit cap on the number of identified schools.
Each of these considerations can affect the number of identified schools. For example, if a state includes all schools, not only Title I schools, in its CSI, TSI, or ATSI calculations, it creates a larger pool of schools eligible for identification, increasing the likelihood that more schools are identified in that category.
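The minimum n-size consideration can be illustrated with a short sketch. All subgroup names, counts, and thresholds below are invented for illustration; no state's actual rules are implied:

```python
# Hypothetical subgroup enrollments at one school; all numbers are invented.
subgroup_sizes = {
    "economically_disadvantaged": 45,
    "students_with_disabilities": 18,
    "english_learners": 9,
    "black": 27,
    "hispanic": 12,
}

def included_subgroups(sizes, min_n):
    """Return the subgroups large enough to enter accountability calculations."""
    return [group for group, n in sizes.items() if n >= min_n]

# A lower minimum n-size brings more subgroups into the calculation,
# enlarging the pool of groups that could trigger TSI/ATSI identification.
print(len(included_subgroups(subgroup_sizes, min_n=10)))  # 4 subgroups
print(len(included_subgroups(subgroup_sizes, min_n=30)))  # 1 subgroup
```

The same school can thus look quite different across two states that differ only in their minimum n-size.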
The empirical performance category incorporates the following three considerations:
- the typical graduation rate of the state,
- the possibility that lower-performing students' scores can be compensated for on an accountability indicator, and
- the amount of variability of performance on the accountability score.
For example, as described in a post by the Center's Chris Domaleski, the amount of variability in each indicator has a significant impact on the overall accountability score. If scores on an indicator have limited variability, that indicator contributes less to variation in the accountability score than an indicator with a large amount of score variability. In other words, the more variability an indicator has, the more it is likely to contribute to the differentiation of schools' accountability scores and designations. This holds even when the indicators are weighted.
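To make the variability point concrete, here is a minimal simulation with synthetic data. The weights and score distributions are assumptions for illustration, not any state's actual design: indicator A gets the larger weight but schools barely differ on it, while indicator B gets the smaller weight but spreads widely.

```python
import random
import statistics

random.seed(0)  # reproducible synthetic data

# Synthetic scores for 100 schools on two indicators (0-100 scale).
# Indicator A: heavy weight, but nearly all schools cluster near 90.
# Indicator B: light weight, but scores spread widely.
schools = [
    {"A": random.gauss(90, 2), "B": random.gauss(60, 15)}
    for _ in range(100)
]

W_A, W_B = 0.7, 0.3  # hypothetical weights

# Variance each weighted indicator contributes to the composite score
# (treating the two indicators as independent).
var_a = statistics.pvariance([W_A * s["A"] for s in schools])
var_b = statistics.pvariance([W_B * s["B"] for s in schools])

# Despite its smaller weight, the high-variability indicator B contributes
# far more to differences among schools' composite scores.
print(round(var_a, 2), round(var_b, 2))
```

In this toy setup the lightly weighted indicator dominates the differentiation of schools, which is exactly why weighting alone does not control an indicator's influence.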
The frequency and rigor of the identification and exit criteria influence the number of schools identified for CSI, TSI, and ATSI. For example, if a state identifies ATSI schools every year, it will ultimately increase the number of ATSI schools identified. Furthermore, this level of frequency makes it more likely that schools will be misidentified as ATSI.
Lastly, if ATSI schools are identified every year and their number grows, each state must consider the likelihood of schools exiting ATSI before they are transitioned to CSI. If exit criteria are too rigorous (e.g. more rigorous than the identification criteria), the number of CSI schools will substantially increase over time, in addition to the increase in ATSI schools.
States should consider how criteria influence the rate of identification over time in relation to the state’s capacity to provide resources to identified schools: is it better to provide fewer schools with more support, or more schools with less support?
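A toy projection can make this capacity question concrete. The sketch below (the annual identification count and exit rates are entirely hypothetical) tracks how the list of identified schools accumulates when exit criteria are rigorous, so that few schools exit each year, versus more attainable:

```python
def simulate(years, identified_per_year, exit_rate):
    """Project the cumulative count of identified schools over time.

    identified_per_year: hypothetical schools newly identified each cycle
    exit_rate: hypothetical fraction of identified schools that meet the
               exit criteria each year
    """
    active = 0
    for _ in range(years):
        active += identified_per_year        # newly identified schools
        active -= round(active * exit_rate)  # schools that exit this year
    return active

# Attainable exit criteria keep the list near a steady state;
# rigorous exit criteria let it grow year after year.
print(simulate(years=6, identified_per_year=50, exit_rate=0.50))
print(simulate(years=6, identified_per_year=50, exit_rate=0.10))
```

The second run leaves a list several times larger than the first, illustrating how exit-criteria rigor, not just identification criteria, drives the number of schools a state must support.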
Accountability identification systems take different forms. Some states implement a conjunctive model, requiring that schools perform low on each of several criteria in order to be identified. In contrast, other state accountability systems use a compensatory approach. One type of compensatory system is an index system, in which all indicators are combined through some weighting scheme into an overall accountability score. Schools are then ranked from lowest to highest on this score, and the lowest-performing schools are identified. Because a conjunctive system requires a school to fall short on every criterion rather than on balance, it is likely that fewer schools are identified in a conjunctive system than in a compensatory system.
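The conjunctive-versus-compensatory contrast can be sketched with synthetic data. The indicators, cutoff, and score distributions below are invented for illustration only:

```python
import random

random.seed(1)  # reproducible synthetic data

# Synthetic indicator scores for 200 schools; all values are invented.
schools = [
    {"achievement": random.gauss(50, 10),
     "growth": random.gauss(50, 10),
     "graduation": random.gauss(50, 10)}
    for _ in range(200)
]

CUTOFF = 40  # hypothetical "low performance" threshold

# Conjunctive: identified only if low on EVERY indicator.
conjunctive = [s for s in schools
               if all(v < CUTOFF for v in s.values())]

# Compensatory (index): identified if the equally weighted composite is
# low, so strength on one indicator can offset weakness on another.
compensatory = [s for s in schools
                if sum(s.values()) / 3 < CUTOFF]

# If every indicator is below the cutoff, so is their average, so with the
# same cutoff the conjunctive list is always a subset of the compensatory one.
print(len(conjunctive), len(compensatory))
```

With the same cutoff applied both ways, the conjunctive rule can never flag more schools than the compensatory one; real systems differ in their cutoffs and weights, but the directional pattern matches the one described above.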
It is important to note that this identification framework was created from a review of state identification systems. Design choices from each category interact with one another to form individual accountability systems. These systems range from simple to complex, and the more complex an identification system becomes, the more difficult it is to understand exactly what influences its identification rates.
In addition, there are many systematic differences across states that confound comparisons. For example, states serve different populations of students. Thus, it may not be the identification system alone that contributes to variability in identification rates, but the interaction between the identification system and the student population. With this in mind, any comparisons of state identification rates should be made with extreme caution.
States choose identification procedures based on many considerations, such as policy input, the capacity to provide resources to identified schools, and the values of the education department. Furthermore, the identification system alone will not improve equity for students; identification is only the first step. Other important considerations are:
- the interpretability of accountability scores to stakeholders, so that accountability information can be appropriately used for improvement,
- the alignment between the state's theory of action and the accountability system, and
- fidelity of the intervention to improve equity.
We will review the interpretability of accountability scores to stakeholders in our next blog post, so please stay tuned.