How Can We Best Communicate Results From State Testing Given the Challenges of Learning in a Pandemic?
Even as discussions continue about the merits of state testing in 2021, the reality is that many states will soon begin spring administration. Indeed, the U.S. Department of Education (ED) has not waived or relaxed the assessment requirements specified in the Every Student Succeeds Act (ESSA). While some states have applied for testing waivers in 2021, at the time of writing, no waivers have been granted. Questions about whether or how to test will soon give way to questions like “How should we communicate the results?”
We approach this topic with the assumption that the content and technical properties of the state assessment in 2021 (e.g. alignment, scoring, scaling) meet established professional standards for validity, reliability, and fairness. This assumption is not to suggest that such standards can be taken for granted; indeed, our colleagues have written extensively about these topics in previous works.
Responsible reporting of assessment results is a complex and multifaceted topic in any circumstance, but especially this year. In this piece, we offer three principles we think should guide state assessment reporting in 2021.
- Specify Purpose and Use
- Communicate Context
- Provide Ongoing Support
Specify Purpose and Use of State Testing
It’s never a good idea to simply provide assessment data and hope stakeholders use it well. Strong reporting practice requires being clear about interpretations that are and are not supported. For example, many – but not all – states have reached the conclusion that assessments should not be used for student, educator, or school accountability in 2021. If that’s the case, these caveats must be prominently communicated. Moreover, to support intended uses, states must specify how the results should be used, clearly and concretely. Doing so will help the field understand the state’s rationale for testing and potentially allay fears about misuse.
So then, what purposes and uses can be supported and how does that differ from a ‘typical’ year? The answer to this question lies in the details, which we turn to next.
We will restrict our focus to three broad issues that always influence purpose and use, but are particularly relevant in light of the uncertainties introduced by the pandemic. These issues and their implications are summarized in the following table.
Obviously, the risks of each potential issue are elevated during the pandemic.
- In the case of administration conditions, for example, we are highly skeptical that remote administrations can be regarded as comparable to in-person testing. There are simply too many unknowns, such as whether the student has a suitable environment to take the test, or receives appropriate supports during the test. Issues related to remote testing have been addressed extensively in other papers and posts.
- With respect to participation, there are likely to be inflated rates of opt-outs, particularly for students in remote learning models.
- Finally, it is uncontroversial to point out that the pandemic-related disruptions to schooling had an adverse and uneven impact on opportunity to learn.
To further understand the impact of these threats on interpretation and use, we can broadly classify the implications with respect to 1) reporting level and 2) consequences.
- Reporting level simply refers to whether the results will be used to support decisions for individual students or aggregated to a higher level of summary, such as school or district.
- We use consequences to indicate whether results will be used for lower- or higher-stakes purposes; the latter of which is typically associated with accountability policy. In the following table, we present some illustrative use cases for each of the four conditions created by combining these categories.
To summarize the information provided in the table:
- An appropriate administration is always required to support uses associated with student or summary performance.
- For any summary reports, whether intended for accountability or otherwise, both an appropriate administration and adequate participation are required.
- When participation is an issue, the ‘best case’ is to use results for student-level feedback only. Even then, we caution that state tests are not typically designed to provide detailed student-level feedback to inform instruction, so we advise using this information along with other sources better designed for that purpose (see e.g. Scott Marion’s 2019 post about assessments for learning).
- We suggest exercising abundant caution with respect to using assessment results at the student or summary level for accountability purposes when opportunity to learn is diminished. It is true that decisions about whether or how to use assessment results for accountability purposes, such as assigning school grades, are always contentious. Our position is that an accountability use case is even more fragile when the out-of-school factors that influence performance are magnified. That fragility is especially acute when those factors, such as access to technology or the availability of learning supports, are unevenly distributed.
Again, we remind the reader that all of these use cases rely on the assumption that the assessment is designed and validated for the use cases described.
Communicate Context
In order to equip stakeholders to interpret and use assessment data appropriately, context is critical. We suggest states explicitly and prominently report this information alongside assessment results. At a minimum, this reporting should include:
- Participation rates
- Primary learning model for the majority of the academic year
- In-person/remote administration (if applicable)
Additionally, states should provide any caveats or notes needed to guide interpretation. And, if conditions do not meet established thresholds to support summary reports, the results should be suppressed and an explanation provided.
We recommend providing this contextual information for all units of reporting. Below we provide an illustrative example of what school-level reporting might look like. This example is quite limited; in fact, one might ask whether some choices, such as reporting results from remote administrations alongside those from in-person testing, are appropriate. Such questions are exactly the point of providing an initial table of results – to provoke careful thought and iteration on the design, so that a pressure-tested reporting approach can be developed.
Moreover, we urge states to avoid reports or visualizations that could be misleading. For example, if trends or direct comparisons are not supported, presenting side-by-side tables or graphs of 2019 and 2021 performance is unwise. Or, if school accountability is suspended, do not report information in accountability-like metrics (e.g. letter grades, or indices).
Provide Ongoing Support
Ongoing training and support for appropriate interpretation and use is always important, but never more so than in 2021. Many states already produce interpretation guides and companion resources like presentations or videos to help stakeholders, such as district and school administrators, to interpret and use assessment results appropriately. Some states also conduct ‘live’ training sessions (e.g. via webinar). These sessions are excellent initiatives and can be customized to fit the context in 2021.
Consider, too, resources and support initiatives customized for key audiences. For example:
- Produce a media guide with “Dos and Don’ts” for reporting and communication about state tests in 2021. Consider also meetings or workshops for education reporters via webinar to assist with messaging.
- Create a brief for policymakers with advice for interpreting assessment results in light of the pandemic.
- Conduct training and/or create professional development modules for educators, leaders, and/or researchers who may be interested in ‘deeper dives.’
- Develop resources for parents with advice for interpreting results and strategies to promote ongoing learning.
We acknowledge that some issues in this post were only addressed at a high level, such as thresholds for adequate participation or criteria to determine if an administration can be regarded as standard or non-standard. The Center plans to create additional resources to help address these topics in the months ahead. We also urge leaders to work with advisors, such as the state’s Technical Advisory Committee (TAC). Organizations such as the Council of Chief State School Officers (CCSSO) and the National Council on Measurement in Education (NCME) are also sources of support.
There are no easy solutions for whether or how to approach state testing in 2021. Our colleagues at the Center and others have written about this topic in previous works. However, for states that proceed, attention to communication and support will be more important than ever. We hope the ideas presented here help spark discussion and inform planning.