Reflections From a Decade of Fall Testing


Compelling Reasons to Think Twice Before Moving 2021 Spring State Testing to the Fall

When spring state testing was abruptly canceled last year, one of the first options briefly considered was administering the spring 2020 state tests in the fall, when it was expected that students and teachers would have returned to their classrooms and life would be back to normal. One year later, we are facing the same consideration. 

In a CenterLine post last May, Erika Landl and Michelle Boyer outlined the strong arguments against administering the Spring 2020 state tests when students returned to school. Virtually all of those reasons apply to fall 2021 as well. They also ended their post with the accurate observation that “there are currently no compelling reasons” to move state testing from spring to fall.  

In recent weeks, we have observed an uptick in interest in postponing the resumption of state testing until fall 2021, when we all hope students and teachers actually will be returning to their classrooms under conditions that are much closer to normal. Among the most compelling reasons suggested for doing so are the likelihood that states will be able to test a much higher percentage of students in the fall than this spring, and that they will be able to test them under normal, in-person, standardized conditions. Although those reasons do look compelling on paper, in this post we urge state leaders to also consider some of the practical realities of fall testing.

The New England Common Assessment Program – An Experiment in Fall Testing

The New England Common Assessment Program (NECAP among friends) was the multi-state consortium formed in summer 2003 to administer the state tests required under No Child Left Behind (NCLB). Less well known is the fact that from initial pilot testing during the 2004-2005 school year until their final administration in 2013, NECAP Reading, Writing, and Mathematics tests were administered in the fall.  

With the advent of NCLB accountability requirements, the arguments for fall testing seemed logical and were quite compelling. Beginning with the most persuasive, the main arguments for fall testing were:

  • We will finally have an assessment of an entire year of teaching and learning. It’s foolish to expect all students will be able to achieve the state’s grade-level content standards within the same 9-month period prior to spring testing. Fall testing will give districts and schools an opportunity to complete an extra three months of instructional support for students who need more time.
  • Spring testing followed by fall accountability reporting is out of sync with district and school budgeting and planning calendars, leaving them no opportunity to analyze results and then design, budget for, and plan the implementation of curricular and instructional interventions. Results returned in the early fall would provide ample time for analysis, planning, and supportive budgeting.
  • Assuming that teachers are going to devote instructional time to review and preparation for the high-stakes state tests, why not administer the tests at the beginning of the school year, when most teachers are already devoting some time to reviewing the previous year’s material, thereby preserving precious instructional time?
  • Fall testing with results reported before Christmas will provide actionable information for teachers to use with their current students so that instructional adjustments can be made.

Like the current arguments for moving spring 2021 testing to the fall, these arguments addressed critical shortcomings and it was quite easy to be swept away by their promise, particularly in the midst of trying to get a new, multi-state assessment program up and running.

The Reality of Fall Testing Did Not Match the Promise

It will surprise no one that the reality of fall testing did not come close to matching its promise. From the outset, it quickly became clear that we had glossed over several logistical challenges associated with fall testing. 

  • Under the best of circumstances, schools, teachers, and students are not ready for state testing at the very beginning of the school year. On top of a host of other issues, it takes time just to figure out which students are in which school. We were unable to begin testing before October.
  • Students change schools, and sometimes districts, from one year to the next. At fifth grade, or sixth grade, or seventh grade they move from elementary school to a middle or junior high school (that varies across districts). And don’t forget the current eighth-grade students. In many districts, particularly urban districts, the high school population looks significantly different as students move to and from public, private, and religious schools.
  • These logistical challenges dashed the early hopes of producing results in early December; the reality was delivering results in late January.
  • Data had to be presented in two ways (which few ever understood clearly). One roster presented “learning year” results: data summaries for students who had been part of the school’s cohort during the previous year, when the tested content was taught. The other summary covered the “teaching year,” so that teachers and principals could review data for the students currently in front of them.

Beyond those logistical challenges, the perceived benefits of fall testing for school accountability fell short of expectations. Many teachers and administrators were concerned about the impact of summer learning loss on their accountability rating. Also, the lag between assessment and accountability reporting proved to be more confusing than beneficial to educators and the public. 

Yes, it is more likely that states will be able to test 95% of students sitting in classrooms at some point next fall. But what students will be tested, where will they be tested, and what will their performance after a month or more of instruction in the new school year tell us about the state of learning loss due to COVID-19 in particular districts and schools? 

Take a Close Look at The Promises of Fall Testing in 2021 

The promise of fall testing in 2021 is that states will be able to test a larger, more representative sample of their students (perhaps exceeding the magical 95% threshold) under normal conditions, but the benefits of the potentially larger sample come with a cost.

Revisiting a primary concern regarding testing in fall 2020, we need to consider the social and emotional impact of starting a new school year with state testing and the additional stress it places on teachers and students, even if those results are not being used for accountability. 

With regard to the use of results from fall testing, even with computer-based tests that generate student results in real time, tests administered during the school year will produce results too late for districts, schools, and teachers to understand the magnitude and details of the learning lost to COVID, and to plan effectively for instruction during the 2021-22 school year. 

Further, this year more than any other, the state is going to need additional time to produce aggregate school and district results and to support districts and schools as they interpret the results of state tests and communicate those results (in some manner) to parents, students, teachers, policymakers, and other stakeholders. Accomplishing that task once the new school year has begun will be particularly difficult. 

Finally, if there is any hope of using 2021 state test results as a baseline for recovery efforts and future trends, we must consider the implications of fall testing for future comparisons. For reasons that we do not fully understand, fall-to-spring test results have never provided information comparable to spring-to-spring or fall-to-fall results. It will be difficult enough to place 2021 test results into context this year and next without introducing another apples-to-oranges factor into the mix.

State Testing Is Many Moving Parts and A Series of Difficult, Interrelated Decisions

In this post, we address only the issue of fall testing in isolation. Even in a normal year, administering a state assessment program requires finding the optimal solution to a puzzle of policy, technical, and logistical challenges without losing sight of the bigger picture: the purpose of the program and its long-term goals. It goes without saying that the period we are in now is anything but normal.

State leaders will weigh many factors in determining when and how to next administer state tests. To help address this puzzle, leaders and their partners must carefully articulate how the test results will be used. Decisions about when tests are administered should then be examined through the lens of whether and how they advance those intended uses. Also pertinent to this discussion, especially given the learning disruptions caused by COVID-19, is how to communicate test results as accurately and responsibly as possible. Our 10-year experiment with fall testing should give leaders pause in pursuing this option.

We, along with many others in the field, have offered advice and support, here in CenterLine and elsewhere, to help inform decisions about state testing in 2021. We are confident that all will remain committed to helping state leaders implement those decisions this spring and to continue to support them through the post-pandemic recovery in 2022, 2023, and beyond. 

Mary Ann Snider is currently serving as the Academic Dean of St. Mary's Academy-Bay View, an all-girls school for students in grades PK-12. Prior to this role, she was the Deputy Commissioner for the RI Department of Education. Her focus was and continues to be the intersection of assessment and instruction for the benefit of all students. She was instrumental in creating the first multi-state assessment consortium – NECAP – and continued that work as a state lead for the PARCC assessments. Charlie DePascale, currently enjoying writing and engaging in a smattering of consulting, served as project director for the NECAP states and coordinated the NECAP Technical Advisory Committee in his role as a senior associate at the Center for Assessment. 
