Don’t Go Changing: The Importance of Stability in State Assessment and Accountability Systems


Frequent Changes to State Assessment Systems Place Educational Improvement on Shaky Ground.

“We just administered our third assessment in the past five years.” 

“That’s nothing; we’re on our fifth assessment in the past four years.” 

I wish these were fictional statements, but as one of the coordinators of a working group of state assessment leaders, I regularly hear stories like these from many of our 40+ state participants. 

There are many reasons for changing assessment systems, but over the last several years most have been politically motivated. Changes in political climate and leadership can make previously acceptable policies and practices untenable. I am not opposed to regular tweaks and improvements in state assessment systems, but completely replacing one test with another should occur infrequently, such as when content standards are revised, which should also happen very infrequently.

Recent posts by my terrific colleagues, Leslie Keng and Erika Landl, addressed how to deal with changes in state assessment systems as defensibly as possible. Leslie’s post provides key insights to help state leaders mitigate the effects of changing assessment systems. Erika’s post on re-envisioning performance standards validation guides state leaders through the decisions about whether and how they should consider establishing new performance level determinations or to validate the existing determinations (i.e., cutscores). 

However, there is another option for dealing with potential changes: don’t change your standards and assessment systems!

 

Learning from NOLS

When I was 18 years old, I spent five weeks hiking in the wilderness of Wyoming on a National Outdoor Leadership School (NOLS) program. Outward Bound was the more popular program at the time and was known for putting students in challenging situations, such as being left alone in the wilderness for several days without food, as a form of mental preparedness. NOLS employed a different philosophy: the best way to deal with a challenging situation is not to get caught in one in the first place. This philosophy does not mean that NOLS students are unable to handle emergencies, but rather that they are always focused on being aware of potential risks and taking steps to avoid them. 

This approach relates directly to education. There are often legitimate reasons for changing standards-based testing programs. When such changes occur, leaders should heed my colleagues’ advice about navigating these challenging situations. In most cases, however, especially over the past four years, states would have been better off maintaining their existing programs and making minor tweaks when necessary.

 

Costs Often Outweigh the Benefits of Change

Whatever benefits state leaders think they accrue from changing assessments are outweighed by the significant costs incurred. I am not talking about direct program costs, although such costs must be considered. Rather, I am concerned about the less visible indirect costs and the almost invisible (at least to many policymakers) opportunity costs associated with changing state assessments.

State assessment decision-makers often approach changing assessments with a “have my cake and eat it, too” attitude. They intend to fulfill some political pledge (“I promise to get rid of Test X if elected”), but are usually reluctant to break accountability or monitoring trends. Depending on the scope of the assessment change, state accountability and assessment staff must perform statistical gymnastics to bridge the old and new assessments and maintain accountability results. That effort is one of those indirect costs. Significant conceptual and analytical work, often with the help of external technical experts, is needed to ensure such bridges do not crumble because of weak infrastructure. The price tag for this work does not show up in the assessment contracting costs, but the costs are real and, unfortunately, often come out of the hides of state accountability and assessment experts.

 

Downstream Effects of Unstable Assessment Systems Impact Teaching and Learning

There are significant opportunity costs associated with major changes in the state assessment system (or state standards). Chief among them is distracting local educators from the core job of teaching and learning. In these days of relatively high-stakes accountability policies, teachers and leaders pay close attention to their school’s performance on state assessments. Therefore, when standards and assessments change, school personnel often spend considerable time on test preparation activities—a diversion from deeper learning activities—to help their students get ready for the new test. 

The downstream effects of an unstable state assessment system extend far beyond test preparation activities. School and district personnel often feel the need to adjust curriculum and instructional programs in response to new assessment programs, especially if the content standards change as well. Given how long it takes to implement curricular and/or other program reforms—some experts suggest at least five years—educators may never get to a stage of implementation fidelity if the target is always moving.

 

A Message for Policy Leaders

Here in New Hampshire, our neighbor to the south likes to tout the “Massachusetts Miracle” of education reform. I do not mean to detract from Massachusetts’ high-quality content standards and state test (MCAS), but the real “miracle” was the Commonwealth’s ability to keep these same standards and assessments in place for more than 20 years across Democratic and Republican administrations. This stability bought Massachusetts educators the time and space to design high-quality curricular, instructional, and assessment systems aligned to the standards and assessments. School and district leaders were able to focus their professional development efforts on ensuring high-fidelity implementation of these local systems. Massachusetts is not the only example; Virginia, Florida, and a few other states have been relatively stable, but the instability described above is far too common.

 

Building a Stable Infrastructure

The lack of stability has effects that ripple through the system. We, at the Center, advocate creating long-serving, apolitical assessment advisory committees to help buffer the state assessment system from the winds of political change. If necessary, these committees might develop supporting structures such as policy guidance, and perhaps even legislation, to maintain stability in state assessment. 

State assessment can be a valuable tool in education reform and school accountability. For it to be most effective, however, we should heed the advice of Al Beaton (1990): “When measuring change, do not change the measure.”
