Innovative Assessments: A Plea for Clarity and Humility

How Do You Want To Improve Assessments, and Why?

It’s nearly impossible lately to read anything about assessment without encountering the term “innovation.” No doubt, the U.S. Department of Education’s recently released Competitive Grants for State Assessments will lead to more promises of states “innovating” in assessment. But I’m concerned that this word has become a catch-all for any proposed change in current assessment practice.

At the Center for Assessment, we often invoke Inigo Montoya’s famous line from “The Princess Bride”—“You keep using that word. I do not think it means what you think it means”—to question people’s use of assessment terms like instructional usefulness, diagnostic assessment, and universal screening. I’ve developed a similar feeling about the term “innovation.”

Defining Innovation in Assessment

I support assessment innovation. But many people use the term without much clarity about what they’re trying to innovate toward—or away from. To be fair, many people say they are trying to move away from the current end-of-year standardized state tests. But I wonder if those moves are based more on confusion about accountability policies than on opposition to something specific about the assessments.

I don’t think people are clear about the meaning of innovation. McKinsey & Company offers a helpful definition from the business world.

Innovation is the systematic practice of developing and marketing breakthrough products and services for adoption by customers… In a business context, innovation is the ability to conceive, develop, deliver, and scale new products, services, processes, and business models for customers.

Note the words “breakthrough” and “scale.” Let’s keep these in mind as we continue our discussion. Think iPhone!

What Is—and Isn’t—Assessment Innovation

I understand and share the desire to improve assessments for the benefit of students. That’s why I got into this field. Heck, there’s even a provision of the Every Student Succeeds Act called the Innovative Assessment Demonstration Authority that encourages states to consider trying out state assessment innovations in a subset of districts. Unfortunately, the law doesn’t offer much help beyond noting that states can “use competency-based or other innovative assessment approaches” and further offering that an “innovative assessment system” means a system of assessments that may include:

  1. competency-based assessments, instructionally embedded assessments, interim assessments, cumulative year-end assessments, or performance-based assessments that combine into an annual summative determination for a student, which may be administered through computer adaptive assessments; and
  2. assessments that validate when students are ready to demonstrate mastery or proficiency and allow for differentiated student support based on individual learning needs.

Let’s focus on one part of the law and take performance-based assessments as an example. As a card-carrying member of the performance assessment movement in the pre-NCLB era, I’d be happy to “innovate” back to 1995. If that’s what folks want, great! I’ll be right there with you. But I’m not sure that, in 2024, it would qualify as innovative or as a breakthrough. It’s just different from current state assessment programs.

Do we have examples of assessment innovation? Sure. At our 2021 annual conference, my former colleague, Charlie DePascale, provided several examples of successful innovations, including criterion-referenced testing and automated scoring of open-response tasks. Charlie also discussed computer-based and computer-adaptive testing. Think about our current context. Very few states ship boxes of paper tests anymore, and about half the states have shifted from fixed-form to adaptive tests. There are certainly more examples, but these represent genuine breakthroughs at scale. Even so, people are still calling for assessment innovation.

Why States Want Innovative Assessments

Based on my conversations with many state and district leaders and others in our field, I think at least one reason for pushing assessment innovation is to extract instructionally useful information from state tests. This is a major driver underlying the current through-year assessment efforts. We’ve written extensively about the challenges of trying to pull this rabbit out of the hat, synthesized in my colleagues’ terrific paper, “Through-Year Assessment: Ten Key Considerations,” and in a forthcoming book by Carla Evans and me.

Returning to the concepts of breakthrough and scale, I think most of us would agree that through-year assessments do not qualify as an assessment innovation, at least not yet. Why? Only a few states are moving toward statewide implementation that meets the definition my colleagues put forth in the “Ten Key Considerations” paper, so those efforts don’t currently meet the scale criterion. Further, as we have explained in many blogs and papers, there is essentially no empirical or logical evidence that through-year assessments can deliver on their ambitious claims. Thus, they do not meet the breakthrough criterion either.

Some Hope For The Future

Don’t get me wrong. I think there is plenty of room to improve our current state assessment systems, and I agree we should do that, whether these improvements rise to the level of innovation or not. But the desire to elicit instructionally useful information from state tests—which, as I said, is prompting many of the calls for innovating state assessments—runs headlong into a fundamental tension in assessment design. Tests cannot serve disparate purposes equally well.

We would be better off spending our innovation energies trying to design state assessment and accountability policies and practices that better support high-quality classroom learning and assessment. In the National Academy of Education’s soon-to-be-published Reimagining Balanced Assessment Systems, leading scholars offer guidance on how to design balanced assessment systems in the current policy and practice environment. Is this innovative? I don’t know, but I do know it would be a great advance for students and teachers.

When it comes to true innovation, I, like many others, hold out great hope that generative artificial intelligence can dramatically reshape assessment as we know it. AI offers tremendous promise for item development, scoring, psychometrics, and almost all other aspects of the assessment cycle. My colleagues Will Lorié and André Rupp have been offering keen insights into the power of AI to transform current assessment practices. I’m most excited about the potential of AI to radically improve the way assessment users—at all levels—interpret and act on assessment information. Will AI-driven changes rise to the level of innovations? I hope so, but we don’t know yet.

A Plea For Humility and Specificity

There’s broad agreement that we need to improve our state assessment and accountability systems to better support student learning and related purposes. But let’s be as specific as possible about what we’re running away from and what we’re running toward.

Serious innovation efforts require disciplined planning and opportunities to break things and fail fast—conditions that are hard to create in a state assessment context with high-stakes consequences. That’s just one more reason we all should be a little more humble and avoid calling every variation on current practice an innovation.