
Zero-based Assessment System Design

Jun 25, 2025

How behavioral economics can help us get rid of unnecessary tests

A “zero-based budgeting” approach to local assessment system design might help us get the assessment systems we need. You might be thinking, “He’s really gone off the deep end now.” But give me a few minutes to explain my rationale and how this way of thinking might break some of the logjams we’re facing when trying to get rid of unnecessary tests.

Zero-Based Budgeting

I’ve served on my town’s budget committee for many years. When our town departments build budgets, they start from the previous year’s budget, work toward a target (increase or decrease) established by the town leaders, and adjust each line item to meet the overall budget target. New budget categories are sometimes added; lines are rarely deleted.

This is an efficient way for leaders to create budgets when things work well. But it can make it hard to get rid of inefficient or ineffective programs, especially if the system is overloaded with them.

A zero-based budgeting approach would, as the name indicates, start from scratch, identify the goals for the department (or organization), clarify the tasks/programs necessary to meet the goals, and estimate the costs associated with each program or line item. Perhaps we need a similar exercise for local assessment system design.

Learning Management and Student Information Systems

This realization hit me during the Center for Assessment’s recent Brian Gong Colloquium, where we explored student information and learning management systems as the “last mile” of assessment reporting. Many of us have been trying to improve assessment score reporting and studying how we can help make assessments instructionally useful. However, I had been relatively naïve about the role of these systems in delivering assessment results to educators, parents, and other users.

To be fair, I’ve only started exploring how these systems take in assessment results and push them out to teachers. But as I listened to these experts, I kept thinking about an arms race. Now that we can devour all these data, we need more data, and then we need bigger systems. And the arms race continues. I wondered how we could rightfully expect teachers to make sense of all these numbers. That’s when it hit me. We needed to approach the problem differently.

Zero-Based Assessment Design

In last year’s National Academy of Education volume Reimagining Balanced Assessment Systems, we argued for recentering assessment systems to support rich classroom learning and assessment environments. However, it is hard for school leaders and teachers to accomplish these aims when they are barraged by a plethora of external assessments.

My Center colleagues have developed a set of tools and processes to help district leaders undertake local assessment reviews or audits to right-size their assessment systems. These tools have helped leaders and teachers critically evaluate all the assessments they administer. But it doesn’t seem to be enough.

Audits have an inherent limitation: they ask people to drop something they likely considered important at some point. This is related to the behavioral economics concept of loss aversion, our tendency to feel the pain of a loss more strongly than the pleasure of an equivalent gain. Many people experience this when trying to purge clothes from their closets.

Where to Start

Zero-based assessment design involves starting from scratch and adding only those assessments necessary to support one’s learning and organizational goals. I suggest the following general steps to get started.

Develop clear goals. Constituents must establish clear and specific goals for the educational system. This means more than pointing to the state content standards; it requires articulating the kinds of knowledge, skills, and dispositions students should possess at the end of their educational journey and how they are expected to get there.

For example, will students engage in project-based learning, direct instruction, or some other approach to develop subject matter and interdisciplinary competence? What would the competence look like when students reach the end of their school journey? Will they be effective problem solvers, clear communicators, and life-long learners? Or will they have a strong command of the content needed for the next step in their education or life? Whatever the answer, district leaders need to work with their constituents to get as clear as possible about these sorts of questions.

Map systems to goals. Leaders and educators must map the instructional and curricular systems to these learning goals. System designers would need to address issues of organizational structure, culture, and many other factors. Once these goals and related systems are defined, leaders and teachers can start identifying, as precisely as possible, the types of information they would need to support the various purposes.

It is not helpful to say, “We want assessments to support instruction.” Users must describe the specific information they need, when they need it, and in what form. For example, some might want timely information about students’ understanding of certain content standards or their thinking as they solve complex problems. Alternatively, policy leaders might want information that provides evidence about students’ educational opportunities across a large district.

Review Andrew Ho’s three Ws of educational measurement. Andrew urges assessment designers to ask: “Who uses which scores for what purpose?” Again, specificity counts here. I contend that one reason we have so many extra assessments is that testing companies and test users are too vague about their intended purposes and uses. Being specific allows users to engage in theory-of-action thinking that outlines the steps from test scores to intended use cases.

Interrogate the need. As we argued in Reimagining Balanced Assessment Systems, the primary goal of balanced assessment systems is to support rich classroom learning environments and ambitious teaching practices. Therefore, we (Marion et al., 2024) urged decision-makers to ask: “To what degree and in what ways does this assessment—its content and practices—support or hinder ambitious and equitable classroom learning environments?” (p.3).

But these are not the only assessments we need in the system. I suggest treating Andrew’s “who uses” as “who needs” and then strongly interrogating the “need” before adding any assessments into the system.

Evaluate the evidence. Claims require evidence. Assessment systems get bloated for many reasons, not the least of which is companies pushing assessments. But that’s not the only reason. Just as people are reluctant to recycle that favorite sweatshirt they haven’t worn in years, many leaders are reluctant to give up assessments that at one time felt comfortable.

Therefore, in addition to considering the three Ws, system designers need to critically evaluate the evidence associated with a test’s desired uses. If there’s no evidence to support the uses, the assessment should not be incorporated into the system until convincing evidence can be amassed. Just as importantly, designers must honestly consider the risk of unintended consequences even when the assessment is used for the desired purposes.

I have no illusions that districts are going to completely strip away all assessments and start over. Nevertheless, a zero-based budgeting approach to assessment design can help clear out overstuffed assessment systems and move districts toward more balanced ones.
