The Center for Assessment’s COVID-19 Response Resources

State and district leaders are facing multiple concerns in response to widespread and potentially long-term school closures due to the growing threat of COVID-19. The concerns are broad and consequential. We launched this page to help you efficiently find the resources you need during these uncertain times.

Are Test-Takers Getting the Most from Technology-Enhanced Items?

Technology-Enhanced Items (TEIs) are a kind of test question or task. In contrast to traditional multiple-choice (MC) items, which require the selection or “bubbling” of a single option, TEIs generally require test-takers to interact with the item in more than one way.

The most interesting TEIs are simulations with game-like contexts. Picture a virtual laboratory where the goal is to isolate a specific compound, or a simulated garden where the test-taker can conduct an experiment to learn about (or be tested on) a concept in genetics. 

Theories of Action Aren’t Enough: An Argument for Logic Models

If you've ever worked with someone from the Center, been in a Center staff meeting, or even had dinner with someone from the Center, you know that we refer to Theories of Action incessantly. It may sound wonky and weedy (and it is), but there's a reason we value them so much: a theory of action (TOA) helps us clarify what we truly believe should happen if a program or system is implemented.

Defining a Theory of Action to Help Guide Longer-Term Goals

How Can Every Educator Achieve Assessment Literacy?

I am encouraged that so many educational leaders are wrestling with systematically bringing educational reforms to scale. Unfortunately, as these leaders have come to realize, achieving widespread implementation of meaningful reforms is really hard – especially when pursuing a goal of increasing assessment literacy.

Making the Most of the Summative State Assessment

This post is based on an invited presentation Charlie DePascale made at the nineteenth annual Maryland Assessment Research Center (MARC) conference at the University of Maryland on November 8, 2019.

“Our teachers are thrilled that the new summative state assessment is so much shorter. Now, what additional student scores can we report from it to help them improve instruction?”

In Search of Simple Solutions for the NAEP Results

The 2019 National Assessment of Educational Progress (NAEP) results were released last week to much consternation, except perhaps in Mississippi and Washington, D.C., where improved results were celebrated.

Nationally, results were up slightly in fourth-grade math, flat in eighth-grade math, and down in both fourth- and eighth-grade reading. These results continue a disturbing lack of progress over the last decade.

Do Interim Assessments Have a Role in Balanced Systems of Assessment?

Interim assessments may have a role in balanced assessment systems, but that role is not conferred by title. It is conferred by logic and evidence tied to particular purposes and uses. 

The Reality of Innovation in Educational Assessment

This post is the follow-up to my previous post discussing the realities of innovation in large-scale educational assessment. In Part 1, I defined innovation as a change that not only improved an existing process or product, but also solved a problem or met a need and, therefore, was adopted and used; that is, it changed the way things were done in the field.

The Reality Faced by Innovators of Educational Assessments

The Innovative Assessment Demonstration Authority (IADA) provision of the Every Student Succeeds Act (ESSA) ostensibly offers states the flexibility needed to “establish, operate, and evaluate an innovative assessment system” with the goal of using that educational assessment to meet the ESSA academic assessment and statewide accountability system requirements. 

How Do We Improve Interim Assessment?  

In the seacoast region of New Hampshire, we are enjoying the kind of crisp early autumn temps that might call for a light sweater, and the foliage reveals just a hint of the color that draws ‘leaf peepers’ to the region each year. But it wasn’t just the postcard-perfect scene that drew more than 80 education and assessment leaders from around the country to Portsmouth on September 26-27, 2019. The Center’s annual Reidy Interactive Lecture Series (RILS) offered an opportunity for those assembled to learn and contribute ideas.

Matching Instructional Uses with Interim Assessment Designs

Assessments are most powerful and useful when they are designed intentionally for particular purposes – especially when it comes to interim assessments.

New & Noteworthy

Recent Centerline Blog Posts

Why Has it Been So Difficult to Develop a Viable Through Year Assessment?

There has been a buzz that “through year” or “through course” assessment represents a better way for states to assess than today’s pervasive end-of-year summative assessment. However, no through-year assessment has yet been implemented statewide with results acknowledged as acceptable for uses comparable to end-of-year summative assessment scores.

Focus, Fix, Fit: Understanding the Meaning of 2021 Test Scores

To answer the question of what 2021 test scores will mean, I start by acknowledging that the interpretation of assessment results is always a process of reasoning from evidence, with some level of uncertainty. As with all things COVID, we expect to have more uncertainty this year, but we still have the tools to examine just how uncertain we are about the meaning of test scores in 2021.
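The post's framing of score interpretation as reasoning from evidence under uncertainty can be illustrated with a standard tool from classical test theory: the standard error of measurement (SEM), which turns a single observed score into a plausible range of true scores. The sketch below is not from the post; the function names and the scale numbers (a 30-point standard deviation, 0.91 reliability, a score of 520) are hypothetical illustrations.

```python
# Classical test theory: SEM = SD * sqrt(1 - reliability).
# An observed score is then reported with a band of roughly
# observed +/- z * SEM (z = 1.96 for an approximate 95% band).
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement for a test scale."""
    return sd * math.sqrt(1.0 - reliability)

def score_band(observed: float, sd: float, reliability: float,
               z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% uncertainty band around an observed score."""
    e = sem(sd, reliability)
    return (observed - z * e, observed + z * e)

# Hypothetical scale: SD of 30 points, reliability of 0.91 -> SEM of 9.
low, high = score_band(observed=520.0, sd=30.0, reliability=0.91)
print(f"A score of 520 is consistent with true scores of roughly "
      f"{low:.0f} to {high:.0f}")
```

In a year like 2021, the same logic applies, only with wider bands: added sources of uncertainty (changed testing conditions, nonrepresentative participation) effectively lower confidence in any single number, so interpretations should lean on ranges rather than point scores.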

Practical Advice for Adapting Formative Assessment Practices to Remote Learning Contexts

In a previous post, we discussed what is the same and what is different about the formative assessment process in a remote or hybrid learning environment compared with in-person learning. We concluded that, conceptually, formative assessment remains the same. Practically, however, there are key differences in how formative assessment is applied in remote contexts because of differences in instructional, student, and environmental characteristics.