The Center for Assessment’s Reidy Interactive Lecture Series (RILS)

RILS offers a unique, collaborative learning opportunity for educators and assessment professionals across the country. The 2019 conference focused on Improving the Selection, Use, and Evaluation of Interim Assessments. Hear how some of our invited speakers and Center team members addressed interim assessment challenges within school districts to improve local assessment systems.

Are Test-Takers Getting the Most from Technology-Enhanced Items?

Technology-Enhanced Items (TEIs) are a kind of test question or task. In contrast to traditional multiple-choice (MC) items, which require the selection or “bubbling” of a single option, TEIs generally require test-takers to interact with the item more than once.

The most interesting TEIs are simulations with game-like contexts. Picture a virtual laboratory where the goal is to isolate a specific compound, or a simulated garden where the test-taker can conduct an experiment to learn about (or be tested on) a concept in genetics. 

Theories of Action Aren’t Enough: An Argument for Logic Models

If you've ever worked with someone from the Center, been in a Center staff meeting, or even had dinner with someone from the Center, you know that we refer to Theories of Action incessantly. It may sound wonky and weedy (and it is), but there's a reason we value them so much: a theory of action (TOA) can help us clarify what we truly believe should happen if a program or system is implemented.

Defining a Theory of Action to Help Guide Longer-Term Goals

How Can Every Educator Achieve Assessment Literacy?

I am encouraged that so many educational leaders are wrestling with systematically bringing educational reforms to scale. Unfortunately, as these leaders have come to realize, achieving widespread implementation of meaningful reforms is really hard – especially when pursuing a goal of increasing assessment literacy.

Making the Most of the Summative State Assessment

This post is based on an invited presentation Charlie DePascale made at the nineteenth annual Maryland Assessment Research Center (MARC) conference at the University of Maryland on November 8, 2019.

“Our teachers are thrilled that the new summative state assessment is so much shorter. Now, what additional student scores can we report from it to help them improve instruction?”

In Search of Simple Solutions for the NAEP Results

The 2019 National Assessment of Educational Progress (NAEP) results were released last week to much consternation, except perhaps in Mississippi and Washington, D.C., where improved results were celebrated.

Nationally, results were up slightly in fourth-grade math, flat in eighth-grade math, and down in both fourth- and eighth-grade reading. These results continue a disturbing lack of progress over the last decade.

Do Interim Assessments Have A Role in Balanced Systems of Assessment?

Interim assessments may have a role in balanced assessment systems, but that role is not conferred by title. It is conferred by logic and evidence tied to particular purposes and uses. 

The Reality of Innovation in Educational Assessment

This post is a follow-up to my previous post discussing the realities of innovation in large-scale educational assessment. In Part 1, I defined innovation as a change that not only improved an existing process or product, but also was found to have solved a problem or met a need and, therefore, was adopted and used; that is, it changed the way things were done in the field.

The Reality Faced by Innovators of Educational Assessments

The Innovative Assessment Demonstration Authority (IADA) provision of the Every Student Succeeds Act (ESSA) ostensibly offers states the flexibility needed to “establish, operate, and evaluate an innovative assessment system” with the goal of using that educational assessment to meet the ESSA academic assessment and statewide accountability system requirements. 

How Do We Improve Interim Assessment?  

In the seacoast region of New Hampshire, we are enjoying the kind of crisp early autumn temps that might call for a light sweater, and the foliage reveals just a hint of the color that draws ‘leaf peepers’ to the region each year. But it wasn’t just the postcard-perfect scene that drew more than 80 education and assessment leaders from around the country to Portsmouth on September 26-27, 2019. The Center’s annual Reidy Interactive Lecture Series (RILS) offered an opportunity for those assembled to learn and contribute ideas.

Matching Instructional Uses with Interim Assessment Designs

Assessments are most powerful and useful when designed intentionally for particular purposes – especially when it comes to interim assessments.

The Next Generation of State Assessment and Accountability has Already Started

This is the first in a three-part series on the future of large-scale state assessment and accountability. Of course, it is impossible to know the future, but forecasts for educational assessment can be informed by examining what has shaped state assessment and accountability in the past. In this post, I look at the role played by emerging operational capacities and the desire for efficiency – specifically computer-based assessment.  

A Path Forward: Recommendations for ESEA Reauthorization to Support Improvements in Assessment and Accountability

Here at the National Center for the Improvement of Educational Assessment, we think a lot about the multiple factors involved in promoting student learning through more meaningful state assessment and accountability systems. The Every Student Succeeds Act (ESSA), the current authorization of the 1965 Elementary and Secondary Education Act (ESEA), is the most significant influence on contemporary state assessment and accountability. We believe a number of changes to ESEA could help promote innovation, restore balance, and improve outcomes.

Analysis – Does This Word Matter in Defining Expectations for Student Performance?

Can we call “analysis” by another name and expect educators to teach students to analyze, and expect students to demonstrate analysis in a text-dependent analysis response? Is the word “analysis” interchangeable with other words, or does its meaning matter in defining expectations for student performance? 

In the famous line from Shakespeare’s Romeo and Juliet, Juliet says, “What’s in a name? That which we call a rose by any other name would smell as sweet.”