The Center for Assessment’s Reidy Interactive Lecture Series (RILS)

RILS offers a unique, collaborative learning opportunity for educators and assessment professionals across the country. The 2019 conference focused on Improving the Selection, Use, and Evaluation of Interim Assessments. Hear how some of our invited speakers and Center team members addressed interim assessment challenges within school districts to improve local assessment systems.

The Importance of Educational Assessment Policy in Shaping High-Quality State Assessments

This is the fourth in a series of CenterLine posts by our 2019 summer interns and their Center mentors based on their project and the assessment and accountability issues they addressed this summer. Zachary Feldberg, from the University of Georgia, worked with Scott Marion on a systematic review of states’ large-scale educational assessment policies.

Improving Equity: What Makes Accountability Indicators Meaningful? 

This is the third in a series of CenterLine posts by our 2019 summer interns and their Center mentors based on their project and the assessment and accountability issues they addressed this summer, and the second post by Nikole Gregg. Nikole is from James Madison University and worked with Brian Gong on how states have attempted to promote equity through the design of their ESSA accountability systems.

Improving Equity: Understanding State Identification Systems under ESSA

This is the second in a series of CenterLine posts by our 2019 summer interns and their Center mentors based on their project and the assessment and accountability issues they addressed this summer. Nikole Gregg from James Madison University worked with Brian Gong on how states have attempted to promote equity through the design of their ESSA accountability systems.

Creating a Framework for Assessment Literacy for Policymakers

This is the first in a series of CenterLine posts by our 2019 summer interns and their Center mentors based on their project and the assessment and accountability issues they addressed this summer. Brittney Hernandez from the University of Connecticut worked with Scott Marion on assessment literacy for policymakers.

Being Innovative Under ESSA’s Innovative Assessment Demonstration Authority

In my previous glass-half-empty post, I outlined my considerable reservations about the Innovative Assessment Demonstration Authority (IADA) component of the Every Student Succeeds Act (ESSA).

Balancing Skepticism and Utility in Machine Scoring

Without a doubt, the public is skeptical about using machine scoring for examinees’ written responses. This skepticism makes sense because we know that machines do not score all elements of writing equally well. Machines do not “understand” creativity, irony, humor, allegory, and other literary techniques, which opens them to criticism for falling short when evaluating these more subtle qualities of writing.

An Education Innovator’s Dilemma

I was an early supporter and promoter of the Innovative Assessment Demonstration Authority (IADA) under the Every Student Succeeds Act (ESSA), but I now have serious doubts about the viability of the IADA and its ability to support deep and meaningful educational reform.

Part 3: What Do I Need to Know About Competency-Based Grading?

This post is the last in my three-part series on competency-based grading. In Part 1, I described the key similarities and differences between traditional, standards-based, and competency-based grading practices.

Part 2: What Do I Need to Know About Competency-Based Grading?

This post is Part 2 of a three-part series on competency-based grading. Part 1 described the key similarities and differences between traditional, standards-based, and competency-based grading practices. 

Part 1: What Do I Need to Know About Competency-Based Grading?

This is the first in a three-part series on competency-based grading. I was motivated to write this series because of recent conversations about competency-based grading within my children’s school district. I’ve noticed confusion about terms, misinformation, propaganda, and a general lack of high-quality resources on the subject. My goal for this series is to help guide honest and transparent conversations about key issues and best practices by: 

New & Noteworthy

Recent CenterLine Blog Posts

Understanding and Mitigating Rater Inaccuracies in Educational Assessment Scoring

Testing experts know a lot about how to conduct scoring of students’ written responses to assessment items. Raters are trained under strict protocols to follow scoring rules accurately and consistently. To verify that raters did their job well, we use a few basic score quality measures that center on how well two or more raters agree. These measures of agreement are called inter-rater reliability (IRR) statistics, and they are widely used, perhaps in part because they are easy to understand and apply. 
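
To make the agreement measures mentioned above concrete, here is a minimal sketch (not drawn from the post itself) of one widely used IRR statistic, Cohen’s kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. The rater data are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of responses scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's score frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring ten essays on a 0-4 rubric (hypothetical data).
a = [2, 3, 3, 1, 4, 2, 0, 3, 2, 1]
b = [2, 3, 2, 1, 4, 2, 0, 3, 3, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.737
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance. In practice, a library implementation such as scikit-learn’s `cohen_kappa_score` is preferable to a hand-rolled version.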

Is Our Work in Educational Assessment and Accountability Helping to Improve Student Learning and the Student Experience?

As 2019 drew to a close, I had the chance to reflect on the conversations I’ve had with many of my colleagues throughout the year. One topic that stands out is frustration about the minimal value-add of work focused on large-scale assessment and state-level accountability systems to the student experience.

Developing a Better Understanding of the Role of Assessment and Accountability in Improving Student Outcomes

Teaching to the Test

In his CenterLine post, Can Educational Assessment Improve Teaching?, Executive Director Scott Marion invited readers to share their thoughts on the complex but critical issue of identifying ways that assessments can be used to improve teaching quality. In this guest post, we share Kadie Wilson’s response to Scott’s invitation. Kadie Wilson is Assistant Superintendent in New Hampshire School Administrative Unit #9.