The Center for Assessment’s 2019 Reidy Interactive Lecture Series (RILS)

RILS offers a unique, collaborative learning opportunity for educators and assessment professionals across the country. Hear from some of our multi-year attendees about what makes the conference so special and how it helps support better assessment and accountability practices nationwide. This year’s conference focuses on Improving the Selection, Use, and Evaluation of Interim Assessments. Come join us in lovely Portsmouth, NH for a terrific learning experience—September 26-27, 2019.

Learn more and register

How Much is Enough? 

Many schools have turned to competency-based education to meet both equity and excellence goals. Competency-based education requires students to demonstrate mastery of key knowledge and skills rather than merely earn a passing score “on average.”

Local assessment data are often used to evaluate student mastery of identified competencies. Many measurement challenges arise when assessments are used to support decisions about students’ competence. This post focuses on one of them: sufficiency.
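
As a quick worked example of why “on average” can mislead, here is a minimal sketch; the competency names, scores, and the 70-point cut are invented for illustration and are not drawn from any particular program:

```python
# Hypothetical example: a passing average can hide an unmastered competency.
# The competency names, scores, and 70-point cut score are invented.

scores = {"number_sense": 95, "geometry": 90, "data_analysis": 40}
CUT = 70  # assumed per-competency mastery threshold

average = sum(scores.values()) / len(scores)            # 75.0
mastered_all = all(s >= CUT for s in scores.values())   # False: data_analysis = 40

print(f"Average score: {average:.1f} -> passes a {CUT}-point bar 'on average'")
print(f"Mastered every competency: {mastered_all}")
```

The student clears the bar on average yet never demonstrates mastery of one competency, which is exactly the gap competency-based reporting is meant to expose.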

When It Comes to Getting Summative Information from Interim Assessments, You Can’t Have Your Cake and Eat It Too

Nathan Dadey, Associate, Center for Assessment

“You can’t have your cake and eat it too” is a well-known idiom. In educational measurement, it captures the dilemma posed by the requirement for a single summative score, and might read something like: “you can’t get summative scores for accountability purposes without the secure administration of carefully constructed forms in a defined window.”

Improving Accountability: Where Do We Go From Here?

By Chris Domaleski, Damian Betebenner, and Susan Lyons

In recent years, assessment and accountability have become charged terms to many. In fact, school accountability systems, influenced by results from standardized achievement tests, are among the most contentious aspects of contemporary education policy. 

But how did we get here, and where do we go next? This ambitious topic is one of several we are poised to tackle at the Center’s annual Reidy Interactive Lecture Series (RILS) on September 27-28, 2018.

Data in Schools: Understanding What It Is, How It’s Used, and How We Can Improve

Discussions of data use in schools often lead to two commonly heard refrains:  

  1. “Educators are drowning in an ocean of data”
  2. “Schools are a data desert”

When a situation is characterized by such polar opposite viewpoints, it is a signal that there are fundamental challenges that must be understood and overcome. In this case, if there are data in schools, why aren’t those data being used effectively (or at all) by teachers to support their instructional decision-making? What are the challenges?

A Tricky Balance: The Challenges and Opportunities of Balanced Systems of Assessment

The seminal publication Knowing What Students Know: The Science and Design of Educational Assessment (NRC, 2001) crystallized the call for balanced systems of assessment. Yet almost 20 years have passed, and there are very few examples of well-functioning systems, particularly systems that incorporate state summative tests. Why? In spite of recent efforts to articulate principles of assessment systems, creating balanced assessment systems is really hard!

The Center at NCSA 2018

State assessment teams, assessment industry staff, and other assessment specialists gather each June at the CCSSO National Conference on Student Assessment. Historically, the annual conference has provided an opportunity for the Center team and our partners to share innovative solutions and our latest thinking on the most pressing assessment and accountability issues of the day. This year, seven Center team members participated in eleven sessions over the three-day conference: Chris Domaleski, Carla Evans, Brian Gong, Leslie Keng, Erika Landl, Scott Marion, and Joseph Martineau.

The Center at 20: Reliability of No Child Left Behind Accountability Designs

This is the first in a series of posts highlighting key pieces of work from the Center’s first twenty years.  Each post will feature a document, set of tools, or body of work in areas such as large-scale assessment, accountability systems, growth, educator evaluation, learning progressions, and assessment systems. In keeping with the Center’s 20th anniversary theme, Leveraging the Lessons of the Past, our goal is to apply the lessons learned from this past work to help us improve assessment and accountability practices for the future.

When It Comes to School Ratings, Meaning Matters

Letter grades are a popular way to describe performance. I’m referring to those same letter grades you received in school: A to F. We all know that the coveted A is “superb,” and an F warns that performance is completely deficient. What’s a C? Perhaps it communicates “good enough” (but not great), or possibly it means “average.” Should we worry that those are often two different things?

A Look Back and a Look Ahead After 20 Years of Assessment and Accountability Work

It’s been 20 years, and everyone at The Center for Assessment is excited to celebrate this milestone anniversary with a very special Reidy Interactive Lecture Series (RILS). 

The Need for Program Evaluation to Support Accountability Implementation

Accountability systems are supposed to incentivize behavior that promotes equity in educational opportunity and leads to positive student outcomes. But how do we really know? Even the best designs still carry a burden of proof. Applying program evaluation principles to school identification is a powerful way to examine an accountability system’s impact, usefulness, and relevance. Program evaluation facilitates the collection, use, and interpretation of the right information to improve or understand a system and its impact.

The Reality of Innovation in Educational Assessment

This post is a follow-up to my previous one on the realities of innovation in large-scale educational assessment. In Part 1, I defined innovation as a change that not only improved an existing process or product, but also solved a problem or met a need and, therefore, was adopted and used; that is, it changed the way things were done in the field.

The Reality Faced by Innovators of Educational Assessments

The Innovative Assessment Demonstration Authority (IADA) provision of the Every Student Succeeds Act (ESSA) ostensibly offers states the flexibility needed to “establish, operate, and evaluate an innovative assessment system” with the goal of using that educational assessment to meet the ESSA academic assessment and statewide accountability system requirements. 

How Do We Improve Interim Assessment?  

In the seacoast region of New Hampshire, we are enjoying the kind of crisp early autumn temps that might call for a light sweater, and the foliage reveals just a hint of the color that draws ‘leaf peepers’ to the region each year. But it wasn’t just the postcard-perfect scene that drew more than 80 education and assessment leaders from around the country to Portsmouth on September 26-27, 2019. The Center’s annual Reidy Interactive Lecture Series (RILS) offered an opportunity for those assembled to learn and contribute ideas for improving interim assessment.