It’s Been 20 Years. What Have We Learned?

Sep 28, 2018

Key Takeaways from the Reidy Interactive Lecture Series in its 20th Year

Asking what we have learned is a fitting question with which to begin the twentieth convening of the Reidy Interactive Lecture Series. From its start, the philosophy of the Center for Assessment has been that we gain so much more from asking talented and committed professionals to reflect with us on that question than from simply standing in front of them and telling them what we have learned.

Our goal is to learn from each other by sharing what each of us has learned about important problems or challenges in educational assessment and accountability. I come to RILS each year not to provide or even to walk away with ready-made solutions, but rather to hear viewpoints, experiences, or approaches that will challenge my current thinking and lead to better solutions.

In that spirit, we begin the RILS 2018 discussion with a quick look back over the last 20 years. We want to reflect on where we have been and what we have learned, so that together we can leverage the lessons of the past to improve the impact of assessment and accountability practices as we move forward.

A review of topics and titles reveals that three words have dominated RILS presentations over the years. The first two, assessment and accountability, are no surprise. The third is system, reflecting the ongoing quest for those elusive balanced assessment and accountability systems that exhibit comprehensiveness, coherence, and continuity.

The history of RILS, of course, is a window into the history of the Center for Assessment. The focus on systems that go well beyond the scores and achievement levels produced by large-scale state assessments is evident in a snapshot of a few of the projects that were centerpieces of the Center’s early years:

  • Improved Schools Project in North Carolina
    • As part of a larger project evaluating the North Carolina ABC Accountability System, the Improved Schools Project included case studies of North Carolina schools to determine how much an effective school can improve achievement from one year to the next, and whether gains in accountability scores are in fact the result of real change at the school level.
       
  • Maine Local Assessment System
    • In support of the Maine Learning Results, the state's new content standards and expectations for all students implemented in 1997, the Maine Department of Education (with the backing of several in-state and out-of-state partners) undertook the design and development of the state-supported Maine Local Assessment System.
       
  • Wyoming Body of Evidence System
    • The Body of Evidence system offered a locally controlled alternative to high-stakes testing as a means to implement standards-based high school graduation requirements. The goal was for the student and school to compile a “body of evidence” demonstrating that the student knows enough to graduate from high school.
       
  • Rhode Island Proficiency-Based Graduation Requirements
    • Rhode Island was one of the first states in the country to establish a proficiency-based diploma. Its proficiency-based graduation requirements and criteria for local assessment systems have been cited as the foundation for the design of current competency-based systems.
       
  • New Hampshire Enhanced Assessment Grant – Knowing What Students with Significant Cognitive Disabilities Know
    • This federally funded project was designed to define and disseminate technical criteria for alternate assessments through a research and practice partnership. A primary goal of the project was to enhance fundamental knowledge of what the results of good teaching and learning look like for students with significant disabilities, which in turn would inform the collection of evidence of standards-based learning that could yield valid and reliable inferences for accountability and school improvement purposes.

It is not difficult to identify the through line that connects those early Center for Assessment projects to our current projects and to the topics we will discuss at RILS this year. No Child Left Behind, Race to the Top, and now the Every Student Succeeds Act have each in their own way shined a bright light on the limited (but important) role that large-scale assessment alone can play in improving student outcomes. Improving educational assessment to support instruction and enhance student learning requires systemic solutions to systemic problems.

This week, we come together at RILS for our annual check-in on the question, “What have we learned?”, seeking answers that will help us craft those systemic solutions. To get the ball rolling on that discussion, in the grand tradition of RILS past, I offer three observations and claims for your consideration.

1. You cannot improve educational assessment simply by improving assessment instruments.

Over the last 20 years, we have learned a great deal and made tremendous advances in building, administering, and scoring assessments: high-quality assessments aligned to college- and career-ready standards, adaptive assessments, and, to a certain degree, performance-based assessments.

Building a better assessment, however, is not sufficient to increase student learning through more meaningful educational assessment and accountability practices. 

We must devote at least as much, if not more, of our attention, effort, and resources to understanding and enhancing how the assessments we build will be used and who will use them.

2. You cannot use educational assessments to improve student learning without people who know how to use them.

A tool in the hands of a master produces works of art such as beautiful paintings, magnificent structures, or amazing meals. However, even the best tool in the hands of someone unskilled or unprepared to use it may produce nothing more than a hole in the wall.

We must improve assessment literacy at all levels of the system: psychometricians, policy makers, administrators, teachers, students and parents, and the general public.

Accomplishing that goal will require an understanding of what assessment literacy means at each of those levels, what the barriers are to acquiring and using it, and how best to improve it.

3. You cannot improve assessment literacy through books, modules, professional development programs, videos, podcasts, or courses on assessment and assessment literacy.

If you elect and select federal and state policy makers who are skilled in the principles of policymaking, including evaluating, validating, and adjusting their policy decisions, you will have assessment-literate policy makers.

If you train district and school administrators to be skilled program and personnel evaluators, and instructional leaders, you will have assessment-literate district and school administrators.

If you teach teachers to teach, truly teach, you will have assessment-literate teachers.

Taken together, those three observations bring me back nearly 20 years to the conclusions reached by the authors of Knowing What Students Know (NRC, 2001). Among the greatest roadblocks to real change are disciplinary boundaries, established practices, and “existing social structures in which familiar assessment practices are now deeply embedded and thus difficult to change.” If 2018 has taught us anything, however, it is that existing social structures and deeply embedded practices can be challenged and changed.

This week, as we discuss the design and validation of balanced assessment and accountability systems, let us also ponder what it will take to change the status quo and make it possible to increase student learning through more meaningful educational assessment and accountability practices.
