CenterLine

Michelle Boyer

Educational Assessment, Validity, Reliability, Fairness

2020 Summer Internships: A Little More Certainty in an Uncertain Future

Center Staff and Interns Will Address Pressing Issues in Educational Assessment and Accountability

Although so much about the future seems uncertain, we are excited this month to bring a little normalcy into our world by addressing key questions and challenges in educational assessment and accountability through our 2020 summer internship program. This summer, the Center welcomes four advanced doctoral students who will work with the Center’s professionals on projects that will have direct implications for state and national educational policy. Each intern will work with a Center mentor on one major project throughout the summer.

read more

COVID-19 Response, Assessment, Accountability

An Assessment Response to Anticipated Learning Gaps

Implications of School Closures on Assessment Needs

According to Education Week, “as of March 23, 2020, 7:31 p.m. ET: 46 states have decided to close schools. Combined with district closures in other states, at least 123,000 U.S. public and private schools are closed, are scheduled to close, or were closed and later reopened, affecting at least 54.8 million students”.

read more

Educational Assessment, Scoring, Automated Scoring, Validity, Test Score Reliability, Assessment

Understanding and Mitigating Rater Inaccuracies in Educational Assessment Scoring

Rater Monitoring with Inter-Rater Reliability May Not Be Enough for Next-Generation Assessments

Testing experts know a great deal about scoring students’ written responses to assessment items. Raters are trained under strict protocols to apply scoring rules accurately and consistently. To verify that raters did their job well, we use a few basic score quality measures that center on how well two or more raters agree. These measures of agreement are called inter-rater reliability (IRR) statistics, and they are widely used, perhaps in part because they are easy to understand and apply.
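As a minimal illustration of what such agreement measures look like in practice, the sketch below computes two common IRR statistics, percent exact agreement and Cohen’s kappa, for a pair of raters. The score vectors are hypothetical and for illustration only; operational programs typically use dedicated statistical software rather than hand-rolled code like this.

```python
# Minimal sketch of two common inter-rater reliability (IRR) statistics.
# The rater scores below are hypothetical, for illustration only.
from collections import Counter

def percent_agreement(a, b):
    """Share of responses where both raters assigned the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement
    expected by chance from each rater's score distribution."""
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    p_e = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters scoring the same eight responses on a 1-4 rubric.
rater_1 = [2, 3, 1, 2, 4, 2, 3, 1]
rater_2 = [2, 3, 2, 2, 4, 1, 3, 1]

print(percent_agreement(rater_1, rater_2))        # 0.75
print(round(cohens_kappa(rater_1, rater_2), 2))   # 0.65
```

Note how kappa is lower than raw agreement: it discounts the matches the two raters would produce by chance alone, which is one reason raw percent agreement can paint an overly rosy picture of score quality.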

read more

ESSA

Balancing Skepticism and Utility in Machine Scoring

Understanding How Machine Scoring Can Be Used Now, and What We Need to Do to Expand Its Usefulness for the Future

Without a doubt, the public is skeptical about using machine scoring for examinees’ written responses. This skepticism makes sense because we know that machines do not score all elements of writing equally well. Machines do not “understand” creativity, irony, humor, allegory, or other literary techniques, which opens them to criticism that they cannot adequately evaluate these subtler qualities of writing.

read more

Assessment, Accountability

Beyond Faster and Cheaper, Could Automated Scoring Produce Better Scores?

Using Advances in Technology to Improve the Quality of Educational Assessment

Earlier this month, the Center for Assessment held its 15th annual colloquium, the first named in honor of Center co-founder Brian Gong. The Brian Gong Colloquium is a two-day meeting arranged by the Center for Assessment to discuss select topics of importance in testing and accountability with recognized experts.

The 2019 meeting focused on the present and future use of learning analytics, machine learning, and artificial intelligence in educational assessment, and highlighted topics including:

read more