Prior to joining the Center for Assessment as a Senior Associate, Will Lorié, Ph.D., held senior scientist and business development positions at McGraw-Hill and ETS. As a senior education specialist at the World Bank, he helped build client countries' educational assessment capacity. In 2009, he was awarded a Department of Defense contract to develop and validate reading and listening comprehension tests for nine languages.
Will led education research and evaluation projects at Questar, formulating a framework for scoring technology-enhanced test items. At the Pearson Center for NextGen Learning & Assessment, Will concept-tested speech-based AI engines for language learning, an effort that led to the development of a prototype speaking and listening language learning engine under a 2018 Small Business Innovation Research award from the Institute of Education Sciences.
More recently, Will has consulted with Wall Street English, AIR, and the Council for the Accreditation of Educator Preparation (CAEP) on issues of assessment construction, analysis, and policy. His current work examines information trade-offs between global scores and subscores, the effects of different approaches to scoring technology-enhanced items, and the formulation of score comparability within a unified framework.
Will serves on the technical advisory committee of the Collaborative for the Alternate Assessment of English Language Proficiency (CAAELP), a multi-organization project led by the State of Iowa and UCLA's National Center for Research on Evaluation, Standards, and Student Testing (CRESST) to develop an alternate summative assessment of English language proficiency to be administered to English learners with significant cognitive disabilities.
Will is a member of AERA (since 2001), NCME (since 2001), the Psychometric Society (since 2008), and APA (since 2013). He is chair of the NCME Publications Committee. Will obtained a Ph.D. in Education from the Stanford Graduate School of Education and an M.S. in Statistics from Stanford University.
Will Lorié, Ph.D., is a Senior Associate at the Center for Assessment. His work focuses on validating subtest-level reporting, optimizing scoring for technology-enhanced items, and formulating score comparability across different testing conditions within a unified, evidence-centered design framework.
Recent and Relevant Publications
Lorié, W. (forthcoming). Review of the IPT Oral English Test. Mental Measurements Yearbook. Buros Center for Testing.
Lorié, W. (2019). Measures of academic proficiency, version 4. Council for the Accreditation of Educator Preparation.
Lorié, W. (2017). Supporting diagnostic inferences using significance tests for subtest scores. In L. A. van der Ark, M. Wiberg, S. A. Culpepper, J. A. Douglas, & W. Wang (Eds.), Quantitative Psychology (Springer Proceedings in Mathematics & Statistics, Vol. 196). Springer. https://doi.org/10.1007/978-3-319-56294-0_6
Lorié, W. (2016). Automated scoring of multicomponent tasks. In Y. Rosen, S. Ferrara, & M. Mosharraf (Eds.), Handbook of Research on Technology Tools for Real-Life Skills Development (pp. 627–658). Hershey, PA: IGI Global.