Nathan Dadey is interested in the design, scaling, and use of educational assessments, particularly assessments used for accountability purposes. Through his work, he aims to produce methodological and applied research that contributes to improved understanding and use of assessment results in policy contexts.
Nathan's work focuses on using psychometric and statistical methods to address practical problems. For example, he has examined interim and summative assessments jointly in order to suggest designs for comprehensive systems of assessment, as well as ways in which statistical models can leverage interim and summative data to provide unique insights into student learning. He has been involved in the design of a state's pending Next Generation Science Assessment System and in the creation of an Accountability Systems and Reporting (ASR) policy brief, sponsored by the State Collaboratives on Assessment and Student Standards (SCASS), that provides recommendations regarding ESSA's provision on the use of interim assessments for summative purposes. On the modeling side, he has examined the modeling of classroom and summative assessment data using diagnostic classification models and Bayesian networks.
His other work follows a similar vein and includes providing recommendations on device comparability for standardized assessments, examining the dimensionality of an alternate assessment for a consortium of states, and quantifying the impacts of interruptions to online testing.
As a doctoral student, Nathan worked on a number of projects, including several state-commissioned validity studies with his advisor, Dr. Derek Briggs, examining approaches to quantifying academic growth. He has led teams of educators in the development, revision, and standard setting of a district's interim assessments in traditionally non-tested subjects, and as part of this process scaled all of the district's interim assessments using item response theory. He has also conducted research in the areas of subscores, vertical scaling, and non-cognitive assessment.
Nathan received a Ph.D. from the University of Colorado Boulder with a concentration in Educational and Psychological Measurement.
Nathan Dadey focuses on using psychometric and statistical methods to address practical problems, including issues related to combining interim assessment data, the dimensionality of alternate assessments, subscores, and vertical scales.
Recent and Relevant Publications
DePascale, C., Dadey, N., & Lyons, S. (2016, June). Score comparability across computerized assessment delivery devices. Washington, DC: Council of Chief State School Officers (CCSSO).
Maul, A., Penuel, W. R., Dadey, N., Gallagher, L. P., Podkul, T., & Price, E. (2016). Measuring experiences of interest-related pursuits in connected learning. Educational Technology Research and Development. Advance online publication. doi:10.1007/s11423-016-9453-6
Briggs, D. C., & Dadey, N. (2015). Making sense of common test items that do not get easier over time: Implications for vertical scale designs. Educational Assessment, 20, 1-22.
Dadey, N., & Briggs, D. C. (2012). A meta-analysis of growth trends from vertically scaled assessments. Practical Assessment, Research & Evaluation, 17(14). Available online: http://pareonline.net/getvn.asp?v=17&n=14