Program Evaluations under COVID-19

Apr 29, 2020

A Guest Post by the Joint Committee on Standards for Educational Evaluation

The Center is pleased to host this post, prepared by Juan D’Brot with contributions from his colleagues on the Joint Committee on Standards for Educational Evaluation: Brad Watts, Julie Morrison, and Jennifer Merriman. The JCSEE’s mission is to develop and promote standards for conducting high-quality evaluations through the use of the Program Evaluation Standards. The Joint Committee recruits high-visibility member organizations to help promote the integration of high-quality evaluation methodologies into existing and emerging organizational initiatives.

Despite it quickly becoming an overused phrase, we are truly witnessing an unprecedented global event in the COVID-19 pandemic. For those of us in the research, evaluation, and measurement communities, it can be easy to lose sight of the opportunities we have to improve others’ understanding of the virus’s impact. As individuals, our focus is rightly on taking care of each other, supporting healthcare workers, and doing our part to minimize the spread of COVID-19. Professionally, however, many of us have an obligation to help understand the effects of extended social distancing and to identify strategies that can help mitigate the impact of isolation in each of our industries.

Below, we offer a few resources that may be worth applying generally to our work, organized by the sponsoring organizations that are members of the Joint Committee on Standards for Educational Evaluation (JCSEE). The hope is that we each consider how to apply our expertise to understanding how this pandemic affects our work and our lives.

Public Health (Sponsoring Organizations: CDC, APA)

The Centers for Disease Control and Prevention (CDC) is on the front lines studying and monitoring the impact of COVID-19. Its focus on program evaluation (see the CDC’s program evaluation framework here) provides the following resources, which can help us better understand the impact of COVID-19 and other public health priorities:

  1. A framework for program evaluation in public health.
  2. Guidelines for improving the use of program evaluation for maximum health impact.
  3. Developing and using logic models.
  4. Identifying and measuring indicators in support of program evaluation.

Evaluators in the public health sector should consider the short- and long-term public health impacts that will extend beyond the virus itself (e.g., stress, isolation, shifts in social norms) and how current and ongoing program evaluations may need to be revised in light of disruptions due to social distancing, self-isolation, lockdowns, or mass quarantine. This will require reconsidering stakeholder viewpoints, evaluation descriptions and designs, the evidence we collect, and the conclusions we draw. This unprecedented event is also a major opportunity to identify and share lessons learned that will likely apply in the future.

Education (Sponsoring Organizations: NCME, AERA, CREATE, UCEA, NASP)

It is important to recognize the systems-based design of education. NCME representative Juan D’Brot recently wrote here about the multi-layered nature of schools, which support students through the provision of a safe environment, socialization opportunities, and academic preparation for the real world.

As state and national leaders deal with the ramifications of extended school closures, states have turned to federal waivers of student testing and school accountability. Assessment and accountability data typically help us identify performance issues, resource inequities, potential opportunity gaps, and overall progress against state standards; these data will not be available for this school year, and interpretations will be strained next year due to the "COVID slide." State leaders are therefore right to turn their focus away from summative statewide testing and toward the immediate needs of schools and students. Herein lies an opportunity to leverage this challenging time to build the capacity of the educational system to support virtual and distance learning, creatively provide services (e.g., meals, IEP-related needs, screenings), and support educators in doing so. Our NCME representative offers additional insight in his organization’s CenterLine post, with a focus on non-state summative assessments.

Program Evaluation (Sponsoring Organizations: AEA, CES, CSSE, the Evaluation Center at WMU)

Program evaluations are grounded in the contexts in which they are implemented. As the environments that affect our programs change, our evaluations will likely need to change with them. It is safe to assume that many of our pre-coronavirus assumptions will need to be revisited, revised, or discarded altogether. However, this global event can also provide us with an opportunity to build our capacity to be flexible, nimble, and systems-oriented. We recommend that evaluators keep in mind the Program Evaluation Standards (PES) as a way to approach the changing environments in which we now work.

The standards are organized into five major categories: (1) Utility, (2) Feasibility, (3) Propriety, (4) Accuracy, and (5) Evaluation Accountability. This framework is a great starting point for considering the impact of extended pauses in programs, major changes to interventions, or disruptions in data collection or availability (e.g., an interrupted time series; see the sketch after the list below). Program evaluators should consider the following:

  1. Utility: The utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs. We must work with our stakeholders to understand what changes are being made to programs and how those changes will affect process and outcome data.
  2. Feasibility: The feasibility standards are intended to increase evaluation effectiveness and efficiency. As the duration of social restrictions increases, it may be important to revisit how we are defining the evaluand and whether the original design remains effective moving forward.
  3. Propriety: The propriety standards support what is proper, fair, legal, right, and just in evaluations. We will need to engage with our program designers and implementers to ensure that we can be maximally helpful while maintaining the propriety of our evaluations.
  4. Accuracy: The accuracy standards are intended to increase the dependability and truthfulness of evaluation representations, propositions, and findings, especially those that support interpretations and judgments about quality. We must take seriously the impact of suspended programs or missing data on our design, analysis, and interpretation of results. We can still provide great value to our sponsors by focusing on process evaluations and helping them understand the impact of disruptions, when appropriate and feasible.
  5. Evaluation Accountability: The evaluation accountability standards encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products. It will be critical to document fully and transparently how program disruptions change our negotiated purposes, designs, and procedures, and how those differences affect outcomes (if they can still be measured).
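
For evaluators facing the kind of data disruption mentioned above, a segmented regression on an interrupted time series is one common way to estimate the level and slope changes a disruption introduces. The sketch below is a minimal illustration in Python using the statsmodels library; the synthetic data, variable names, and disruption point are assumptions for demonstration only, not prescriptions from the PES.

```python
# A minimal interrupted time-series (segmented regression) sketch.
# All names and the synthetic data here are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly outcome for a program, disrupted at month 12
# (e.g., services paused when social distancing began).
rng = np.random.default_rng(0)
months = np.arange(24)
disruption_start = 12

df = pd.DataFrame({
    "month": months,
    # 1 after the disruption begins, 0 before
    "post": (months >= disruption_start).astype(int),
    # months elapsed since the disruption began (0 before it)
    "months_since": np.clip(months - disruption_start, 0, None),
})
# Synthetic outcome: gentle upward trend, then a level drop and slope change.
df["outcome"] = (
    50 + 0.8 * df["month"]
    - 6 * df["post"] - 0.5 * df["months_since"]
    + rng.normal(0, 1.5, len(df))
)

# Segmented regression: pre-existing trend, level change, and slope change.
model = smf.ols("outcome ~ month + post + months_since", data=df).fit()
print(model.summary().tables[1])
```

In this setup, the coefficient on post captures the immediate level shift at the disruption and months_since captures the change in trend afterward; in a real evaluation, both would be interpreted alongside a qualitative record of what actually changed in the program.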

By using the PES as a guide, evaluators can systematically and procedurally work through the implications of major disruptions to society—like COVID-19—today and in the future. For more information, see an overview of the Program Evaluation Standards here.

We hope that everyone is staying safe and healthy during this trying time. 
