
Improving Accountability Through Authentic Community Engagement

Sep 20, 2023

Five Common Pitfalls States and Districts Should Avoid

The nation’s K-12 students need more support than ever in the wake of a pandemic that cost them countless learning opportunities. State and district education leaders have a powerful tool at their disposal: accountability systems that can identify focal areas for support and improve students’ access to high-quality, equitable instruction. For school accountability to work that way, though, it must draw on the wisdom and needs of the people it’s meant to serve: educators, families, and community members.

A thoughtful process of gathering that input is important to a well-functioning accountability system, as discussed in this recent Center paper. But if educators’ and community members’ feedback is not clearly reflected in the design of the system, they might feel frustrated and disempowered, and wonder why they were asked to participate in the first place. The resulting systems might also fail to generate the data needed for school improvement because they don’t consider local needs and context. 

Here are five ways states and districts often fall short on engagement: 

  • Viewing it as a compliance activity—to meet the requirements of the Every Student Succeeds Act, for instance—rather than as a beneficial part of the design process
  • Gathering input sporadically, with no clear strategy for using it
  • Using data-gathering techniques that prioritize efficiency over authenticity. They might rely on surveys rather than focus groups, for example, missing an opportunity to gather more nuanced perspectives. 
  • Failing to let people know how their input was—or wasn’t—used. It isn’t enough to afford people a voice in the process; they must also feel a sense of agency about how their input is incorporated. State and local education agencies should explain their decisions to those who provided feedback, especially if the accountability designs do not reflect all their suggestions. 
  • Approaching input collection with fixed notions about the structure of the system. They consider it a given, for instance, that the system will feature five indicators and use measures scaled from 0 to 100, rather than asking community members which indicators and measures they consider valuable and how results should be reported to be meaningful. 

With these kinds of limitations, efforts to engage communities can end up being more a symbolic data-gathering activity than a thoughtful codesign process that elicits a substantive exchange of ideas.

Better Community Engagement in School Accountability

One potentially useful source of guidance on effectively engaging communities comes from design-based research (DBR), an approach to research and development that prioritizes close collaboration between researchers and the people the work is intended to benefit. It also places a high value on iterative design: piloting and modifying design tools and ideas in real-world contexts, and paying close attention to the impact and utility of the results. 

Fields ranging from biology to education have incorporated DBR in the development of products or interventions. The approach also aligns with recent calls for education R&D that is better informed by users’ needs. 

Applying a DBR approach to accountability system design might not seem intuitive at first. But its underlying principles offer ways to improve the quality and impact of engagement efforts, including:

  • Engagement plans that are differentiated by group, describe the desired contribution of each group, and outline a process for collecting information that makes it both easy and enjoyable for them to participate
  • Concrete plans for incorporating feedback that can help states and districts determine how to express support for system features that communities prioritize, address conflicting feedback, and explain why they didn’t use some of the input
  • Iterative cycles of development, testing, feedback, and refinement, to validate that the intent of the system is being realized, and
  • Ongoing, authentic engagement to ensure that the community’s priorities are accurately reflected in the system design and that they can see this reflection.  

Real-World Challenges to Authentic Community Engagement

Adherence to these principles would help states and districts address many of the challenges that arise from traditional engagement activities, which often rely on a standardized process for collecting feedback. These challenges include juggling competing priorities, navigating a difficult political climate, grappling with feedback that invokes politically charged language or ideas, such as equity or social and emotional learning, and managing the frustration of those who feel their feedback isn't genuinely considered or used. 

To see how using design-research principles could play out differently, consider a state that includes a common measure of school climate within its ESSA accountability system. State leaders have heard from principals and teachers that the survey doesn’t include the types of information they need to understand and improve their schools’ climate.

The state team decides to meet with three separate focus groups—parents and students, teachers, and school leaders—to understand the school features that would create and demonstrate a positive school climate. Each discussion would have a unique focus, based on what each group needs from a school-climate measure.

Because the results are intended to signal how climate can be improved, the state team's discussions with school leaders would focus not only on the outcomes they'd like to see but also on the conditions and resources they'd need to achieve them.

Discussions with parents and students, on the other hand, would focus on the school activities and interactions they think would make students feel engaged, confident, and well supported. Educators would discuss the resources and supports they need for a positive work environment and classrooms that support learning and engagement. With this feedback in hand, the state team would modify the survey and draft initial plans for reporting.

Adjusting Again and Again, Based on Feedback

In their next testing/feedback cycle, state officials would share the revised survey and proposed reporting strategy with these groups, describe how their feedback is reflected in the revisions, solicit their views on the items (How’s the wording? Did we capture the right information?) and revise them again. 

The state team would then gather another round of input from these groups, sharing the revised survey along with a sample report. This round of feedback might result in additional small edits to the survey but would focus on making the reports useful, easy to understand, and reflective of the different groups’ needs and priorities.  

Next, state officials would field-test the survey and use the resulting data to make additional revisions to the instrument and reports. In this final round of feedback, they’d identify potential approaches to school improvement that address the needs identified in the survey and account for the important contextual factors—such as suspension rates or teacher turnover—that surfaced in discussions with school leaders. 

We don’t intend this example to be prescriptive, but to illustrate how continuous cycles of testing and refinement—with genuine input from families, educators, and community members—can create an accountability system that reflects the needs and priorities of the groups who will be affected by it. 

Adopting design-based research principles could also promote a shared sense of accountability in which state and local leaders, educators, and families share responsibility for creating the conditions to advance opportunities for all learners.

Laura Hamilton is the senior director for education measurement and assessment at the American Institutes for Research.
