Using School and Classroom Climate Surveys to Improve Learning Environments (Part 2)

Jan 21, 2026

Tackling key decisions related to climate measurement

This is the second in a two-part series about using school climate surveys to gain insight into students’ experiences. Read Part 1 here.

As policymakers consider how to measure school performance and identify data that help educators support students’ academic development and well-being, school and classroom climate surveys offer a valuable tool. As I wrote last week, climate surveys are typically a low-burden way to produce information that is highly relevant to improving student outcomes and that allows educators to take action to modify students’ learning conditions.

In this second post, I offer more specific guidance for states and districts on four key questions:

  1. Who should you survey?
  2. How can you address diverse needs and contexts among survey respondents?
  3. Should you adopt, modify, or create a survey?
  4. How should you report results to support appropriate use? 

Who Should You Survey?

For most purposes, it’s important to gather climate data from as broad a range of constituencies as possible. Students are arguably the most essential respondents, as they are the ultimate beneficiaries of high-quality learning environments. But many states and districts also survey teachers, other school staff, and families. Each group provides a distinct perspective; teachers and other staff can speak to working conditions, while families can address their engagement with schools.

That said, surveying multiple groups increases burden, and achieving high response rates among families can be especially difficult. When resources are limited, it might be most important to focus on students and teachers, while using other methods, such as conversations, to gather input from families or other groups.

Addressing Diverse Needs and Contexts of Survey Respondents

Useful survey results require questions that are designed to elicit honest, accurate responses from diverse respondents. Three considerations are especially critical here.

Students’ grade level. Surveys are generally appropriate beginning in grade 4 because of their reading and comprehension demands. Younger students can still provide valuable information, but it’s best to gather it in other ways, such as individual interviews. Survey developers and users must ensure that the phrasing of questions is age appropriate.

Accessibility and linguistic inclusiveness. To reflect the full range of experiences and opinions in a school or classroom, it is important to offer translated versions and make the instruments accessible to students with disabilities. Oregon provides an example; the state administers the Student Educational Equity Development (SEED) Survey annually and offers a version (Alt-SEED) for students who take the alternate form of the state’s academic assessments. The survey is also available in multiple languages.

Social and cultural contexts. Survey questions should connect with a wide variety of students and their experiences. Terms like “support” or “fairness” can be interpreted differently across cultural groups. Piloting items with diverse respondents and interviewing them as they complete the survey questions can help ensure that the final questions will be interpreted as intended and capture meaningful variation across groups.

Should You Adopt, Modify, or Create a Survey?

Once you’ve clarified your purpose and target population, the next decision is whether to adopt an existing instrument, adapt one, or create a survey from scratch.

Creating your own survey

Customization can support alignment with local values, needs and priorities. But it requires substantial expertise in areas such as question design, accessibility reviews, translations and validation. Ideally, homegrown tools undergo multiple rounds of pilot testing. Many agencies find this unrealistic for budget, staffing or timeline reasons, but skipping these steps risks collecting data that lacks validity and reliability evidence, with the potential for results that are misleading, biased or difficult to interpret.

Adopting an existing survey

This can reduce development time and provide confidence if the tool already has evidence of validity and reliability for similar populations and uses, along with clear administration and scoring guidance. But “off-the-shelf” surveys vary widely in quality. It’s important to obtain available technical documentation and ensure the survey content is aligned with local goals and proposed uses.

Modifying an existing survey

Adapting existing items or adding local items can balance timeliness and cost with relevance to local context and needs. Modifications can alter how items function, so even small changes should be accompanied by low-cost validation steps such as interviews. Resources like the Annenberg Institute’s EdInstruments, the National Center on Safe Supportive Learning Environments and the National School Climate Center provide links to instruments and validity evidence that can help you avoid starting from scratch.

Regardless of your approach, look for evidence of validity that incorporates multiple sources and was gathered from populations similar to those you’ll survey. Strong validity evidence demonstrates that the survey results will support the kinds of inferences and decisions you want to make.

Reporting Results to Support Appropriate Use 

Determining what to report and how to report it requires careful consideration. One key decision is what to report publicly—whether to share information about survey participation, the survey results, or both. 

Illinois offers one model: the state accountability system includes an indicator related to participation rates in school climate surveys, but doesn’t report the actual survey results publicly. Illinois districts do receive results and can use them for local decision-making. This approach balances transparency about survey implementation with concerns about the potential negative consequences associated with high-stakes accountability uses.

Another consideration is whether to report on group differences, such as by comparing responses across student demographic groups. Such comparisons can illuminate inequities and target improvement efforts, but they require adequate sample sizes to protect respondents’ privacy and can lead to misinterpretations or misuses such as overinterpretations of small differences or stereotyping of particular groups.
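One common privacy safeguard for group comparisons is a minimum cell-size suppression rule: results for any group with too few respondents are withheld. The sketch below illustrates the idea; the threshold of 10 and the function name are assumptions for illustration, not a standard, since actual thresholds vary by state and agency policy.

```python
def summarize_by_group(responses, n_min=10):
    """Report the mean survey response per group, suppressing small groups.

    responses: dict mapping a group label to a list of numeric responses.
    n_min: minimum group size required to report a result
           (the value 10 here is illustrative, not a standard).
    """
    summary = {}
    for group, values in responses.items():
        if len(values) < n_min:
            # Too few respondents: withhold the result to protect privacy.
            summary[group] = "suppressed (n < %d)" % n_min
        else:
            summary[group] = round(sum(values) / len(values), 2)
    return summary
```

A rule like this protects individual respondents but also means small schools or small demographic groups may have no reportable results, which is itself a trade-off to communicate clearly.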

You’ll also need to decide whether to report individual survey items or combine them into multi-item scales. Scales can provide a more reliable and interpretable summary of complex aspects of climate, but they should be based on sound statistical evidence that the items truly measure a coherent construct. Reporting individual items can provide more actionable specificity but may overwhelm users with too much information.
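One widely used statistic for checking whether a set of items coheres as a scale is Cronbach's alpha, which estimates internal consistency from the item and total-score variances. The minimal sketch below is illustrative only; real scale validation would also involve factor analysis and larger samples, and common rules of thumb (such as alpha above 0.7) are conventions, not strict requirements.

```python
def cronbach_alpha(item_scores):
    """Estimate the internal consistency (Cronbach's alpha) of a scale.

    item_scores: list of per-item response lists, all the same length,
    with one entry per respondent (e.g., three items, n respondents).
    """
    k = len(item_scores)          # number of items in the scale
    n = len(item_scores[0])       # number of respondents

    def variance(xs):
        # Sample variance (divides by n - 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances.
    item_var = sum(variance(items) for items in item_scores)
    # Variance of each respondent's total score across all items.
    totals = [sum(items[i] for items in item_scores) for i in range(n)]
    total_var = variance(totals)

    return (k / (k - 1)) * (1 - item_var / total_var)
```

When items move together across respondents, total-score variance dwarfs the summed item variances and alpha approaches 1; when items are unrelated, alpha falls toward 0, a signal that the items may not measure one coherent construct.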

Finally, it’s essential to develop effective mechanisms for sharing the information, including (1) a reporting system (for example, an interactive dashboard or school-specific reports) that supports the survey’s intended purposes and (2) clear, accessible guidance on appropriate interpretations and uses of the data, including warnings about common misinterpretations and misuses.

Oregon’s SEED Survey Toolkit provides a helpful example of an approach designed to enhance understanding of survey purposes and results among key constituencies. The toolkit includes guidance on interpreting results, facilitating data discussions and connecting findings to improvement planning. Similarly, Washoe County offers toolkits customized to the needs of students, staff and parents.  

Surveys Are Complicated but Worthwhile

There’s no getting around the fact that thoughtful survey implementation requires decisions about factors like design, sampling, accessibility, analysis and reporting. Some of these decisions require specialized expertise.

The good news is that states and districts don’t have to tackle these decisions alone. Consider partnering with researchers or technical assistance providers who can provide expertise while helping you build internal capacity. You can also learn from agencies that have successfully implemented school climate surveys, such as those mentioned in this post and my previous one. 

The complexity shouldn’t be a deterrent. Investing in high-quality climate measures is ultimately an investment in the learning environments that enable students to thrive. The examples I’ve shared reflect a common principle: climate data are most powerful when used formatively—to guide reflection, collaboration, and improvement—not to rank, sanction or evaluate. When implemented and communicated well, school climate surveys provide educators with trustworthy, actionable insights to improve the conditions that matter most for student success.
