How Can We Continue Monitoring Student Performance When We’re Losing Large-Scale Assessment Data?




Focusing on Assessment Purposes and Uses to Identify Potential Sources of Instructional Information 

As our growing list of recent CenterLine posts shows, professionals at the Center for Assessment are actively thinking about how to approach the loss of large-scale assessment and accountability data for this school year and its impact on monitoring student performance:

  • Chris Domaleski talked about near- and longer-term implications for accountability.
  • Nathan Dadey discussed equity issues arising from school closings.
  • Scott Marion and Andy Middlestead wrote about the contractual issues that have emerged.
  • Michelle Boyer talked about the need to shift toward smaller learning targets.
  • I wrote about identifying the information that is available in assessment systems as a stand-in for large-scale assessment information.

I hope to explore that last point more thoroughly here.

GPS, Available Information, and Assessment 

If you think about today’s GPS systems (e.g., phone-based, real-time apps like Waze, Apple Maps, and Google Maps) compared to older, more static ones (e.g., Garmin StreetPilot devices), they have vastly different capabilities. Older GPS devices were hard-coded and did not update routes for traffic, accidents, weather, road closures, or construction.

Newer systems offer real-time guidance, adjusting the route when necessary. While both kinds of systems can help a driver reach a destination, the older devices required the driver to process information along the route and make critical decisions about next steps. In that respect, older GPS devices are not unlike the current state of summative assessment, which provides very limited ‘along-the-way’ help. In fact, the information from an end-of-year summative assessment is most similar to the checkered flag at the end of the route. Sure, it’s useful to know that we have reached the destination, but it’s more helpful to have dynamic guidance to select a good route, which is especially true if getting to the destination is likely to be difficult.

With the loss of summative assessment this year, all we have really lost so far is the hard-coded information of those older, static devices. There’s no denying that the loss of large-scale statewide testing and the impact it will have on state accountability systems is serious. However, we will eventually have access to all of the other information that is used to monitor student performance and make instructional decisions along the way. In light of the loss of summative assessment information, we need to focus on the types of information we need and how to select the right tools to get that information. 

Supplementing Large-Scale Assessment Information 

Large-scale assessment information was never intended to help us navigate the week-to-week or unit-to-unit instructional needs of students. That role has been, and should continue to be, filled by a high-quality curriculum (see Scott Marion's post on assessment and teaching), well-aligned local formative assessment practices, and well-designed assessments aligned to pre-specified, targeted needs (e.g., evaluating progress against a lesson plan, a unit, or a course). Those needs must be known ahead of time, and any assessment should be evaluated based on how well it can serve them.

Being Specific with "The Why" and Letting the Assessment Follow

In September of 2019, Erika Landl and I developed a toolkit designed to help states and districts determine whether the intended purposes and design of an interim assessment were aligned with teaching and learning goals. A few key considerations from that work are worth restating here as we search for ways to make up for the loss of our hard-coded summative assessment information. These considerations apply both during the current extended school closures and when students return to traditional instructional settings. Schools must consider what information about student performance can and should be collected during remote learning this spring. Finally, before instruction resumes, schools will also likely need some type of systematic information to see what students did and did not learn.

  1. What is the highest priority information you need? Do you need a quick understanding of where student knowledge is breaking down, or do you need a generalized understanding of progress toward grade-level standards? The former will require diagnostic information. The latter will require an assessment that samples a greater span of content. These designs are necessarily different. 
  2. What judgment do you want to make? Are you trying to understand the current performance of students overall (i.e., status)? Are you focusing on progress within or across lesson plans? Is your goal to diagnose learning needs or gaps around specific standards or groups of standards? Answering each of these questions requires a different type of assessment design, and must be considered in conjunction with decisions about your highest priority needs (number 1).  
  3. What comparisons are you trying to make? Are you trying to compare how students are doing within a class, grade, or school? Or are you primarily concerned about a student's individual progress against what they should be learning? The former need will require more systematic administration and data collection, whereas the latter won't. However, the latter will require thoughtful assessment design to ensure the tests reflect the intended learning targets, show progress against expectations, and indicate how that progress translates into next steps. 
  4. What resources are available to you? Is it possible to administer diagnostic or interim assessments online or are you reliant on paper and pencil now that so many students are learning at home? Are teachers still able to administer any assessments—formal or informal—while students are at home? Or do you have to wait until students return to the classroom? In addition to administration, we need to think about how we score and interpret responses. Without a way to monitor student performance in a systematic fashion, we won't be able to understand student progress. 
  5. How are you linking assessment information to the next steps for instruction? This question has historically been the most difficult to address, even under more ideal instructional circumstances. How directly are you trying to determine student knowledge gaps? And how much direction do you want to provide for students to review material and fill those gaps under non-ideal circumstances? When students return to school, we will need to triangulate available evidence of student learning from a variety of sources. The loss of state summative assessment information could draw greater attention to other formal assessment events and informal formative assessment processes (see CCSSO, as advised by Wiley, 2018), which might include conversations, observations, and student work products.

It is unlikely that these questions will be answered by individual educators or educational leaders working alone. As local school systems come together to approach out-of-class instruction collaboratively, we must jointly address these questions in order to establish an effective monitoring strategy. Currently, we are hoping that schools reopen in the fall so we can determine what kind of learning loss and gaps have emerged. However, if we find ourselves needing to close schools again, it will be critical to develop a strategy that enables us to collect and interpret information not just after, but during, school closures.

In Closing

During these uncertain times, we need to be thoughtful about using assessments to help us understand the significant educational challenges created by school closures and the shift to online instruction. Therefore, we need to be clear about our assessment and instructional goals, ask the right questions to clarify those goals, and evaluate existing and missing assessment opportunities to meet them. We have begun attempting to systematize how to ask the right questions through our interim assessment toolkit. While we are actively working to revise and improve it, we hope that it provides some support as educators wrestle with the challenge of supporting kids through non-traditional learning experiences.

Stay safe, everyone. And remember: be physically distant, but stay socially connected.
