Assessment Selection is Like Picking the Right Golf Club
In a previous post, I (Juan) argued for the importance of a variety of tools in your assessment toolbox to help monitor recovery efforts as we come out of (or continue to wade our way through) the pandemic. In this post, we expand a bit more on assessment selection, and the types of assessments, or tools, that can be used to monitor recovery efforts and confirm evidence of progress.
We were inspired by a recent round of golf on a short course with nine par 3 holes. Since it was a walking course and we were carrying our clubs, we began to talk about how few clubs you really needed to complete the course. We went so far as to suggest that we could play the whole course with a single 9 iron—we may not even need a putter. Could we do it? Sure, but things might get dicey on a longer hole (where we’d need a longer club), or if we had a long way to go on the green.
As assessment folks are wont to do, we thought about how this scenario was relevant to assessment selection. How many assessments do you really need to monitor whether efforts to accelerate student learning as schools recover from the pandemic are on the right track? Is an end-of-year state summative assessment enough or would interim assessments be better? Could you get away with only classroom assessments? Soon we realized you’d need a pretty substantial set of clubs, but you would likely be reaching for a few of them more often than others.
The Design of the Tool Matters
Golf clubs have different designs for different reasons. Neither of us is a golf pro, but we're familiar enough to know that clubs vary in loft, material, length, and weight, and we recognize that each club serves a very specific role. Golfers of different skill levels are more or less adept at selecting among clubs, especially because some clubs are more error prone than others. The more you know what works for you (through practice, experience, and exposure), the more confident you can be that your selection makes sense.
As Juan can attest (as the novice who spends most of his time weed whacking in the rough), a fancy set of specialized clubs isn’t going to help much. But a professional is going to be able to dial in their game using a set of customized clubs much more effectively.
Assessment tools are similar. To someone who is less familiar with assessment design or use, different tests might look very similar and may be assumed to give roughly the same signal. A seasoned pro, however, will know how to select or build the right assessment for the job. Our challenge is helping folks recognize what's good enough for the job at hand while getting them the information they need. When selecting the right assessment, we should acknowledge an important practical concept: perfection can be the enemy of good. Sometimes good enough is just that—good enough. But you need to couple that with knowing what you know (or don't). You will likely still need a range of tools to tackle different scenarios. The challenge is knowing enough about the tools to pick the right one to meet your goals given the conditions.
Do you have Enough Clubs in your Bag?
Whether on the golf course or in the classroom, it’s critical to think about the range of situations you will face or the questions you will have to answer. A long par 5 may be 500 yards, where a short par 3 might only be 90 yards. These will require distinctly different clubs (i.e., tools) to get to the green in regulation. Some clubs (e.g., drivers) are built for distance and roll, whereas others are purposely built to help you recover from a sand trap (i.e., sand wedge); you wouldn’t use the same club for both cases.
Similarly in the classroom, you wouldn't try to get a sense of whether students are missing key entry-level standards coming out of grade 3 with a bell-ringer, in the same way that you wouldn't use an end-of-course exam as a dyslexia screener. You will need different tools for different situations, but you can't decide whether you have the right club (or enough clubs) unless you have some understanding of what each of them does and doesn't do well. Building this understanding takes study and experience, but there is no substitute for practice.
Practice Makes Perfect: Using the Right Tool for the Right Job
So how do we get to a point where we can use these tools to support our objectives? We first have to make sure we have the right tool for the job, then use it in the right way given the context or conditions. A good place to start is knowing what goal an assessment is intended to meet. Is it to engage students, to determine whether students learned the lesson, or to evaluate end-of-course mastery? Different tools will provide you with different insights, but you can't select an assessment without knowing what it purports to do or how you're supposed to use it.
Another way to learn how to use the right tool is through lessons learned or “tips” from others who have played the game. Veteran teachers or instructional leaders can provide insight into which tools might work best, and when, throughout the year. Through experience and practice, you learn that formal assessment tools alone will not be enough to make real-time adjustments throughout the year. You also need formative assessment practices to improve student learning.
To be clear, we are referring to the stricter definition of formative assessment: a “planned, ongoing process used by all students and teachers during learning and teaching to elicit and use evidence of student learning to improve student understanding of intended disciplinary learning outcomes and support students to become self-directed learners” (CCSSO, 2018, p. 2).
Whether you buy or build the perfect tool, if you don't understand the conditions of learning or what students are bringing into the classroom, tools will be misused and their findings misinterpreted. Only with that understanding can we effectively use all the clubs in our bag to put together a complete game based on formative practices, large-scale assessments, and everything in between (see previous D’Brot post).
Evaluating Local Tools Wisely
Regardless of how you practice and obtain experience, we suggest a few ideas to ensure that the range of local tools will actually meet your goals and context (and coherently build a picture of student performance):
- Focus on the Standards: It is not enough simply to cover the standards; they must be addressed (over time) at the level of rigor or complexity that the standards themselves expect.
- Focus on Expectations: Assessments, whether locally developed or commercially purchased, are at risk of having different expectations than those on the statewide summative assessment. The differences are often a function of the complexity of the standards and should be evaluated. Is the assessment event meant to help identify opportunities for scaffolding, or is it expected to reflect mastery of a given standard? Performance expectations matter.
- Focus on Coverage: Assessment events, whether formal or informal, have a targeted span. The narrower the focus, the more deeply you can target certain standards. Conversely, the wider the set of standards, the shallower the evaluation of knowledge. Understand whether locally developed or commercially purchased assessments cover the lesson, the unit, the semester, or the year, and whether they are giving you worthwhile information. The risk of a mismatch will likely be greater with commercially purchased, off-the-shelf assessments.
Filling your “Bag” of Assessments with the Right Tools
Finally, we need to think about how to move the ideas presented here into practice. If we are aware of the standards, expectations, and coverage of a given assessment, it becomes more manageable to build the right set for your own setting. Here, we offer a few concrete suggestions for thinking about what “clubs are in your bag” from an assessment standpoint:
- Determine the purposes or uses that are most valued.
- Determine the assessment properties and practices that are required to support the prioritized uses, which may include understanding the assessment’s grain size, content coverage, complexity, and performance expectations.
- Conduct an assessment inventory to understand what’s available.
- Determine whether there are gaps in the understanding your current assessments provide and understand how those gaps can be filled.
- Identify where there are redundancies and eliminate assessments or activities that are duplicative or unhelpful.
- Plan your assessment strategy based on how standards are covered throughout the year (e.g., by aligning events to curriculum through specified scope and sequence).
The Center has also produced an interim assessment evaluation toolkit, which raises many of these same questions to help states and districts consider how their goals, intended uses, assessment characteristics, and the interpretation of results need to be coherent. You can access that tool here.
Maria Cammack is Deputy Superintendent of Assessment, Accountability, Data Systems, and Research for the Oklahoma State Department of Education.