How digital tools can help with assessment literacy.

Reorienting Conversations about Assessment Literacy Through a Digital Lens

Recognizing the Importance of Digital Tools in the Interpretation of Assessment and Accountability Information

There is a quote that speaks directly to the focus of this post, which is the use of digital tools not just to support assessment literacy but to evolve the way we approach it:

“If you want to teach people a new way of thinking, don’t bother trying to teach them. Instead, give them a tool, the use of which will lead to new ways of thinking.”

— R. Buckminster Fuller, Author & Inventor

The idea that high-quality digital tools, and the supports around them, foundationally shape our professional practices in the modern world and help us develop more sophisticated computational thinking and reasoning skills is certainly not a novel insight, including in K-12 education. Consequently, I am sometimes surprised that we do not focus more on the nature of the digital data visualization and manipulation environments at the disposal of professionals who work with assessment and accountability information. The way these environments are designed and implemented is often treated in passing rather than as an integral mediating factor in the sense-making process around data.

In this post, I want to reorient our perspective on the issue of assessment literacy. I will specifically look at the role that digital tools play in developing the computational thinking capacities of educators as well as school, district, and state leaders so that they feel empowered to make informed decisions about how best to support student learning. This will require me to address, at least briefly, the disciplinary areas of data engineering, data visualization, and data science. In many ways, I argue here that we need to take the best conceptions and practices for fostering computational thinking skills through digital technologies for our students in K-12 education and adapt them for our educator and leadership training.

Data-driven Inquiries vs. Formative Practices

Importantly, I am not going to be talking about information teachers gather through ongoing formative classroom assessment practices but, rather, specifically about the moments when teachers as well as school, district, or state leaders have to make sense of various indicators and test scores for strategic decision-making. Think of the moments when individual teachers or groups of teachers review the completion rates from summer homework assignments, past state and interim assessment results, and recent homework or unit assessment submissions to plan their instruction. Or think about the moments when district leaders review attendance rates, school quality indicators, assessment results, graduation rates, parental engagement statistics, financial portfolio data, and other indicators to review their strategic plan.

Intuitively, these tasks will be vastly more challenging if relevant data are incomplete, poorly linked, incorrect, or presented in pre-determined views that have serious interpretational limitations. 

Conversely, they will be vastly more manageable if relevant data are readily available in easily navigable dashboards that let users search for information at levels of detail they can customize as their understanding of the data develops and new questions of inquiry emerge.

Reframing Assessment Literacy

As a former professor of educational assessment and educational measurement, I know firsthand how critical it is to curate and design well-crafted digital lab experiences for students who are learning to investigate data from different perspectives. Tools that allow for interactivity, whether searchable databases, semi-structured simulations, or coding environments that learners can customize further, prove particularly effective.

Modern programming languages such as R and Python or commercial programs such as Tableau provide powerful capacities in this regard. Whenever available, they empower learners to ask critical questions they had not thought of before, identify and correct misconceptions, and build a deeper understanding of the subject matter.
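To make this concrete, here is a minimal sketch of the kind of exploration a learner might attempt in Python with the pandas library. The file name and column names (school, grade, scale_score) are hypothetical placeholders for whatever a local data extract actually contains, not a reference to any particular system.

```python
# A minimal, illustrative sketch: summarize hypothetical interim assessment
# results by school and grade, then re-slice as a new question emerges.
import pandas as pd

# Hypothetical extract of interim assessment results
scores = pd.read_csv("interim_results.csv")

# Average scale score and student count by school and grade
summary = (
    scores
    .groupby(["school", "grade"])["scale_score"]
    .agg(["mean", "count"])
    .reset_index()
)

# A follow-up question: which schools have the lowest grade 8 averages?
grade8 = summary[summary["grade"] == 8].sort_values("mean")
print(grade8.head())
```

The point is not the specific code but the interactivity: each answer invites a new query, which is exactly the habit of mind these environments help cultivate.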

In K-12 education, however, discussions of assessment literacy often center on the knowledge required to guide score interpretation and use for different types of assessment; the range of skills required to leverage digital tools also needs to be addressed, yet it is frequently overlooked. Foundational knowledge is of course key, and there are surely practitioners who need additional training and/or refreshers. In fact, there are wonderful resources out there that can guide the development of such knowledge; see, for example, the Michigan Assessment Consortium portal or the Classroom Assessment Learning Modules developed by my Center colleagues Carla Evans and Jeri Thompson, as well as the related Classroom Assessment Standards.

To enable educators to leverage this foundational knowledge to meaningfully interpret and act upon assessment information, however, they also need modern digital tools that provide accessible entry points for looking at and working with data.

The Power of Interactive Digital Dashboards

Let’s focus specifically on the issue of data interactivity for a moment because it is such a critical lever for developing appropriate mindsets and practices. Unfortunately, this lever cannot often be pulled. State assessment data are rarely available in a timely manner, score reports may be static – think printable PDFs – or available as downloadable data files only, and professional supports for making sense of the assessment data at the individual or system level may be incoherent with supports for other assessments in a system or simply be insufficient for certain user groups. 

In recent years, we have seen an explosion of digital tools that allow users to query data and slice and dice them in ways that are meaningful to them, often with the click of a button.

This shift from relatively static “score reports” in the 1990s and early 2000s to “dynamic interactive dashboards” has been a true game-changer. In fact, research underscores the importance of such interactive displays and computational environments for fostering the development of computational thinking skills, a common educational goal for K-12 students that is equally relevant for the adults who direct education. A powerful example of an interactive dashboard-driven approach at the national level continues to be the “data explorer” from the National Assessment of Educational Progress (NAEP), our educational survey of achievement in various subject areas for students in grades 4, 8, and 12.

But there are powerful state examples as well. At the recent Reidy Interactive Lecture Series (RILS) conference hosted by the Center, we had the pleasure of hearing from Ajit Gopalakrishnan from the Connecticut Department of Education (see the last section of slides in this presentation). He showcased the various ways in which his department has made assessment data – scores as well as various contextual indicators – accessible to its users through a series of well-designed, user-friendly interactive dashboards. He discussed specifically how powerful these have been for state and district leaders to learn about phenomena that were previously hidden to them.

That being said, the creation of such dashboards comes at a non-trivial price that needs to be well understood and reflected in strategic planning efforts. It is well known that the design of user-friendly dashboards is an iterative, interdisciplinary endeavor facilitated by experienced user interface/user experience specialists, content specialists, data scientists, and so on. 

The creation of dashboards begins with a well-structured and efficiently managed data warehouse so that all relevant data are available in the first place – if you don’t collect it, you can’t act upon it. The data must then be cleaned, stored, and integrated with appropriate tags so that visualization tools can engage the appropriate visualization and slicing-and-dicing functions. Maintaining a data warehouse becomes especially challenging as the number of data sources to be integrated grows and as the number of separate owners (e.g., state departments, districts, vendors) grows along with it.
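As a hedged illustration of what that cleaning, integration, and tagging step can look like in practice, here is a short Python sketch. The file names, field names, and thresholds (the 18-day absence cutoff and the performance band cut scores) are hypothetical placeholders, not anyone’s actual data model.

```python
# Illustrative sketch of cleaning, integrating, and tagging data from two
# separate owners before it feeds a dashboard. All names are hypothetical.
import pandas as pd

assessments = pd.read_csv("state_assessment.csv")    # e.g., from the assessment vendor
attendance = pd.read_csv("district_attendance.csv")  # e.g., from the district SIS

# Clean: enforce a common student identifier format before joining
for df in (assessments, attendance):
    df["student_id"] = df["student_id"].astype(str).str.strip().str.zfill(10)

# Integrate: one row per student with fields from both sources
merged = assessments.merge(attendance, on="student_id", how="left")

# Tag: derived fields that a visualization tool can use to filter and group views
merged["chronically_absent"] = merged["days_absent"] >= 18
merged["performance_band"] = pd.cut(
    merged["scale_score"],
    bins=[0, 2400, 2500, 2600, 3000],
    labels=["Below", "Approaching", "Meets", "Exceeds"],
)
```

Each of these steps is simple in isolation; the strategic challenge is doing them reliably across many sources, owners, and years, which is exactly why a deliberate data strategy matters.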

Careful planning around technological needs, human resource needs, and strategic partnership needs – a “data strategy” – is required to truly make the ecosystem of information work for key users.

Communal Engagement around Digital Tools

Even when powerful digital tools for data visualization and data manipulation of assessment and accountability information exist, the real power in learning and capacity-building comes through professional communities of practice (CoPs) in which educators or leaders at different levels are active participants. Such communities typically support in-person exchanges but leverage digital discussion forums, learning portals, and other digital resources as well. In many ways, the coupling of in-person and virtual supports with digital tools for data visualization and manipulation multiplies the affordances of both. These CoPs can exist at multiple systemic levels, from schools to districts to regions within a state to groups of states, and, beyond providing practical and tactical support, also serve an important psychological function of providing relatively safe spaces in which professionals can share ideas and resources and engage in mutual learning. 

Thoughtful, modern frameworks for teacher assessment literacy and education or classroom observation protocols recognize the importance of these ecosystems in the development of proficient teachers. So why are discussions about assessment literacy so often framed first and foremost in terms of individuals, either explicitly or implicitly? 

While the foundational knowledge, beliefs, and experiences that educators bring to their job are certainly important for their practice, school- or district-level practices around assessment, mediated by CoPs and Co-ops, are likely far more powerful than what individuals bring to the table.

For example, if a teacher is passionate, invested, and knowledgeable about deeper learning and project-based activities, but a school or district favors more simplistic drill-and-practice approaches with associated assessments for a given subject matter, then no amount of assessment literacy on the part of the teacher will be able to override the inherent limitations of the system.

Similarly, if a district has a poor data engineering architecture and data science culture that prevents the presentation and/or analysis of longitudinal data on assessment performance or summer homework, or if its dashboards are clunky, with customizations available only via formal requests to IT staff, then data engagement and, consequently, assessment literacy are severely hampered right out of the gate.

Framing the Conversation

In our work at the Center, we are often tasked with addressing assessment literacy issues in a district or state or, perhaps a bit more indirectly or broadly, with supporting “capacity building.” One of the most important tasks for us is to find out what individuals and teams bring to the table in terms of values, knowledge, and experiences; what kinds of digital tools and professional relationships with outside organizations they have at their disposal; and what kinds of representations they find valuable for thinking through their particular problem space. Building capacity may not be about increasing foundational knowledge very much at all but, rather, about understanding and improving the assessment ecosystem and the myriad interdependencies across digital tools overall.

While there are certainly misunderstandings and misconceptions about assessment information and uses, limitations in data engineering, visualization, and manipulation frequently stand in the way. Consequently, we should reframe our discussions of assessment literacy and associated capacity building. We should ask what tools, and what communal resources around those tools, are available to our stakeholders, and how we can either redesign them or leverage them better. Then we should engage with stakeholders in dedicated, tool-centered educational efforts – with CoP and/or Co-op support.

After those efforts, we will be in a better position to evaluate what major assessment literacy gaps remain and to re-evaluate how best to address those gaps in targeted ways. I expect we will be surprised by how much our framing of the issue and our understanding of educators’ capabilities will have changed by then.

I deeply believe that the human beings who seek out our advice are generally well-meaning, want to think deeply about assessment issues to help students learn, and ask a lot of thoughtful questions about assessment and data. I would rather be proven wrong on this front once in a while than assume that people are lacking while they are operating in ecosystems with many constraints and design choices over which they have little, if any, control.
