What should accountability look like for student-centered learning?

Picture a mirror, not a hammer—helping answer objectively: How are we doing? Where can we improve?

December 1, 2021 • Lars Esdal

Illustrations by Khou Vue for Education Evolving

This post is part of a larger blog series looking at strategies student-centered schools have used to respond to the pandemic.

In this series, we looked at strategies student-centered schools use to gauge their own success—things like surveys, observation rubrics, and student portfolios. These school-level measures capture important dimensions of success, and are clearly useful to the educators and students we spoke with.

But, many policymakers and advocates argue that we also need some system-level measures, comparable across students and schools, that provide an overall picture of how we’re doing as a state and country—and accountability to ensure that students aren’t falling through the cracks.


The problems of bias and low expectations are long documented, and are an enduring way that racism, classism, and ableism show up in public education. Having appropriate, comparable system-level measures and accountability is a key part of challenging systemic inequities.

While we have some system-level measures right now (such as current state assessments), they capture only a partial picture of the goals students, families, and society have for education. And, they don’t yield enough data that is useful for actually improving schools.

In short: we need comparable system-level measures and we need accountability. But they must support, rather than detract from, student-centered learning.

What would that look like? Let’s explore.

Capture academic learning in a better, more student-centered way

Student-centered schools value learning that falls within conventional academic disciplines—like math, science, and ELA. But, policymakers must find more student-centered, asset-oriented ways to measure this learning for system-level accountability.

Shorter, more frequent assessments can provide timely, actionable data to educators.

One strategy is to shift state assessments (e.g. Minnesota Comprehensive Assessments or MCAs) towards what are called “through-year assessments.” Rather than one long MCA at the end of the school year, states would offer several shorter assessments that schools could give on-demand throughout the year.

These more frequent assessments provide more timely, actionable data to educators. They are more flexible and less disruptive in terms of when they are given (for example, at the end of a course) and provide multiple data points per year (which is both more representative than a single point-in-time test and also captures learning growth across the year).

The latest federal education law, ESSA, allows states to use through-year assessments (especially states that apply for Innovative Assessment Demonstration Authority waivers). For example, Georgia has partnered with NWEA to offer three assessments on-demand throughout the year. The tests yield timely, actionable data while also generating a student “proficiency” score for state accountability.

An even bolder strategy for measuring academic achievement is to use performance assessments rather than traditional multiple-choice assessments. In performance assessments, students complete complex, applied tasks (such as a research paper, a science project, or a presentation). Their results are scored against state standards using carefully developed rubrics.

One such example is the PACE initiative in New Hampshire. Students take several performance assessments throughout the year as part of their normal classwork, which collectively cover all of the standards for their grade. Teachers score their work, and local scoring is calibrated across districts to improve statistical reliability. In some grades, performance assessments replace the conventional state assessments to generate “proficiency” scores for state accountability.

Beyond academics: system-level measures of the “less quantifiable”

In this series, we looked at how schools are measuring dimensions of student success beyond academics—things like engagement, wellness, and social-emotional development.

Such assets are highly valued by families, employers, and society at large. And while the schools we studied collect data on these concepts that has proven useful at the local level, that data can’t always be aggregated into system-level measures. Is it even possible?

One way policymakers can do this is to collect data on “proxy” measures. Common examples include student attendance and mobility (for engagement); teacher retention (for school culture and leadership); and safety or disciplinary incidents (for welcoming school environments). Policymakers must improve systems for collecting, aggregating, and displaying this sort of data.

Another common approach is to use surveys. For example, in Illinois all students take the 5Essentials survey, which yields data on things like whether students feel engaged, safe, and supported. In Minnesota, we do have a student survey, but policymakers must improve it; the survey is only given once every three years and is not consistently taken by all students.

System-level measures for what purpose? Reframing accountability as a “mirror.”

Imagine: Policymakers have acted on the above recommendations. We have richer data on the full picture of what matters. And the data is more consistent, timely, and actionable.

But what then? The question remains: what purpose should that data serve?

In short, system-level measures should be used to hold up a “mirror of accountability” to support local school improvement efforts. Policymakers provide a mirror that helps schools, families, and others candidly and transparently answer the questions: “How are we doing—for all our students? Where can we improve?”

Concretely speaking, holding up a mirror of accountability means policymakers provide data systems, dashboard websites, and/or performance frameworks that show clear, comparable data on those system-level measures. In Minnesota, some of these pieces are in place, but they must be more straightforward and accessible.

System-level measures should hold up a “mirror of accountability” to support local school improvement efforts.

And what would using that mirror of accountability look like? For schools and educator teams, the system-level measures would help identify particular subjects, grades, or areas of culture that need improvement.

For those outside of the school—families, districts, charter authorizers, and state departments of education—the mirror would show where a school may need outside support or, in some cases, warrant a firm insistence that it address a recurring challenge or inequity.

Comparable and student-centered: the delicate dance

We must honor the differences among students and schools, and use system-level measures as a “mirror of accountability.”

This piece has emphasized the importance of comparability in system-level measures. Their comparability makes them an important complement to the many other forms of data and evidence used at the local level by schools, and a helpful check against bias and inequity.

But we also can’t forget: students are unique individuals. They have different assets that must be documented, celebrated, and supported in different ways.

In short, our education system must do both. It must honor the unique differences among students and schools. And it must use system-level measures as a “mirror of accountability” to check our perspectives and keep them grounded in the broader world in which students and schools inevitably exist.

In some ways, this is the same delicate dance we step each day as adults. We learn to be our own selves. And we learn to exist and find our place in that broader world.

About This Series

This blog post is part of a larger series exploring the practices and success indicators used for student-centered learning in a pandemic—and beyond. We are grateful to the Leon Lowenstein Foundation for their generous support for this series.