Next-Gen Assessments: All Dressed Up, But Where to Go?

"What gets measured gets done," goes the old adage. And many of those pushing for deeper, personalized, student-centered learning point to the scope of most current assessments, which cover a narrow set of rote skills in reading and math, as helping to perpetuate an industrial-era model of school.

Fortunately, much progress has been made over the last several years in developing new measures. Projects such as NGLC's Assessment for Learning Initiative; the Gordon Commission on the Future of Assessment; the Innovative Assessments Toolkit from KnowledgeWorks, Nellie Mae, and the Center for Assessment; and others—not to mention the consortia Smarter Balanced and PARCC—have garnered national attention.

Many of the new assessments these initiatives have created or helped draw attention to are "performance-based assessments," which require problem solving and creative thinking, and are often designed around and aligned with "competencies." Many of the new computerized assessments are adaptive to student ability level, adjusting the difficulty of questions as they proceed. And some are designed to gauge "noncognitive" concepts like social-emotional learning (SEL), engagement, persistence, and even hope.

New measures for schools are also emerging. They consider things like culture and climate, strength of relationships, family satisfaction, and safety; they draw from incident data such as suspensions, attendance, and graduation rates as well as surveys of students, families, and staff. Some states have even created longitudinal data systems, which can link students' college and workforce outcomes back to the schools from which they graduated.

With this growing panoply of measures available, the key question becomes: where and how might these new measures be used to most effectively shift what "gets done"? At least four main options are possible:

1. In state accountability systems?

Under the new Every Student Succeeds Act (ESSA), states must include at least one "fifth indicator" of student success or school quality (SS/SQ) in their accountability plans. While states around the country have explored numerous indicators over the last year, few truly next-generation assessments or measures made it into states' draft plans.

Several barriers to incorporating the new measures came up in the ESSA workgroups we've participated in here in Minnesota: outdated state data collection and reporting systems that couldn't yet support these measures statewide, and concerns about gaming, bias, and subjectivity in any survey-based measures (such as many measures of SEL, climate, and engagement).

Psychometricians are also highly cautious about measure reliability and validity in any performance-based, rubric-scored assessment—especially if the results would have accountability consequences. And overall, frustrations with NCLB's shortcomings and its long-delayed reauthorization have left many with the sense that uniform, statewide, punitive accountability may not be the place to exert leverage to shape the design of school.

2. Publicly reported in state report cards?

Similar to option 1 above, a state could gather data and report publicly on a certain set of measures for schools and districts, but without attaching any punitive consequences to the results. A state would most likely include those measures on the "report card" it's required to publish for schools and districts under federal law. A related possibility is for a state to partner with a website that already provides information about schools, such as GreatSchools, and include additional measures there.

While this option ameliorates some of the concerns with using measures in a punitive accountability context, it still applies a one-size-fits-all-schools notion of measurement.

3. In individual school performance agreements with a district or a charter authorizer?

In a truly student-centered system of public education, there is no single model of school that will work for all students, families, and communities—no "panacea" design. By extension, how each school defines and measures success will also vary based on its learning model, curricular emphasis, values, and more.

Defining the measures on which a school is to be held accountable should be part of the process of designing the school's learning program and drafting its performance agreement. Such an agreement can exist in the district sector, between a school and its district (especially for districts using a "pilot" or "portfolio" strategy), or in the charter sector, via the contract between a charter school and its authorizer. Innovative Quality Schools in Minnesota is a strong example of an authorizer that is intentionally developing school model-specific performance measures and accountability expectations in all its charter contracts.

To be clear: fostering in students some level of basic skill in language and math should be required for all public schools, and state data systems should continue to disaggregate measures of such skills by student subgroup to monitor unacceptable achievement gaps. The school-specific measures in a performance agreement are about what's done above and beyond the requirement for developing basic skills.

4. As formative tools and internal benchmarks?

Finally, and perhaps most importantly, many of the next-generation assessments described above were originally designed as tools to guide and facilitate learning. They can, and should, still be integrated into the learning experience, so students can see where they are in their own learning journey, and so teachers and administrators can know when students need extra help. They can also be used by a school to monitor its overall progress toward its goals, as a form of soft self-accountability, and as part of a feedback loop for adjusting schoolwide strategies. The CASEL collaborative is a great example of this fourth option with SEL measures.

So, where to go?

The main purpose of this post was to describe the next-generation measures emerging, and to enumerate possible options for how they might be used. But, ultimately, a mixture of options 2, 3, and 4 above—and at least a dash of option 1, to comply with the new ESSA requirement for an SS/SQ "fifth indicator"—is probably a good bet for most states.

This post first appeared in Rick Hess' blog for Education Week, Straight Up.