Reposted from the original post for client company ProExam.
Most readers are probably familiar with the fascinating curve ball the 2015 federal Every Student Succeeds Act, known as ESSA, has thrown into state-level mandated accountability indices. In addition to a set of “substantially weighted” academic indicators, states must now include “at least one additional indicator of school quality or student success beyond test scores.”
Although we are presently in a moment of political uncertainty with regard to the future of all federal policy and legislation, there is some reason to think ESSA will stand as is: it was, after all, passed by a Republican-controlled House and Senate before being signed by a Democratic President.
Let us first applaud the addition of this indicator, which the media usually labels (though not entirely accurately) the “Non-academic Indicator” (NAI) or “Non-academic Factor” (NAF), to the mix. This is great news: we know today, more than ever before, how important it is to broaden our gauges of educational effectiveness.
Consider the hugely interesting finding from a 2016 NBER study (C. Kirabo Jackson), which is summarized in a recent excellent report from the Hamilton Project, “Seven Facts on Noncognitive Skills”:
When considering only the effect of a teacher on students’ test scores, Jackson finds that higher-quality teachers provide a small boost of 0.14 percentage points to high school graduation rates.
When Jackson considers the effect of teachers on both test scores and noncognitive skill factors, their effect on noncognitive skills is shown to matter more, with higher-quality teachers raising high school graduation rates by 0.74 percentage points.
Moreover, teachers who are adept at raising test scores and teachers who excel at instilling noncognitive skills are often not the same people.
In other words, if and when we incent, recognize, and reward those teachers who successfully raise test scores, and we don’t do the same for those teachers who enhance noncognitive skills, we have the potential unintended consequence of actually depressing high school graduation rates—by driving away or changing the practices of the very teachers having the most positive impact on graduation.
It’s been about a year since ESSA became law, and in that time much attention has been directed to the new non-academic factor requirement, with wide debate about which additional factor(s) should be selected for inclusion in state-level accountability indices. There have been multiple recent studies and presentations, including:
- Brookings’ Hamilton Project (“Lessons for Broadening School Accountability under ESSA”)
- Transforming Education (“Expanding the Definition of Student Success Under ESSA”)
- The National Education Policy Center and the University of Colorado (“Making the Most of ESSA”)
- Chiefs for Change (“ESSA Indicators of School Quality and Student Success”)
- ASCD (“ESSA and Accountability Frequently Asked Questions”)
Across these reports and presentations, several common themes emerge:
- Emphasis on the use of multiple NAF data sources
- Debate over the pros and cons of the use of SEL measurement
- Frontrunner status for chronic absenteeism
- Importance of support for educators’ effective use of NAF data and for accompanying evidence-based interventions
Let’s look at each in turn.
1: Use of Multiple Non-academic Indicators
Easy to overlook in the ESSA regulation is that the wording says “at least” one additional measure: the mandate sets a floor, but no ceiling. In the spirit of more effectively evaluating “whole child” education and genuinely broadening accountability, states ought not to limit themselves to a single indicator but should instead assemble a carefully composed portfolio of indicators, one that both accounts for more of the critically important factors of K–12 schooling and reduces the negative effects of over-attending to any single indicator.
This advice comes from NEPC, the National Education Policy Center. Among its eight recommendations, numbers two and three speak to this:
Adopt multiple non-academic indicators that states and schools can report in their annual report cards. States can do this even if they must, pursuant to ESSA, adopt a system with a single composite score. The design and presentation of this information can provide a far more comprehensive and authentic view of the schools to parents and the public.
Carefully combine indicators to signal what is important and avoid perverse incentives for manipulating any one indicator.
Note that the trailblazing California CORE districts do exactly this, combining several different kinds of non-academic indicators, such as school climate and SEL competencies, into their School Quality Index.
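To make the idea of a composite more concrete, here is a minimal sketch of how multiple indicators might be scaled and weighted into a single score. The indicator names, ranges, and weights below are hypothetical assumptions for illustration only; they are not CORE's formula or any state's actual accountability calculation.

```python
# Hypothetical illustration only: a weighted composite of several indicators.
# Indicator names, ranges, and weights are invented for this sketch and do not
# reflect CORE's index or any state's actual accountability formula.

# Each indicator: (raw value, minimum possible, maximum possible)
school_indicators = {
    "academic_proficiency": (0.62, 0.0, 1.0),   # share of students meeting standards
    "chronic_absenteeism":  (0.12, 0.0, 1.0),   # share chronically absent (lower is better)
    "sel_competencies":     (3.9, 1.0, 5.0),    # mean score on an SEL survey scale
    "school_climate":       (4.2, 1.0, 5.0),    # mean score on a climate survey scale
}

# Hypothetical weights; the "substantially weighted" academic indicator gets the largest share.
weights = {
    "academic_proficiency": 0.55,
    "chronic_absenteeism":  0.15,
    "sel_competencies":     0.15,
    "school_climate":       0.15,
}

# Indicators where a lower raw value is better, so the scaled score is inverted.
lower_is_better = {"chronic_absenteeism"}


def composite_index(indicators, weights, lower_is_better):
    """Scale each indicator to 0-100, invert 'lower is better' ones, and take a weighted sum."""
    total = 0.0
    for name, (value, lo, hi) in indicators.items():
        scaled = 100.0 * (value - lo) / (hi - lo)
        if name in lower_is_better:
            scaled = 100.0 - scaled
        total += weights[name] * scaled
    return round(total, 1)


print(composite_index(school_indicators, weights, lower_is_better))
# Prints a single 0-100 composite score for these made-up numbers (70.2).
```

Even a toy example like this shows why the composition matters: shifting weight toward any one indicator changes which schools look strong, which is precisely the perverse-incentive concern NEPC raises.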
2: Pros and Cons of SEL Measurement
This is a hot topic in many of the reports and presentations about the new ESSA mandate, with some experts and organizations advocating for the value of measuring things like student grit and growth mindset. The California CORE districts have released their student survey as part of an effort to promote their SEL measurement strategy for ESSA accountability, and the Brookings Institution has published Harvard Professor Marty West’s findings on the effectiveness of CORE’s measures; many others, including NEPC and Chiefs for Change, have taken the opposite point of view, arguing that SEL measurement is, in effect, not ready for prime time.
This debate certainly appears to be the most important flash point in the overall ESSA NAF conversation. And rightly so: measuring SEL/noncognitive skills accurately and using the resulting data wisely are challenging tasks.
It should be noted, however, that those dismissing the value and opportunity of SEL measurement often rely exclusively or primarily on a single piece of evidence, namely a 2015 article by Angela Duckworth and David Yeager. (This is the case for both the NEPC and the Chiefs for Change reports.) That paper, as important as it is, is somewhat limited in its scope and currency: it narrowed its discussion of SEL skill assessment for accountability to just a few measurement methods, and it was published before Professor West’s California research findings were available.
3: Chronic Absenteeism
It would certainly appear that in the race for most popular NAF, so-called “chronic absenteeism” is in the lead. This can be seen most clearly in the Hamilton Project report from the Brookings Institution, which lays out the evidence that chronic absenteeism meets the NAF criteria especially well, including providing meaningful differentiation among schools and having a relationship to achievement and high school graduation. Looking across the country, one finds Tennessee, Illinois, and California among the states placing a priority on this absenteeism indicator.
4: Don’t Leave Them Hanging
Measurement mustn’t stand alone; to have the impact ESSA intends, it must be accompanied by careful guidance on data interpretation and aligned evidence-based interventions. NEPC provides three recommendations on this subject:
- Help schools make sense of data on quality and student success indicators by coupling them with opportunity and resource indicators.
- Identify potential evidence-based resources ahead of time that can support schools in improving performance.
- Develop an accountability plan that provides funding and support, such as professional development and resources for identifying, adapting, and studying evidence-based programs, to the schools that need it.
From the vantage point of our organization, which has created an evidence-based, innovative, multiple-method SEL/noncognitive skills assessment system called Tessera™, I’d respond to these emerging themes about NAF incorporation under ESSA with the following thoughts.
A: Include SEL Measurement in a Basket of Multiple Non-academic Indicators: ESSA state commissions should certainly still, even in the face of the skeptics, consider noncognitive assessment systems as one among a set of additional non-academic indicators. Sound research evidence has established the enormous importance of SEL skills and the effectiveness of their measurement. Prudence dictates that not all eggs be placed in the SEL measurement basket alone; as one of multiple factors, however, high-quality SEL measurement is something states shouldn’t hesitate to include. Recall the evidence from the Jackson research cited at the top: our accountability systems need to better recognize the impact of schooling on noncognitive skills, and considering chronic absenteeism alone, for instance, doesn’t go far enough.
B: And/or Include SEL Measurement for Interim Assessment: Even if states choose not to select SEL/noncognitive assessment systems like Tessera for their accountability indices, it is essential for policymakers and educators alike to understand that such systems still have a vital role in districts and schools, one made all the more important by ESSA.
Let’s imagine, for instance, a state that chooses chronic absenteeism as its singular or primary additional NAF accountability measure. Chronic absenteeism (and more particularly, a worsening of absenteeism within a student cohort) is a result, a lagging indicator if you will, of something that is at least partly unsuccessful about students’ schooling experience. Flagging this problem in schools is certainly valuable. But we need more information about why it is happening, and, as we learn why, we need interim measures of how students are doing so we can prevent the problem from occurring in the first place.
Just as with academic achievement data, there is a call for appropriate interim and formative assessments to identify opportunities for improvement and to monitor growth. Schools and districts held accountable for chronic absenteeism (our example here) will come to place a much higher priority on collecting and using social and emotional learning data in order to influence their results and improve their overall school performance.
As the Chiefs for Change report explains, noncognitive skill “measures may be helpful to examine as part of a diagnostic review process or of school-level reporting.” (Chiefs for Change, 2016, p.25)
This is also the recommendation of Transforming Education in its recent report:
“Use MESH data for formative purposes, while continuing to explore other potential uses for future years: We recommend that leaders gather and examine several years’ worth of data before deciding whether to incorporate MESH into a formal accountability system. In the meantime, states and LEAs should capitalize on ESSA’s flexibility and use MESH measures within needs assessments to target specific supports and interventions for struggling schools.”
C: Include Data Interpretation and Intervention Advice: Whatever indicators are chosen and implemented, state policymakers and the broader community of researchers, advocates, and others must work swiftly and vigorously to help districts become savvy in interpreting their NAF data and in implementing evidence-based interventions to address the gaps that are identified.
Chronic absenteeism and factors like it, simply put, aren’t enough. Though they do broaden school accountability beyond narrowly defined academic achievement, they offer far too little meaningful information for educators. Their data is basically binary (a student shows up or not), and knowing that you have a problem does nothing to remediate it, certainly not in the short term. In contrast, feedback from SEL and noncognitive skills assessment is immediately actionable on the front lines by the people who matter most: teachers! So let’s not get stuck again with measurement as an end in itself, as happened under ESSA’s predecessor, NCLB; let’s use data that can both provide accountability and inform action.
Tessera, from ProExam’s Center for Innovative Assessment, is a solution ready to be deployed in any and all of these ways: for accountability under ESSA, or as a valuable formative and interim assessment supporting the work of schools and districts focused on strengthening whatever NAF is being used. Tessera is accompanied by services that help educators use Tessera data for continuous improvement, and by evidence-based interventions to ensure student skills are effectively developed.