
I wrote this piece for my client organization, ProExam, with Jeremy Burrus, Ph.D.; it was originally published on the Getting Smart website and is reposted here.

The headlines shout that it can’t be done: that there aren’t effective, evidence-based methods for measuring noncognitive skills.

Our response: Yes it can and yes there are.

A front-page news article in The New York Times, "Testing for Joy and Grit? Schools Nationwide Push to Measure Students' Emotional Skills," prompted several swift follow-up pieces around the web.

It is excellent to see the effort and attention being dedicated to this subject. We now know that social and emotional skills, which overlap with what many call character strengths and others label noncognitive attributes, are at least as important as intellectual ability and cognitive aptitude for student and adult success in school, college, careers and life.

Social-Emotional Learning Efforts

Developing noncognitive strengths is something nearly every teacher addresses daily. Increasingly, schools, districts, networks and states are upping the ante for social-emotional learning (SEL), investing more time, energy and expense in these programs. Accompanying this investment is greater attention to evaluating what's working, and for whom, by collecting evidence and assessing needs, opportunities and impact.

Regarding SEL measurement, The New York Times quotes Noah Bookman, Chief Accountability Officer of California's CORE Districts: "This work is so phenomenally important to the success of our kids in school and life." Were it only so simple. Angela Duckworth, Ph.D., a University of Pennsylvania professor who has become widely known for popularizing the term "grit," is quoted in the piece with what will be for many readers the most salient takeaway: "I do not think we should be doing this; it is a bad idea."

Their concerns are reasonable: the specific measurement methods cited in the article do have limitations. We concur with reporter Kate Zernike in her statement that relying on “surveys asking students to evaluate recent behaviors or mind-sets, like how many days they remembered their homework, or if they consider themselves hard workers . . . makes the testing highly susceptible to fakery and subjectivity.”

We get it. Our colleague Rich Roberts, Ph.D., chief scientist at ProExam, a former principal research scientist at ETS and senior lecturer at the University of Sydney, literally wrote the book on faking and other problems with self-report surveys.

We are familiar with the concerns about what Zernike refers to as subjectivity, also known as reference bias. When you’re evaluating your own proficiency, or evaluating your growth over time, to what peer groups and standards are you comparing yourself?

Overcoming the Issues

Overcoming these obstacles is the task to which Dr. Roberts and his many colleagues and co-authors have devoted most of their careers. His team at ProExam's Center for Innovative Assessments is currently building Tessera, a multifaceted assessment solution that gives K-12 students and schools comprehensive reports on noncognitive skills and character strengths.

Contrary to the headlines, researchers have now firmly established that we can build and deploy multi-method assessment systems that are reliable, valid and largely insulated from faking and other problems.

Two methods the article overlooked are "forced choice" and Situational Judgment Tests (SJTs). In the former, respondents are asked to select which of several socially desirable statements (such as "I keep trying when something is hard") is most like them. Researchers have demonstrated that forced-choice items like these are both strongly predictive of success and extremely difficult to fake, because respondents cannot easily tell which answer will serve them best.

In the latter, respondents are presented with realistic scenarios and asked to rank several potential courses of action in the order they would choose them. These methods have been widely deployed in arenas outside of K-12 education, with demonstrated effectiveness for predicting academic achievement and future job performance. They, too, are hard to fake.
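For readers who like to see the mechanics, here is a minimal sketch in Python of how forced-choice and SJT items might be represented and scored. The item wording, trait labels and scoring weights are invented for illustration only; they are not Tessera's actual content or scoring algorithms.

```python
# Hypothetical illustration of forced-choice and SJT item formats.
# All statements, scenarios, traits and weights below are made up for
# this sketch; they do not reflect any real instrument's content.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ForcedChoiceItem:
    """Several equally desirable statements, each keyed to a trait."""
    statements: Dict[str, str]  # statement text -> trait it measures


@dataclass
class SJTItem:
    """A scenario with response options rated for effectiveness by expert judges."""
    scenario: str
    options: Dict[str, float]  # option text -> expert effectiveness rating


def score_forced_choice(item: ForcedChoiceItem, most: str, least: str) -> Dict[str, int]:
    """'Most like me' adds a point to its trait; 'least like me' subtracts one."""
    scores = {trait: 0 for trait in item.statements.values()}
    scores[item.statements[most]] += 1
    scores[item.statements[least]] -= 1
    return scores


def score_sjt(item: SJTItem, ranking: List[str]) -> float:
    """Reward rankings that place expert-endorsed options near the top."""
    n = len(ranking)
    # Earlier positions earn more of each option's expert rating.
    return sum(item.options[opt] * (n - pos) for pos, opt in enumerate(ranking)) / n


fc_item = ForcedChoiceItem(statements={
    "I keep trying when something is hard": "perseverance",
    "I get along well with classmates": "social_skills",
    "I ask questions about new topics": "curiosity",
})

sjt_item = SJTItem(
    scenario="A group project is due tomorrow and one teammate has done nothing.",
    options={
        "Calmly ask the teammate how you can help them finish their part": 1.0,
        "Do their part yourself without saying anything": 0.4,
        "Complain to the teacher immediately": 0.2,
    },
)

print(score_forced_choice(fc_item,
                          most="I keep trying when something is hard",
                          least="I ask questions about new topics"))
print(score_sjt(sjt_item, ranking=[
    "Calmly ask the teammate how you can help them finish their part",
    "Do their part yourself without saying anything",
    "Complain to the teacher immediately",
]))
```

The key idea the sketch shows is that neither format asks students to rate themselves directly, which is what makes the responses harder to inflate.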

These measures are most effective when they are combined, or "triangulated," measuring multiple skills with multiple methods. Indeed, it seems likely that a range of these newer methods (including forced choice and SJTs) will be used and researched both nationally and globally by measurement scientists at NAEP and PISA.

Teachers, counselors, parents and principals everywhere are already working hard to better support their students through more formative assessment of noncognitive skills, and their work can only be enhanced by more and better measurement instruments. The data these instruments generate will help district leaders see whether certain schools need more resources and support, whether some student groups are progressing more slowly in developing these strengths, and whether particular initiatives are having the desired impact.

With more informative data, principals can better determine whether an expensive or time-consuming SEL program up for renewal should be continued. Counselors can better decide whether additional training time is best spent on developing students' social skills or their perseverance. Teachers can weigh whether to prioritize curiosity, communication or resilience when planning new curricular programs for the coming year.

Should There Be Concern?

Are the experts whose concerns are cited in The New York Times article, and in the many outlets recirculating their critiques, right to raise questions and express alarm? Yes, they are. Many of the methods currently being promoted and implemented do have limitations. It's also true that nobody believes social and emotional learning measurement should be employed in the kind of over-the-top, high-stakes and punitive assessment systems to which academic achievement testing was put in the extremes of the No Child Left Behind era.

But is there nonetheless a genuinely profound opportunity to measure these important skills and strengths, using multiple and innovative methods? Yes there is.

And do we have a responsibility–even an obligation–to implement effective measurement and assessment of these profoundly important social and emotional skills and character strengths in our students? Yes we do.

----

We are still looking for a few more middle schools to volunteer to participate in a pilot of our instrument in development (Tessera), at no expense, as part of our research to keep improving this kind of measurement. Little is required of schools other than providing about 45 minutes for students to take the fun and friendly assessment online. Contact me at jonathanemartin@gmail.com.