(Disclosure: I wrote the following piece for new client ProExam, on whose blog it originally appeared.  People often ask what I'm up to; this is one of my newer and larger projects, consulting to ProExam on the development and promotion of their forthcoming tool.

This work, better school- and individual-level assessment and measurement of noncognitive skill and character development, primarily for formative purposes, is something I've been enthusiastic about for years, of course.  Readers here may remember my enthusiasm, dating back to 2013, for the Mission Skills Assessment, built by ETS for INDEX; I wrote the MSA user's guide and toolkit in 2014.  Rich Roberts, Ph.D., and Jeremy Burrus, Ph.D., the two ETS scientist-researchers who originally designed the MSA, are now at ProExam and have recently developed the next-generation tool described below.  Some of the evidence base and research underpinning this new product can be found in a paper I co-authored with Dr. Roberts for the Asia Society, "A Rosetta Stone for NonCognitive Skills.")

I'm assisting with recruiting schools interested in piloting the tool, at no expense, this winter/spring; if you are interested, contact me at jonathanemartin@gmail.com.

Understanding the Big Five Factors of Noncognitive Skills

Social Emotional Learning and Noncognitive Character Strengths Matter…and How We Measure Them is the Key to Their Improvement

Perhaps the greatest consensus in K-12 learning today centers upon the critical importance of student social and emotional learning and the development of their noncognitive character strengths—their skills for success in school and life.

This is not news to teachers.  Ask a preschool assistant teacher or ask an AP Physics teacher and you'll find resounding, even impassioned agreement: dependability, persistence, ambition, curiosity, and getting along with others matter as much as, and very often much more than, cognitive ability.  Education leaders have similarly embraced this understanding, with ASCD making the "whole child" its signature slogan and state and district leaders shifting the emphasis of schooling to skills and life success.

In the past decade or so, the common-sense point of view of teachers in the field and educational leaders has been emphatically endorsed by researchers, social scientists, and think tanks, including Nobel Prize-winning economist James Heckman, New York Times journalist Paul Tough, MacArthur "genius" prize winner Angela Duckworth, the Hewlett Foundation, the RAND Corporation, the National Research Council, the Brookings Institution, and the New America Foundation, just to name a few.

As the educational field works to strengthen its effectiveness in developing and implementing social and emotional curricula, in planning and guiding ongoing improvement in this arena and holding itself accountable for it, and in providing meaningful feedback to students on their growth and proficiency, nearly all involved increasingly perceive an enormous gap.  We lack effective assessment and measurement of social and emotional learning and noncognitive character strengths: the skills of success.

Nearly every individual and organization listed above can be cited to this effect: we lack the assessments we need.  The National Research Council, in its landmark, authoritative 2012 report, Education for Life and Work, declared: "In summary, there are a variety of constructs and definitions of cognitive, intrapersonal, and interpersonal competencies and a paucity of high-quality measures for assessing them."  In 2013, the much-discussed Gordon Commission report stated among its nine core arguments that "Assessment must fully represent the competencies that the complex world demands."

Noncognitive skills are shown to be more impactful on long-term success

The demand is clear, compelling, and increasingly urgent.

The aforementioned 2012 NRC report makes a vigorous recommendation that there be assertive efforts to develop new assessment techniques and tools to measure these critical “intrapersonal” and “interpersonal” competencies in students and the effectiveness of educational programs cultivating them.

That call is being echoed across the landscape by many, from the classroom teacher to the school principal to district and state leaders to federal policymakers, educational researchers, and evaluators. A sampling of the desires and demands for SEL program and student character competency measurement includes the following.

Even as we see a dramatic shift in how federal and state accountability functions, a much overdue shifting of the pendulum away from the excesses and counterproductive mechanisms of No Child Left Behind and aspects of Race to the Top, accountability metrics are being expanded to include more than narrowly defined academic achievement.  As the New York Times reported about the brand-new Every Student Succeeds Act (ESSA) (Dec. 2015), in the new accountability system "a student performance measure, like grit or school climate, has to be part of the evaluation equation."

Some districts got a head start on the new ESSA obligation, perhaps even paving the way toward it, by incorporating school culture, climate, and student SEL proficiencies into their NCLB waiver accountability indices.  As NPR’s Anya Kamenetz wrote about California CORE: “The CORE districts drew up a School Quality Improvement System that relied 60 percent on traditional academics, and 40 percent on ‘social-emotional and culture-climate’ factors: how well schools were faring in building nontraditional, noncognitive skills.”

Complementing new federal and state accountability systems for public education are new and more demanding accrediting obligations in all educational sectors, including the private/independent ones not subject to public education accountability.  For instance, the 19 associations organized into the international “Commission on Accreditation” have established a Criterion 13: “The standards require a school to provide evidence of a thoughtful process, respectful of its mission, for the collection and use in school decision-making of data (both internal and external) about student learning.”  For many schools, this accreditation obligation entails widening the evidence of student learning they collect to ensure they are truly honoring their missions, almost all of which commit the school to more than academic achievement alone.

There's more.  There's expanding interest from school board members in institutional dashboards that annually report on the school or district's progress on a wide array of the most important metrics, which usually include character competencies.  In the fast-growing world of charter schools, there is demand from all quarters—from charter authorizers, from foundation funders, from millionaire and billionaire donors (who pay great attention to the "ROI" of their philanthropic support), from comparison-shopping parents—for ever more data about the effectiveness of their alternative and competitive educational programs.  Private and independent schools too need to prove their worth in a cluttered marketplace.

And more still: Formative assessment is all the rage, thanks in part to the influence of John Hattie's Visible Learning movement, and accordingly many schools and districts are seeking to provide better and more frequent feedback to students about their SEL progress—and seek assessment tools and systems to do so.  Coming from the Carnegie Foundation for the Advancement of Teaching and elsewhere are demands for, and interest in, greater use of measurement for continuous improvement. The example of improvement measurement provided in Carnegie's recent book is "productive persistence."

Many school systems nationally are particularly interested in strengthening student development of academic mindsets such as self-efficacy and the growth mindset; others are equally or also attentive to resilience and perseverance as paramount for their students.  Yet another topic of fast-rising interest, in the wake of rising adolescent depression and self-harm, is student "wellness."  Work in all three of these domains demands better measurement methods to support educational initiatives designed to better prepare and support our youth and their healthy development.

Finally, many educational leaders are seeking new and more evidence-based SEL and character education frameworks.  They want more than a piecemeal solution, but they are wary of an off-the-shelf, “canned curriculum” which de-professionalizes their local initiatives.  Instead, they desire a sound and robust framework inclusive of an assessment system and evidence-based curricular and pedagogical resources that they can then deploy locally by their own lights.

The status quo is insufficient.

The demand is strong and fast-rising but is currently being met only by first-generation solutions: important first steps, yet sharply limited in fulfilling the multiple and manifold requirements of their purpose.

How so?

The constructs being measured lack an evidentiary basis.

This is not always the case: some tools measure constructs that rest on a sound evidentiary basis.  But, too often, the constructs selected for measurement are chosen for their popular appeal rather than compelling scientific evidence.  Little attention is given to whether they are firmly established by research as distinct and significant "factors": those that can be fully differentiated from one another, are malleable across time, are found in all populations, and are significant to success in school and beyond.

The measurement methodologies are flawed or sharply limited.

Student surveys and so-called self-report instruments ask students, on Likert-type rating scales, the extent to which they agree with a number of statements about themselves.  They are by far the most frequently employed assessment tools, and though they can serve limited purposes, they are distinctly flawed as a complete measurement solution.  They are, of course, highly susceptible to faking and coaching, rendering them nearly useless for meaningful accountability, accreditation, and other program-evaluation purposes.

They are also limited in more technical ways.  Many young people simply don't "know thyself" all that well; they don't know very much about what they are being asked (what does it mean to say I am "dependable"?), and they often lack good standards of reference against which to compare themselves.  Self-report reference bias operates in multiple dimensions: students surrounded by highly socially skilled peers might rate themselves lower than those surrounded by poor socializers, though the former may be far more skilled than the latter.  Students who know little about what it means and what it requires to be persistent may rate themselves far higher in this strength before being immersed in a comprehensive educational program about persistence than after. And there are known differences among sub-groups: for example, students from Asia tend to respond more modestly than those from the West or even developing countries.

Ratings by teachers and others are limited in some of the same ways, as well as in different ones.  They too are subject to potentially flawed incentives—teachers will understandably want to rate their students' growth highly—and to various so-called rater biases in the same way as student self-ratings.  Plus, many schools and districts find it difficult or impossible to add more tasks and time to the already demanding teacher workload.

Observational assessments can serve some specific and narrow purposes. However, they are both too time-consuming (and hence virtually impossible to implement at scale) and sorely lacking in reliability to accomplish most of what assessment seeks to provide. Moreover, if used to evaluate a program, the research designs need to be relatively complex and often quite costly, and the training time often required to make observations both valid and immune from still further biases (e.g., projection) is prohibitive.

The products are too narrow.

In addition to the flawed techniques used in most contemporary measurement tools, most of those available today are too narrow in what they seek to achieve and provide.  Even some of the more interesting or effective tools serve only a small set of grade levels, and/or fail to provide both individual-student and institutional (school/district/network/state) norm- and criterion-referenced scoring.  Rarely are their score reports designed to provide fast, intuitive, and actionable reporting to children, youth, parents, and educators.  Almost no tools provide competency-based badging or credentialing for demonstrated proficiencies; almost no tools provide their users institutional and growth guidance for improvement on each construct, aligned to the measurement findings.


Introducing the true next-generation assessment solution.

The ProExam Center for Innovative Assessments is pleased to present in 2016 a true next-generation solution to meet the extraordinary demand for social and emotional learning and noncognitive character strength measurement.

The constructs measured are soundly built upon the decades-old, extensively research-validated psychological framework known as the Big Five factors. The Big Five is used to structure facets that every educator and every researcher recognizes as vital, malleable, and proven meaningful for success, such as dependability, persistence, curiosity, and agreeableness.

The measurement approach breaks through the limitations of student survey and self-reporting systems, utilizing a robust, evidence-based, multi-method, multi-trait methodology incorporating reference-bias-busting situational judgment items and virtually fake-resistant "forced choice" techniques.  In 2013, RAND researchers rated an earlier assessment tool designed by the ProExam Center's Chief Scientist, Richard Roberts, Ph.D., and Principal Research Scientist Jeremy Burrus, Ph.D., as the best available for "measuring 21st century competencies."  The researchers explained that "by using multiple instruments to assess the same construct, developers can better disentangle sources of error and thereby increase the precision of the measurement."

The new ProExam product now in development takes assessment to a new level, offering many additional advantages not available in the tool so glowingly referenced in the RAND report three years ago.  Measurements will be available for grades six to twelve in 2016-17, and soon thereafter for upper elementary as well. Scoring reports will be provided in user-friendly, graphic designs for both students and institutions.  National norms will be provided in institutional reporting; student reports are being carefully designed by child development experts to ensure communications reinforce a growth mindset, avoid any negative "reification" effects, and provide children, teachers, and parents with evidence-based practices for progress in each construct.

ProExam is a national leader in the credentialing and digital badging space, and, as an added benefit, this tool will provide the opportunity for schools to have their students earn competency-based credentials for demonstrated high proficiency in each construct.

This is a truly comprehensive, wrap-around solution that meets the national and global demand for high quality social and emotional learning assessment.