This is the third of four posts about the USC Rossier Attributes that Matter Conference.
Morning Sessions: Non-cognitive Variables in Action and Attributes of Good Students and Good Professionals
Where Bill Sedlacek (see previous post) laid out the intellectual concept of noncognitive assessment with a bit of history and a lot of theory, sharing his decades of research and his passionate advocacy, the following two sessions took us from theory to practice, as five university administrators and researchers told us about the fascinating work they’d done in this field.
Two bold and innovative directors of admissions at the university level (Oregon State and DePaul) came to report that, despite their best efforts, their experiments with noncog assessment have had only very limited success in predicting student performance on campus.
As Eric Hoover reports in the Chronicle of Higher Ed about Noah Buckley’s leadership at OSU:
In 2004 the university added to its application the Insight Résumé, six short-answer questions. One prompt asks applicants to describe how they overcame a challenge; another, to explain how they’ve developed knowledge in a given field.
The answers, scored on a 1-to-3 scale, inform admissions decisions in borderline cases, of applicants with less than a 3.0 GPA. “This gives us a way to say, ‘Hey, this is a diamond in the rough,'” Mr. Buckley says. For students with GPAs of 3.75 or higher, the scores help determine scholarship eligibility.
The Insight Résumé is a work in progress, Mr. Buckley says.
Reading 17,000 sets of essays requires a lot of time and training. Meanwhile, he believes the addition has helped Oregon State attract more-diverse applicants, but it’s hard to know for sure. A recent analysis found that although the scores positively correlated with retention and graduation rates, they did not offer “substantive improvements in predictions” of students’ success relative to other factors, especially high-school GPAs.
Details about the Insight Résumé can be found in the slides above. It includes six short-answer questions asked as part of the admissions application:
• Leadership / group contributions
• Knowledge in a field / creativity
• Dealing with adversity
• Community service
• Handling systemic changes / discrimination
• Goals / task commitment
Similarly at DePaul, as at OSU, conclusive evidentiary results still lie in the future. But because DePaul is now standardized-test optional, the data generated by the noncog assessment are an important piece of the picture the university gathers about each student, at least for those who participate in the optional additional assessment.
Hoover again in the Chron:
Elsewhere, proponents of noncognitive assessments say such tools will become more necessary as applicant pools grow more diverse: Many underrepresented minority students struggle on the SAT but excel in other ways.
“This gets us out of the habit of talking about students as a 3.8, 29 ACT,” says Jon Boeckenstedt, associate vice president for enrollment at DePaul University, which now lets applicants write short responses to four essay questions, also based on Mr. Sedlacek’s research, in lieu of submitting test scores. “If nothing else,” Mr. Boeckenstedt says, “this allows us to think of students as multidimensional.”
Although only 5 percent of this fall’s incoming class completed the essays, Mr. Boeckenstedt believes the option sends an important message to students.
“So many places miss out on good kids, and, in turn, so many good kids rule themselves out, based on test scores alone,” he says. “We have to break out of the traditional way of evaluating what makes someone capable or smart or talented. Universities are supposed to evolve.”
Boeckenstedt repeatedly emphasized the limitations of any type of prediction: predicting student success in college, particularly the success of disadvantaged students, is about as easy as predicting the weather.
Boeckenstedt confessed to being a big fan of data visualization, so it is worth scrolling through the PowerPoint slides embedded atop the post to see the many things he has done to bring his data to life.
The presentation above is also very useful for schools considering experimenting in this direction, because of the detailed specifics they provide about how they “operationalize the assessment.”
For instance at DePaul, the noncog essays include these questions:
- Describe your short and long term goals and how you plan to accomplish them.
- Describe a personal challenge you have faced, or a situation which you found to be particularly difficult. How did you react and what conclusions did you draw from the experience?
- Discuss how involved you have been with your community through volunteer, neighborhood, place of worship, or other activities. Give examples of playing a leadership role in your school or community.
- Think about the interests you have pursued outside of your high school classes. Describe any knowledge or mastery of skills you have gained as a result.
But the work of determining causation and correlation in the mounds of data they’ve assembled is still underway. Boeckenstedt conveyed this with the following cartoon.
The results out of DePaul thus far are very modest.
- High school GPA is still the most significant factor for predicting first year success, by a considerable margin.
- There is evidence that higher DIAMOND scores help predict first year success, and, in some cases, retention, for students with lower income, students of color, and students with lower ACT scores.
- Preliminary findings suggest that the DIAMOND scores can effectively bring additional information into the admissions review that is not statistically related to applicants’ socioeconomic and racial/ethnic background.
- DIAMOND scores appear to help provide a useful, more holistic assessment of the likelihood of student success, especially for:
– Students with lower HSGPA
– Minorities with lower HSGPA
– Students with lower standardized test scores
– Chicago Public School students
– Students with lower HSSES Index
– Male Pell Grant Recipients
Boeckenstedt ended his report this way:
In conclusion, there is no silver bullet when explaining and predicting human behavior.
It is possible to believe in the power of the variable while realizing that the instruments to measure it may not be as robust as we like.
It is a little hard for me to parse that last sentence, but here is my best effort: we can believe in the power of the variable (noncog attributes) on faith (faith which, in fairness, is supported by other studies and research) while still seeking the instrument that measures this variable most effectively, recognizing that in the interim, the results we gather from our current instruments do not themselves provide evidence of the variable's power.
It needs to be recognized, and repeated from a previous post, that even when it is not a compelling predictor, noncog assessment can be an asset in the work of supporting students through their undergraduate or K-12 journeys. As was reported about the OSU research,
In the future, the university realizes they must ask themselves if they should continue to use the IR (noncog assessment) as a filter. They see an opportunity to use the identification of non-cognitive deficiencies to provide support, building competency through courses for students who score low on various components of the assessment.
Other opportunities to support students include early-alert monitoring (providing a support system), advisor flagging and training, intentional linking to existing programs and services, and new courses to support the development of deficient areas.
Next up was a useful short presentation by Steve Kappler from ACT (above), who shared research that, more than most other sessions, was directed squarely at K-12.
It is necessary to scroll through the presentation to get to the good stuff. For our purposes, we are interested primarily in slides 21-28, on the topic of Academic Behavioral Readiness, as measured by the ACT ENGAGE self-survey, which measures motivation, social engagement, and self-regulation. Perhaps many readers here are familiar with ACT ENGAGE, but it is a fascinating tool, focused as it is on the things we have been discussing as most important in the noncog domains.
I’ve embedded the ENGAGE brochure below, and if there are any readers here who have used ENGAGE, please leave a comment with your thoughts about its value.
ACT research, provided in the slides above, demonstrates a strong correlation between student self-assessed motivation and high school GPA, and a strong inverse correlation between self-regulation and the number of disciplinary events. Most correlated (and most tautological, it would seem to me) are student self-reports of social engagement and the number of school extracurriculars in which a student participates.
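For readers curious what such a correlation analysis actually involves, here is a minimal sketch. The student numbers below are invented purely for illustration, not drawn from ACT's research; the scales (motivation on 1-6, GPA on 4.0) are assumptions of the example.

```python
# A minimal sketch of correlating a self-reported motivation score with
# high school GPA, using the standard Pearson coefficient.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical students: self-reported motivation (1-6 scale) and HSGPA.
motivation = [5.5, 4.0, 3.2, 5.9, 2.8, 4.7, 3.9, 5.1]
hsgpa      = [3.9, 3.1, 2.6, 3.8, 2.4, 3.5, 3.0, 3.7]

# A value near +1 indicates a strong positive relationship; near -1, a
# strong inverse one (as reported for self-regulation vs. discipline events).
print(round(pearson_r(motivation, hsgpa), 3))
```

Of course, as Boeckenstedt's results remind us, a correlation like this says nothing by itself about whether the score adds predictive power beyond what HSGPA already provides.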
I’m intrigued by ACT ENGAGE, and believe, at least at first blush, that it belongs in a portfolio of options to be considered by schools seeking to expand their evaluation of students’ noncog qualities, particularly these important attributes of motivation, self-regulation, and social engagement. I’d hate to think, though, that its predictive nature is self-confirming and reinforcing: you’d hate to have a student receive a score displaying low marks on any one of these three, particularly the first two, and then see the psychology of their actions conform to their “diagnosis.”
January 23, 2013 at 6:12 am
Is there still some truth to the idea that family background is a predictor of success in college? I am sure that if that’s so, there are many exceptions, as I know some of them personally, but generally speaking, on the whole, all things being equal (which they seldom are), I wonder if we might consider looking outside of “academic” measures to measure academic success. Consider, for example, the starting point for many students and how much progress they make. The B student who goes to an A has made some progress, but the F student who goes to a C may have gained much more than the letter grades indicate. I recall a measure called residual gain that tried to equalize the starting points so we could have a more accurate measure of progress. Maybe it’s something like a handicap in golf, although I don’t play that game.
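The residual gain idea mentioned above can be sketched briefly: regress post-test scores on pre-test scores, then treat each student's residual (actual minus predicted post-score) as a progress measure that adjusts for the starting point. The scores below are made up for illustration only.

```python
# Residual gain sketch: a student whose post-score beats the prediction
# implied by their starting point gets a positive gain, even if their raw
# score is below the group mean.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

pre  = [55, 62, 70, 78, 85, 91]   # hypothetical pre-test scores
post = [70, 68, 75, 80, 88, 93]   # hypothetical post-test scores

a, b = fit_line(pre, post)
residual_gain = [y - (a + b * x) for x, y in zip(pre, post)]

for x, y, g in zip(pre, post, residual_gain):
    print(f"pre={x} post={y} residual_gain={g:+.1f}")
```

In this invented data, the student who started at 55 and reached 70 shows a positive residual gain, while the student who started at 62 and slipped to 68 shows a negative one, which is exactly the "equalize the starting points" effect the commenter describes.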
We had programs called Head Start and ABC (A Better Chance) that were intended to give disadvantaged students opportunities to succeed. Without some kind of intervention it was fairly clear they would not make it in the academic world and would not have many opportunities in the future either.
My point is: how about looking at other areas where there is a high degree of reliability in forecasting outcomes? Even weather-prediction models have become much more effective, and I think they do that by gathering data from a whole array of sources and putting it all together. There are other examples from medicine, physics, architecture, and engineering that have high degrees of success in predicting outcomes. Perhaps someone should look at those disciplines for examples that might be transferable to education.