Pam Moran and I shared these slides and facilitated this conversation today at EduCon; our thanks to the attendees for the rich and meaningful conversation.
January 27, 2013
January 22, 2013
This is the third of four posts about the USC Rossier Attributes that Matter Conference.
Morning Sessions: Non-cognitive Variables in Action and Attributes of Good Students and Good Professionals
Where Bill Sedlacek (see previous post) laid out the intellectual concept of noncognitive assessment with a bit of history and a lot of theory, sharing his decades of research and his passionate advocacy, the following two sessions took us from theory to practice, as five university administrators and researchers told us about the fascinating work they’d done in this field.
Two bold and innovative directors of admissions at the university level (Oregon State and DePaul) came to report that, despite their best efforts, their experiments with noncog assessment have had only very limited success in predicting student performance on campus.
As Eric Hoover reports in the Chronicle of Higher Ed about Noah Buckley’s leadership at OSU:
In 2004 the university added to its application the Insight Résumé, six short-answer questions. One prompt asks applicants to describe how they overcame a challenge; another, to explain how they’ve developed knowledge in a given field.
The answers, scored on a 1-to-3 scale, inform admissions decisions in borderline cases, of applicants with less than a 3.0 GPA. “This gives us a way to say, ‘Hey, this is a diamond in the rough,'” Mr. Buckley says. For students with GPAs of 3.75 or higher, the scores help determine scholarship eligibility.
The Insight Résumé is a work in progress, Mr. Buckley says.
Reading 17,000 sets of essays requires a lot of time and training. Meanwhile, he believes the addition has helped Oregon State attract more-diverse applicants, but it’s hard to know for sure. A recent analysis found that although the scores positively correlated with retention and graduation rates, they did not offer “substantive improvements in predictions” of students’ success relative to other factors, especially high-school GPAs.
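The OSU finding—that the scores correlated with retention but offered no “substantive improvements in predictions” over high-school GPA—is an incremental-validity question, and the shape of that analysis is easy to sketch. The code below uses entirely synthetic data (the sample size, effect sizes, and variable names are my assumptions, not OSU’s): it compares the R² of a model predicting college GPA from high-school GPA alone against a model that adds an Insight-style score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic illustration (not OSU data): high-school GPA, an Insight-style
# score on a 1-3 scale, and a college-GPA outcome driven mostly by HS GPA.
hs_gpa = rng.uniform(2.0, 4.0, n)
insight = rng.integers(1, 4, n).astype(float)   # scores 1, 2, or 3
college_gpa = 0.8 * hs_gpa + 0.05 * insight + rng.normal(0, 0.3, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(hs_gpa[:, None], college_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, insight]), college_gpa)
print(f"HS GPA alone:           R^2 = {r2_gpa:.3f}")
print(f"HS GPA + Insight score: R^2 = {r2_both:.3f} (gain: {r2_both - r2_gpa:.3f})")
```

When the gain in R² is negligible, as in this simulation, the new measure “works” in the correlational sense while still failing the incremental-validity test—exactly the pattern the OSU analysis describes.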
Details about the Insight Résumé can be found in the slides above; it comprises six short-answer questions asked as part of the admissions application:
- Leadership / group contributions
- Knowledge in a field / creativity
- Dealing with adversity
- Handling systemic changes / discrimination
- Goals / task commitment
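Per the Chronicle passage above, the scores play different roles at different GPA bands. A minimal sketch of that triage logic (only the thresholds come from the article; the function name and the middle-band wording are my assumptions):

```python
def insight_role(gpa: float) -> str:
    """Where an Insight Résumé score (1-3) enters the process, per the
    thresholds in the Chronicle article; the middle-band behavior is an
    illustrative assumption, as the article does not specify it."""
    if gpa < 3.0:
        return "borderline: score informs the admissions decision"
    if gpa >= 3.75:
        return "high: score helps determine scholarship eligibility"
    return "middle: no special role stated in the article"

print(insight_role(2.8))
print(insight_role(3.9))
```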
Similarly at DePaul, as at OSU, meaningful evidentiary results still lie further in the future. (more…)
January 18, 2013
[graphic from Digital Learning Now]
This post continues a small project here at 21k12 of viewing the coming Common Core Standards through a backwards prism: the testing assessments that will evaluate student and school success at learning and teaching the Common Core standards. These new assessments sit at a junction of topics I’m interested in and working on regularly: integrating technology, next-generation and digitally enhanced assessment, computer-adaptive assessment, and performance task assessment.
These new Common Core (CCSS) assessments are the product, in part, of Secretary Arne Duncan’s call for a new generation of assessments, “Assessment 2.0,” as he calls it, about which I have written before. To advance this vision of moving “beyond the bubble,” the US DOE is spending, via Race to the Top funding, more than $300 million to develop new kinds of tests and testing systems, split between two major programs, PARCC and Smarter Balanced.
The assessment consortia are drawing on new advances in technology, cognitive science, and measurement as they develop this improved generation of assessments.
They hope these new systems will address concerns about existing state assessments—that many assessments measure skills too narrowly; return results that are “too little, too late” to be useful; and do not adequately assess whether students can apply their skills to solve complex problems, an ability students need to succeed in college, the workplace, and as citizens.
Both tests are administered digitally and online, and in most states and districts will require a massive technological infrastructure upgrade to be implemented. Digital, online administration offers many advantages, including adaptive testing (currently intended for Smarter Balanced only, not PARCC) and faster return of results to teachers for instructional purposes.
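For readers new to computer-adaptive testing, the core idea is simple: each response updates an estimate of the student’s ability, and the next item is chosen to match that estimate. The toy loop below illustrates the idea only; it is not either consortium’s actual algorithm, and the item difficulties, update rule, and step size are all invented.

```python
# Toy computer-adaptive testing loop: illustrative only, with invented
# item difficulties and a crude fixed-step ability update.
items = sorted([0.8, -0.5, 0.0, 1.5, -1.2, 0.3])  # item difficulties (logit-like scale)

def next_item(ability: float, remaining: list) -> float:
    # pick the unused item whose difficulty best matches the current estimate
    return min(remaining, key=lambda d: abs(d - ability))

ability = 0.0
remaining = list(items)
for correct in [True, True, False]:  # simulated right/right/wrong responses
    item = next_item(ability, remaining)
    remaining.remove(item)
    ability += 0.5 if correct else -0.5  # crude update; real CATs use IRT models
print(ability)  # → 0.5
```

Because each student sees items targeted near their own level, adaptive tests can estimate ability with fewer items than a fixed form—one reason Smarter Balanced chose this design.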
Eight questions worth asking about the new assessments:
1. Will they come on-time or be delayed, and will the technology be ready for them? Although the test design is funded (enormously), the technological infrastructure upgrades are not externally funded, and it remains a very open question whether and from where this funding will come. If districts are unable to meet the requirements, will the 2014-15 launch date for these digital and online tests be postponed?
Engaging Ed fears they will:

And I predict some radical changes will be made to Common Core assessments, as it becomes clear #urbaned schools can't afford it. — Engaging Educators (@engaginged) January 7, 2013
Don’t phase in. With two years left to prepare, the combination of a long test window and supporting outdated operating systems allows almost all schools to support online testing now. Going further to support paper-and-pencil testing in and past 2015 is unnecessary, expensive, and reduces comparability.
It is also unwise for districts to seek a compromise by using less than 2:1 ratios of computers to students. Imagine schools trying to use their current computer labs to assess their students: it will take 12 disruptive weeks to roll all students through the labs, and the labs themselves won’t be available for any other learning during that time. (more…)
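The arithmetic behind that kind of claim is worth making explicit. With illustrative numbers of my own choosing (not figures from the post), a single shared lab bottlenecks testing for weeks:

```python
# Back-of-envelope lab-throughput check; every number below is an
# illustrative assumption of mine, not a figure from the post.
students = 900
lab_seats = 30
sessions_per_student = 4      # a multi-part test means several sittings each
sessions_per_day = 3          # periods per day the lab can host testing
days_per_week = 5

total_sessions = students * sessions_per_student
sessions_per_week = lab_seats * sessions_per_day * days_per_week
weeks = total_sessions / sessions_per_week
print(f"~{weeks:.0f} weeks of continuous lab use")  # → ~8 weeks
```

Longer sessions, make-up days, and machines out of service push a figure like this toward the 12 weeks cited above—and during all of it, the lab serves no other learning.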
January 18, 2013
Above are the slides for my presentation today to the Association of Colorado Independent Schools heads and senior administrators, a three-hour workshop. Sadly, I (again!) made the mistake of trying to stuff in too much information, and several of our intended activities and videos had to be cut.
Below are, first, some of the key links to which I referred, and below that, some of the videos I showed or intended to show as part of the presentation.
A very valuable reference is the excellent presentation Measuring What We Value by Lyons and Niblock. (Note: I borrowed/adapted a small number of these slides for integration into my presentation; my thanks to Lyons and Niblock.)
My thanks to Lee Quinby, the very fine ACIS executive director, and all those who took the time for the session this morning.
- Grant Wiggins on Measuring Creativity
- Will Richardson Graph on the Immeasurable
- Hubbard’s How to Measure Anything
- 21k12 (my blog) posts on CWRA
- 21k12 posts on HSSSE
- 21k12 posts on MAP
- 21k12: PARCC testing and the potential to transform learning
- SSATB Think Tank on the Future of Admissions Assessment
- SSAT Think Tank Blog
- Choate Student Self Assessment
- Duckworth Grit Scale: a do-it-yourself online tool
- PARCC sample assessment questions
- Open Computer Testing resources
January 18, 2013
So much of what really matters in education just can’t be measured. — Independent school educators everywhere
Count me in. The words above are ones I’ve uttered not dozens or scores but hundreds of times during my 15 years of independent school administration, and I very much believe I am in good company; after all, the sentiment (“not everything that counts can be counted”) is famously attributed to Albert Einstein. Indeed, how can I argue with Albert Einstein?
But perhaps I am wrong. I’ve been enjoying reading this month a book that shakes my conviction that there is much of value that cannot be measured, and that gives very good guidance on how we can improve the way we capture in data just about anything we desire to know more about. The book is How to Measure Anything by Douglas Hubbard, and although in my experience it is not much discussed in educational circles, I think it should be.
Grant Wiggins, author of the essential education book Understanding by Design, is a fan of this book, and directed me to Hubbard’s work in a blog post entitled “Oh, You Can’t Measure That.”
Recently, I read a great book that might be of interest to anyone who wants to get beyond a knee-jerk reaction about what can and can’t be measured. The book makes the point from the get-go, in its title: How to Measure Anything: Finding the Value of Intangibles in Business, by Douglas Hubbard. Don’t let the “in Business” part throw you off. Almost everything in the book speaks to educational outcomes.
Hubbard writes with an axe to grind, and what becomes clear in the reading is that education is far from the only field or profession where managers frequently express the view that something, or most things, can’t be measured. This is Hubbard’s bête noire, one he is determined to confront with this book.
Often an important decision requires better knowledge of the alleged intangible, but when an executive believes something to be immeasurable, attempts to measure it will not even be considered.
As a result, decisions are less informed than they could be. The chance of error increases. Resources are misallocated, good ideas are rejected, and bad ideas are accepted.
Hubbard embeds as foundations to his argument three genuinely inspiring and impressive stories of measurement—times when individuals generated creative, ingenious methods for measuring something thought to be immeasurable—most famously and wondrously, Eratosthenes’ uncannily accurate measurement of the circumference of the earth two hundred years before the Common Era, using nothing more than shadows of the sun.
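Eratosthenes’ calculation itself is compact enough to reproduce. The noon shadow at Alexandria made an angle of about 7.2 degrees (1/50 of a full circle) at the moment the sun stood directly overhead at Syene, and the two cities were traditionally reckoned about 5,000 stadia apart along a meridian; scaling the distance by the fraction of the circle gives the whole circumference.

```python
# Eratosthenes' circumference estimate, in modern notation. The 7.2-degree
# shadow angle and the 5,000-stadia Alexandria-Syene distance are the
# traditionally reported figures.
shadow_angle_deg = 7.2          # 1/50 of a full 360-degree circle
distance_stadia = 5_000

# The shadow angle equals the fraction of the full circle separating the
# two cities, so scaling the distance by 360/angle yields the circumference.
circumference_stadia = distance_stadia * 360 / shadow_angle_deg
print(circumference_stadia)  # → 250000.0
```

At the commonly cited 157–185 meters per stadion, 250,000 stadia lands remarkably close to the true circumference of roughly 40,000 km—measured with nothing but shadows and a known distance.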
December 20, 2012
“We still really don’t know how to assess problem-solving,” I heard a university professor of engineering say last week. It resonated, because it is clear to me that while we all want to do more to educate our students in solving complex problems and thinking creatively, and while we know the importance of assessment in driving that instruction, we nevertheless stumble in our clarity about what and how we ought to assess these things.
Most often the books I write about here are what might be viewed as the superstructure books– the writing about the future of learning and the most innovative practices for reinventing what and how we should be teaching.
But sometimes it is useful to return to the foundations, and firm up our terms and concepts at more basic, but critical, levels— indeed, if we don’t do so, the superstructures will be that much more unwieldy.
This 2010 title, from ASCD, is exactly that, and I hope readers will forgive the “primer” nature of this post. It seems to me that schools that simply do the work of unifying and making more consistent their language and practice around assessing higher order thinking skills will be well poised to then experiment, iterate, and innovate in this essential realm.
Brookhart begins by defining the core elements of what we mean by higher order thinking:
- Transfer: relating learning to elements beyond those students were taught to associate with it.
- Critical thinking (judgment): reasonable, reflective thinking focused on what to believe or do, applying wise judgment and producing an informed critique.
- Problem solving, including creative thinking: the non-automatic strategizing required to solve an open-ended problem, which involves identifying the problem and creating something new as a solution.
establishing three core components of what exactly effective assessment entails:
- Specify clearly and exactly what it is you want to assess.
- Design tasks or test items that require students to demonstrate this knowledge or skill.
- Decide what you will take as evidence of the degree to which students have shown this knowledge or skill.
and elaborating with three more principles of higher order thinking assessment:
- Presenting something for students to think about, usually in the form of text, visuals, scenarios, resource material, problems.
- Using novel material–material new to students, not covered in class and not subject to recall.
- Distinguishing between level of difficulty (easy versus hard) and level of thinking (lower order thinking/recall versus higher order thinking), and controlling for each separately. (more…)
December 17, 2012
This month I’ve been in conversation with an outstanding school superintendent preparing his district for PARCC assessments. As many understand, PARCC (and its counterpart, Smarter Balanced) requires that districts prepare their schools with technology sufficient for their students to take what will be entirely online, computer-based, high-stakes tests.
“Of course,” he explained to me, “we need to become PARCC-ready. But that is just the tip of the iceberg. If we are going to invest in these substantial, even enormous technology upgrades, it would be foolish not to use this new technology in ways beyond the new tests.
“PARCC tech upgrades give us an opportunity to transform our schools into places of 21st century, student-centered learning, and this is an opportunity not to be wasted.”
In addition, he added, preparing students for success on PARCC is not just a matter of ensuring the tech is there for them to take the test; it needs to be there and in regular use, so that students develop comfort and confidence with it.
This is a tremendous opportunity, and we can only hope that every superintendent recognizes, as well as this one has, the chance being presented: to leverage an externally imposed new test and new test format, even when that test is perhaps in and of itself unwelcome, to transform the equation of classroom learning toward 21st century, student-centered programs that put technology in the hands of students.
This has also been recognized recently in a valuable new white paper from SETDA, the State Educational Technology Directors Association, which I’ve embedded below.
The report has many important messages. First, districts must carefully focus and determine their current technology’s capacity for supporting the new tests.
While there are compelling advantages to a technology-based assessment system as compared to current paper-and-pencil-based approaches, schools and districts will need to validate their technology readiness for 2014-15.
Validation for technology readiness is important even for states and districts currently administering tests online, as these Common Core assessments are being designed to move beyond multiple-choice questions to technology-enhanced items to elicit the higher order knowledge, skills, and abilities of students.
An article last spring in THE Journal, “Technology Challenges and the Move to Online Assessments,” also explored these issues.
The 2014-15 school year is a long way off, isn’t it? That depends on your perspective. If you are an eighth-grader, Friday night is a long way off, but if you are a technology leader in a school district or a state, the 2014-15 school year may be here all too soon.
Critically, the SETDA report insists that the PARCC/Smarter Balanced minimum specs must not be the only factor considered when these enormous investments are made. (more…)