[slides shared with permission of the authors]

The presentation above, by Doug Lyons and Andrew Niblock, comes from a session that was a highlight of last spring’s NAIS Annual Conference. Sadly I was unable to attend, but I heard great things about it, and the slides carry much of that value.

This presentation covers terrain that I too spend a lot of time examining.  (For comparison, see my presentation to the Canadian Heads at their annual meeting last year.)

I am very well aware that there are many fine minds and outstanding educators who argue against measurement in education, or for dramatically reducing how much we measure learning, or who contend that much of what we most value is hard or effectively impossible to measure.

And certainly, there is a somewhat appalling misuse and abuse of student learning measurement data in the US today– of course there is.

But my ongoing position, aligned exactly with this high-quality presentation, is that we should work diligently to improve and correct the way we use learning measurement, not abandon or reject measuring learning altogether.  Indeed, to change education from its current course and bring it to a far more student-centered, 21st-century-oriented, technology-accelerated, and innovative place, we need data to support our campaign and to change current policies.

Among the things I appreciate about this presentation is its breadth: it looks both at internal, qualitative ways we assess learning AND at external, quantifiable ones.  We have to look at this topic broadly.  What gets measured gets done; what gets measured gets valued; we can’t manage what we can’t measure: these mantras are compelling and significant, and if we want to transform learning we have to transform what we assess and measure.

It is good too that it is built in part upon Criterion 13 of the Commission on Accreditation standards, which is essential to framing the issue of assessment in independent schools:

The Standards require the school to provide evidence of a thoughtful process, respectful of its mission, for the collection and use in school decision-making of data, both external and internal, about student learning.

This evidence will be required for the accreditation of all independent schools in coming years, as I believe it should be.

Some thoughts, comments, and observations on this presentation.

1.  I love the quotation from Ted McCain’s Teaching for Tomorrow, a highly valuable but, I fear, highly undervalued book on the topic.  As they quote McCain:

“we need to invert the conventional classroom dynamic: instead of teaching information and content first, and then asking students to answer questions about it second, we should put the question/problem first, and then facilitate students with information and guidance as they seek the answer and hold them accountable for the excellence of their solutions and of their presentation of their results”.

In my own 2008 research, visiting 21 schools and shadowing students at each, I found McCain proven right: putting problems first made a huge difference.

2.  I love the use of Clay Shirky’s story of the little girl looking for the missing TV mouse, which I use often, though I tell it just a little differently from the way it is rendered here.

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

3.  The video they include (below), of students at Science Leadership Academy discussing their school’s assessment, is indeed terrific.

Listen to the students talk about cooperative learning, project-based learning, and the way they teach each other (“never does the whole class present on the same subject”; that would be boring), about critiques, Moodle, and rubrics, and notice how well these students have mastered and integrated their understanding of the rubric, of quality feedback, and of trust.

“We are all trying to work together for the same goal, [learning and preparation] so we try to help each other out.” 

4.  I admire the presenters’ framing of “4 ways of knowing,” a framework that was unfamiliar to me and that offers a great way of thinking about how we know and demonstrate our conclusions.

[Slide images: Measuring What We Value, Lyons and Niblock presentation]

5.  The slides on the new school-day SAT enhanced reports introduce something new to me, which I intend to learn more about.

6.  The presenters here offer several ways to think about how schools can best report on standardized test scores.  As they note, schools most commonly report either average percentile scores (which I did for almost a decade during my elementary school headship years) or average grade equivalents at each grade level.

Instead, they suggest, be strategic and intentional: choose your norm group (public schools, suburban schools, independent schools, other); set and communicate your goal for where your students (the average or median student?) will place within that norm group, the top quintile, for instance, or the top third; and then chart your success toward that goal.  As an example:

[Slide image: Measuring What We Value, Lyons and Niblock presentation]
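
To make that kind of norm-referenced reporting concrete, here is a minimal sketch in Python of the underlying arithmetic. It is not drawn from the presentation: the scale scores, the norm-group cut scores (norm_cuts), and the top-quintile goal are all hypothetical, invented purely for illustration.

```python
# A minimal, hypothetical sketch of norm-referenced reporting: given student
# scale scores and assumed norm-group percentile cut scores, report what share
# of students reach the norm group's top quintile and where the median
# student falls. All numbers are invented for illustration.
from statistics import median

school_scores = [612, 645, 630, 598, 660, 641, 655, 620, 637, 652]

# Assumed norm-group cut scores: percentile -> minimum scale score at that cut
norm_cuts = {20: 585, 40: 605, 60: 622, 80: 640}

def norm_band(score, cuts):
    """Highest norm-group percentile cut the score meets or exceeds."""
    reached = [p for p, cut in cuts.items() if score >= cut]
    return max(reached) if reached else 0

in_top_quintile = sum(1 for s in school_scores if s >= norm_cuts[80])
print(f"{in_top_quintile / len(school_scores):.0%} of students placed in the "
      f"norm group's top quintile")
print(f"The median student scored at or above the norm group's "
      f"{norm_band(median(school_scores), norm_cuts)}th percentile")
```

Rerun against the same chosen norm group each year, a tally like this is what lets a school chart its progress toward the goal it declared.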

Now, I wasn’t present and am not sure whether they addressed the question, but what leaps to mind is the issue of value-add reporting, which I don’t see mentioned in the slides (apologies if somehow I overlooked it).  It is essential that we report how much our students have advanced from year to year, or test session to test session, relative to the norm group’s progress.

Value-add analysis is a key element of the reporting capacity of both the CWRA and MAP, and I’ve used it with each; it is obviously of critical importance in schools educating mostly highly prepared, high-achieving incoming students. You can’t take credit for your school’s success, nor can you glean how to improve your learning program, if all you are reporting is that students who arrived highly advanced in achievement are still performing at a high level.

The value your school added cannot be convincingly demonstrated to anyone except through a value-add analysis; or am I wrong?
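
For readers who want the gist of what a value-add analysis does, here is a minimal, hypothetical sketch in Python. It is not the CWRA’s or MAP’s actual methodology; the prior and current scores and the assumed norm-group growth table (expected_growth) are invented purely to illustrate the idea of comparing observed growth with the growth typical of similar students.

```python
# Hypothetical value-add sketch: compare each student's observed growth
# between two test sessions with the growth an assumed norm group typically
# shows for students who started at a similar level. All numbers invented.

# (prior score, current score) for a small hypothetical cohort
students = [(620, 641), (655, 668), (598, 622), (640, 649)]

# Assumed typical norm-group gain, keyed by the floor of the prior-score band
expected_growth = {500: 18, 600: 14, 650: 10}

def expected_gain(prior):
    """Typical norm-group gain for the band containing the prior score."""
    band_floor = max(floor for floor in expected_growth if prior >= floor)
    return expected_growth[band_floor]

value_added = [(current - prior) - expected_gain(prior)
               for prior, current in students]
mean_va = sum(value_added) / len(value_added)
print(f"Mean growth beyond the norm group's typical growth: {mean_va:+.1f} points")
```

A positive result says students grew more than similar students in the norm group typically do; reporting only the high ending scores would hide that distinction entirely.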

7.  Next on their agenda are the fascinating new opportunities emerging for our students to participate, in one form or another, in international comparison testing.  Our parents don’t want their children educated to be competitive just with other American kids, but with students from across the globe; if they read, as we all do, that US students in general perform well below those in Finland, Singapore, Hong Kong, Shanghai, and elsewhere, how much comfort can they take if we report that their kids are in the top tenth of US students?

As they share here, an opportunity is presenting itself for independent schools to administer a new TIMSS-like test, using actual released TIMSS questions (thank you, New York State disclosure laws!).  I am eager to follow this development and learn more about the program, which I believe Lyons and Niblock themselves are developing.

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

As regular readers here know, I am excited about an analogous program, the PISA-Based Test for Schools, which allows individual US public and private schools to administer an exact PISA-format test and then see exactly how their students rank relative to every national population.  As EdWeek reported:

“We just keep hearing how bad U.S. schools are and that we don’t measure up,” said James R. Hogeboom, the superintendent of the 10,700-student Lucia Mar district in California, in explaining his system’s plans to have one of its high schools take part. “Well, is that true or not true? And what do we need to improve our kids’ critical-thinking skills? I’d love to know that.”

8.  The next move in this comprehensive presentation is away from external testing, beginning with alumni surveying, which many schools are now doing more and more of and which is required by my most recent accrediting association, ISAS, one of only two independent school associations, they note, with this requirement.

Associated with alumni surveying is (as they note on one slide) seeking reports from the National Student Clearinghouse, which provides data on college graduation rates for a school’s graduating class cohorts.  I am fascinated by this tool, but have yet to meet someone who has used it effectively, and I am eager to hear a user report on the Clearinghouse.  Anyone?  Anyone?

9.  Parallel to alumni surveying is current student surveying, as with the High School Survey of Student Engagement, which I’ve administered several times and written about often.   It is a critical piece of the picture.

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

I’d only add that the Tripod Project, more expensive I think, bears consideration as an alternative to the HSSSE; it adds the additional perspective of teacher surveying as well.  Has anyone out there reading this used Tripod?  I’d love to hear about your experience.

10.  Demonstrations of learning are a really great way for schools to differentiate themselves, be transparent about their goals and intentions for student learning, and then hold themselves accountable with reportable internal data to track their progress.

Declare what you want your students to be able to do, and make it ambitious, and then track accomplishments.  I’ve written about demonstrations of learning before here.

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

11. After an overview of 21st century learning, the presenters take us to redefining the role and purpose of assessment:

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

12.  As an example of next-gen assessment, they share some useful information and guidance on one of my very favorite topics these days, performance task assessment, which they describe as follows:

Students assume roles in a scenario that is based in the “real world” and contains the types of problems they might need to solve in the future. The task requires critical thinking, analytical reasoning and problem solving. Communication skills are used in describing the solution.

So glad to see this get the attention it deserves.  As they point out, performance task assessment is a core component of the new assessments coming for the Common Core standards, embedded especially in the Smarter Balanced testing now being developed.  (The unlabeled map of states on slide 73 indicates the Smarter Balanced participating states.)

They also give a set of helpful examples of performance task assessments from the CWRA and other sources.  What they don’t discuss on the slides (and I don’t know how much they spoke to this while presenting) are some of the lingering issues with how these new online performance tasks are actually scored: they usually rely on automated scoring “robots,” which leaves something to be desired and has drawn criticism I am still working to think through.

13.  Next up: C-PAS and CBAL, both tools new to me.  C-PAS, along with its I’mReady assessment, comes from the University of Oregon and college/career readiness guru David Conley, whose book on the subject is definitive.

[Slide image: College & Career Readiness, I’mReady™]

The I’mReady assessment is built around four keys:

[Slide image: the four keys of the I’mReady™ assessment]

CBAL comes from ETS; it is a new initiative there to build a more 21st-century assessment, one designed to provide tools for assessment of, for, and as learning, that is, an assessment which:

  • documents what students have achieved (“of learning”),
  • helps identify how to plan and adjust instruction (“for learning”), and
  • is considered by students and teachers to be a worthwhile educational experience in and of itself (“as learning”).

[Image: ETS Research, About the CBAL Research Initiative]

CBAL, which it appears is still being piloted, aims to provide a comprehensive instructional management system aligned with the Common Core, overlapping with what I see happening with Smarter Balanced and PARCC.  As ETS explains some of its elements:

    The system will attempt to unify and create synergy among accountability testing, formative assessment and professional support. We envision this system having these key characteristics:

    1. Accountability tests, formative assessment and professional support will be derived from the same conceptual base. That base will be built upon cognitive-scientific research, Common Core or state standards and curricular considerations.
    2. The CBAL assessments will consist largely of engaging, extended, constructed-response tasks that are delivered primarily by computer and, to the extent feasible, automatically scored.
    3. Because of their nature, the CBAL tasks should be viewed by teachers and students as worthwhile learning experiences in and of themselves. Ideally, taking the test should be an educational experience, and preparing for it should have the effect of improving student domain competency, not just improving performance on the test.
    4. Accountability tests will be distributed over several administrations throughout the school year so that: (1) the importance of any one assessment and occasion is diminished; (2) tasks can be more complex and more integrative because more time is available for assessment in the aggregate; and (3) the assessments provide prompt interim information to teachers while there is time to take instructional action.

An example test question from CBAL:

[Slide image: an example CBAL test question]

14.  On to the ETS iSkills assessment, the concept of which I think is great, but the tool itself imperfect.  See my full post on iSkills here.

15.  And then creativity assessment is briefly mentioned; I would have loved to hear what they had to add to the creativity slides.  They cite Sir Ken Robinson and his well-known argument that creativity, tragically, declines across the course of K-12 schooling, and the presenters share that several states, Massachusetts and Oklahoma among them, are developing creativity assessments so as to raise the visibility and prominence of creativity teaching.  The one example of creativity measurement they give is the prominent tool, the Torrance Tests of Creative Thinking.

[Slide image: Measuring What We Value, Lyons and Niblock presentation]

This is and will be an ongoing subject of attention here at 21k12.

16.  This extraordinarily comprehensive presentation begins to wrap up with an overview of how assessment works at a series of “schools of the future,” which they explain share a set of common qualities: “urban public charter schools experimenting with a dramatically different view of teaching and learning; A Collaborative, Conceptual Model.”  (Except that one of the schools featured in this set, Avenues, is a private/independent school.)  The set includes schools with which I am very familiar and of which I am a huge fan, such as High Tech High and Science Leadership Academy, as well as Big Picture and the Microsoft School of the Future.

Three slides document the differences these schools exhibit from the norm, or from 20th century learning, perhaps best captured by the phrase I often associate with Tony Wagner: “Ability to use or apply information in new and/or novel settings most important (It’s not what you know, but what you can do with what you know)”

[Slide images: Measuring What We Value, Lyons and Niblock presentation]

Few topics are more important in the campaign to transform our schools into places of true 21st-century learning, into Schools of the Future, than transforming what and how we assess and measure student learning, and this presentation is an outstanding, comprehensive introduction, overview, and guide to the key tools and resources in this area.

I’ll be presenting on exactly this topic often in the coming months, including to independent school heads and senior administrators in Colorado next month and at the OESIS conference on February 1, and I look forward to doing my best to carry and extend this conversation widely and deeply.