This sixty-page guide is really several things in one.

  • It is in part a guide to this particular tool, the OECD Test for Schools (Based on PISA), a test in which individual schools, public and private, can participate.
  • It also provides some high-level treatment of how PISA testing aligns with 21st century skills and “Deeper Learning.”  See the Appendix.
  • Finally, in the first full section, “Leading Your OECD Program,” and in the Case Studies section, the assessment example is OECD testing, but the framework and the treatments can serve generally as guidance for how schools can best manage a new assessment tool project, using that new test or tool to advance student learning outcomes.

Enjoy.

 

It was terrific to have the chance this month both to see the keynote from Angela Duckworth at NPEA and to have 90 minutes sitting with her in a small group conversation with the SSATB Think Tank.

As many now know, she has become something of “the guru of grit” in the last year or two, particularly with the attention brought to her work by the writing of Paul Tough in his book and New York Times magazine cover story.  She is an Assistant Professor at the University of Pennsylvania.  I wrote about her work, her TEDx talk, and the Tough book previously here. 

Duckworth opened her keynote with the message that academic skill development is always interwoven with so-called “non-cog” skills.

The stuff kids need to learn in school is hard.  It’s really hard.  But it is not too hard.  Every child in my classroom– whether it took two hours or twenty hours– could learn this.  It isn’t quantum mechanics; it is Algebra.  In other countries most kids get it, because they have the expectation that everyone can do this and the attitude that it just takes a lot of work to get there.

“IQ is not the limiting factor for most of our children.” We shouldn’t tolerate lower expectations for some kids.

Algebra is hard in another way, too: psychologically. It is hard to persist when it is challenging.

“If you can build non-cog skills, you will boost academic achievement. It is NOT either/or, but BOTH/AND.”

The message about the value of persistence, of course, is not just for our kids: it is for all of us.  As she explained, tying it to her own work and to the work of everyone in the audience at NPEA doing the hard work of providing quality education to disadvantaged youth, “It’s not a one year or two year project for any of us in life, tackling something hard and trying to make a real difference.”

Grit is about “remaining loyal to your commitments.  Perseverance and Passion for long-term goals. Achievement = talent x effort. Anything multiplied by 0 = 0. Grit is about some talent but more about passion and perseverance.”

But we are all deceived, so much of the time, by the false impressions most others give off of gently gliding along the surface, like a duck with no worries.    “We need to show kids, and help them see, that below the waterline we are all paddling furiously.”

Duckworth emphasized the importance of not just teaching grit in some narrow method, but of deeply “Building a culture of grit, making it self-conscious and publicly visible for all.”

In an amusing and telling example, she shared the importance in Finland of a term roughly equivalent to grit, “sisu.”  There, she explained, sisu is surfaced constantly:  “How’s your sisu today?”  “I’m feeling a bit down in my sisu this week.”

Duckworth, speaking to an audience whose lives are devoted to helping students succeed in K-12 and collegiate education, stated the problem boldly and baldly: “We are not succeeding– we are getting kids well prepared academically, but they’re still not succeeding in college and careers– what do we need to do differently?”

We need to research, design interventions, experiment, and study results.

Call me crazy: Common Core Assessments aren’t too long in testing duration and shouldn’t be shortened.

Forgive me for being contrary: I know I threw off a few friends when I wrote last week that we shouldn’t assess projects in PBL (though my full argument was far more nuanced than my headline/thesis), and now I know I risk irking more friends by making the argument which follows.

Among the many caveats to my argument, I’ll prioritize these two:

First, I too am appalled by the misuse and abuse of current or future standardized testing, particularly in regard to punishing schools and teachers.  What Bill Ferriter wrote recently on this topic is nothing short of brilliant: “It’s time that you start asking your policymakers some difficult questions about their positions on value-added measures of teacher performance.  If Jackson is right, those policies — which have rapidly become the norm instead of the exception in most states in America — are wasting our time AND our money.”

I want quality testing to be used for meaningful purposes: advancing student learning, not teacher-bashing.

Second, these important advances in testing are certainly not the end of the line; they don’t represent a complete arrival at a place of testing excellence.  They are instead a significant and meaningful advance from the status quo toward that place of excellence, an advance I think we should applaud.  For more on the continued advances needed, see this recent Edweek post and the report from the Gordon Commission on the Future of Assessment in Education upon which it is commenting.

But here goes: the Common Core assessments, PARCC and Smarter Balanced (SBAC), shouldn’t be any shorter in their time duration than they are planned to be.

Why?

1. Because we shouldn’t be so quick to call this testing time “lost” to teaching and learning. In even a moderately good testing experience, testing time is learning time– sometimes superior learning time.

2. Because these new tests assess in ways far more authentic and meaningful than any previous generation of standardized K-12 educational tests, and because they assess the deeper learning our students so greatly need (learning which far too few are indeed achieving), they provide assessment information we need to improve that “deeper learning.”

But both of these things will be compromised or lost if the tests get any shorter.

The length of these tests is being hotly debated and combated.

Edweek published last week a short article about the duration of the tests, and it is worth reviewing.

New tests being designed for students in nearly half the states in the country will take eight to 10 hours, depending on grade level, and schools will have a testing window of up to 20 days to administer them, according to guidance released today.

The tweets which followed the Edweek piece were not at all positive: the tweet below is entirely representative of the attitude in the feed, though not entirely representative of its tone, because many were more vulgar.

Let me flesh out my argument:

1.   We shouldn’t be so quick to call this testing time “lost” to learning: in even a moderately good quality testing experience, it is quite the opposite.

I don’t believe that time spent taking a good test is “time away from learning.”  It doesn’t even have to be a great test– just a good test will do.  When I look back at my K-16 education, I am certain that on average, I learned more, was more engaged, more challenged, more interested, more analytical and creative, when I was taking a half-decent test than I was when I was sitting in class watching a teacher talk in the front of the room.

Quite often– though not always– my test-taking times as a student were among the very most intellectually exciting and growth-oriented events and experiences in my education.

It was a dynamite four days in Philly last week at the NAIS annual conference: although I was unsure how it would feel to attend in a different capacity, not as a Head but in my new role of writer/consultant/presenter, it ended up being very fun and engaging.   As always, the best parts are outside the formal conference, in the camaraderie and fellowship found there with so many pursuing with parallel passion the meaningful and rewarding work of remaking learning for our fast-changing times.

The slides above come from a most fascinating session sharing what I’d argue is genuinely breakthrough work from the folks at the Index group on their new Mission Skills Assessment (MSA) for middle school students.

(It was a big team presenting, including Lisa Pullman from Index, Tim Bazemore from New Canaan Country School (CT), Jennifer Phillips from Far Hills Country Day (NJ), and Rich Roberts from ETS; see the last slide for all their names and contact info.)

As they explained, and as I often try my best to pursue here at 21k12, we educators have long believed and proclaimed that character development, defined broadly, is of importance equal to that of intellectual and academic development.  And yet truly, outside of the not-always-deeply-successful advisory programming and a few assemblies here and there, how far do we usually go with this character education?

And when students know that grades are the coin of the realm, and that nearly all of the grades they earn and the feedback they get is on the academic-intellectual side, how well are we signaling to them the importance we place on the non-cognitive side of the equation, or guiding them with the feedback which is so important there?

Here with the MSA, the group has identified six key traits, after reviewing both the research on what makes for success out there and what our schools state in our missions that we do in here– and I love this list:

Teamwork, Creativity, Ethics, Resilience, Curiosity, Time Management. 

As the slides demonstrate, this has been an investigation carried out in the most serious of ways, spread out over five years and drawing upon the expert resources of and collaboration with ETS.  Their ETS partner, Rich Roberts, explained that as surprising as it might seem, ETS has been working on Noncog for over a decade, and indeed, the pursuit of noncog assessment which can match the quality of cognitive assessment goes back more than 60 years.

Roberts argued that the consensus view after decades of study is that noncog is not, no it is NOT, twice as important as cognitive skills and attributes for success in life– but it is EQUAL.

But assessing it has never been easy– this is the rub.  The research conducted here, though, finds strong validity and reliability for a tripartite approach, as described in the image below: student self-report, teacher evaluation, and a third tool for “triangulation.”

These third tools are discussed in slides 36-38, and include Situational Judgement Tests (SJTs), which were similarly touted in the Boalt Hall Law School study I described here, biographical studies, and Creativity Performance Tests.

For those skeptical that even with this triangulation we get to an effective measurement, check out the discussion of reliability and validity on slides 48-55.  Reliability is found to be just a tad less than that of the SAT, and the MSA’s validity in predicting student quality ratings and student well-being is better than that of standardized test scores and GPA, and only a little less than that of standardized test scores in predicting GPA.
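For readers who want to make "reliability" concrete: the slides don't say which statistic was used, but an internal-consistency coefficient such as Cronbach's alpha is the standard choice for a multi-item scale like the MSA's rating instruments. Purely as an illustrative sketch (not the Index/ETS methodology), here is the computation:

```python
def cronbach_alpha(scores):
    """Internal-consistency reliability of a multi-item scale.

    scores: list of rows, one per student; each row holds that
    student's score on each item of the scale.
    """
    k = len(scores[0])  # number of items in the scale

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Variance of each item across students, and of the total scores.
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items move together perfectly (every student rated the same on both items), alpha comes out at 1.0; noisier, less consistent ratings pull it toward 0, which is why "a tad less than the SAT" is a strong claim for a character assessment.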

As for the inevitable question– whether and when this tool will become more broadly available, beyond the membership of the Index group– it appears that these questions have yet to be answered.   As soon as they are, I’ll do my best to report that news here.

But there is no reason for schools outside of Index not to use these ideas and resources to advance their own work in assessing student development of these essential qualities.

Last week I presented (for a second time) a webinar for Simple K12 on the topic, Performance Task Assessment is 21st century Assessment.

Those slides are embedded above, and the webinar is available here (free for members, for a fee if you’re not).

In that presentation I discuss various strategies for designing and developing your own performance tasks for assessment, and suggest that one avenue is to borrow an existing task and adapt it for your purposes.   In the PBL world, where I also spend a lot of time, we often refer people to PBL libraries (BIE has a list of them here), and so it is important we match them with performance task libraries.

Performance Task assessment is becoming increasingly important, as I’ve posted here several times before, because of its role in Common Core assessments.

Delighted to have the opportunity to share these slides and thoughts with folks here at OESIS today.  I continue to think that using technologies, current and emerging, to reinvent testing and assessment is among the primary projects for 21st century K-12 learning in the current decade, and I’m going to continue to do my best to support this reinvention.

As I explained at some length in the opening of my session, and I realize I may stand a bit alone here, I still love tests– of all kinds, including the “test” that is asking students to demonstrate their learning in challenging ways– and a huge part of my personal mission is to make testing more engaging and meaningful for students: let’s improve the way we use assessment as, for, and of learning! 

Pam Moran and I shared this and facilitated this conversation today at educon: our thanks to the attendees for the rich and meaningful conversation.

This is the third of four posts about the USC Rossier Attributes that Matter Conference.

Morning Sessions: Non-cognitive Variables in Action and Attributes of Good Students and Good Professionals

Where Bill Sedlacek (see previous post) laid out the intellectual concept of noncognitive assessment with a bit of history and a lot of theory, sharing his decades of research and his passionate advocacy, the following two sessions took us from theory to practice, as five university administrators and researchers told us about the fascinating work they’d done in this field.


Two bold and innovative directors of admissions at the university level, (Oregon State and DePaul), came to report that, despite their best efforts, their experiments with noncog assessment have had only very limited success in predicting student performance on campus.

As Eric Hoover reports in the Chronicle of Higher Ed about Noah Buckley’s leadership at OSU:

In 2004 the university added to its application the Insight Résumé, six short-answer questions.  One prompt asks applicants to describe how they overcame a challenge; another, to explain how they’ve developed knowledge in a given field.

The answers, scored on a 1-to-3 scale, inform admissions decisions in borderline cases, of applicants with less than a 3.0 GPA. “This gives us a way to say, ‘Hey, this is a diamond in the rough,'” Mr. Buckley says. For students with GPAs of 3.75 or higher, the scores help determine scholarship eligibility.

The Insight Résumé is a work in progress, Mr. Buckley says.

Reading 17,000 sets of essays requires a lot of time and training. Meanwhile, he believes the addition has helped Oregon State attract more-diverse applicants, but it’s hard to know for sure. A recent analysis found that although the scores positively correlated with retention and graduation rates, they did not offer “substantive improvements in predictions” of students’ success relative to other factors, especially high-school GPAs.

Details about the Insight Résumé can be found in the slides above; it includes:

Six short-answer questions asked as part of the admissions application:
• Leadership / group contributions
• Knowledge in a field / creativity
• Dealing with adversity
• Community service
• Handling systemic changes / discrimination
• Goals / task commitment

Similarly, at DePaul as at OSU, very meaningful evidentiary results still stand further in the future.


[graphic from Digital Learning Now]

This post continues a small project here at 21k12 of viewing the coming Common Core Standards through a backwards prism: the assessments that will evaluate student and school success at learning and teaching Common Core standards.  These new assessments sit at a junction of topics I’m interested in and working on regularly: integrating technology, next generation and digitally enhanced assessment, computer adaptive assessment, and performance task assessment.

These new Common Core (CCSS) assessments are the product in part of Secretary Arne Duncan’s call for a new generation of assessments– Assessment 2.0, he calls it– about which I have written before.   To advance this vision of moving “beyond the bubble,” the US DOE is spending, via Race to the Top funding, more than $300 million to develop new kinds of tests and testing systems, split between two major programs, PARCC and Smarter Balanced.

As the Ed Leadership article by Nancy Doorey reports,

The assessment consortia are drawing on new advances in technology, cognitive science, and measurement as they develop this improved generation of assessments.

They hope these new systems will address concerns about existing state assessments—that many assessments measure skills too narrowly; return results that are “too little, too late” to be useful; and do not adequately assess whether students can apply their skills to solve complex problems, an ability students need to succeed in college, the workplace, and as citizens.

Both tests are administered digitally and online, and implementing them will require massive technological infrastructure improvements in most states and districts.   Administering them digitally and online offers many advantages, including the ability to offer adaptive testing (currently intended for Smarter Balanced only, not PARCC) and faster return of results to teachers for instructional purposes.
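A word on what "adaptive testing" means in practice: the test engine adjusts item difficulty to each student's responses, homing in on the student's level with far fewer items than a fixed-form test needs. Operational engines like Smarter Balanced's use item response theory; purely as a toy illustration of the core idea (none of this comes from the consortia's specifications), a simple staircase search looks like this:

```python
def staircase_estimate(responses, start=0.0, step=8.0):
    """Toy adaptive-testing loop: raise the difficulty estimate after
    each correct answer and lower it after each miss, halving the step
    each time so the estimate converges on the student's level.

    responses: sequence of booleans, one per administered item.
    """
    theta = start
    for correct in responses:
        theta += step if correct else -step
        step /= 2  # narrow the search after every response
    return theta
```

A student who answers correct, correct, wrong lands at an estimate between the two outcomes, which is exactly why adaptive tests can be shorter without being less informative.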

Eight questions worth asking about the new assessments:

1.  Will they come on time or be delayed, and will the technology be ready for them?    Although the test design is funded (enormously), the technological infrastructure upgrades are not externally funded, and it remains a very open question whether and from where this funding will come.   If districts are unable to meet the requirements, will the 2014-15 launch date for these digital and online tests be postponed?

Engaging Ed fears they will.

Digital Learning Now, in a recent report embedded below, pleads with the consortia: Don’t Delay.

Don’t phase in. With two years left to prepare, the combination of a long test window and supporting outdated operating systems allows almost all schools to support online testing now. Going further to support paper-and-pencil testing in and past 2015 is unnecessary, expensive, and reduces comparability.

It is also unwise for districts to seek to compromise by using less than 2:1 ratios of computers to students.    Imagine the schools which are trying to use current computer labs to assess their students– it will take 12 disruptive weeks to roll all students through the labs, and the labs themselves won’t be available for any other learning during that time.
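The arithmetic behind a claim like "12 disruptive weeks" is easy to check for your own school. The numbers below are hypothetical, not drawn from any district's actual figures, but they show how quickly lab-based testing consumes a school calendar:

```python
import math

def lab_days_needed(students, test_hours_per_student, lab_seats,
                    usable_hours_per_day):
    """School days a single computer lab is tied up rolling every
    student through the assessment."""
    seat_hours_required = students * test_hours_per_student
    seat_hours_per_day = lab_seats * usable_hours_per_day
    return math.ceil(seat_hours_required / seat_hours_per_day)

# Hypothetical example: 900 students, 9 hours of testing each, one
# 30-seat lab usable 4.5 hours a day:
# 8100 seat-hours / 135 seat-hours per day = 60 school days (12 weeks).
```

Plug in your own enrollment, the 8-10 hour testing duration Edweek reports, and your actual lab capacity; the result is usually sobering, which is the case for device ratios well beyond a single shared lab.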

Above are the slides for my presentation today to the Association of Colorado Independent Schools Heads and senior administrators, a three hour workshop.   Sadly, I (again!) made the mistake of trying to stuff in too much information, and several of our intended activities and videos had to be cut.

Below are, first, some of the key links to which I referred, and below that, some of the videos I showed or intended to show as part of the presentation.

A very valuable reference is the excellent presentation, Measuring What We Value by Lyons and Niblock.   (Note: I borrowed/adapted a small number of these slides for integration into my presentation: My thanks to Lyons and Niblock. )

My thanks to Lee Quinby, the very fine ACIS executive director, and all those who took the time for the session this morning.

Links:

“We still really don’t know how to assess problem-solving,” I heard a university professor of engineering say last week, and it resonated: it is so clear to me that while we all want to do more to educate our students in solving complex problems and thinking creatively, and while we know the importance of assessment in driving this instruction, we nevertheless stumble in our clarity about what and how we ought to assess these things.

Most often the books I write about here are what might be viewed as the superstructure books– the writing about the future of learning and the most innovative practices for reinventing what and how we should be teaching.

Examples of this would be my reviews of Net Smart by Rheingold, Networked by Wellman and Rainie, Future Perfect by Johnson, and Zhao’s World Class Learners.

But sometimes it is useful to return to the foundations, and firm up our terms and concepts at more basic, but critical, levels— indeed, if we don’t do so, the superstructures will be that much more unwieldy.

This 2010 title, from ASCD, is exactly that, and I hope readers will forgive the “primer” nature of this post.   It would seem to me that schools which simply do the work to try to unify and make more consistent our language and practice around higher order thinking skills assessment will be well poised to then experiment, iterate, and innovate in this essential realm.

Brookhart begins by defining the core elements of what we mean by higher order thinking:

  • Transfer: relating learning to other elements beyond those students were taught to associate with it.
  • Critical thinking (judgment): reasonable, reflective thinking focused on what to believe or do, applying wise judgment and producing an informed critique.
  • Problem solving, including creative thinking: the non-automatic strategizing required for solving an open-ended problem, involving identifying problems and creating something new as a solution.

establishing three core components of what exactly effective assessment entails:

  1. Specify clearly and exactly what it is you want to assess.
  2. Design tasks or test items that require students to demonstrate this knowledge or skill.
  3. Decide what you will take as evidence of the degree to which students have shown this knowledge or skill.

and elaborating with three more principles of higher order thinking assessment:

  • Presenting something for students to think about, usually in the form of text, visuals, scenarios, resource material, problems.
  • Using novel material–material new to students, not covered in class and not subject to recall.
  • Distinguishing between level of difficulty (easy versus hard) and level of thinking (lower order thinking/recall versus higher order thinking), and controlling for each separately.


This month I’ve been in conversation with an outstanding school superintendent preparing his district for PARCC assessments.   As many understand, PARCC (and its counterpart, Smarter Balanced) requires that districts prepare their schools with technology sufficient for their students to take what will be entirely online, computer-based high-stakes tests.

“Of course,” he explained to me, “we need to become PARCC-ready.  But that is just the tip of the iceberg.   If we are going to invest in these substantial, even enormous technology upgrades, it would be foolish not to use this new technology in ways beyond the new tests.

“PARCC tech upgrades give us an opportunity to transform our schools into places of 21st century, student-centered learning– and this is an opportunity not to be wasted.”

In addition, he added, preparing students for success on PARCC is not just a matter of ensuring the tech is there for them to take the test– it needs to be there, and used, in ways in which students develop comfort and confidence with it.

This is a tremendous opportunity, and we can only hope that every superintendent recognizes, as well as this one has, the chance being presented: to leverage an externally imposed new test and new test format– even when that new test is perhaps in and of itself unwelcome– to transform classroom learning toward 21st century, student-centered programs that put technology in the hands of students.

This has also been recognized recently in a valuable new white paper from SETDA, the State Educational Technology Directors Association, which I’ve embedded below.

The report has many important messages.  First, districts must carefully determine their current technology’s capacity for supporting the new tests.

While there are compelling advantages to a technology-based assessment system as compared to current paper-and-pencil-based approaches, schools and districts will need to validate their technology readiness for 2014-15.

Validation for technology readiness is important even for states and districts currently administering tests online, as these Common Core assessments are being designed to move beyond multiple-choice questions to technology-enhanced items to elicit the higher order knowledge, skills, and abilities of students.

An article last spring in T.H.E. Journal, Technology Challenges and the Move to Online Assessments, also explored these issues.

The 2014-15 school year is a long way off, isn’t it? That depends on your perspective. If you are an eighth-grader, Friday night is a long way off, but if you are a technology leader in a school district or a state, the 2014-15 school year may be here all too soon.

Critically, the SETDA report insists that the PARCC/Smarter Balanced minimum specs must not be the only factor considered when these enormous investments are made.

[slides shared with permission of the authors]

The presentation above, by Doug Lyons and Andrew Niblock, is from a session which was a highlight of last spring’s NAIS Annual Conference, a session which sadly I was unable to attend, but about which I heard great things– and the slides carry much of that value.

This presentation covers terrain that I too spend a lot of time examining.  (for comparison, see my presentation to the Canadian Heads at their annual meeting last year.)

I am very well aware that there are many fine minds and outstanding educators who are arguing against measurement in education– or for dramatically reducing the measurement we do of learning– or that much of what we most value is hard or effectively impossible to measure.

And certainly, there is a somewhat appalling misuse and abuse of student learning measurement data in the US today– of course there is.

But, my ongoing approach, aligned exactly with this high quality presentation, is that we seek diligently to improve and correct the way we use learning measurement, but not abandon or reject evaluating and measuring learning.   Indeed, to best change education from its current course and to bring it to a far more student-centered, 21st-century-oriented, technology-accelerated, and innovative place, we have to have data to support our campaign and change current policies.

Among the things I appreciate about this presentation is its breadth, looking at both the internal/qualitative ways we assess learning AND the external/quantifiable ways.   We have to look at this topic broadly.  What gets measured gets done; what gets measured gets valued; we can’t manage what we can’t measure: these mantras are compelling and significant, and if we want to transform learning we have to transform what we assess and measure.

It is good too that it is built in part upon Criterion 13 of the Commission on Accreditation standards, which is essential to framing the issue of assessment in independent schools:

The Standards require the school to provide evidence of a thoughtful process, respectful of its mission, for the collection and use in school decision-making of data, both external and internal, about student learning.

This evidence is required for the accreditation of all independent schools in coming years, as, I believe, it should be.

Some thoughts, comments, and observations on this presentation.

1.  I love the citation from Ted McCain’s Teaching for Tomorrow–a highly valuable, but, I fear, highly under-valued, book on the topic.  As they quote McCain:

“We need to invert the conventional classroom dynamic: instead of teaching information and content first, and then asking students to answer questions about it second, we should put the question/problem first, and then facilitate students with information and guidance as they seek the answer, holding them accountable for the excellence of their solutions and of their presentation of their results.”

In my own 2008 research, visiting 21 schools and shadowing students at each, I found McCain proven right: putting problems first made a huge difference.