Last week I spoke at the US Education Department about three “next generation” assessments which I believe can substantially expand and improve the way we assess and improve learning in our schools; what I didn’t realize at the time is that only two weeks earlier, Secretary Duncan himself had spoken about next-gen assessments, and our remarks are remarkably aligned.

I have criticized, and many of my friends in the Twitter/blogosphere have attacked, Secretary Duncan’s perpetuation of NCLB’s unwarranted, narrow, soul-deadening, and distorting use and even abuse of fill-in-the-bubble, standardized multiple-choice tests of basic skills.  I don’t think we should necessarily eliminate these tests altogether, but I deeply believe they must be supplemented extensively by tests that far more authentically assess the development of higher-order thinking skills such as analytic reasoning and innovative problem-solving, things a bubble test can never capture.

I also believe, and also spoke about at the Ed Department, that when and where we do use multiple-choice tests to evaluate more basic skills in math and reading, we should do so with computer-adaptive methodologies that configure themselves quickly to students’ actual skill levels, inform us far more acutely about students’ proficiencies, and provide that information to teachers immediately, in real time.

These two points are almost exactly parallel to what Secretary Duncan called for in his Assessment 2.0 speech of September 2, Beyond the Bubble Test: Next Generation Assessments, which the New York Times covered on September 4 in US Asks Educators to Reinvent Tests, and How They Are Given.  (For the record, I published my first piece on this same topic on August 4.)  I have already been using the term “next-generation” assessments, but I also appreciate the term Assessment 2.0, and readers here can expect to see it appear frequently in the future.

Duncan’s speech, which I quote extensively below, celebrates emphatically that in the future we will have authentic assessments that measure higher-order thinking skills, and computer-adaptive testing that provides real-time assessments of basic skills.  He calls upon us to celebrate these valuable, wonderful steps “beyond the bubble tests,” and I join his celebration in offering two cheers.

However, he also says again and again that these will be coming in the future “for the first time,” and that they will not be available until 2014.  Hence my one caveat and objection to his remarks, which I want to state loudly: both of these kinds of “next generation” assessments already exist, in the form of the CWRA and the MAP from NWEA.  I should also point out that I believe St. Gregory is the only school in the country, public or private (or, if I am wrong, one of no more than a dozen at most), that is currently administering both of these assessments.

What follows is a series of quotes from Secretary Duncan, in three parts: general remarks, the assessment of higher-order thinking skills, and computer-adaptive testing.


Almost everywhere I went, I heard people express concern that the curriculum had narrowed as more educators “taught to the test.”  Existing state assessments in mathematics and English often fail to capture the full spectrum of what students know and can do. Students, parents, and educators know there is more to a sound education than picking the right selection for a multiple choice question.

It’s for all these reasons that shortly after taking office, President Obama called on the nation’s governors and state education chiefs “to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking and entrepreneurship and creativity.”

I believe the impact of this next generation of assessments in the classroom will be dramatic—and that the new assessments will support learning and instructional practices that teachers have long hungered for themselves.

Higher Order Thinking Skills Authentically Assessed in non-Multiple Choice Tests

One of the biggest frustrations of teachers with existing assessments is that they fail to test higher-order reasoning and writing skills, and thus fail to show what students know and can do.

For the first time, many teachers will have the state assessments they have longed for – tests of critical thinking skills and complex student learning that are not just fill-in-the-bubble tests of basic skills but support good teaching in the classroom.

And last but not least, for the first time, the new assessments will better measure the higher-order thinking skills so vital to success in the global economy of the 21st century and the future of American prosperity. To be on track today for college and careers, students need to show that they can analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings.

The PARCC consortium will test students’ ability to read complex text, complete research projects, excel at classroom speaking and listening assignments, and work with digital media. Problems can be situated in real-world environments, where students perform tasks or include multi-stage scenarios and extended essays.

By way of example, the NAEP has experimented with asking eighth-graders to use a hot-air balloon simulation to design and conduct an experiment to determine the relationship between payload mass and balloon altitude. As the balloon rises in the flight box, the student notes the changes in altitude, balloon volume, and time to final altitude. Unlike filling in the bubble on a score sheet, this complex simulation task takes 60 minutes to complete.

These are excellent ideas, and, I think, critically important for our schools to adopt in their measurements, in their accountability, and in their school reform projects.  But as CWRA’s parent, the Council for Aid to Education, has pointed out in a not-yet publicly available slideshow, CWRA has been doing this kind of assessment for high school students since 2006.   In fairness, it doesn’t measure students’ public speaking or use of digital media, but it most certainly does measure their ability to “analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings.”  It does so with problems, just as Duncan calls for, “Problems [which] can be situated in real-world environments, where students perform tasks or include multi-stage scenarios and extended essays.”

Computer Adaptive Assessment with Real-Time Reporting

One-shot, year-end bubble tests administered on a single day too often lead to a dummying down of curriculum and instruction throughout the course of the entire school year.

In short, most of the assessment done in schools today is after the fact and designed to indicate only whether students have learned. Not enough is being done to assess students’ thinking as they learn to boost and enrich learning, and track student growth.

For the first time, state assessments will make widespread use of smart technology. They will provide students with realistic, complex performance tasks, immediate feedback, computer adaptive testing, and incorporate accommodations for a range of students.

The new assessments will help drive the development of a rich curriculum, instruction that is tailored to student needs, and multiple opportunities throughout the school year to assess student learning.

The SMARTER consortium will test students by using computer adaptive technology that will ask students questions pitched to their skill level, based on their previous answers. And a series of interim evaluations during the school year will inform students, parents, and teachers about whether students are on track.

Better assessments, given earlier in the school year, can better measure what matters—growth in student learning. And teachers will be empowered to differentiate instruction in the classroom, propelling the continuous cycle of improvement in student learning that teachers treasure.

The use of smarter technology in assessments will especially alter instruction in ways that teachers welcome. Technology enables the use of dynamic models in test questions. It makes it possible to assess students by asking them to design products or experiments, to manipulate parameters, run tests, and record data.

All these things are good: it is embarrassing, really, in the 21st century, for students to be using number 2 pencils to complete a Scantron bubble form.   For many of our students, these one-size-fits-all paper tests are either much too easy, and thus boring and meaningless, or much too hard, and thus boring and discouraging.   Even a good test will often be too easy for a long stretch, just right briefly, and then too hard at the end.

Smarter tech in assessment corrects for this: the student answers a first question, and immediately the second question is harder if the first was correct, easier if it was wrong; we neither bore nor discourage students, and we quickly drill down to a student’s real proficiency.   Teachers get reports the same day, and we can reassess multiple times in a year; teachers will be far more empowered to differentiate instruction and meet the needs of individual students.
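The adaptive mechanism is simple enough to sketch in a few lines of code. This is a deliberately minimal, hypothetical illustration of the "harder if correct, easier if wrong" loop described above, not NWEA's actual algorithm (real computer-adaptive tests use far more sophisticated statistical models); the function and parameter names are my own invention for the sketch.

```python
# Minimal sketch of an adaptive test loop (hypothetical, for illustration):
# each correct answer raises the difficulty of the next item, each wrong
# answer lowers it, and the final difficulty approximates proficiency.

def run_adaptive_test(answer_fn, num_items=10, start=5, lowest=1, highest=10):
    """Administer num_items questions, adjusting difficulty after each.

    answer_fn(difficulty) returns True if the student answers an item
    of that difficulty correctly. Returns the list of difficulties
    asked and the final estimated level.
    """
    level = start
    asked = []
    for _ in range(num_items):
        asked.append(level)
        if answer_fn(level):
            level = min(highest, level + 1)   # correct: harder next item
        else:
            level = max(lowest, level - 1)    # wrong: easier next item
    return asked, level

# Example: a student who reliably answers items up to difficulty 7.
asked, estimate = run_adaptive_test(lambda d: d <= 7)
# The sequence climbs from 5 and then oscillates around the student's
# true level, settling the estimate at 7.
```

Even this toy version shows why the approach beats a fixed paper test: the student spends most of the session answering questions near the edge of his or her ability, which is exactly where the most information lies.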

Secretary Duncan is correct: this is the future, and these next-gen assessments will dramatically improve teaching and learning.  But they are already here; all of this is exactly what NWEA’s Measures of Academic Progress provides.  (There may be others as well of which I am as yet unaware.)   Even as I write this, our St. Gregory students are on computers each day this week, taking computer-adaptive tests that give immediate feedback and reports to our teachers about their strengths and their challenges, information we can use to improve student performance this year.

I do not entirely understand why Secretary Duncan chose to ignore or overlook the bold pioneers who are carrying forward these essential next-gen assessments already, today, and who are indeed transforming education with them. In the meantime, I will continue my work and leadership both in calling for the ongoing development, broadening, and strengthening of these next-generation assessments, of Assessment 2.0, and in actually administering them to my students now, not four years from now, and using the results to improve their learning.