“But what about collaboration?   There certainly aren’t any assessments available to evaluate our students’ proficiency in that critical 21st century skill.”

This question, a common one in my presentations about assessment, hits on an important issue.  If we prize connected learning and collaboration not only as key 21st century workforce skills (collaboration is almost always among the skill sets prized in hiring) but also as key to lifelong learning and, indeed, to the larger work of creativity and innovation, then shouldn’t we take steps to ensure our students are learning this key skill?

(I’m tempted here to make the case for the critical power of collaboration, and its fast-rising significance in the discovery and development of new ideas and new tools, but I think it is well enough understood that it can be stipulated here.)

An ongoing thesis of this blog is that we K-12 educators should be clearer with ourselves and our students about our specific and key intended learning outcomes; that if we are going to test or assess learning at all, we should align that testing and assessing with those ILOs; and that we should use the information we collect to improve student learning.

Accordingly, we need more and better tools and resources for assessing collaboration, something many of the best assessment minds have been working on in genuinely exciting ways.

Clearly, authentic assessment of collaboration is key to this work.  Many teachers are using PBL (project-based learning) programs to provide more opportunities for student collaboration, and are building comprehensive rubrics to assess it.  The Buck Institute for Education has made one such rubric freely available here.

I’ve written and presented with some regularity about open internet testing as a way to improve the teaching and learning of information literacy, web research, critical thinking, and applied problem-solving.  Will Richardson has built on my open internet posts to argue for the importance and value of “open network” testing, in which students can access during a test not just websites but also other people, collected into a learning network, who can be interviewed or surveyed to gather information for use in problem-solving.

But as regular readers know, I’m always interested in how we can mix and match internal measurements, such as the two examples above, with external measurements of what we think matters.  In Connecticut, I’ve heard from a source there, a new collaborative problem-solving section is being developed for the state’s secondary science standardized test.

As the video at top shows, there is exciting work in this direction coming from some super-smart Australians at the University of Melbourne and its ATC21S project: Assessment and Teaching of 21st Century Skills.  To the best of my understanding, the kind of collaborative problem-solving assessment tool featured in the video is not yet available; it is still in development.

PISA, from the OECD, is perhaps the broadest and most significant player in the new field of assessing collaboration and, more specifically, collaborative problem solving.  As is being increasingly reported, its 2015 test administration will include this key element for the first time.

As an example, creativity expert Keith Sawyer wrote about this development (and the work of ATC21S) on his blog recently. 

This new assessment will be included in the international PISA assessments to be implemented in 2015. Look for news stories describing which countries are “better at collaboration” based on these skills! The need for standardized tests is unfortunate in many ways, but I believe that having politicians and parents focused on collaboration will lead our teachers, students, and schools to emphasize collaboration more.

The PISA definition (version A):

Collaborative problem solving competency is the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution.

These slides from PISA lay out a bit more information.

As the slides explain, PISA has been on a journey of many steps toward improving its assessment of problem solving, which is, after all, well positioned as the ultimate expression of what we want students to be able to do.

The article embedded below quotes Karl Popper to the effect that “All life is problem solving.”

From the article:

Problem solving was first added to PISA in 2003, embedded within a domain.

The PISA 2003 framework explicitly stated that: “The processes of problem solving . . . are found across the curriculum” and “educators and policy makers are especially concerned about students’ competencies of solving problems in real-life settings” (OECD, 2003, p. 154).

The working definition described problem solving as “an individual’s capacity to use cognitive processes to resolve real, cross-disciplinary situations where the solution path is not immediately obvious.”

The cognitive processes involved were subdivided into two main branches labeled problem-solving processes and reasoning skills. Reasoning represented the ability to draw valid conclusions from given information or to transfer a solution strategy to similar problems.

The branch of problem-solving processes consisted of additional abilities required for problem solving, such as understanding and representing the problem (knowledge acquisition), finding solutions (knowledge application), reflecting progress, and communicating the results.

As much progress as this represented, the journal article authors point out, and the OECD recognized, there were sharp limitations to this model of problem solving: it was too static.   Problem solving is not a matter of seeing a problem and determining, in one fell swoop, the solution; it is an iterative, dynamic process requiring multiple steps.  And, as an aside, we have to take care to ensure that our students’ learning experiences have the dynamism necessary to develop this more important problem-solving aptitude.

In a real-world setting, most problem solvers would have been likely to interactively try out different options (based on hypotheses about how the system works or on trial-and-error to see how the system responds).

Hence the move from PISA 2003’s Analytical Problem Solving to PISA 2012’s IPS, Interactive Problem Solving, enabled by the more dynamic test-taking environment that computer-based testing makes possible.

When encountering real-world artefacts such as ticket vending machines, air-conditioning systems or mobile telephones for the first time, especially if the instructions for use of such devices are not clear or not available, understanding how to control such devices is a problem faced universally in everyday life. In these situations it is often the case that some relevant information is not apparent at the outset. (OECD, 2010, p. 18)

The move away from Analytical Problem Solving (see previous section) was motivated by the desire to adequately represent the complexity of our modern world and by the opportunity to simulate this complexity offered by computer-based assessment.

The decomposition of the underlying cognitive processes in PISA 2012 distinguishes four problem-solving processes: exploring and understanding, representing and formulating, planning and executing, and evaluating and reflecting.

The first two processes can be seen as subcomponents of knowledge acquisition, whereas the other two represent subcomponents of knowledge application.

Sophisticated thinking certainly seems to be demanded for successful solution generation in this IPS testing environment.  “Wow” is what comes to mind when viewing the sample provided in the article below: at first glance it is hard to know how one would address it, but that is partly because we are looking at a static paper representation, not an interactive, computer-based model we can manipulate to make sense of.

PISA 2015 takes “interactivity” to its appropriate next level.

By doing so, the interaction between a problem solver and a task—a central feature of IPS for PISA 2012 (OECD, 2010)—will be extended to interactions between several problem solvers.

Thus, the steep rise of communicative and team tasks in modern society (Autor et al., 2003) will be acknowledged and Vygotsky’s view that there is an inherent social nature to any type of learning or problem solving (Lee & Smagorinsky, 2000) will be incorporated into an international LSA [large-scale assessment] for the first time.

As an enthusiast not just for teaching collaboration, which, good as it is, feels too small, but especially for the larger, compelling project of social and connected learning, how can I not applaud?   Hurrah.  Even if the 2015 initiative itself proves limited or flawed, it is an important step forward.   Our schools, I believe, fiercely need to better appreciate and emphasize the power of social and connected learning, and if there are tests that will measure, reward, and inform this kind of learning, it will happen all the more.  In the test design, the article authors explain,

collaboration and problem solving could be considered to be correlated but sufficiently distinct dimensions.

That is, for problem solving, the cognitive processes of IPS in PISA 2012 will still be included (see previous section), whereas a new assessment of social and collaborative skills, which are associated with noncognitive skills (Greiff, 2012), will be added.

Although the exact nature of these noncognitive skills has yet to be specified, the understanding of collaboration within the Assessing and Teaching 21st Century Skills initiative (Griffin et al., 2011) constitutes a reasonable starting point.

There, participation and cooperation, perspective taking, and social regulation jointly form the collaborative-social dimension of ColPS (Griffin et al., 2011), and the first empirical results indicate that—in principle—these skills may be accessible to measurement.

It’s not going to be easy to build these kinds of tasks, as the article authors acknowledge.   There are key challenges, and in the article, published in spring 2013, it remains an open question whether the “collaboration” will be among multiple human agents or between a human test-taker and a virtual, automated collaborator.

One of them alludes to the question of whether problem solvers should interact with artificially simulated agents (human-agent) or real students located at another computer (human-human). Whereas a broad spectrum of agents could be incorporated into the assessment from a technical perspective and would allow for standardized control over the assessment situation, the external validity of this approach has not been verified. Human-human interactions, on the other hand, are high in face validity, but they are difficult to control and to match in an LSA setting.

Keith Sawyer’s blog post (June 2013), however, reports that this issue has been resolved.

One intriguing development is their decision to administer the test via computer, with the child collaborating not with another actual person, but with a computational agent. (This is the only way you can reliably measure the skills of any one person; in a real team, each person’s performance is affected by the other’s.)

PISA itself has also put out a comprehensive, lengthy articulation of its Collaborative Problem Solving framework, which I’ve embedded below.

In it, they offer a slightly different definition of the competency:

Collaborative problem solving competency is the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills and efforts to reach that solution.

They then do something very useful: they break out each element of this definition.   First, they note with some emphasis that they are not seeking to evaluate groups on how effectively they collaborate as groups.  Instead,

the focus is on individual capacities within collaborative situations. The effectiveness of collaborative problem solving depends on the ability of group members to collaborate and to prioritise the success of the group over individual successes. At the same time, this ability is a trait in each of the individual members of the group.

The term “agent” is used here intentionally, as distinct from “individual” or “person,” because an agent may be a computer stand-in.

The word ‘agent’ refers to either a human or a computer-simulated participant. In both cases, an agent has the capability of generating goals, performing actions, communicating messages, reacting to messages from other participants, sensing its environment, adapting to changing environments, and learning.

The core competencies of collaborative problem solving are simplified and spelled out this way:

1. Establishing and maintaining shared understanding;
2. Taking appropriate action to solve the problem;
3. Establishing and maintaining team organisation.

Crossing these three competencies with the four processes of the earlier 2012 IPS framework, “exploring and understanding, representing and formulating, planning and executing, and evaluating and reflecting,” allows the PISA team to generate the following grid framework for CPS.

[Image: the PISA CPS matrix, crossing the four problem-solving processes with the three collaborative problem-solving competencies]

Let’s do this: let’s become more serious about facilitating the kind of learning this new international, gold-standard test will evaluate, and let’s assess it as the serious intended learning outcome we believe it should be.