As George Siemens has written in EDUCAUSE Review, learning analytics, though still a young term, already suffers from "term sprawl."  Nevertheless, he offers this definition:

 “learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs.”11

I've written previously about learning analytics, particularly in my review of the NMC Horizon Report, which predicts that learning analytics will become increasingly significant in K-12 education in the coming two to three years. I wrote this:

The goal of learning analytics is to enable teachers and schools to tailor educational opportunities to each student’s level of need and ability. Learning analytics promises to harness the power of advances in data mining, interpretation, and modeling to improve understanding of teaching and learning, and to tailor education to individual students more effectively.

What I wrote then continues to express my anxiety about learning analytics: "I can't seem to decide if it perpetuates and worsens our viewing students as assembly line widgets to process and manipulate, or if it is a great breakthrough in differentiation and individualizing instruction."

One way to recast my concern is to ask: is learning analytics essentially about large-scale views of learning, at the institutional level or higher, or can it be situated closer to the learning experience, in the hands of teachers and students?

Teacher-Centered Analytics

I am eager to support teachers' use of action research protocols and procedures; last spring at St. Gregory this was my single biggest push. School leaders should work hard to embed this work within professional learning communities (such as Critical Friends Groups), and action research teams should be given quality time for it. Now, I haven't seen these two concepts, learning analytics and teacher action research, conjoined, but they ought to be linkable, right?   To borrow from the second passage quoted above, it would seem they can be: in action research, teachers could conduct "data mining, interpretation, and modeling to improve understanding of teaching and learning."

But learning analytics has most often been associated with the looming construct of "Big Data," and even in the passage above we see the term "data mining," which suggests combing through massive pools of data unlike anything teachers are likely to generate in their own action research.  So the difference is one of scale: big versus small data.  Can learning analytics truly occur inside of small data?
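To make the small-data question concrete, here is a minimal, hypothetical sketch of what classroom-scale analytics might look like for a teacher doing action research: scanning a single class's quiz scores for students whose performance is trending downward. The data, the student labels, and the flagging threshold are all invented for illustration; this is not drawn from Siemens, the IES report, or any particular tool.

```python
# Hypothetical sketch: "small data" learning analytics for one classroom.
# A teacher doing action research looks for students whose quiz scores
# are trending downward across the term. All names and numbers are invented.

from statistics import mean

# Each student's quiz scores, in chronological order (illustrative data).
quiz_scores = {
    "Student A": [88, 85, 79, 72],
    "Student B": [70, 74, 78, 81],
    "Student C": [92, 90, 91, 89],
}

def simple_slope(scores):
    """Average change per quiz: a crude trend indicator suited to small data."""
    changes = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return mean(changes) if changes else 0.0

for student, scores in quiz_scores.items():
    trend = simple_slope(scores)
    flag = "check in" if trend < -2 else "on track"
    print(f"{student}: avg {mean(scores):.1f}, trend {trend:+.1f} per quiz -> {flag}")
```

Nothing here requires a data warehouse or an institutional platform; the point is only that the core moves of analytics, summarizing, trending, and flagging, are available at the scale of a single gradebook.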

Student-Centered Analytics

Empowering students to command their own learning is a consistent theme here at 21k12, and I'm quite excited about the prospect of putting learning analytics directly in the hands of students. Siemens, in the EDUCAUSE article, offers a list of nine possible values learning analytics can provide to higher ed, and number nine is the following:

They can provide learners with insight into their own learning habits and can give recommendations for improvement. Learning-facing analytics, such as the University of Maryland, Baltimore County (UMBC) Check My Activity tool, allows learners to “compare their own activity . . . against an anonymous summary of their course peers.”15

The Maryland project seems terrific, and more about it can be found here.    They report that

Analysis of 1,461 courses using Blackboard in spring 2010 showed that D and F students used the course management system 47 percent less than students earning a C or higher.

This would seem to pose the inevitable, and in this case highly problematic, question of correlation versus causation, but I suppose it serves as a start to the analysis.   That link also takes you to a video about how Check My Activity works.

Note that Check My Activity is an effort to bridge the gap: it permits each individual student to compare his or her own data against a large pool of data, thousands of students across the university.
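To illustrate the kind of comparison Check My Activity makes, here is a hypothetical sketch of the underlying calculation: one student's activity count set against an anonymous summary of course peers. The numbers and variable names are invented, and this is not UMBC's actual implementation; the real tool works against Blackboard activity logs rather than a hard-coded list.

```python
# Hypothetical sketch of a "Check My Activity"-style comparison:
# one student's LMS activity versus an anonymous summary of course peers.
# Invented data; not UMBC's actual implementation.

from statistics import mean, median

# Activity event counts for everyone enrolled in a course (anonymized).
peer_activity = [12, 45, 33, 27, 51, 8, 40, 36, 29, 22]
my_activity = 27

below_me = sum(1 for count in peer_activity if count < my_activity)
percentile = 100 * below_me / len(peer_activity)

print(f"My activity: {my_activity} events")
print(f"Course median: {median(peer_activity)}, mean: {mean(peer_activity):.1f}")
print(f"I am more active than about {percentile:.0f}% of my peers")
```

The design choice worth noticing is that the student sees only a summary of the peer pool, never individual classmates' records, which is what makes the comparison both motivating and anonymous.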

At the same time, a project I've been working on in Rhode Island is connected, indirectly, to a state initiative promoting a new instructional management system (IMS). That led me to research available IMS systems and to discover a fascinating report from IES, Using Student Achievement Data to Support Instructional Decision Making, from the What Works Clearinghouse.   It is not a brand-new report (it dates back to 2009) and does not use the term learning analytics, but it would seem to be an effort to connect the principles of data analysis to the classroom environment: to bring the macro to the micro.

This IES report has a section specifically on "Teaching students to examine their own data and set learning goals," and again, I find this an especially fascinating area of data-informed instructional improvement.      They explain:

Teachers should provide students with explicit instruction on using achievement data regularly to monitor their own performance and establish their own goals for learning. This data analysis process—similar to the data use cycle for teachers described in recommendation 1—can motivate both elementary and secondary students by mapping out accomplishments that are attainable, revealing actual achievement gains and providing students with a sense of control over their own outcomes. Teachers can then use these goals to better understand factors that may motivate student performance and adjust their instructional practices accordingly.

Students are best prepared to learn from their own achievement data when they understand the learning objectives and when they receive data in a user-friendly format. Tools such as rubrics provide students with a clear sense of learning objectives, and data presented in an accessible and descriptive format can illuminate students’ strengths and weaknesses.

The research support for this approach, the report itself notes, is limited and rated low, but it does meet their minimum threshold.    They offer four suggestions for how to implement it:

  1. Explain expectations and assessment criteria.
  2. Provide feedback to students that is timely, specific, well formatted, and constructive.
  3. Provide tools that help students learn from feedback.
  4. Use students’ data analyses to guide instructional changes.

Each section has useful detail for schools and school leaders trying to implement a more student-centered IMS approach.    For example, section three provides rubrics and grids that students could use to manage their own learning journey, and explains that

after returning test results to students at the beginning of the school year, a teacher might ask all students to identify specific strengths and weaknesses by analyzing their responses to specific questions on the test. She could then guide the students to submit in writing realistic improvement goals for two particular skills with weak scores. Students with no demonstrated weaknesses could be invited to select a topic for which enrichment could be provided. By helping students make data-based decisions about their own learning goals, the teacher would be emphasizing their responsibility for improving their own learning.

In the conclusion of Siemens's excellent EDUCAUSE article, "Penetrating the Fog," he helpfully sums up key pros and cons of this emerging phenomenon.    On the con side, he asks:

  • How can the potential value of the data be leveraged without succumbing to the dangers associated with tracking students’ learning options based on deterministic modeling?
  • Additionally, how transparent are the algorithms and weighting of analytics?
  • How “real time” should analytics be in classroom settings?
  • Finally, since we risk a return to behaviorism as a learning theory if we confine analytics to behavioral data, how can we account for more than behavioral data?

He continues with a further articulation that aligns closely with my own anxieties:

there are reasons to be cautious as the development of analytical tools for modeling learners’ interactions gains attention. Like other behavior patterns, models that are deterministic assume that future conditions can be completely determined by knowing both the past and the present conditions of the subject involved.

We must guard against drawing conclusions about learning processes based on questionable assumptions that misapply simple models to a complex challenge. Learning is messy, and using analytics to describe learning won’t be easy.

Learning is messy!   Analytics have the alarming quality of rendering what is messy seemingly clean, and that can be dangerous.   But if we bring analytics back, regularly and frequently, to the student and teacher level, and ask students and teachers to apply it, live with it, and report back to us, sometimes with their own research, on how it is improving what they are doing and how they are learning, we can ease some of this learning analytics anxiety.

Siemens also shares the positive view; among the pros:

Learning analytics is essential for penetrating the fog that has settled over much of higher education. Educators, students, and administrators need a foundation on which to enact change. For educators, the availability of real-time insight into the performance of learners—including students who are at-risk—can be a significant help in the planning of teaching activities.

For students, receiving information about their performance in relation to their peers or about their progress in relation to their personal goals can be motivating and encouraging.

Finally, administrators and decision-makers are today confronted with tremendous uncertainty in the face of budget cuts and global competition in higher education. Learning analytics can penetrate the fog of uncertainty around how to allocate resources, develop competitive advantages, and most important, improve the quality and value of the learning experience.