
Can the open web provide the future of assessment?

This is a post about assessment of learning – finding out what people know and learn. My main point is that there is a great opportunity to make assessment of learning more meaningful and scalable, and to increase the value of open educational resources.

New types of assessment are enabled by tools and practices common to the open web. They are already used every day in lots of places – places like Stack Overflow or open source software communities – but not in many universities.

At the end of this post, I mention two initiatives that are taking these ideas and building prototypes to try them out. If you are interested in getting involved, please leave a comment below.

Is it going to be on the test?

The best type of assessment is implicit and authentic – it is a by-product of meaningful activity within a community of other learners. However, many assessment practices – such as final exams or multiple choice tests – are explicit and artificial. The link between assessment and the learning goal becomes tenuous. The ability to write good software, or to speak French, is hard to establish through checkboxes.

As a result there is a risk that assessment itself, rather than learning, becomes the objective of education. Students asking, “Is this going to be on the test?”, rather than focusing on the benefits that new knowledge might bring to them, highlight the disconnect between assessment and learning. We “teach” in order to prepare students for life and work, but they “learn” to succeed on the test.

There is another problem with current assessment practices. A model where few experts assess the work of many non-experts doesn’t scale very well. Experts create bottlenecks. If open educational resources are to be useful to lots and lots of learners, then we need assessment practices that are meaningful and that scale as part of the learning.

Wouldn’t it be wonderful if we could identify certain meaningful tasks that people engage in, and recognize them in an authentic (non-artificial) way that scales?

The good news is that this is already common practice in online communities, where lots of users evaluate each other’s contributions in order to achieve a common goal. The bad news is that, as of today, these practices are rarely found in formal education.

An example from the open web

Stack Overflow is an online question-and-answer platform for software developers. It is built on software that tracks every user’s activity and awards badges (yes, like boy scouts) for all kinds of contributions. There is a badge for the first question a user asks, a badge for answering a certain number of questions, a badge that is awarded for submitting the best answer to a particular question, and so on. Answers are voted up or down by the community, and a higher reputation gives users more power to influence an answer’s score (which determines if it is shown at the top of a long thread where it is easy to find, or near the bottom). New and less experienced developers benefit from the expertise and skills of their more experienced peers – as they learn from the voting and the discussion that typically accompanies the separation of good from bad answers.
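To make the pattern concrete, here is a minimal sketch in TypeScript of the mechanics described above: votes adjust an answer’s score, answers are sorted so good ones rise to the top, and certain contributions trigger badges. The badge names and thresholds are invented for illustration; this is not Stack Overflow’s actual implementation.

```typescript
// A minimal sketch of the voting-and-badges pattern described above.
// Badge names and score thresholds are hypothetical illustrations.

type BadgeName = "First Question" | "Good Answer" | "Great Answer";

interface Answer {
  authorId: string;
  score: number;        // net result of community up/down votes
  badges: BadgeName[];  // recognition for contributions the community valued
}

function applyVote(answer: Answer, direction: "up" | "down"): void {
  answer.score += direction === "up" ? 1 : -1;
  awardScoreBadges(answer);
}

// Award badges once an answer crosses (made-up) score thresholds.
function awardScoreBadges(answer: Answer): void {
  if (answer.score >= 25 && !answer.badges.includes("Good Answer")) {
    answer.badges.push("Good Answer");
  }
  if (answer.score >= 100 && !answer.badges.includes("Great Answer")) {
    answer.badges.push("Great Answer");
  }
}

// Answers are displayed in score order, so the best ones are easy to find.
function sortAnswers(answers: Answer[]): Answer[] {
  return [...answers].sort((a, b) => b.score - a.score);
}
```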

What users in online communities do as part of their collaborative practice looks strikingly similar to effective and authentic assessment. Stack Overflow doesn’t use this peer review and assessment, or its badges, to recognize learning achievements. It uses them to identify high quality answers. What we think of as assessment is simply a way for Stack Overflow to improve what it does – help software developers get good answers to their questions. Nevertheless, we can already learn quite a lot about users’ participation and interests, and draw some conclusions regarding their expertise, by looking at the badges they have collected.

A second example that highlights the point is the work of open source software communities. Users who have demonstrated their ability to write or review code, and to act responsibly in the interest of the project, are awarded “commit rights”. They are allowed to make changes to the actual codebase. Is a degree in computer science really a better gauge of a developer’s ability to write great software than the “commit rights” awarded by her community of fellow developers?

Using the open web as our experimentation lab, we can imagine community-based learning assessments that are implicit and authentic, and that scale. They will be driven by the norms and practices of the learning community itself; will be open for anyone to observe and learn from; will include feedback loops that enable individual learners’ improvement; and can be broken down into small assessments of individual skills or artefacts.

Badges to connect assessment to certification

Not only can these forms of assessment improve the way we recognize skills, but they also enable a new and distributed way of certifying learning and signaling it to potential employers. What is needed for this to happen is a badges infrastructure that is decentralized, controlled by users, and driven by the types of assessment discussed above.

Badges become signals for learning achievements, and within a secure badges infrastructure users can control how they manage these signals. Learners collect badges from different communities (think of this as individual credits from different Universities). They control where to store and how to display these badges – for example, only to potential employers, or on a public website. And there is a way to authenticate badges, to make sure that claims about achievements are in fact true. See Mark’s post about the details of this badges infrastructure for more information on how this could work.
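As a rough illustration of what such a user-controlled, verifiable badge record might look like, here is a hypothetical sketch in TypeScript. The field names, the visibility options, and the issuer “/verify” endpoint are assumptions made for this example, not the actual specification of the infrastructure described in Mark’s post.

```typescript
// A hypothetical badge record in a decentralized, user-controlled infrastructure.
// Field names and the verification endpoint are illustrative assumptions.

interface BadgeAssertion {
  badgeName: string;    // e.g. a skill or achievement the community recognizes
  recipient: string;    // identifier the learner controls
  issuer: string;       // URL of the community that awarded the badge
  evidenceUrl: string;  // link to the work or discussion behind the award
  issuedOn: string;     // ISO date of the award
  visibility: "private" | "employers-only" | "public"; // the learner decides who sees it
}

// Authenticity check: ask the issuing community whether it really awarded this
// badge. In practice this could be a signature check or an issuer callback;
// the "/verify" endpoint below is a made-up example.
async function verifyAssertion(assertion: BadgeAssertion): Promise<boolean> {
  const response = await fetch(`${assertion.issuer}/verify`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(assertion),
  });
  return response.ok;
}
```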

The P2PU / Mozilla School of Webcraft will take these ideas about assessment and certification and test them for web developer training. We plan to use community voting and discussion models similar to those created by sites like Stack Overflow, and connect them to learning achievements. We have started identifying different types of activities and behaviors – those that express skills relevant to web developers – and are now working on formalizing them as badges, similar to the way Stack Overflow recognizes types of contributions.
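One way to picture that formalization, purely as a sketch: activities and behaviors could be written down as a simple declarative mapping from activity to badge, together with the kind of community evidence that backs the award. The activity and badge names below are invented for illustration; the actual School of Webcraft badges are still being worked out.

```typescript
// A sketch of mapping community activities to badges. All names are hypothetical.

interface BadgeRule {
  badge: string;                                   // the badge a learner can earn
  activity: string;                                // observable behavior in the community
  evidence: "peer-vote" | "peer-review" | "artifact"; // how the community backs the award
}

const webcraftBadgeRules: BadgeRule[] = [
  { badge: "Helpful Reviewer", activity: "review another learner's code", evidence: "peer-review" },
  { badge: "Accessible Markup", activity: "publish a page that passes a peer accessibility check", evidence: "peer-vote" },
  { badge: "First Deploy", activity: "publish a working site and link it as evidence", evidence: "artifact" },
];
```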

In late September, Peer 2 Peer University – in collaboration with Carnegie Foundation, Mozilla, and Shuttleworth Foundation – is organizing a workshop about “Learning assessment on the open web” to identify other mechanisms used by open source communities that might be applicable to assessment of learning achievements. And a first prototype of the badges infrastructure will be presented at the Mozilla Drumbeat Festival in November – we plan to roll it out more broadly, with more partners, in early 2011.