Open Badges, assessment and Open Education

August 25th, 2011 by Graham Attwell

I have spent some time this morning thinking about the Mozilla Open Badges and assessment project, spurred on by the study group set up by Doug Belshaw to think about the potential of the scheme. And the more I think about it, the more I am convinced of its potential as perhaps one of the most significant developments in the move towards Open Education. First, though, a brief recap for those of you who have not already heard about the project.

The Open Badges framework, say the project developers, is designed to allow any learner to collect badges from multiple sites, tied to a single identity, and then share them out across various sites — from their personal blog or web site to social networking profiles. The infrastructure needs to be open to allow anyone to issue badges, and for each learner to carry the badges with them across the web and other contexts.

Now some of the issues. I am still concerned about attempts to establish taxonomies, be they hierarchies of award structures or classifications of different forms of ability / competence / skill (pick your own terminology). Such undertakings have bedevilled attempts to introduce new forms of recognition, and I worry that those coming more from the educational technology world may not realise the pitfalls of taxonomies and levels.

Secondly, there is the issue of credibility. The danger here is twofold. One is that the badges will only be adopted for achievements in areas / subjects / domains presently outside ‘official’ accreditation schemes and thus will be marginalised. The other is that, in the desire to gain recognition, badges will effectively be benchmarked against present accreditation programmes (e.g. university modules / degrees) and thus become subject to all the existing restrictions of such accreditation.

And thirdly, as the project rolls towards a full release, there may be pressure to restrict badge issuers to existing accreditation bodies, and to concentrate on the technological infrastructure rather than rethinking practices in assessment.

Let’s look at some of the characteristics of any assessment system:

  • Reliability

Reliability is a measure of consistency. A robust assessment system should be reliable, that is, it should yield the same results irrespective of who is conducting it or the environmental conditions under which it is taking place. Intra-tester reliability simply means that if the same assessor is looking at your work, his or her judgement should be consistent and not influenced by, for example, another assessment they might have undertaken. Inter-tester reliability means that if two different assessors were given exactly the same evidence, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances, which should have no bearing on the evidence.

  • Validity

Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being evaluated or tested and how that is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance evaluation of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

  • Replicability

Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to evaluation would disagree, however.

  • Transferability

Although each assessment is looking at a particular set of outcomes, a good assessment system is one that could be adapted for similar outcomes or could be extended easily to new learning.  Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

  • Credibility

People actually have to believe in the assessment! It needs to be authentic, honest, transparent and ethical. If people question the rigour of the assessment process, doubt the results or challenge the validity of the conclusions, the assessment loses credibility and is not worth doing.

  • Practicality

This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time or costs too much or is cumbersome to use or the products are inappropriate then it is not a good evaluation!

Pretty obviously there is going to be a trade-off between different factors. It is possible to design extremely sophisticated assessments which have a high degree of validity. However, such assessments may be extremely time consuming and thus not practical. The introduction of multiple choice tests through e-learning platforms is cheap and easy. However, they often lack face validity, especially for vocational skills and work based learning.

Let’s try to make this discussion more concrete by focusing on one of the Learning Badges pilot assessments at the School of Webcraft.

OpenStreetMapper Badge Challenge

Description: The OpenStreetMapper badge recognizes the ability of the user to edit OpenStreetMap wherever satellite imagery is available in Potlatch 2.

Assessment Type: PEER – any peer can review the work and vote. The badge will be issued with 3 YES votes.

Assessment Details:

OpenStreetMap.org is essentially a Wikipedia site for maps. OpenStreetMap benefits from real-time collaboration from thousands of global volunteers, and it is easy to join. Satellite images are available in most parts of the world.

P2PU has a basic overview of what OpenStreetMap is, and how to make edits in Potlatch 2 (Flash required). This isn’t the default editor, so please read “An OpenStreetMap How-To”.

Your core tasks are:

  1. Register with OpenStreetMap and create a username. On your user page, accessible at this link, change your editor to Potlatch 2.
  2. On OpenStreetMap.org, search and find a place near you. Find an area where a restaurant, school, or gas station is unmapped, or could use more information. Click ‘Edit’ at the top of the map. You can click one of the icons, drag it onto the map, and release to make it stick.
  3. To create a new road, park, or other 2D shape, simply click to add points. Click other points on the map where there are intersections. Use the Escape key to finish editing.
  4. To verify your work, go to edit your point of interest, click Advanced at the bottom of the editor to add custom tags to this point, and add the tag ‘p2pu’. Make its value be your P2PU username so we can connect the account posting on this page to the one posting on OpenStreetMap.
  5. Submit a link to your OpenStreetMap edit history. Fill in the blank in the following link with your OpenStreetMap username: http://www.openstreetmap.org/user/____/edits
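Step 4 hints at how an assessor might verify a submission programmatically rather than by eye. As an illustration only (this is not part of the challenge, and the sample data and function name are invented), the following Python sketch checks an XML document shaped like an OSM API v0.6 node response for the ‘p2pu’ tag with a matching username:

```python
# Illustrative sketch: given XML shaped like an OSM API v0.6 response
# (e.g. from GET /api/0.6/node/<id>), check whether any node carries
# the tag p2pu=<username>, as step 4 of the challenge requires.
# The sample response below is made up for demonstration.
import xml.etree.ElementTree as ET

def has_p2pu_tag(osm_xml: str, p2pu_username: str) -> bool:
    """Return True if any node in the document is tagged p2pu=<username>."""
    root = ET.fromstring(osm_xml)
    for node in root.iter("node"):
        for tag in node.findall("tag"):
            if tag.get("k") == "p2pu" and tag.get("v") == p2pu_username:
                return True
    return False

# A minimal, hypothetical response for one point of interest:
sample = """<osm version="0.6">
  <node id="123" lat="51.5" lon="-0.1">
    <tag k="amenity" v="restaurant"/>
    <tag k="p2pu" v="grahamattwell"/>
  </node>
</osm>"""

print(has_p2pu_tag(sample, "grahamattwell"))  # True
```

In practice an assessor would fetch the learner’s actual edits over the API, but the matching logic would look much like this.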

You can also apply for the Humanitarian Mapper badge: http://badges.p2pu.org/questions/132/humanitarian-mapper-badge-challenge

Assessment Rubric:

  1. Created OpenStreetMap username
  2. Performed point-of-interest edit
  3. Edited a road, park, or other way
  4. Added the tag p2pu and the value [username] to the point-of-interest edit
  5. Submitted link to OpenStreetMap edit history or user page to show what edits were made

NOTE for those assessing the submitted work: please compare the work to the rubric above and vote YES if the submitted work meets the requirements (and leave a comment to justify your vote) or NO if the submitted work does not meet the rubric requirements (and leave a comment of constructive feedback on how to improve the work).

CC-BY-SA JavaScript Basic Badge used as template.
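The peer-review rule in the challenge — a badge is issued once a submission collects three YES votes, each accompanied by a comment — is simple enough to sketch in code. This is my own illustrative sketch of that rule, not P2PU’s actual implementation; the class and method names are invented:

```python
# Sketch of the peer-review rule described above: the badge is issued
# once a submission collects three YES votes, and every vote must carry
# a justifying comment. Illustration only, not P2PU's implementation.
from dataclasses import dataclass, field

@dataclass
class Submission:
    votes: list = field(default_factory=list)  # (voter, "YES"/"NO", comment)

    def vote(self, voter: str, decision: str, comment: str) -> None:
        if not comment:
            raise ValueError("every vote needs a justifying comment")
        self.votes.append((voter, decision, comment))

    @property
    def badge_issued(self) -> bool:
        yes_votes = sum(1 for _, decision, _ in self.votes if decision == "YES")
        return yes_votes >= 3

s = Submission()
s.vote("peer1", "YES", "Meets all five rubric items.")
s.vote("peer2", "YES", "Edit history shows POI and way edits.")
print(s.badge_issued)  # False: only two YES votes so far
s.vote("peer3", "YES", "p2pu tag present with matching username.")
print(s.badge_issued)  # True
```

The interesting design point is that the threshold and the requirement for comments push assessors towards the rubric rather than towards a bare pass/fail click.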

Pretty clearly this assessment scores well on validity and also looks to be reliable. The template could easily be transferred, as indeed it has been in the pilot. It is also very practical. However, much of this is due to the nature of the subject being assessed – it is much easier to use computers for assessing practical tasks which involve the use of computers than it is for tasks which do not!

This leaves the issue of credibility. I have to admit I know nothing about the School of Webcraft, nor do I know who the assessors for this pilot were. But it would seem that instead of relying on external bodies in the form of examination boards and assessment agencies to provide credibility (deserved or otherwise), if the assessment process is integrated within communities of practice – and indeed assessment tasks such as the one given above could become a shared artefact of that community – then the Badge could gain credibility. And this seems a much better way of building credibility than trying to negotiate complicated arrangements whereby n number of badges at n level would be recognised as equivalent to a degree or other ‘traditional’ qualification.

But let’s return to some of the general issues around assessment.

So far most of the discussions about the Badges project seem to be focused on summative assessment. But there is considerable research evidence that formative assessment is critical for learning. Formative assessment can be seen as

“all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.”

Black and Wiliam (1998)

And that is where the Badges project could come of age. One of the major problems with Personal Learning Environments is the difficulty learners have in scaffolding their own learning. The development of formative assessment to provide (online) feedback to learners could help them develop their personal learning plans and facilitate or mediate community involvement in that learning. Furthermore, a series of task-based assessments could guide learners through what Vygotsky called the Zone of Proximal Development (and, incidentally, in Vygotsky’s terms assessors would act as More Knowledgeable Others).

In these terms the badges project has the potential not only to support learning taking place outside the classroom but to build a significant infrastructure or ecology to support learning that takes place anywhere, regardless of enrollment on traditional (face to face or distance) educational programmes.

In a second article in the next few days I will provide an example of how this could work.


4 Responses to “Open Badges, assessment and Open Education”

  1. I agree with you on the importance of formative assessment in networked open learning. I like the idea of any form of assessment as long as personal learning, and not external criteria, is at the heart of it, as these criteria might be totally irrelevant to the person who is learning. Of course we will always keep the problem of validating what someone has learnt against societal or institutional criteria. I think badges will go down well in schools with younger learners, but I doubt that there will be many adults who will feel great when they have earned a badge. It makes me feel quite uncomfortable. ePortfolios suddenly don’t sound that bad!

  2. Hi Graham,

    Isn’t “Replicability” the pre-condition of “Reliability”? 😉

    cheers,

    Seb

  3. Judy Bloxham says:

    I’ve been doing a case study for LSIS about the use of the stamp module in Moodle; the stamp can be seen as a badge. It has been used to mark learners’ competency in the use of software or their soft skills. The lecturer instigating this has now transferred it to staff development and has also equated the number of stamps with a level of competency. It has an added advantage in that it also acts as a visible award, as these stamps are there on the individual’s home page. It also acts as a driver as it creates competition, both in the sense of competition with others and in an ipsative sense, in that it drives you to improve on your own performance.

    It hasn’t been published yet but is pending; if you want to read the article, look on the LSIS Excellence Gateway under good e-practice.

  4. Judy Bloxham says:

    Sorry, I should have added that the learners were on a degree programme, so worries about how this goes down with adults ….
