Open Accreditation – a model

October 14th, 2008 by Graham Attwell

Can we develop an Open Accreditation system? What would we be looking for? In this post Jenny Hughes looks at criteria for a robust and effective accreditation system.

An accreditation system depends on the robustness of the assessment system on which it is based.

Imagine you were in a shop that sold accreditation / assessment systems ‘off-the-peg’ – what criteria would you use if you went in to buy one?

Reliability
Reliability is a measure of consistency. A robust assessment system should be reliable; that is, it should be based on an assessment process that yields the same results irrespective of who is conducting it or the environmental conditions under which it is taking place. Intra-tester reliability simply means that if the same assessor is assessing performance his or her judgement should be consistent and not influenced by, for example, another learner they might have just assessed or whether they feel unwell or just in a bad mood! Inter-tester reliability means that if two different assessors were given exactly the same questions, data collection tools, output data and so on, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances, which should have no bearing on the assessment object.

Validity
Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being assessed or tested and how that is being done. For example, if you are assessing how well someone can bake a cake or drive a car then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance assessment of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

Replicability
Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to assessment would disagree, however.

Transferability
Although each assessment should be designed around a particular piece of learning, a good assessment system is one which could be adapted for similar situations or could be extended easily to new activities. That is, if your situation evolves and changes over a period of time in response to need, it would be useful if you didn’t have to rethink your entire assessment system. Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

Credibility
People actually have to believe in your assessment! It needs to be authentic, honest, transparent and ethical. If you have even one group of stakeholders questioning the rigour of the assessment process or doubting the results or challenging the validity of the conclusions, the assessment loses credibility and is not worth doing.

Practicality
This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time or costs too much or is cumbersome to use or the products are inappropriate, then it is not a good assessment system!

Comparability
Although an assessment system should be customised to meet the needs of particular learning events, a good assessment system should also take into account the wider assessment ‘environment’ in which the learning is located. For example, if you are working in an environment where assessment is normally carried out by particular people (e.g. teachers, lecturers) in a particular institution (e.g. school or university) where ‘criterion-referenced assessment’ is the norm, then if you undertake a radically different type of assessment you may find that your audience will be less receptive and your results less acceptable. Similarly, if the learning that is being assessed is part of a wider system and everyone else is using a different system then this could mean that your input is ignored simply because it is too difficult to integrate.

Also, if you are trying to compare performance from one year to the next or compare learning outcomes with other people, then this needs to be taken into account.

2 Responses to “Open Accreditation – a model”

  1. jen hughes says:

    I think I might want to add ‘scale-ability’ – a bit like a sub-division of transferability but meaning a system which can cope with ‘big bits’ of learning and also ‘small bits’ of learning. Am also thinking about flexibility as another sub-division just meaning the extent to which you can stretch the system without breaking it. If anyone has any more ‘-bilities’, let me know.

    Also, I am using the word ‘accreditation’ strictly in the sense of systems for recognition of learning NOT in the sense of programme or course accreditation, i.e. individual (…does it need to be individual?) achievement not institutional licence to practice. ‘Assessment’ is also used in its broadest sense to include a wide range of strategies, not simply end-testing.

    Just as an aside….not many people know this…but I first met Graham when he was one of my students on an initial training course for adult education teachers. On the first day of that course, every year, I used to ask the students what things were likely to act as barriers to their learning. Almost without exception people used to say ‘worrying about whether I pass or fail’. The solution was really easy – the first thing I used to do on the first morning was to present them with their signed certificates and then tell them to stop worrying. It did actually work.

    I also told them they could give it back at the end if they thought they didn’t deserve it. (One woman did – and came back the next year. Coral…if you ever read this, that was awesome!)

    The moderators had a bit of a problem with it but couldn’t really argue their point unless they had ‘evidence’ that the students were not competent at the end of the course. They are probably still looking for Graham…..

