Open Accreditation – a model

October 14th, 2008 by Graham Attwell

Can we develop an Open Accreditation system? What would we be looking for? In this post Jenny Hughes looks at criteria for a robust and effective accreditation system.

An accreditation system depends on the robustness of the assessment system on which it is based.

Imagine you were in a shop that sold accreditation/assessment systems ‘off-the-peg’ – what criteria would you use if you went in to buy one?

Reliability
Reliability is a measure of consistency. A robust assessment system should be reliable; that is, it should be based on an assessment process that yields the same results irrespective of who is conducting it or the environmental conditions under which it takes place. Intra-tester reliability simply means that if the same assessor is assessing performance, his or her judgement should be consistent and not influenced by, for example, another learner they might have just assessed, or by feeling unwell or just being in a bad mood! Inter-tester reliability means that if two different assessors were given exactly the same questions, data collection tools, output data and so on, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances that have no bearing on what is being assessed.

Validity
Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being assessed or tested and how that is being done. For example, if you are assessing how well someone can bake a cake or drive a car then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance assessment of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

Replicability
Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to assessment would disagree, however.

Transferability
Although each assessment should be designed around a particular piece of learning, a good assessment system is one which could be adapted for similar situations or extended easily to new activities. That is, if your situation evolves and changes over time in response to need, it would be useful if you didn’t have to rethink your entire assessment system. Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

Credibility
People actually have to believe in your assessment! It needs to be authentic, honest, transparent and ethical. If you have even one group of stakeholders questioning the rigour of the assessment process, doubting the results or challenging the validity of the conclusions, the assessment loses credibility and is not worth doing.

Practicality
This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time, costs too much, is cumbersome to use or produces inappropriate outputs, then it is not a good assessment system!

Comparability
Although an assessment system should be customised to meet the needs of particular learning events, a good assessment system should also take into account the wider assessment ‘environment’ in which the learning is located. For example, if you are working in an environment where assessment is normally carried out by particular people (e.g. teachers, lecturers) in a particular institution (e.g. school or university) where criterion-referenced assessment is the norm, then if you undertake a radically different type of assessment you may find that your audience will be less receptive and your results less acceptable. Similarly, if the learning that is being assessed is part of a wider system and everyone else is using a different system, then your input may be ignored simply because it is too difficult to integrate.

Also, if you are trying to compare performance from one year to the next or compare learning outcomes with other people, then this needs to be taken into account.

2 Responses to “Open Accreditation – a model”

  1. jen hughes says:

    I think I might want to add ‘scale-ability’ – a bit like a sub division of transferability but meaning a system which can cope with ‘big bits’ of learning and also ‘small bits’ of learning. Am also thinking about flexibility as another sub-division just meaning the extent to which you can stretch the system without breaking it. If anyone has any more ‘-bilities’, let me know.

    Also, I am using the word ‘accreditation’ strictly in the sense of systems for recognition of learning NOT in the sense of programme or course accreditation, i.e. individual (…does it need to be individual?) achievement, not institutional licence to practice. ‘Assessment’ is also used in its broadest sense to include a wide range of strategies, not simply end-testing.

    Just as an aside….not many people know this…but I first met Graham when he was one of my students on an initial training course for adult education teachers. On the first day of that course, every year, I used to ask the students what things were likely to act as barriers to their learning. Almost without exception people used to say ‘worrying about whether I pass or fail’. The solution was really easy – the first thing I used to do on the first morning was to present them with their signed certificates and then tell them to stop worrying. It did actually work.

    I also told them they could give it back at the end if they thought they didn’t deserve it. (One woman did – and came back the next year. Coral…if you ever read this, that was awesome!)

    The moderators had a bit of a problem with it but couldn’t really argue their point unless they had ‘evidence’ that the students were not competent at the end of the course. They are probably still looking for Graham…..

