Archive for the ‘Evaluation’ Category

PLE2010 – reflections on the review process

April 25th, 2010 by Graham Attwell

A quick update in my series of posts on our experiences in organising the PLE2010 conference. We received 82 proposals for the conference – far more than we had expected. The strong response, I suspect, was due to three reasons: the interest in PLEs in the Technology Enhanced Learning community, the attraction of Barcelona as a venue and our success in using applications like Twitter for virally publicising the conference.

Having said that – in terms of format it seems to me that some of the submissions made as full conference papers would have been better made under other formats. However, present university funding requirements demand full papers and inhibit submissions of work in progress or developing ideas in more appropriate formats.

For the last two weeks I have been organising the review process. We promised that each submission would be blind reviewed by at least two reviewers. For this we are reliant on the freely given time and energy of our Academic Committee. And whilst reviewing can be a learning process in itself it is time consuming.

Submissions have been managed through the open source EasyChair system, hosted by the University of Manchester. The system is powerful, but the interfaces are far from transparent and the help somewhat minimalist! I have struggled to get the settings in the system right and some functions seem buggy – for instance, the function to show missing reviews seems not to be working.

Two lessons for the future seem immediately apparent. Firstly, we set the length of abstracts as a maximum of 350 words. Many of the reviewers have commented that this is too short to judge the quality of the submission.

Secondly is the fraught issue of criteria for the reviews. We produced detailed guidelines for submissions based on the Creative Commons licensed Alt-C guidelines.

The criteria were:

  • Relevance to the themes of the conference, although this does not exclude other high quality proposals.
  • Contribution to scholarship and research into the use of PLEs for learning.
  • Reference to the characteristics and needs of learners.
  • Contribution to the development of learning technology policy or theory in education.
  • Links that are made between theory, evidence and practice.
  • Appropriate reflection and evaluation.
  • Clarity and coherence.
  • Usefulness to conference participants.

However, when I sent out the papers for review, whilst I provided a link to those guidelines, I failed to copy them into the text of the emails asking for reviews. In retrospect, I should have attempted to produce a review template in EasyChair incorporating the guidelines.

Even with such explicit guidelines, there is considerable room for different interpretation by reviewers. I am not sure that in our community we have a common understanding of what might be relevant to the themes of the conference or a contribution to scholarship and research into the use of PLEs for learning. I suspect this is the same for many conferences: however, the issue may be more problematic in an emergent area of education and technology practice.

We also set a scale for scoring proposals:

  • 3 – strong accept
  • 2 – accept
  • 1 – weak accept
  • 0 – borderline
  • -1 – weak reject
  • -2 – reject
  • -3 – strong reject

In addition we asked reviewers to state their degree of confidence in their review ranging from 4, expert, to 0, null.

In over half the cases where we have received two reviews, the variation between the reviewers is no more than 1. But there are also a number of reviews with significant variation. This suggests significant differences in reviewers' understandings of the criteria – or of the meaning of the criteria. It could also just be that different reviewers have different standards.

In any case, we will organise a further review procedure for those submissions where there are significant differences. But I wonder if the scoring process is the best approach. To have no scoring seems to be a way of avoiding the issue. I wonder if we should have scoring for each criterion, although this would make the review process even more complicated.
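As an aside, the kind of check described above – flagging submissions where reviewer scores diverge too much for a further round of review – is simple to mechanise. The sketch below is purely illustrative: the data structure, paper identifiers and the threshold of 2 are my own assumptions, not part of the EasyChair workflow.

```python
# Hypothetical sketch: flag submissions whose reviewer scores diverge
# by more than a chosen threshold on the -3..3 scale described above.
# Paper IDs, scores and the threshold are invented for illustration.

reviews = {
    "paper-12": [3, 2],
    "paper-27": [2, -2],
    "paper-41": [0, 1],
}

THRESHOLD = 2  # a spread larger than this triggers a further review


def needs_further_review(scores, threshold=THRESHOLD):
    """Return True if the spread between reviewer scores exceeds the threshold."""
    return max(scores) - min(scores) > threshold


flagged = [paper_id for paper_id, scores in reviews.items()
           if needs_further_review(scores)]
print(flagged)  # only paper-27, with a spread of 4, is flagged
```

A per-criterion version would work the same way, with one list of scores per criterion, though as noted above that multiplies the work for reviewers.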

I would welcome any comments on this. Whilst too late for this conference, as a community we are reliant on peer review as a quality process and collective learning and reflection may be a way of improving our work.

Formative self assessment (in English!)

October 27th, 2009 by Graham Attwell

[Image: English translation of the self-evaluation template]
Yesterday I published a self evaluation template, used by young children in a German school. It was interesting, I thought, both in terms of the approach to formative evaluation – evaluation for learning rather than of learning – and in terms of the use of self evaluation as a tool for discussion between students and teachers. A number of people commented that they did not understand German and furthermore, because the file was uploaded as an image, they were unable to use online translation software.

Pekka Kamarainen noticed the queries on Twitter and kindly provided me with an English translation, reproduced above.

Evaluating e-Learning

November 26th, 2007 by Graham Attwell

We still have a substantial backlog of material to be published on this site. And we have a backlog of paper publications to go out. It will all sort out in time. But for the moment I am just trying to get things out in any way I can. So I have attached a PDF (1.1MB) version of a Guide to the Evaluation of e-Learning to this post.

This guide has been produced as a report on the work of the Models and Instruments for the evaluation of e-learning and ICT supported learning (E-VAL) project. The project took place between 2002 and 2005 and was sponsored by the European Commission Leonardo da Vinci programme. The project was coordinated by Pontydysgu.

The following text is taken from the introduction to the guide.

The development of e-learning products and the provision of e-learning opportunities is one of the most rapidly expanding areas of education and training. Whether this is through an intranet, the internet, multimedia, interactive TV or computer based training, the growth of e-learning is accelerating. However, what is known about these innovative approaches to training has been limited by the shortage of scientifically credible evaluation. Is e-learning effective? In what contexts? For what groups of learners? How do different learners respond? Are there marked differences between different ICT platforms? Does the socio-cultural environment make a difference? Considering the costs of implementing ICT based training, is there a positive return on investment? What are the perceptions of VET professionals? What problems has it created for them?

E-learning is also one of the areas that attracts the most research and development funding. If this investment is to be maximised, it is imperative that we generate robust models for the evaluation of e-learning and tools which are flexible in use but consistent in results.

“Although recent attention has increased e-learning evaluation, the current research base for evaluating e-learning is inadequate … Due to the initial cost of implementing e-learning programs, it is important to conduct evaluation studies.”
(American Society for Training and Development, 2001).

The Capitalisation report on the Leonardo da Vinci 1 programme, one of the biggest sponsors of innovative e-learning projects in European VET, also identified the lack of systematic evaluation as being the major weakness in e-learning projects.

However, whilst some have been desperately seeking answers to the question ‘What works and what doesn’t work?’ and looking for ways of improving the quality of e-learning, the response by a large sector of the community of e-learning developers and practitioners has been a growing preoccupation with software and platforms. There has been only limited attention to pedagogy and learning. The development of models and tools for the evaluation of e-learning can help in improving the quality of e-learning and in informing and shaping future development in policy and practice.

The guide contains eleven sections:

  1. Introduction – why do we need new models and tools for the evaluation of e-learning?
  2. Evaluating e-learning – what does the literature tell us?
  3. A Framework for the evaluation of e-learning
  4. Models and theories of evaluation
  5. Models and tools for the evaluation of e-learning – an overview
  6. The SPEAK Model and Tool
  7. Tool for the evaluation of the effectiveness of e-learning programmes in small and medium-sized enterprises (SMEs)
  8. Models and tools for evaluation of e-learning in higher vocational education
  9. Policy model and tool
  10. A management oriented approach to the evaluation of e-learning
  11. Individual learning model and tool

You can download the guide here: eval3
