Archive for the ‘Evaluation’ Category

PLE2010 – reflections on the review process

April 25th, 2010 by Graham Attwell

A quick update in my series of posts on our experiences in organising the PLE2010 conference. We received 82 proposals for the conference – far more than we had expected. The strong response, I suspect, was due to three reasons: the interest in PLEs in the Technology Enhanced Learning community, the attraction of Barcelona as a venue and our success in using applications like Twitter for virally publicising the conference.

Having said that – in terms of format, it seems to me that some of the submissions made as full conference papers would have been better suited to other formats. However, present university funding requirements demand full papers and inhibit submissions of work in progress or developing ideas in more appropriate formats.

For the last two weeks I have been organising the review process. We promised that each submission would be blind reviewed by at least two reviewers. For this we are reliant on the freely given time and energy of our Academic Committee. And whilst reviewing can be a learning process in itself, it is time consuming.

Submissions have been managed through the open source EasyChair system, hosted by the University of Manchester. The system is powerful, but the interfaces are far from transparent and the help somewhat minimalist! I have struggled to get the settings in the system right and some functions seem buggy – for instance, the function to show missing reviews seems not to be working.

Two lessons for the future seem immediately apparent. Firstly, we set the length of abstracts as a maximum of 350 words. Many of the reviewers have commented that this is too short to judge the quality of the submission.

Secondly is the fraught issue of criteria for the reviews. We produced detailed guidelines for submissions based on the Creative Commons licensed Alt-C guidelines.

The criteria were:

  • Relevance to the themes of the conference, although this does not exclude other high quality proposals.
  • Contribution to scholarship and research into the use of PLEs for learning.
  • Reference to the characteristics and needs of learners.
  • Contribution to the development of learning technology policy or theory in education.
  • Links that are made between theory, evidence and practice.
  • Appropriate reflection and evaluation.
  • Clarity and coherence.
  • Usefulness to conference participants.

However, when I sent out the papers for review, whilst I provided a link to those guidelines, I failed to copy them into the text of the emails asking for reviews. In retrospect, I should have attempted to produce a review template in EasyChair incorporating the guidelines.

Even with such explicit guidelines, there is considerable room for different interpretation by reviewers. I am not sure that in our community we have a common understanding of what might be relevant to the themes of the conference or a contribution to scholarship and research into the use of PLEs for learning. I suspect this is the same for many conferences: however, the issue may be more problematic in an emergent area of education and technology practice.

We also set a scale for scoring proposals:

  • 3 – strong accept
  • 2 – accept
  • 1 – weak accept
  • 0 – borderline
  • -1 – weak reject
  • -2 – reject
  • -3 – strong reject

In addition we asked reviewers to state their degree of confidence in their review ranging from 4, expert, to 0, null.

In over half the cases where we have received two reviews, the variation between the reviewers is no more than 1. But there are also a number of reviews with significant variation. This suggests significant differences in reviewers' understandings of the criteria – or of the meaning of the criteria. It could also just be that different reviewers have different standards.

In any case, we will organise a further review procedure for those submissions where there are significant differences. But I wonder if the scoring process is the best approach. To have no scoring seems to be a way of avoiding the issue. I wonder if we should have scoring for each criterion, although this would make the review process even more complicated.
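For what it is worth, the check we are currently doing by hand could be scripted. The sketch below (in Python) shows one way reviewer scores and confidence ratings might be combined: it flags submissions whose scores diverge by more than a given spread (the candidates for the further review round) and includes an optional confidence-weighted mean. The data layout, field names and threshold are illustrative assumptions on my part, not EasyChair's actual export format.

```python
# Illustrative sketch only: the tuple layout (submission_id, score, confidence)
# is an assumption, not EasyChair's export format.
from collections import defaultdict

# Scores use the PLE2010 scale: 3 (strong accept) down to -3 (strong reject).
# Confidence uses the 4 (expert) down to 0 (null) scale reviewers were asked for.
reviews = [
    ("paper-01", 2, 3),
    ("paper-01", -1, 2),
    ("paper-02", 1, 4),
    ("paper-02", 2, 3),
]

def flag_for_second_review(reviews, max_spread=1):
    """Return submissions whose highest and lowest scores differ by more than
    max_spread, i.e. candidates for the further review procedure."""
    scores = defaultdict(list)
    for submission, score, _confidence in reviews:
        scores[submission].append(score)
    return [sub for sub, s in scores.items() if max(s) - min(s) > max_spread]

def confidence_weighted_mean(reviews, submission):
    """Weight each score by the reviewer's stated confidence, so an expert
    review counts for more than a tentative one."""
    relevant = [(score, conf) for sub, score, conf in reviews if sub == submission]
    total_weight = sum(conf for _, conf in relevant)
    if total_weight == 0:
        return None  # every reviewer declared null confidence
    return sum(score * conf for score, conf in relevant) / total_weight

print(flag_for_second_review(reviews))                # ['paper-01']
print(confidence_weighted_mean(reviews, "paper-01"))  # 0.8
```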

I would welcome any comments on this. Whilst it is too late for this conference, as a community we are reliant on peer review as a quality process, and collective learning and reflection may be a way of improving our work.

Formative self assessment (in English!)

October 27th, 2009 by Graham Attwell

[Image: the self-evaluation template, translated into English]
Yesterday I published a self evaluation template, used by young children in a German school. It was interesting, I thought, both in terms of the approach to formative evaluation – evaluation for learning rather than of learning – and in terms of the use of self evaluation as a tool for discussion between students and teachers. A number of people commented that they did not understand German and, furthermore, because the file was uploaded as an image, they were unable to use online translation software.

Pekka Kamarainen noticed the queries on Twitter and kindly provided me with an English translation, reproduced above.

Evaluating e-Learning

November 26th, 2007 by Graham Attwell

We still have a substantial backlog of material to be published on this site. And we have a backlog of paper publications to go out. It will all sort out in time. But for the moment I am just trying to get things out in any way I can. So I have attached a PDF (1.1MB) version of a Guide to the Evaluation of e-Learning to this post.

This guide has been produced as a report on the work of the Models and Instruments for the evaluation of e-learning and ICT supported learning (E-VAL) project. The project took place between 2002 and 2005 and was sponsored by the European Commission Leonardo da Vinci programme. The project was coordinated by Pontydysgu.

The following text is taken from the introduction to the guide.

The development of e-learning products and the provision of e-learning opportunities is one of the most rapidly expanding areas of education and training.

Whether this is through an intranet, the internet, multimedia, interactive TV or computer based training, the growth of e-learning is accelerating. However, what is known about these innovative approaches to training has been limited by the shortage of scientifically credible evaluation. Is e-learning effective? In what contexts? For what groups of learners? How do different learners respond? Are there marked differences between different ICT platforms? Does the socio-cultural environment make a difference? Considering the costs of implementing ICT based training, is there a positive return on investment? What are the perceptions of VET professionals? What problems has it created for them?

E-learning is also one of the areas that attracts the most research and development funding. If this investment is to be maximised, it is imperative that we generate robust models for the evaluation of e-learning and tools which are flexible in use but consistent in results.

“Although recent attention has increased e-learning evaluation, the current research base for evaluating e-learning is inadequate … Due to the initial cost of implementing e-learning programs, it is important to conduct evaluation studies.”
(American Society for Training and Development, 2001).

The Capitalisation report on the Leonardo da Vinci 1 programme, one of the biggest sponsors of innovative e-learning projects in European VET, also identified the lack of systematic evaluation as being the major weakness in e-learning projects.

However, whilst some have been desperately seeking answers to the question ‘What works and what doesn’t work?’ and looking for ways of improving the quality of e-learning, the response by a large sector of the community of e-learning developers and practitioners has been a growing preoccupation with software and platforms. There has been only limited attention to pedagogy and learning. The development of models and tools for the evaluation of e-learning can help in improving the quality of e-learning and in informing and shaping future development in policy and practice.

The guide contains eleven sections:

  1. Introduction – why do we need new models and tools for the evaluation of e-learning?
  2. Evaluating e-learning – what does the literature tell us?
  3. A Framework for the evaluation of e-learning
  4. Models and theories of evaluation
  5. Models and tools for the evaluation of e-learning – an overview
  6. The SPEAK Model and Tool
  7. Tool for the evaluation of the effectiveness of e-learning programmes in small and medium-sized enterprises (SMEs)
  8. Models and tools for evaluation of e-learning in higher vocational education
  9. Policy model and tool
  10. A management oriented approach to the evaluation of e-learning
  11. Individual learning model and tool

You can download the guide here: eval3
