We still have a substantial backlog of material to be published on this site. And we have a backlog of paper publications to go out. It will all sort out in time. But for the moment I am just trying to get things out in any way I can. So I have attached a PDF (1.1MB) version of a Guide to the Evaluation of e-Learning to this post.
This guide has been produced as a report on the work of the Models and Instruments for the evaluation of e-learning and ICT supported learning (E-VAL) project. The project took place between 2002 and 2005 and was sponsored by the European Commission Leonardo da Vinci programme. The project was coordinated by Pontydysgu.
The following text is taken from the introduction to the guide.
The development of e-learning products and the provision of e-learning opportunities are among the most rapidly expanding areas of education and training.
Whether this is through an intranet, the internet, multimedia, interactive TV or computer-based training, the growth of e-learning is accelerating. However, what is known about these innovative approaches to training has been limited by the shortage of scientifically credible evaluation. Is e-learning effective? In what contexts? For what groups of learners? How do different learners respond? Are there marked differences between different ICT platforms? Does the socio-cultural environment make a difference? Considering the costs of implementing ICT-based training, is there a positive return on investment? What are the perceptions of VET professionals? What problems has it created for them?
E-learning is also one of the areas that attracts the most research and development funding. If this investment is to be maximised, it is imperative that we generate robust models for the evaluation of e-learning and tools which are flexible in use but consistent in results.
“Although recent attention has increased e-learning evaluation, the current research base for evaluating e-learning is inadequate … Due to the initial cost of implementing e-learning programs, it is important to conduct evaluation” (American Society for Training and Development, 2001).
The Capitalisation report on the Leonardo da Vinci 1 programme, one of the biggest sponsors of innovative e-learning projects in European VET, also identified the lack of systematic evaluation as being the major weakness in e-learning projects.
However, whilst some have been desperately seeking answers to the question ‘What works and what doesn’t work?’ and looking for ways of improving the quality of e-learning, the response by a large sector of the community of e-learning developers and practitioners has been a growing preoccupation with software and platforms. There has been only limited attention to pedagogy and learning. The development of models and tools for the evaluation of e-learning can help in improving the quality of e-learning and in informing and shaping future development in policy and practice.
The guide contains eleven sections:
- Introduction – why do we need new models and tools for the evaluation of e-learning?
- Evaluating e-learning – what does the literature tell us?
- A Framework for the evaluation of e-learning
- Models and theories of evaluation
- Models and tools for the evaluation of e-learning – an overview
- The SPEAK Model and Tool
- Tool for the evaluation of the effectiveness of e-learning programmes in small- and medium-sized enterprises
- Models and tools for evaluation of e-learning in higher vocational education
- Policy model and tool
- A management oriented approach to the evaluation of e-learning
- Individual learning model and tool
You can download the guide here: eval3