Archive for the ‘Evaluation’ Category

Evaluation 2.0: How do we progress it?

October 11th, 2011 by Jenny Hughes

Have been in Brussels for the last two days – speaking at the 9th European Week of Regions and Cities organized by DG Regio and also taking the opportunity to join other sessions. My topic was Evaluation 2.0. Very encouraged by the positive feedback I’ve been getting all day, both face-to-face and through Twitter. I thought people would be generally resistant to the idea as it was fairly hard-hitting (and in fairness, some were horrified!) but far more have been interested and very positive, including quite a lot of Commission staff. However, the question now being asked by a number of them is “How do we progress this?” – meaning, specifically, in the context of the evaluation of Regional Policy and DG Regio intervention.

Evaluation 2.0 in Regional Policy evaluation
I don’t have any answers to this – in some ways, that’s not for me to decide! I have mostly used Evaluation 2.0 stuff in the evaluation of education projects not regional policy. And my recent experience of the Cohesion Fund, ERDF, IPA or any of the structural funds is minimal. However, the ideas are generic and if people think that there are some they could work with, that’s fine!

That said, here are some suggestions for moving things forward – some of them are mine, most have been mooted by various people who have come to talk to me today (and bought me lots of coffee!).

Suggestions for taking it forward

  • Set up a twitter hashtag #evaluation2.0. Well that’s easy but I don’t know how much traffic there would be as yet!
  • Set up a webpage providing information and discussion around Evaluation 2.0. More difficult – who does that and who keeps it updated? Maybe, instead, it is worth feeding into the Evalsed site that DG Regio maintains, which currently provides information and support for their evaluators. I gather it is under review – a good opportunity to make it more interactive, to make more use of multimedia and to give users space to create content as well as DG Regio!
  • Form a small working group or interest group – this could be formal or informal, stand alone or tied to their existing evaluation network. Either way, it needs to be open and accessible to people who are interested in developing new ideas and trying some stuff out rather than a representative ‘committee’.
  • Alternatively, set up an expert group to move some ideas forward.
  • Or how about a Diigo group?
  • Undertake some small-scale trials with specific tools – to see whether the ideas do cross over from the areas I work in to Regional Policy.
  • Run a couple of one-day training events on Evaluation 2.0 focusing on some real hands-on workshops for evaluators and evaluation unit staff rather than just on information giving.
  • Check out with people responsible for evaluation in other DGs whether there is an opportunity for some joint development (a novel idea!). Unlike other ‘perspectives’, it is not tied to content or any particular theoretical approach.
  • Think about developing some mobile phone apps for evaluators and stakeholders around content specific issues – I can easily think of 5 or 6 possibilities to support both counterfactual, quantitative approaches and theory-based qualitative approaches. Although the ideas are generic, customizing the content means evaluators would have something concrete to work with rather than just ideas.
  • Produce an easy-to-use handbook on evaluation 2.0 for evaluators / evaluation units who want practical information on how to do it.
  • Ring fence a small amount of funding to support one-off explorations into innovative practice and new ideas around evaluation.
  • Encourage the evaluation unit to demonstrate leadership in new approaches – for example, try streaming a live internet radio programme around the theme of evaluation (cheap and easy!); set up a multi-user blog for people to post work in progress and interesting observations of ongoing projects using a range of media as well as text-based major reports; make some podcasts of interviews with key players in the evaluation of Regional Policy; set up a wiki around evaluation rather than having to drill down through the various Commission websites; try locating projects using GPS data so that we can all see where the action is taking place! Keep a twitter stream going around questions and issues – make use of crowd sourcing!
  • Advertise the next European Evaluation Society biennial conference, in Helsinki, October 1st – 5th 2012 “Evaluation in the networked society: New concepts, New challenges, New solutions” (There you go Bob, I just did!)
  • Broaden the idea of Evaluation 2.0 and maybe get rid of the catchphrase! We are already using the power of the semantic web in evaluation to mash open and linked data, for example. Should we now be talking about Evaluation 3.0?? Or should we find another name – Technology Enhanced Evaluation? We could have TEE parties instead of conferences – Europe’s answer to the American far right ; )

P.S. Message to the large numbers of English delegates at the conference

When you left Heathrow yesterday to come to Brussels, I do hope you waved to the English Rugby team arriving home from the Rugby World Cup in New Zealand.

(Just as well this conference was not a week later or I’d have had to leave a similar message for the French delegates…)

What is Evaluation 2.0?

October 4th, 2011 by Graham Attwell

Graham Attwell interviews Jenny Hughes about Evaluation 2.0

Just what is Evaluation 2.0?

Evaluation 2.0 is a set of ideas about evaluation that Pontydysgu are developing. At its simplest, it’s about using social software at all stages of the evaluation process in order to make evaluation more open, more transparent and more accessible to a wider range of stakeholders. At a theoretical level, we are trying to push forward and build on Guba and Lincoln’s ideas around 4th generation evaluation which is a constructivist approach incorporating key ideas around negotiation, multiple realities and stakeholder engagement. But this is the first part of the journey – ultimately, I believe that e-technologies are going to revolutionise the way we think about and practice evaluation.

In what way do you think this is going to happen?

Firstly, the use of social media gives stakeholders a real voice – irrespective of where they are located. Stakeholders can create and publish evaluation content. For example, in the past I might carry out some interviews as part of an evaluation. Sometimes I recorded them, sometimes I just made notes. Then I would try and interpret them and draw some conclusions about what they meant. Now I set up a web page for each evaluation and I podcast the interviews using audio or video and put them on the site. (Obviously this has to be negotiated with the interviewee but so far, no one has raised any objections.) There is the usual comment box so any stakeholder with access to the site can respond to the interview, add their interpretations, agree or disagree with my conclusions and so on.

Secondly, I think it is challenging our perceptions of who evaluators are. Everyone is now an evaluator. Think of the software that you use every day for online shopping from Amazon or eBay or any big chain store. If I want to buy a particular product I check out what other people have said about it, how many of them said it and how many stars it has been given. These are called recommender systems and I think they will have a big impact on evaluation. We have moved from the paradigm of the ‘expert’ collecting and analyzing data into a world of crowd sourcing – harnessing the potential of mass data collection and interpretation.
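(As a purely illustrative aside – not a tool we actually use – the recommender-system idea could be as simple as collecting stakeholder star ratings and comments against each evaluation finding and summarising them. The class, the finding and the ratings below are invented for the example.)

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Finding:
    """A single evaluation finding that stakeholders can rate and comment on."""
    title: str
    ratings: list = field(default_factory=list)   # star ratings, 1-5
    comments: list = field(default_factory=list)

    def rate(self, stars, comment=""):
        """Record one stakeholder's rating and optional comment."""
        self.ratings.append(stars)
        if comment:
            self.comments.append(comment)

    def summary(self):
        """Summarise the crowd's view of this finding."""
        if not self.ratings:
            return f"{self.title}: no ratings yet"
        return (f"{self.title}: {mean(self.ratings):.1f} stars from "
                f"{len(self.ratings)} stakeholders, {len(self.comments)} comments")

# Example: stakeholders rate a finding much as shoppers rate a product
finding = Finding("Mentoring improved learner confidence")
finding.rate(5, "Matches what we saw in the workshops")
finding.rate(3, "True for adults, less so for the younger group")
print(finding.summary())
```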

Thirdly, the explosion of Web 2.0 applications has provided us with a whole new range of evaluation tools that open up new methodological possibilities or make the old ones more efficient.  For example, if I am at the stage of formulating and refining the evaluation questions – I put it out as a call on Twitter.  It’s amazing how restricting evaluation questions to 140 characters can sharpen them up!

I did an evaluation of a community capacity building project in an inner city area recently and spent quite a long time before I went to the first meeting walking around the streets, checking out the community facilities, the state of the housing, local amenities and so on, to get a ‘feel’ for the area – except I did it on Google Earth and with street-view on Google maps.  There are about 20 or so other applications I use a lot in evaluation but maybe they will have to wait for another edition!

Fourthly, I think the potential of Web 2.0 changes the way we can visualize and present data. Why are we still writing long and indigestible text-based evaluation reports? Increasingly, clients prefer short, sharp evaluation ‘articles’ on maybe one outcome of an evaluation which they can find on a ‘newsy’ evaluation webpage – with hyperlinks to more detailed information or raw data or back-up evidence if they want to check it out. We can also create ‘chunks’ of evaluation reporting and repurpose them in different ways for different stakeholders, or they can be localized for different cultures – for example, I have started doing executive summaries as downloadable podcasts. I think Evaluation 2.0 is about creating a much wider range of evaluation products.

Following on from that, I think Evaluation 2.0 breaks down the formative-summative divide and notions of ‘the mid-term report’ or ‘the ex-ante report’.  Evaluation 2.0 is continuous, it is dynamic and it is interactive.  For example, I use Googledocs with all my clients – I add them as readers and editors on all the folders that relate to their evaluations. At any time of the day or night they can see work in progress and add their comments. I keep their evaluation website up to date so they get evaluation information as soon as it is available.

So do you think all evaluators will have to move down this road or will there always be a place for evaluators using more established methods?

Personally, I think massive change is inevitable. Apart from anything else, our clients of the future will be the digital natives – they will expect it.

There will always be a role for the evaluator but that role will be transformed and the skills will be different. I think a key job for the specialist evaluator will be designing the algorithms that underpin the evaluation. The evaluator will also need to be the creative director – they will need skills in informatics, in visualizing and presenting information, the creative skills to write blogs and wikis. They will need networking skills to set up and facilitate online communities of practice around different stakeholder groups and the ability to repurpose evaluation objects.

The rules of engagement are also changing – in the past you engaged with a client, now you engage with a community.  We also have to think how stakeholder created content might change our ideas about copyright, confidentiality, ownership, authorship.

So do you think evaluators as we know them will become extinct?!

Well, as Mark Halper said

“Dinosaurs were highly successful and lasted a long time. They never went away. They became smaller, faster, and more agile, and now we call them birds.”

Evaluation 2.0 – the Slidecast

August 2nd, 2011 by Graham Attwell

Late last year Jenny Hughes made a keynote presentation on Evaluation 2.0 for the UK Evaluation Society. And pretty quickly we were getting requests for the paper of the presentation and the presentation slides. The problem is that we have not yet got round to writing the paper. And Jen, like me, uses most of her canvas space for pictures rather than bullet points on her slides. This makes the presentation much more attractive but it is sometimes difficult to glean the meaning from the pictures alone.

So we decided we would make a slidecast of the presentation. But, halfway through, we realised it wasn’t working. Lacking an audience and just speaking to the slides, it was coming over as stilted and horribly dry. So we started again and changed the format. Rather than seeing it as a straightforward presentation, Jen and I just chatted about the central ideas. I think it works pretty well.

We started from the question of what Web 2.0 is. Jen says, “At its simplest, it’s about using social software at all stages of the evaluation process in order to make it more open, more transparent and more accessible to a wider range of stakeholders.” But editing the slidecast I realised we had talked about a lot more than evaluation. This chat really deals with Web 2.0 and the different ways we are developing and sharing knowledge, the differences between expert knowledge and crowd-sourced knowledge, and new roles for teachers, trainers and evaluators resulting from the changing uses of social media.

The impact of new technologies on teaching and learning

September 20th, 2010 by Graham Attwell

For a report that I am working on, I have been asked to assess the impact of new technologies on teaching and learning in the vocational education sector in the UK.

One major problem in judging the impact of new technologies on teaching and learning and on pedagogical approaches to teaching and learning is the need for metrics for judging such impact. It is relatively simple to survey the number of computers in a school, or the speed of an internet connection. It is also not impossible to count how many teachers are using a particular piece of technology. It is far harder to judge pedagogic change. One tool which could prove useful in this respect is the iCurriculum Framework (Barajas et al., 2004), developed by the European project of the same name. The framework was intended as a tool that educators can use to record the effects of their learners’ activities. It is based on seeing pedagogic and curricular activities along three dimensions – an Operational Curriculum, an Integrating Curriculum and a Transformational Curriculum. It is possible to approach pedagogies for using technologies for learning, for the same subject and for the same intended outcomes, on any one of those three dimensions.

  • Operational Curriculum is learning to use the tools and technology effectively. Knowing how to word-process, how to edit a picture, enter data and make simple queries of an information system, save and load files and so on.
  • Integrating Curriculum is where the uses of technology are applied to current curricula and organisation of teaching and learning. This might be using an online library of visual material, using a virtual learning environment to deliver a course or part of a course. The nature of the subject and institution of learning is essentially the same, but technology is used for efficiency, motivation and effectiveness.
  • Transformational Curriculum is based on the notion that what we might know, and how, and when we come to know it is changed by the existence of the technologies we use and therefore the curriculum and organisation of teaching and learning needs to change to reflect this. (p 8)
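As a purely illustrative sketch – not something produced by the iCurriculum project itself – the three dimensions can be treated as a simple classification scheme against which an educator records observed learner activities. The subject and activities below are invented examples.

```python
from dataclasses import dataclass
from enum import Enum

class CurriculumDimension(Enum):
    """The three dimensions of the iCurriculum Framework (Barajas et al., 2004)."""
    OPERATIONAL = "operational"            # learning to use the tools themselves
    INTEGRATING = "integrating"            # technology applied to the existing curriculum
    TRANSFORMATIONAL = "transformational"  # curriculum reshaped by what the technology makes possible

@dataclass
class ActivityRecord:
    """One observed learner activity, classified against the framework."""
    subject: str
    description: str
    dimension: CurriculumDimension

# The same subject and intended outcome can sit on any of the three dimensions
records = [
    ActivityRecord("History", "Word-processing an essay", CurriculumDimension.OPERATIONAL),
    ActivityRecord("History", "Searching an online archive of sources", CurriculumDimension.INTEGRATING),
    ActivityRecord("History", "Building a class wiki of crowd-sourced oral histories", CurriculumDimension.TRANSFORMATIONAL),
]
for r in records:
    print(f"{r.subject}: {r.description} -> {r.dimension.value}")
```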

In terms of general approaches suggested by research literature, most Further Education colleges in the UK are still approaching pedagogy and curriculum design from the standpoint of an operational curriculum, and although there are some examples of an integrating curriculum, there is little evidence of using technology for transformation.

Reference:

Barajas, M., Heinemann, L., Higueras, E., Kikis-Papakadis, K., Logofatu, B., Owen, M. et al. (2004). Guidelines for Emergent Competences at Schools, http://promitheas.iacm.forth.gr/i-curriculum/outputs.html

#PLE2010 update – the outcomes of the review process

May 2nd, 2010 by Graham Attwell

A further update on planning and preparations for the PLE2010 conference. We received 81 proposals, far more than we had expected. And whilst very welcome, this has generated a lot of work. Each proposal was assigned two reviewers from the conference Academic Committee. This has meant some members of the Committee being asked to review six papers, which is quite an effort and for which we are truly grateful.

One of the main points made in feedback to us from the reviewers was that a 360-word abstract is too short to make a proper judgement. And indeed some submissions did not make full use of the 360 words. We produced criteria for the submissions which were used by some reviewers. Others disagreed with this approach. Stephen Downes, commenting on my last blog post about the conference, said:

  • the stated criteria, as listed in the post above, are actually longer than many of the abstract submissions. As such, the criteria were overkill for what was actually being evaluated.
  • the criteria do not reflect academic merit. They are more like a check-off list that a non-skilled intake worker could complete. The purpose of having academics do the review is that the academics can evaluate the work on its own merit, not against a check-off list.
  • the criteria reflect a specific theoretical perspective on the subject matter which is at odds with the subject matter. They reflect an instructivist perspective, and a theory-based (universalists, abstractivist) perspective. Personal learning environments are exactly the opposite of that.
  • In other words, it is not appropriate to ask academic reviewers to bring their expertise to the material, and to then neuter that expertise with an overly prescriptive statement of criteria.

On the whole I think I agree with Stephen. But I am still concerned with how we reach some common understandings or standards for reviewing, especially in a multi-disciplinary and multi-national context.

Following the completion of the reviews, the conference organising committee met (via Skype) to discuss the outcomes of the process. We did not have time to properly consider the results of all 166 reviews and in the end decided to accept unconditionally any paper with an average score of two or more (reviewers were asked to score each submission on a scale ranging from plus three to minus three). That accounted for twenty-six of the proposals. Each of the remaining proposals was reconsidered by the seven members of the organising committee in the light of the feedback from the reviewers. In many of the cases we agreed with the reviews, in some cases we did not. Thirty of the proposals were accepted but we have asked the proposers to resubmit their abstracts, feeling that improvements could be made in clarity and in explaining their ideas to potential participants at the conference.
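Expressed as a minimal, illustrative sketch (the scores below are invented), the triage rule described above – accept unconditionally when the average reviewer score reaches two – looks something like this:

```python
def triage(scores, threshold=2.0):
    """Classify a proposal by its average reviewer score on the -3 to +3 scale.

    Proposals averaging at or above the threshold were accepted unconditionally;
    everything else went back to the organising committee for discussion.
    """
    average = sum(scores) / len(scores)
    return "accept unconditionally" if average >= threshold else "committee discussion"

# Example: two reviewers per proposal
print(triage([3, 2]))   # average 2.5 -> accept unconditionally
print(triage([2, 1]))   # average 1.5 -> committee discussion
```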

We referred nine of the proposals, mainly because, whilst they seemed interesting, we did not feel they had sufficiently addressed the theme of the conference, i.e. Personal Learning Environments. We have asked these proposers to resubmit their abstracts and we will review the proposals a second time. In a small number of cases we have recommended a change of format, particularly for research which is still at a conceptual stage, which we felt would be better presented as a short paper rather than a full proceedings paper. And, following the reviews, we did not accept five of the proposals. Once more, the main reason was their failing to address the themes of the conference.

I am sure we will have upset some people through this process. But the review process was, if nothing else, rigorous. The meeting to discuss the outcome lasted late into the evening and we were concerned wherever possible to be inclusive in our approach. We also decided not to use the automatic functionality of the EasyChair system for providing feedback on the proposals. The main reason for this was that we were very concerned that feedback should be helpful and constructive for all proposers. Whilst many of the reviews were very helpful in that respect, some were less so and thus we have edited those reviews.

Four quick thoughts on all this:

  • I am not sure that people spend enough time thinking about the calls for papers. What are the themes a conference is trying to address? How does my work contribute towards those themes?
  • I wonder if many academics struggle with writing abstracts. I was surprised how many did not use their full 360 words in their proposals. Abstracts are difficult to write (at least I find them hard) and perhaps our 360-word limit constrained many. However, it was surprising how many were not really clear in focus.
  • I am still concerned with how we can develop common understandings and standards between reviewers. Maybe we need some sort of discourse process between reviewers.
  • The task of providing clear feedback and judgement about proposals whilst still providing constructive and helpful feedback to proposers is not easy. Once more, this may be something which needs to be addressed at a community level.

PLE2010 – reflections on the review process

April 25th, 2010 by Graham Attwell

A quick update in my series of posts on our experiences in organising the PLE2010 conference. We received 82 proposals for the conference – far more than we had expected. The strong response, I suspect, was due to three reasons: the interest in PLEs in the Technology Enhanced Learning community, the attraction of Barcelona as a venue and our success in using applications like Twitter for virally publicising the conference.

Having said that, in terms of format it seems to me that some of the submissions made as full conference papers would have been better made under other formats. However, present university funding requirements demand full papers and inhibit applications for work in progress or developing ideas in more appropriate formats.

For the last two weeks I have been organising the review process. We promised that each submission would be blind reviewed by at least two reviewers. For this we are reliant on the freely given time and energy of our Academic Committee. And whilst reviewing can be a learning process in itself, it is time-consuming.

Submissions have been managed through the open source EasyChair system, hosted by the University of Manchester. The system is powerful, but the interfaces are far from transparent and the help somewhat minimalist! I have struggled to get the settings in the system right and some functions seem buggy – for instance the function to show missing reviews seems not to be working.

Two lessons for the future seem immediately apparent. Firstly, we set the length of abstracts as a maximum of 350 words. Many of the reviewers have commented that this is too short to judge the quality of the submission.

Secondly is the fraught issue of criteria for the reviews. We produced detailed guidelines for submissions based on the Creative Commons licensed Alt-C guidelines.

The criteria were:

  • Relevance to the themes of the conference although this does not exclude other high quality proposals.
  • Contribution to scholarship and research into the use of PLEs for learning.
  • Reference to the characteristics and needs of learners.
  • Contribution to the development of learning technology policy or theory in education.
  • Links that are made between theory, evidence and practice.
  • Appropriate reflection and evaluation.
  • Clarity and coherence.
  • Usefulness to conference participants.

However, when I sent out the papers for review, whilst I provided a link to those guidelines, I failed to copy them into the text of the emails asking for reviews. In retrospect, I should have attempted to produce a review template in EasyChair incorporating the guidelines.

Even with such explicit guidelines, there is considerable room for different interpretation by reviewers. I am not sure that in our community we have a common understanding of what might be relevant to the themes of the conference or a contribution to scholarship and research into the use of PLEs for learning. I suspect this is the same for many conferences: however, the issue may be more problematic in an emergent area of education and technology practice.

We also set a scale for scoring proposals:

  • 3 – strong accept
  • 2 – accept
  • 1 – weak accept
  • 0 – borderline
  • -1 – weak reject
  • -2 – reject
  • -3 – strong reject

In addition we asked reviewers to state their degree of confidence in their review ranging from 4, expert, to 0, null.

In over half the cases where we have received two reviews, the variation between the reviewers is no more than 1. But there are also a number of reviews with significant variation. This suggests significant differences in understandings by reviewers of the criteria – or the meaning of the criteria. It could also just be that different reviewers have different standards.

In any case, we will organise a further review procedure for those submissions where there are significant differences. But I wonder if the scoring process is the best approach. To have no scoring seems to be a way of avoiding the issue. I wonder if we should have scoring for each criterion, although this would make the review process even more complicated.
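As a minimal, illustrative sketch, flagging submissions with large reviewer disagreement for that further review round might look like this – assuming two reviews per submission and treating a gap of more than one point as significant; the submission identifiers and scores are invented:

```python
def needs_second_review(score_a, score_b, max_gap=1):
    """Flag a submission when the two reviewers' scores differ by more than max_gap."""
    return abs(score_a - score_b) > max_gap

# Example: pairs of reviewer scores on the -3 to +3 scale
reviews = {"submission-12": (3, 2), "submission-27": (2, -1), "submission-40": (0, 0)}
flagged = [sid for sid, (a, b) in reviews.items() if needs_second_review(a, b)]
print(flagged)  # ['submission-27'] – large disagreement, goes to a further review
```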

I would welcome any comments on this. Whilst too late for this conference, as a community we are reliant on peer review as a quality process and collective learning and reflection may be a way of improving our work.

Formative self assessment (in English!)

October 27th, 2009 by Graham Attwell

[Image: English translation of the self-evaluation template]
Yesterday I published a self evaluation template, used by young children in a German school. It was interesting, I thought, both in terms of the approach to formative evaluation – evaluation for learning rather than of learning – and in terms of the use of self evaluation as a tool for discussion between students and teachers. A number of people commented that they did not understand German and furthermore, because the file was uploaded as an image, they were unable to use online translation software.

Pekka Kamarainen noticed the queries on Twitter and kindly provided me with an English translation, reproduced above.

Evaluating e-Learning

November 26th, 2007 by Graham Attwell

We still have a substantial backlog of material to be published on this site. And we have a backlog of paper publications to go out. It will all sort out in time. But for the moment I am just trying to get things out in any way I can. So I have attached a PDF (1.1MB) version of a Guide to the Evaluation of e-Learning to this post.

This guide has been produced as a report on the work of the Models and Instruments for the evaluation of e-learning and ICT supported learning (E-VAL) project. The project took place between 2002 and 2005 and was sponsored by the European Commission Leonardo da Vinci programme. The project was coordinated by Pontydysgu.

The following text is taken from the introduction to the guide.

The development of e-learning products and the provision of e-learning opportunities is one of the most rapidly expanding areas of education and training.

Whether this is through an intranet, the internet, multimedia, interactive TV or computer based training, the growth of e-learning is accelerating. However, what is known about these innovative approaches to training has been limited by the shortage of scientifically credible evaluation. Is e-learning effective? In what contexts? For what groups of learners? How do different learners respond? Are there marked differences between different ICT platforms? Does the socio-cultural environment make a difference? Considering the costs of implementing ICT based training, is there a positive return on investment? What are the perceptions of VET professionals? What problems has it created for them?

E-learning is also one of the areas that attracts the most research and development funding. If this investment is to be maximised, it is imperative that we generate robust models for the evaluation of e-learning and tools which are flexible in use but consistent in results.

“Although recent attention has increased e-learning evaluation, the current research base for evaluating e-learning is inadequate … Due to the initial cost of implementing e-learning programs, it is important to conduct evaluation studies.”
(American Society for Training and Development, 2001).

The Capitalisation report on the Leonardo da Vinci 1 programme, one of the biggest sponsors of innovative e-learning projects in European VET, also identified the lack of systematic evaluation as being the major weakness in e-learning projects.

However, whilst some have been desperately seeking answers to the question ‘What works and what doesn’t work?’ and looking for ways of improving the quality of e-learning, the response by a large sector of the community of e-learning developers and practitioners has been a growing preoccupation with software and platforms. There has been only limited attention to pedagogy and learning. The development of models and tools for the evaluation of e-learning can help in improving the quality of e-learning and in informing and shaping future development in policy and practice.

The guide contains eleven sections:

  1. Introduction – why do we need new models and tools for the evaluation of e-learning
  2. Evaluating e-learning – what does the literature tell us?
  3. A Framework for the evaluation of e-learning
  4. Models and theories of evaluation
  5. Models and tools for the evaluation of e-learning – an overview
  6. The SPEAK Model and Tool
  7. Tool for the evaluation of the effectiveness of e-learning programmes in small- and medium-sized enterprises (SMEs)
  8. Models and tools for evaluation of e-learning in higher vocational education
  9. Policy model and tool
  10. A management oriented approach to the evaluation of e-learning
  11. Individual learning model and tool

You can download the guide here: eval3
