Archive for the ‘PLE2010’ Category

Working, learning and playing in Personal Learning Environments

May 31st, 2010 by Graham Attwell

I have been invited to deliver a keynote presentation at the PLE 2010 conference in July in Barcelona. And the organising committee has asked each of the keynote speakers – the others are Alec Couros, Ismael Peña Lopez and Jordi Adell – to make a short video or slidecast about their presentation. So here is my contribution – hope you like it.

#PLE2010 update – the outcomes of the review process

May 2nd, 2010 by Graham Attwell

A further update on planning and preparations for the PLE2010 conference. We received 81 proposals, far more than we had expected. And whilst very welcome, this has generated a lot of work. Each proposal was assigned two reviewers from the conference Academic Committee. This has meant some members of the Committee being asked to review six papers, which is quite an effort for which we are truly grateful.

One of the main points made in feedback to us from the reviewers was that a 360 word abstract is too short to make a proper judgement. And indeed some submissions did not make full use of the 360 words. We produced criteria for the submissions which were used by some reviewers. Others disagreed with this approach. Stephen Downes, commenting on my last blog post about the conference, said:

  • the stated criteria, as listed in the post above, are actually longer than many of the abstract submissions. As such, the criteria were overkill for what was actually being evaluated.
  • the criteria do not reflect academic merit. They are more like a check-off list that a non-skilled intake worker could complete. The purpose of having academics do the review is that the academics can evaluate the work on its own merit, not against a check-off list.
  • the criteria reflect a specific theoretical perspective on the subject matter which is at odds with the subject matter. They reflect an instructivist perspective, and a theory-based (universalist, abstractivist) perspective. Personal learning environments are exactly the opposite of that.
  • In other words, it is not appropriate to ask academic reviewers to bring their expertise to the material, and to then neuter that expertise with an overly prescriptive statement of criteria.

On the whole I think I agree with Stephen. But I am still concerned with how we reach some common understandings or standards for reviewing, especially in a multi-disciplinary and multinational context.

Following the completion of the reviews, the conference organising committee met (via Skype) to discuss the outcomes of the process. We did not have time to consider properly the results of all 166 reviews, and in the end agreed to unconditionally accept any paper with an average score of two or more (reviewers were asked to score each submission on a scale ranging from plus three to minus three). That accounted for twenty-six of the proposals. Each of the remaining proposals was reconsidered by the seven members of the organising committee in the light of the feedback from the reviewers. In many cases we agreed with the reviews; in some cases we did not. Thirty of the proposals were accepted, but we have asked the proposers to resubmit their abstracts, feeling that improvements could be made in clarity and in explaining their ideas to potential participants at the conference.
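For anyone curious how that threshold worked in practice, here is a minimal sketch in Python of the averaging rule. The proposal identifiers and scores are invented for illustration; they are not the actual conference data, nor the EasyChair export format.

```python
# Minimal sketch of the acceptance rule described above.
# Scores follow the conference scale: +3 (strong accept) down to -3.
# All proposal IDs and scores below are invented for illustration.

reviews = {
    "proposal-01": [3, 2],   # average 2.5 -> unconditional accept
    "proposal-02": [2, 1],   # average 1.5 -> reconsidered by the committee
    "proposal-03": [-1, 0],  # average -0.5 -> reconsidered by the committee
}

for proposal, scores in reviews.items():
    average = sum(scores) / len(scores)
    if average >= 2:
        decision = "unconditional accept"
    else:
        decision = "reconsider with reviewer feedback"
    print(f"{proposal}: average {average:+.1f} -> {decision}")
```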

We referred nine of the proposals, in the main because, whilst they seemed interesting, we did not feel they had sufficiently addressed the theme of the conference, i.e. Personal Learning Environments. We have asked these proposers to resubmit their abstracts and we will review the proposals a second time. In a small number of cases we have recommended a change of format, particularly for research which is still at a conceptual stage and which we felt would be better presented as a short paper rather than a full proceedings paper. And, following the reviews, we did not accept five of the proposals. Once more, the main reason was their failure to address the themes of the conference.

I am sure we will have upset some people through this process. But the review process was, if nothing else, rigorous. The meeting to discuss the outcomes lasted late into the evening, and we were concerned wherever possible to be inclusive in our approach. We also decided not to use the automatic functionality of the EasyChair system for providing feedback on the proposals. The main reason for this was that we were very concerned that feedback should be helpful and constructive for all proposers. Whilst many of the reviews were very helpful in that respect, some were less so, and we have therefore edited those reviews.

Four quick thoughts on all this:

  • I am not sure that people spend enough time thinking about the calls for papers. What are the themes a conference is trying to address? How does my work contribute towards those themes?
  • I wonder if many academics struggle with writing abstracts. I was surprised how many did not use their full 360 words in their proposals. Abstracts are difficult to write (at least I find them hard) and perhaps our 360-word limit constrained many. Even so, a surprising number lacked a clear focus.
  • I am still concerned with how we can develop common understandings and standards between reviewers. Maybe we need some sort of discourse process between reviewers.
  • The task of providing clear feedback and judgement about proposals whilst still providing constructive and helpful feedback to proposers is not easy. Once more, this may be something which needs to be addressed at a community level.

PLE2010 – reflections on the review process

April 25th, 2010 by Graham Attwell

A quick update in my series of posts on our experiences in organising the PLE2010 conference. We received 82 proposals for the conference – far more than we had expected. The strong response, I suspect, was due to three reasons: the interest in PLEs in the Technology Enhanced Learning community, the attraction of Barcelona as a venue and our success in using applications like Twitter for virally publicising the conference.

Having said that, in terms of format it seems to me that some of the submissions made as full conference papers would have been better suited to other formats. However, present university funding requirements demand full papers and inhibit submissions of work in progress or developing ideas in more appropriate formats.

For the last two weeks I have been organising the review process. We promised that each submission would be blind reviewed by at least two reviewers. For this we are reliant on the freely given time and energy of our Academic Committee. And whilst reviewing can be a learning process in itself, it is time-consuming.

Submissions have been managed through the open source EasyChair system, hosted by the University of Manchester. The system is powerful, but the interfaces are far from transparent and the help somewhat minimalist! I have struggled to get the settings in the system right and some functions seem buggy – for instance, the function to show missing reviews seems not to be working.

Two lessons for the future seem immediately apparent. Firstly, we set the length of abstracts as a maximum of 350 words. Many of the reviewers have commented that this is too short to judge the quality of the submission.

Secondly is the fraught issue of criteria for the reviews. We produced detailed guidelines for submissions based on the Creative Commons licensed Alt-C guidelines.

The criteria were:

  • Relevance to the themes of the conference, although this does not exclude other high-quality proposals.
  • Contribution to scholarship and research into the use of PLEs for learning.
  • Reference to the characteristics and needs of learners.
  • Contribution to the development of learning technology policy or theory in education.
  • Links that are made between theory, evidence and practice.
  • Appropriate reflection and evaluation.
  • Clarity and coherence.
  • Usefulness to conference participants.

However, when I sent out the papers for review, whilst I provided a link to those guidelines, I failed to copy them into the text of the emails asking for reviews. In retrospect, I should have attempted to produce a review template in EasyChair incorporating the guidelines.

Even with such explicit guidelines, there is considerable room for different interpretation by reviewers. I am not sure that in our community we have a common understanding of what might be relevant to the themes of the conference or a contribution to scholarship and research into the use of PLEs for learning. I suspect this is the same for many conferences: however, the issue may be more problematic in an emergent area of education and technology practice.

We also set a scale for scoring proposals:

  • 3 – strong accept
  • 2 – accept
  • 1 – weak accept
  • 0 – borderline
  • -1 – weak reject
  • -2 – reject
  • -3 – strong reject

In addition, we asked reviewers to state their degree of confidence in their review, ranging from 4 (expert) to 0 (null).

In over half the cases where we have received two reviews, the variation between the reviewers is no more than 1. But there are also a number of reviews with significant variation. This suggests significant differences in reviewers' understandings of the criteria – or of the meaning of the criteria. It could also just be that different reviewers have different standards.

In any case, we will organise a further review procedure for those submissions where there are significant differences. But I wonder if the scoring process is the best approach. To have no scoring seems to be a way of avoiding the issue. I wonder if we should have scoring for each criterion, although this would make the review process even more complicated.
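As an aside, the mechanics of flagging divergent reviews are simple enough to show in a few lines. The sketch below, again in Python, flags any submission whose two reviews differ by more than one point; the threshold and the scores are illustrative assumptions, not the actual conference records.

```python
# Sketch: flag submissions whose two reviewer scores diverge.
# Scores use the +3 .. -3 scale described above; a spread greater
# than 1 point marks the submission for a further review.
# All proposal IDs and scores below are invented for illustration.

reviews = {
    "proposal-04": (2, 1),   # spread 1 -> reviewers broadly agree
    "proposal-05": (3, -1),  # spread 4 -> send for further review
}

for proposal, (first, second) in reviews.items():
    spread = abs(first - second)
    if spread > 1:
        print(f"{proposal}: spread {spread} -> further review needed")
    else:
        print(f"{proposal}: spread {spread} -> reviews consistent")
```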

I would welcome any comments on this. Whilst it is too late for this conference, as a community we are reliant on peer review as a quality process, and collective learning and reflection may be a way of improving our work.
