Social software and academic reviews
I don’t really know why, but I seem to be spending a lot of time at the moment reviewing proposals and contributions for conferences and publications. And whilst there is much to be learned from all the ideas being put forward, it is time consuming and sometimes feels like a very isolated and perhaps archaic process.
I find it difficult to decide the standards or criteria I am reviewing against. How important is clarity of thinking, originality, creativity? How important is it that the author includes copious references to previous work? Are we looking for depth or breadth? How important is the standard of English, particularly for those writing in a second or third language?
In this world of social software the whole review process seems somewhat archaic. It relies very much on individuals, all working in isolation. People write an abstract according to a call for proposals (and I am well aware of how difficult it is to write such calls – unless of course it is one of these multi-track conferences which just include everything!). The proposals are allocated to a series of individuals for blind review. They do their work in isolation and then, according to often subjective criteria, the proposal is accepted or rejected.
OK, sometimes there is the opportunity to make a conditional acceptance based on changes to the proposal. And of course, you are encouraged to provide feedback to the author. But all too often feedback is limited and pressure of time prevents organisers from allowing a conditional acceptance.
How could social software help with this? As usual, I think it is a socio-technical solution we need to look for, rather than an adoption of technologies per se. Most conferences have adopted software to help with the conference organising and review procedures, but, as happens all too often, that software has been developed to manage existing processes more efficiently, with no thought given to how we could transform practices.
One big issue is the anonymity of the review procedure. I can see many reasons to support this, but it is a big barrier to providing support in improving submissions. If we move to non-blind reviewing, then we could develop systems to support a discourse between submitters and reviewers, where both become part of the knowledge creation process. An added benefit of such a discourse could be to clarify and make transparent the criteria being used for reviews. Reviewers would have more of a role as mentors rather than as assessors or gatekeepers.
This would not really require sophisticated technological development. It would really just need a simple booking system to arrange for a review and feedback session, together with video, audio or text conferencing functionality. More importantly, perhaps, it might help us in rethinking the role of individual and collective work in academic and scholarly forms of publishing and knowledge development. I suspect a considerable barrier is the idea of the ‘Doctor Father’ (the German Doktorvater) – that such a process would challenge the authority of professors and doctorate supervisors. My experience, based on talking to many PhD students, is that the supervisory role does not work particularly well. It was developed when the principal role of universities was research, and was designed to induct students into a community of practice as researchers. With the changing role of universities, plus the fact that many students are no longer committed to a long-term career in academia (even if they could get a job), such processes have become less than functional. Better, I think, to develop processes of support based on wider communities than the narrow confines of a single university department.
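For what it is worth, the booking side really is trivial. Here is a minimal sketch in Python of what I have in mind (all the names here, ReviewSession, book_session, the conferencing URL, are invented for illustration, not taken from any existing conference system):

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of the booking side: a submitter and a reviewer
# agree a time slot, and the booking carries a link to whatever video,
# audio or text conferencing tool the conference happens to use.

@dataclass
class ReviewSession:
    submission_id: str
    submitter: str
    reviewer: str
    slot: datetime
    conference_url: str  # video/audio/text channel for the session
    notes: list = field(default_factory=list)  # running record of the discourse

def book_session(sessions, submission_id, submitter, reviewer, slot, conference_url):
    """Book a feedback session, refusing a double-booked reviewer slot."""
    if any(s.reviewer == reviewer and s.slot == slot for s in sessions):
        raise ValueError(f"{reviewer} already has a session at {slot}")
    session = ReviewSession(submission_id, submitter, reviewer, slot, conference_url)
    sessions.append(session)
    return session

# Example: arranging a single feedback session
sessions = []
book_session(sessions, "paper-42", "author@example.org", "reviewer@example.org",
             datetime(2010, 6, 1, 14, 0), "https://conf.example.org/room/42")
```

Everything else – the conferencing itself, the shared notes – already exists off the shelf.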
I have reviewed articles for conference presentations and proceedings, and I generally agree with what you have said. I will further note that I once found that one of my three assigned articles was largely plagiarized from another paper! That was an unpleasant surprise for me.
I think that getting “blind reviews” could be accomplished with software that shows selected usernames rather than real names. The same software could provide a collaborative space to discuss the paper, ask additional questions, or (if appropriate) challenge aspects of the paper. I thought the real reason we did blind reviews was to protect the AUTHOR’S identity, not the reviewer’s. Did I misunderstand you?
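That username layer would be easy enough to build. A rough sketch of one way to do it (the salt and the naming scheme are invented for illustration):

```python
import hashlib

# Rough sketch: derive a stable pseudonym from a real name plus a
# per-conference secret salt, so the same person keeps the same
# username across the whole discussion without being identifiable.
SALT = "per-conference-secret"  # made up for illustration

def pseudonym(real_name: str) -> str:
    digest = hashlib.sha256((SALT + real_name).encode("utf-8")).hexdigest()
    return "reviewer-" + digest[:8]

print(pseudonym("Ada Lovelace"))  # stable, opaque username
```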
I don’t know how familiar you are with Moodle, but something like an expanded workshop activity might be an interesting way to solve this problem. The author uploads the paper, reviewers are assigned randomly from a pool with expertise in the subject, a rubric is provided for the initial assessment, and then a discussion space is linked to the content. (This is where the expansion comes in: Moodle allows for commenting, but not true discussion or web conferencing.) Ratings are pulled from the reviewers, you could add in reviewers looking at other aspects of the work, and you rank-order the results prior to tendering invitations to present.
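In outline, the assignment and ranking steps might look something like this (a sketch only, assuming a flat numeric rubric; it is not based on Moodle’s actual workshop code):

```python
import random
from statistics import mean

# Sketch of the expanded workshop flow described above: reviewers are
# drawn at random from a pool filtered by expertise, rubric scores are
# collected, and submissions are rank-ordered before invitations go out.
# The data structures are illustrative, not Moodle's actual ones.

reviewers = {
    "rev1": {"e-learning", "assessment"},
    "rev2": {"e-learning"},
    "rev3": {"social software"},
}

def assign(topic, n):
    """Pick n reviewers at random from those with expertise in the topic."""
    pool = [name for name, skills in reviewers.items() if topic in skills]
    if len(pool) < n:
        raise ValueError("only %d reviewers cover %r" % (len(pool), topic))
    return random.sample(pool, n)

def rank(submissions):
    """Order submissions by mean rubric score, best first."""
    return sorted(((paper, mean(scores)) for paper, scores in submissions.items()),
                  key=lambda pair: pair[1], reverse=True)

print(assign("e-learning", 2))
print(rank({"paper-1": [4, 5, 3], "paper-2": [5, 5, 4]}))
```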
I think you are right! This process could be much improved.
Hi – yes, I meant authors, not reviewers! Just on that anonymity question, I reckon at least half the time I know who has written the paper, and probably that holds true for others too. Yes, the system you describe could work well – and could be hacked together in Moodle. My main point is that no one really seems to have thought about it. Once more we get ever better management software – but nothing that really works at improving the pedagogic or learning process.