Archive for the ‘open access’ Category

BBC recipes and the battle for open

May 18th, 2016 by Graham Attwell

I found yesterday's protests about the BBC's plans to archive their recipe site fascinating. After over 120,000 people signed a petition protesting against the move, and after the government culture minister (somewhat disingenuously) distanced himself from the plan, the BBC backed down and said they would move the recipes to their commercial web site. Now those into conspiracy theory might suggest this was what the BBC were after all the time, and others point to huge protests from the middle class over the potential restriction on access to the Great British Bake Off etc. whilst cutbacks to welfare quietly proceed. But I think this misses the point.

The major pressure for the BBC to restrict access to free recipes was that they are competing with private businesses, including paid-for newspapers, subscription websites, commercial publishers and so on, and that public funding should not be allowed to do this. People didn't buy into that argument, largely because of a consciousness that the BBC is a publicly owned organisation and that we have the right to free content paid for by a licence fee (i.e. taxes). I seem to remember the same argument coming from publishers in the early days – some ten or twelve years ago – against Open Educational Resources. Resources created by university staff, so they said, were paid for by public funding and that was unfair competition. Today, despite the government's same disdain for publicly funded education as for the BBC, Open Educational Resources have come to be seen as a Good Thing. And the debate over OERs has extended into a wider discussion on the meaning of open. In the same way, the protests over the proposed archiving of a publicly owned archive of recipes could well extend into the meaning of open content in wider areas of the web and of an open digital infrastructure. The battle for open goes on.

 

How Web 2.0 and Open APIs made it easy to create and share Open Educational Resources

October 6th, 2015 by Graham Attwell

Another post on Open Educational Resources. Last week I talked about the early days with the SIGOSSEE project, seeking to build awareness of the possibilities of Open Educational Resources and Open Source in education and to start to change policy directions, especially at European Commission level.

In these early projects, we had three main lines of activity. The first was raising awareness about what Open Educational Resources were, and especially about Creative Commons licenses. The second was talking with all manner of different stakeholders, including educational organisations and administrations, developers and even the more enlightened publishers, about the advantages of OERs, and pushing for policy changes. But by far the most time-consuming work was with practitioners, organising workshops to show them how they could produce Open Educational Resources themselves.

And whilst primary school teachers had long been used to developing their own learning materials, with the help of sticky-backed paper, glue, paint and the like, teachers in secondary schools and higher education were much more used to using bought-in materials. True, the photocopier had replaced the Banda machines, and data projectors were well on the way to spelling redundancy for overhead projectors. But teachers had little or no experience in producing ICT-based learning materials themselves.

With the value of hindsight, it was the development of reasonably easy-to-use content creation applications, and even more the advent of Web 2.0, which changed this situation. I can't quite remember the different workflows we originally created, but I think most involved using Open Office to make materials and then using various workarounds to somehow get them into the different VLEs in use at that time (I also seem to remember considerable debates about whether we should allow the use of proprietary software in our workflows).

Interestingly, at that time we saw standards and metadata as the key answer, especially to allow materials to be played in any Virtual Learning Environment. But it was Web 2.0 and Open APIs which not only allowed easy content creation but also provided easy means of distribution. Video was expensive and difficult even 10 or so years ago. Even if you had a powerful enough computer to edit and render raw video (I used to leave my computer running overnight to render 30-minute videos), the issue was how to distribute it. Now, with YouTube and a basic WordPress site, anyone can make and distribute their own videos (and add a Creative Commons license). Ditto for photos, audio, cartoons etc.
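To make that shift concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of "publish and license" step that Web 2.0 made trivial: given a YouTube video ID, build an embeddable HTML snippet with a Creative Commons attribution line that could be pasted into a WordPress post. The function name and the exact markup are my own assumptions, not part of any of the tools mentioned above.

# Illustrative sketch only: builds a YouTube embed plus a Creative Commons
# notice for pasting into a blog post. Names and markup are assumptions.

def cc_video_embed(video_id: str, title: str, author: str,
                   licence: str = "CC BY-SA 4.0") -> str:
    """Return an HTML snippet embedding a YouTube video with attribution."""
    embed_url = f"https://www.youtube.com/embed/{video_id}"
    return (
        f'<iframe src="{embed_url}" width="560" height="315" '
        f'allowfullscreen></iframe>\n'
        f'<p>"{title}" by {author} is licensed under {licence}.</p>'
    )

if __name__ == "__main__":
    print(cc_video_embed("VIDEO_ID", "Sounds of the Bazaar", "Pontydysgu"))

Compare that one function with the overnight rendering and VLE workarounds described above: the distribution problem largely disappears.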

Over the last few years the emphasis has shifted from how to create and share Open Educational Resources to how to use them for teaching and learning. And whilst there seems to be progress, that issue is not yet resolved.

Open Source and Open Educational Resources in Europe – a look back to ten years ago

September 30th, 2015 by Graham Attwell

As promised, the first in a mini series about Open Education. Pontydysgu originally got into educational technology through using closed and proprietary software. The first 'educational technology' I can remember using was FirstClass running on an Open University / BBC server (accessed, I think, through the Mosaic browser). Ironically, it was a print book which stimulated our move into Open Source technologies – Eric Raymond's The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, first published as a book in 1999.

In 2003 we submitted the SIGOSSEE project to the European Commission. SIGOSSEE stood for Special Interest Group on Open Source Software for Education in Europe. Essentially we were exploring the potential uses of open source software and holding a series of workshops all over Europe, whilst building a Special Interest Group. Whilst the Special Interest Group failed to survive beyond the period of funding, it did kick off a flurry of activities, including a later spin-out project on Open Educational Resources. At the time the European Union had an ambivalent attitude towards OSS and OERs. Whilst there was strong support from a number of enlightened officials and programme administrators, the EU was being heavily lobbied by publishers and by the software industry not to endorse open source.

As part of their cautious move towards Open Source Software and Open Educational Resources, in 2004 the EU Directorate responsible for education held a seminar entitled Creating, Sharing and Reusing e-Learning Content: Access Rights for e-Learning Content. They invited a wide range of participants, including from the publishing industry, and asked for the pre-submission of position papers. Below I publish the SIGOSSEE position paper, written by myself and Raymond Elferink. In the next post I will look at some of our recommendations and consider to what extent (if at all) we got it right.

Overview

This short position paper is addressed to both consultation workshops as we feel the issue of access rights to e-learning content and the more technical issues around reusable content are intrinsically interlinked. Whilst the position paper is presented by Graham Attwell and Raymond Elferink, it represents the position of the steering committee of the Special Interest Group for Open Source Software for Education in Europe.

The lack of easy access to attractive and compelling educational content is one of the major barriers to the development and implementation of e-learning in Europe. Most educational content is pedagogically poor, consisting overwhelmingly of sequenced, text-based materials and exercises. Furthermore, the subject and topic range is limited. This is particularly so for vocational and occupational subjects and in lesser-used languages.

Time and cost of production are major barriers to the production of quality learning content, leading to the present interest in standards-based, reusable content and in the sharing of content between institutions. In many areas content developers require not only technical and pedagogic skills but also deep subject knowledge.

Publishers have an important role to play in the development of content. However, as with traditional learning materials, much content in the future will of necessity be produced by teachers. There are also intriguing possibilities for learner developed content and there is great potential from public content repositories especially from cultural heritage and media organisations. It could be argued that there is already a wealth of rich learning materials available through the web. The problem lies in how these materials can be described and accessed and pedagogically deployed.

Key issues

Pedagogy and content

Pedagogy remains the key issue in terms of delivering content. As with any new technology, there has been a tendency, when implementing ICT for learning, to imitate previous paradigms – the 'electronic classroom' for example. There is some evidence to suggest we are now beginning to move beyond such paradigms and develop new scenarios for learning. However, the monolithic nature of much educational software and the need to implement 'whole systems' are barriers to developers seeking to pilot innovative pedagogic applications. The development of standards-based content repositories and of Service Oriented Approaches (SOA) or modular approaches to learning architectures (see below) promises to allow far more advanced pedagogic innovation.

Reuse of content

The potential reuse of content is a critical issue. Central to this is the development and adoption of standards. There remain problems in this area. Standards are being developed and adopted and the new Learning Design standard promises a major step forward in terms of recognising pedagogy, but the software engines and support are still in a development phase. There remain issues over defining metadata schemas and over who will (and should) enter metadata classifications. In the longer term the use of distributed metadata may provide some answers to these issues. Nevertheless the standards should be supported in order to allow reuse.
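As an illustration of what "entering metadata classifications" involved in practice, here is a simplified sketch of a learning-object metadata record, loosely modelled on fields found in schemas such as IEEE LOM and Dublin Core. The field names and values are an illustrative mixture of my own, not a faithful rendering of either schema.

# Simplified, illustrative learning-object metadata record, loosely
# modelled on IEEE LOM / Dublin Core style fields. Not a faithful
# rendering of either schema.

lom_record = {
    "title": "Introduction to Carpentry Joints",
    "language": "en",
    "description": "Sequenced activities on basic woodworking joints.",
    "subject": ["vocational education", "carpentry"],
    "format": "text/html",
    "rights": "CC BY-SA",  # the licence governing reuse
    "educational": {
        "intended_end_user_role": "learner",
        "context": "vocational training",
        "typical_age_range": "16-19",
    },
}

# The promise of such schemas: any standards-aware VLE or repository
# could index, search and sequence records like this one.
for field in ("title", "subject", "rights"):
    print(field, "->", lom_record[field])

The open questions in the paragraph above are visible even in this toy record: someone has to decide on the vocabulary for fields like "context", and someone has to take the time to fill them in.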

In pedagogic and technical terms there is still much work to do in developing tools and engines for content sequencing and assembly. Equally, more work is needed on how to base content on activity.

Licensing, property rights and open content

We believe a key issue is to involve the wider educational community in the development and sharing of learning content. One issue raised here is the question of licences. Traditional copyright licences are far too restrictive to develop an ecology of e-learning content. The Creative Commons Licence provides an effective answer to this issue providing an easy way of indicating possibilities for reuse. The OKI development by MIT and the Connexions project by the University of Rice in Texas – based on different open content models – have shown the potential of open content repositories.

There remain many issues to be resolved – not the least is the question of quality assurance. The difficulty in using content production tools is still a barrier for many to producing their own content.

Software and architectures and content

Monolithic architectures for learning and learning management have held back content production and deployment. Migration and reuse of content is often difficult due to lack of interoperability. Service Oriented Approaches and modular software designs can allow the development of standards-based component architectures. Content would be either contained in a repository or accessed through distributed systems. Developers – open source and proprietary – could focus on particular components based on need and on their skills and interests. Content could then be easily reused between systems. The implementation of DRM systems should allow easy access to both proprietary and open content in centralised and distributed resource repositories (see for example the Canadian edu-source initiative).
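A minimal sketch of the modular idea being argued for here, under the assumption of a single shared repository contract: a small interface that any component, open source or proprietary, centralised or distributed, could implement, so that a VLE fetches content the same way regardless of the back end. The class and method names are illustrative assumptions, not any real project's API.

# Sketch of a service-oriented, component-based design: one repository
# contract, interchangeable implementations. All names are illustrative.

from abc import ABC, abstractmethod

class ContentRepository(ABC):
    """The contract a VLE or authoring tool programs against."""

    @abstractmethod
    def search(self, query: str) -> list[dict]:
        """Return metadata records matching the query."""

    @abstractmethod
    def fetch(self, record_id: str) -> bytes:
        """Return the packaged content for a record."""

class InMemoryRepository(ContentRepository):
    """Trivial local implementation; a distributed or DRM-aware one
    could sit behind the same interface without clients changing."""

    def __init__(self):
        self._records: dict[str, tuple[dict, bytes]] = {}

    def add(self, record_id: str, metadata: dict, payload: bytes):
        self._records[record_id] = (metadata, payload)

    def search(self, query: str) -> list[dict]:
        return [meta for meta, _ in self._records.values()
                if query.lower() in meta.get("title", "").lower()]

    def fetch(self, record_id: str) -> bytes:
        return self._records[record_id][1]

repo = InMemoryRepository()
repo.add("r1", {"title": "Carpentry Joints"}, b"<imsmanifest/>")
print(repo.search("carpentry"))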

Culture change and content

Implementing this vision will require culture change at both institutional and individual level. Whilst much of the discussion has focused on teachers and trainers producing content, more important may be the ability and willingness to search for content and to develop coherent learning and activity plans from content produced elsewhere.

Recommendations to the e-learning community and to the European Commission

These recommendations are addressed to the e-learning community as a whole. However, the European Commission could play an important role in supporting pilot developments and implementations.

  1. Further develop standards and the implementation of standards. At the very least, funded projects should be required to consider and report on the standards implications of any content development. Further work is needed in disseminating information on standards and their use. In this respect it may be worth considering European links to the UK-based CETIS service on educational standards. Further research and development on standards and standards implementation related to educational content should be supported by the European Commission.

  2. Support the Creative Commons License. There seems little reason why education content produced with public funding – national or European – should not be required to be released under a Creative Commons Licence.

  3. Initiate and develop pilot implementations based on open content in institutions and networks. These pilots will be invaluable in exposing and testing many of the issues raised in this position paper.

  4. Explore the potential of a framework for e-learning based on a Service Oriented Approach. Work on this is already being undertaken by the UK-based JISC in conjunction with Industry Canada and DEST in Australia. At a European level, an initiative is needed to encourage developers to focus on service-oriented or modular approaches and to share in the development of software, rather than continuing to reinvent the VLE wheel.

  5. Support the development of tools for content production, distribution, sequencing and deployment. Access to easy to use tools is more important at present than is directly subsidising the production of content itself.

  6. Support experiments in different pedagogical implementations of content including content from cultural and media organisations.

Proudly Announcing the People's Open Educational Big JAM Mix

September 7th, 2015 by Graham Attwell

P.O.E.J.A.M. is the People's Open Educational JAM Mix. And it's taking place in Portugal, at the TEEM conference in Porto, in the final plenary session on Thursday October 8th, 2015.

Graham Attwell from Pontydysgu and Jim Groom from the University of Mary Washington in Fredericksburg, Virginia are hosting what they call an unkeynote session. Why unkeynote? Because instead of standing up and delivering a lecture to the conference, they want to hold a dialogue with participants using slides, pictures, videos, quotations, metaphors or, even better, animated gifs from the education community. There will be the chance for participants in the conference to contribute on the day. But the JAM is open to everyone.

The theme (as the title suggests) is Open Education. Open Education is big news these days. It's a buzzword being embraced by publishers, universities and even governments, as well as the European Union. MOOC providers have leapt on the meme. But what does it mean? The idea that education should be open to everyone seems fine. But even as they talk of open journals, publishers are charging authors a fee, in the so-called gold model of open journals. And whilst universities and governments talk about open education, austerity is leading to cuts in funding and increasing student fees. However open it may or may not be, in the UK many young people simply cannot afford to go to university.

It's time for the educational community to have their say on what open education means. We hope this event can help build a dialogue around a European vision of Open Education.

We’ve tried to make it easy for you to contribute. Just add your ideas to the form on the front page of the POEJAM website. We promise your contributions will turn up somewhere in the JAM event and afterwards on the internet.

Predicting mid and long term skills needs in the UK

June 24th, 2015 by Graham Attwell

Labour Market Information (LMI) is not perhaps the most popular subject to talk about. But with the advent of open and linked data, LMI is increasingly being opened up to wider audiences, and it has considerable potential for helping people choose and plan future careers and plan education programmes, as well as for use in research, for exploring future skills needs and for social and economic planning.

This is a video version of a presentation by Graham Attwell at the Slovenian ZRSZ Analytical Office conference on "Short-term Skills Anticipations and Mismatch in the Labour Market". Graham Attwell examines ongoing work on mid and long term skills anticipation in the UK. He bases this on work being undertaken by the UK Commission for Employment and Skills and the European EmployID project, looking, in the mid term, at future skills needs and, in the longer term, at the future of work. He explains the motivation for undertaking these studies and their potential uses. He also explains briefly the data sources, the statistical background and the barriers to the work on skills projections, whilst emphasising that they are not the only possible futures and can best serve as a benchmark for debate and reflection that can be used to inform policy development and other choices and decisions. He goes on to look at how open and linked data is opening up more academic research to wider user groups, and presents the work of the UKCES LMI for All project, which has developed an open API allowing the development of applications for different user groups concerned with future jobs and future skills. Finally he briefly discusses the policy implications of this work and the choices and influence of policymakers in influencing different futures.
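For a flavour of what building on such an open API looks like, here is a hedged sketch of querying a careers-data service over HTTP from Python. The base URL, endpoint path, parameters and response fields are placeholder assumptions for illustration only; the actual LMI for All interface is defined in its own documentation.

# Hedged sketch: querying an open labour market information API over HTTP.
# The URL, endpoint and fields below are placeholder assumptions, not the
# documented LMI for All interface.

import requests

BASE_URL = "https://api.example.org/lmi/v1"  # placeholder host

def search_occupations(keywords: str) -> list[dict]:
    """Return occupation records matching the given keywords."""
    resp = requests.get(f"{BASE_URL}/occupations/search",
                        params={"q": keywords}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for occ in search_occupations("web developer"):
        print(occ.get("title"), occ.get("soc_code"))

The point of an open API of this kind is exactly this accessibility: a careers application, a research notebook and a policy dashboard can all sit on the same few lines of query code.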

 

Preparing for the LL Design Conference – Part 3: Paying attention to “Datenschutz”

March 6th, 2015 by Pekka Kamarainen

My previous posts have focused on the forthcoming Y3 Design Conference of the Learning Layers (LL) project. In the first post I compared the situation of the project before the Y1 Design Conference (two years ago) and before the Y3 Design Conference (currently). In the second post I gave an overview of how we have been working with the Learning Toolbox (LTB). This third post focuses (with reference to the LTB) on a general issue for the whole LL project and for all pilot activities. In English there are several parallel concepts that refer to this problem context – data protection, data privacy, confidentiality … In German this is all covered by one word: "Datenschutz".

When discussing the next steps in the piloting with the Learning Toolbox we noticed that we do not have a coherent policy for Datenschutz. In this respect I wrote the following paragraph in the input on LTB to a wiki page of the LL Design Conference:

” (…) we need to develop a clear policy for data protection/ data security (Datenschutz) that covers the Layers Box (LB), the Learning Toolbox (LTB), the social semantic server (SSS) and linked platforms (Baubildung.net). Firstly, we need urgently a brief Users’ Guide for the pilot phase. This is necessary to assure our pilot partners that they have control of their own data when using the LL tools and related services. Secondly, we need to develop the policy for the continuation phase beyond the funding period of the LL project. This is an essential element of the exploitation plans.”

As a response to this apparent need, Graham Attwell started looking for documents that could be helpful for us. We all agreed that we need to pay attention to the legal aspects (with sufficiently detailed documents) but that we also need to prepare short, user-friendly documents that we can use with our pilot partners. From this perspective we started looking at the Datenschutz policy documents of FutureLearn (a consortium of British universities that organises MOOCs). FutureLearn has developed the following set of documents:

  • Terms – the overarching framework agreement that covers in detail all possible policy issues.
  • Openness – A short list of openness principles.
  • Privacy – Privacy policy declared by the organisation FutureLearn.
  • Cookies – Policy for using different types of cookies.
  • Accessibility – Accessibility policy (including the responsibilities of different parties).
  • Code of Conduct – A short list of principles as the commitments of users to which they agree when signing up.
  • Data protection – A short document declaring the policy for data control, data collection and responsibilities of different parties.

We are most certainly aware of the fact that the policies of a provider of MOOCs are different from the ones that the LL project needs to develop. So, there is no prospect of easy one-to-one translations. BUT what inspires us is the nicely differentiated set of coherent documents – some for ordinary users, some for experts – within a common framework. Also, what inspires us is the fact that FutureLearn – like the LL project – is committed to Open Source software. So, there is a lot in common to work with.

More blogs to come …

 

 

Gold… for sale!

October 12th, 2013 by Cristina Costa

The current discussions surrounding Open Access have left me somewhat perplexed, mainly because of the turn the debate has taken, which is, in my opinion, a major setback.

So to start with, I think it is useful to remind ourselves of the original purpose of Open Access and where it all started… because sometimes we lose sight of that initial purpose which, in this case, is so, so important.

The term Open Access was first used by the Budapest Open Access Initiative (BOAI) in 2001 to campaign for the accessibility of knowledge to a wider community. On their website they explain the need for Open Access (OA) by stating that

By “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

This is because:

“scientists and scholars…publish the fruits of their research in scholarly journals without payment” and “without expectation of payment.” In addition, scholars typically participate in peer review as referees and editors without expectation of payment. Yet more often than not, access barriers to peer-reviewed research literature remain firmly in place, for the benefit of intermediaries rather than authors, referees, or editors, and at the expense of research, researchers, and research institutions.

So, in part OA came to value researchers' work by giving it the potential of a much larger audience, and in part it came to do what is morally expected of publicly funded institutions, i.e., that the outcomes they produce benefit the public good. But we all know that these ideas (or are they ideals?) cannot be materialised overnight when there are other (commercial) players involved in the game.

Around the 1960s/70s academic journals started to gain the attention of commercial scholarly publishers who began acquiring the already established, high-quality journals run by non-profit scholarly societies. With research journals published by commercial publishers, dissemination of academic work is inevitably impacted by the provision of knowledge as a commodity for sale. And this has become even more visible now with the struggle to implement OA and the different interpretations different players have of it.

Academia not only yielded the monopoly of knowledge dissemination to publishing houses; it also supported, even if implicitly, the rather atypical business that publishing houses grew from it. If academic publishing was already a peculiar business before the emergence of the web, the fact that it persists now is even more extraordinary. Simply put, the business model of academic publications is one in which one pays to work, not only once, but twice, and now apparently perhaps even thrice! Institutions pay academics to write research papers that are published in journals which institutions also pay to have access to! And now apparently there is also the added option of paying an additional fee to have the work of their academics made free online. And this is what the publishing houses are currently calling the Gold Route to Open Access.

This is not my interpretation of what the Gold Route OA option is, nor what the BOAI's statement hints at. However, I do recognise that the language used can lead to different interpretations. When the BOAI put forward two primary strategies for OA:

OA through repositories (also called “green OA”) and OA through journals (also called “gold OA”)

they did not specify what an OA journal should be. It is unclear from their statement whether it should include a no-fee policy for authors or not. That has given publishing houses room to play. As such, their interpretation of the Gold Route to OA includes a fee. It's another gold mine for them; one I am not sure academia will be able to afford. And this is where I see institutions and researchers backing away from the OA Movement, because it is costing them even more.

Maybe it is high time that academic institutions regained control of knowledge publication. Research funding bodies and researchers may want to support and campaign for no-fee open access journals (there are quite a few out there already, so why not exploit the web in that way and use our own time to free our own knowledge?). Otherwise, I fear that the push for the current interpretation of "Gold Route OA" will generate an even wider gap between different research institutions, given that their economic power is already so uneven.

Is all data research data?

September 30th, 2013 by Cristina Costa

At a time when academia is debating the openness and transparency of academic work, and the processes associated with it, more questions than answers tend to be formulated. And, as usually happens, different approaches often co-exist. There is nothing wrong with that. It is only natural when trying to make sense of the affordances that the web, as a space of participation and socialisation, provides. Nonetheless, the latest news on the openness of research publications leaves me slightly concerned regarding the confusion between the Open Access Gold Route and the gold mine publishing houses are trying to maintain with their take on Open Access. That, however, will be content for another post, as this post relates to some still-unripe reflections of mine on the topic of online research ethics… although one could argue that paying to access publicly funded knowledge also touches on some ethics!

[Image: "The gate's unlocked!"]

One of the big questions we are currently debating on Twitter deals with the use of data publicly available online. Given the current events regarding how governments are using information published online by their own citizens, we could conclude that no data is, or for that matter ever was, private! But for researchers interested in the prosumer phenomenon (online users who both consume and produce content on the web), a key issue arises when doing research online:

  • Can we use information that is publicly available online, i.e., that can be accessed without a password by any individual, for research purposes?
  • What ethics should we observe in this case?

Researching online data that is produced and made available online, voluntarily or involuntarily, by the regular citizen is so new, even to those in Technology Enhanced Learning (TEL), that we are still trying to make sense of how to go about it. Publications regarding this matter are still scarce, and even those that I have had access to deal mainly with the issues of anonymity (here is a recent example). A big issue indeed, especially at a time when participating online means creating a digital footprint that not only allows information to be traced back to an individual but also creates, to a certain extent, a (re)presentation of a given 'self', voluntarily or involuntarily… In TEL we call this "Digital Identities" (more on this in a forthcoming post). Digital identities disclose personal, and sometimes private, information within the online social environments in which individuals interact, we assume, of their own accord. But are they always conscious of the publicness of the information they publish? Do they perceive their participation online as a form of creating and publishing information?

This then begs the question:

  • Is, can or should such information be automatically converted into research data? In other words, are we, as researchers, entitled to use any data that is publicly available, even if we claim that it is only being used for research purposes?

My gut feeling is that no, we cannot! Just because the information is publicly available online, and therefore accessible, I, as a researcher, am not entitled to use it without previously seeking and obtaining consent from its creator. This obviously generates more obstacles than researchers would probably like to experience, but the truth is that collecting publicly available information without the consent of its producers does not seem right to me, for the following reasons:

  1. We (people engaged in TEL) need to step outside of our own taken-for-granted understanding of online participation, and note that many people don't realise, or at least have not given it considered thought, that online communication can be public, and that interactions online, by the very nature of written speech, are more durable than equivalent forms of face-to-face interaction.
  2. Anonymity and confidentiality are topics that need to be discussed with the research participant independently of the type of information we want to use. Just because the content is there, available to the world, it doesn't mean it's available for the taking. That is invading the public sphere of an individual, if that makes any sense(!), because as it becomes research data we will be exposing (transferring, even!) it to other public spaces.
  3. I truly believe that every research participant has the right to know he/she is one. This is not merely a courtesy on the part of the researcher; it is a right that they have! Content produced online, unless stated otherwise, belongs to its producer and should therefore be treated as such. (Maybe what we need is a creative commons license for research purposes!)

Nissenbaum and Zimmer, amongst others, talk about contextual integrity, a theory that rejects the notion that information types fit into a rigid dichotomy of public or private. "Instead, there is potentially an indefinite variety of types of information that could feature in the informational norms of a given context, and whose categorization might shift from one context to another." (Zimmer, 2008, p. 116)

Although I am very amenable to the argument of context, and that nothing is absolute and everything is relative [getting philosophical now!], I think that the context of research practice begs for informed consent, independently of whether the research data is publicly or privately available. In my opinion, converting online information into research data should always be an opt-in, and not an opt-out, activity involving those to whom the information belongs.
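To make the opt-in point concrete, here is a minimal sketch, assuming a hypothetical study in which consent is recorded through an explicit form: publicly visible posts only enter the research dataset once their author's consent is on record. All names are hypothetical illustrations, not an actual tool or dataset.

# Minimal sketch of opt-in data collection: a public post becomes research
# data only if its author has explicitly consented. All names here are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def collect_research_data(posts: list[Post],
                          consenting_authors: set[str]) -> list[Post]:
    """Opt-in filter: keep only posts whose authors gave consent."""
    return [p for p in posts if p.author in consenting_authors]

public_posts = [Post("ana", "My thoughts on OA..."),
                Post("ben", "Conference notes...")]
consents = {"ana"}  # recorded via an explicit consent form

dataset = collect_research_data(public_posts, consents)
print([p.author for p in dataset])  # only consenting authors remain

The inversion matters: the default is exclusion, and inclusion requires an affirmative act by the person the data belongs to.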

…but as usual, this is only my 2 cents and I look forward to other people’s views on this.

Peer review, open access, and transparency. The way it should be…!?

August 15th, 2013 by Cristina Costa

A couple of months ago David Walker asked me to review a paper for a new academic journal of which he is one of the editors. He told me it was an open access journal focused on practice, and I immediately said yes! I just cannot say no to open access or to perspectives on practice. I just can't! So I became a reviewer for the Journal of Perspectives in Applied Academic Practice.

As I started going through the first paper I had to review, I noticed that the author’s name was disclosed. There was even a short biography about his academic career. I was intrigued, almost shocked, I must confess. I immediately emailed David reporting the “tragedy” of having learnt the name and background of the author whose article I was about to review. David’s answer was something like this:

You didn’t read the guidelines, did you?! :-)

"Autoretrato" Photo by Flickr ID Sebastian Delmont  (CC BY-NC-SA 2.0)

“Autoretrato” Photo by Flickr ID Sebastian Delmont (CC BY-NC-SA 2.0)

Cough… well, actually I did, but I went directly to the "conducting the review" section, ignoring the opening paragraph of the Reviewers' Guidelines. How scholastic of me. So here it is:

Journal of Perspectives in Applied Academic Practice (JPAAP) journal uses open peer review process, meaning that the identities of the authors and those of the reviewers will be made known to each other during the review process in the following way: the reviewers will be fully aware of the name, position and institution of the authors of the manuscript they currently review, and the authors will be given a signed review with the name, position and institution of the reviewer.

 

I got the shock of my life at first, but then I decided to give it a go. This was the first time I got involved in open peer reviewing and, as I think it’s important to put into practice what we preach, I went for it. Here are some reflective points about the process of conducting an open peer review. I aimed to:

  • be on my best behaviour – I made sure I allocated plenty of time to read and digest the paper. I read it several times. I tried to understand and deconstruct it the best way I could before I submitted the review
  • give thorough feedback – I tried to justify every point I made (I think I achieved that better in the second paper I reviewed)
  • provide constructive and also friendly feedback – nothing annoys me more than reviews that are dry and harsh in their comments. I tried to use language that aimed to provide suggestions and stimulate new thinking. I think I still have some work to do in this area, but I hope I'm getting there
  • read the paper as it is written, not as I would write it – I think that's a crucial point in any type of review, but one that is often forgotten. I have felt many times that reviews were made on the assumption that the article should be written in the style and from the perspective of the reviewer rather than that of the author

You may think that this doesn't differ at all from any review process, and in fact it should not. But the feelings and the thoughts that go through your mind as you disclose your identity to the author are both of vulnerability and of commitment to do a good job. [Not much different from the feelings of the author of the article who submits his/her work to you, hey?! So we are in the same boat!]

There is something about the open peer review process. With transparency comes visibility, a more acute sense of responsibility in your role as reviewer, and maybe the fact that your reputation is on the line matters too! And that can only be good.

However, I still have unanswered questions that in a sense do show my vulnerability as a reviewer.

  • would I feel the same if I knew/had worked with the author whose paper I was reviewing?
    • Would I feel comfortable giving them my feedback?
    • Would I be influenced or even intimidated by my knowledge of their practice/research?
    • Would I be too tough, or too soft, on them?

I guess you can always refuse to review someone’s paper if those feelings arise and you are not comfortable with it… but these questions did come to mind.

As we move towards more open peer reviewing processes, and I hope more initiatives like this start to emerge, I'd like to see more dialogue between the reviewer and the author. So far it's still a monologue, and since we have disclosed identities, could we also open up the discussion? ~ just a thought.
