Archive for the ‘Evaluation’ Category

Technology and pedagogic models for training teachers in developing countries

January 22nd, 2019 by Graham Attwell

My new year intentions to post more regularly here were quickly disrupted by a bad cold and a week of travel. But I’m back in the saddle. There are two major themes running through my work at the moment (and they overlap to an extent): initial teacher training and continuing professional development for teachers and trainers, and the impact of new technologies, especially Artificial Intelligence, on both education and employment. So, here is the first of a series of posts on those subjects (though probably not in any particular order).

I’ve been doing some research into the training of teachers in Sub-Saharan Africa. The major issue is the shortage of qualified teachers, which is of such a scale that there seems little hope of overcoming it through traditional pre-service teacher training institutions. Scaling up provision through teacher training colleges is also problematic given the size of many African countries and their largely rural nature. Part of the problem in many Sub-Saharan African countries is the low prestige in which teaching is held and the low pay for teachers. That being said, there remains a major challenge in training new teachers and in providing continuing professional development for existing teachers.

In this situation, it is little wonder that attention is focused on the use of ICT for teacher education. It is probably fair to say that despite the issues of connectivity and access to technology, Technology Enhanced Learning is seen as the only real answer for the shortage of teachers in many countries in the region. This is despite Infodev’s findings in its Knowledge Bank on the effective uses of Information and Communication Technology in education in developing countries that:

While much of the rhetoric (and rationale) for using ICTs to benefit education has focused on ICTs’ potential for bringing about changes in the teaching-learning paradigm, in practice, ICTs are most often used in education in LDCs to support existing teaching and learning practices with new (and, it should be noted, often quite expensive!) tools.

Infodev goes on to say:

While impact on student achievement is still a matter of reasonable debate, a consensus seems to argue that the introduction and use of ICTs in education can be a useful tool to help promote and enable educational reform, and that ICTs are both important motivational tools for learning and can promote greater efficiencies in education systems and practices.

Firstly, I must say my research is limited. But I have read the literature and reports and undertaken about 30 interviews with people in Africa working on various projects for developing capacity in teacher education. Perhaps understandably, the emergent model is blended learning, combining short face-to-face training programmes with longer periods of online learning while based in the school. It’s not a bad model, especially if support for teachers while learning in the workplace (i.e. the school) is well designed and well resourced. My worry is the training for the people supporting the school-based learning. Essentially the projects appear to be adopting a cascade model. And although cascade models are attractive in terms of quickly scaling up learning, they can be ‘leaky’, breaking down at the weakest point in the cascade chain.

I don’t think there are any immediate answers to this problem. I think we need more south-north dialogue and interchange, if only because northern countries, including in Europe, face huge problems in providing professional development for teachers in the use of technology in the classroom. I also think we need to examine the different models more carefully, especially in understanding the assumptions we are making in designing new training and professional development provision. Without understanding the assumptions we cannot evaluate the success (or otherwise) of that provision.


Demonstrating the Value of Community Development Approaches

May 2nd, 2018 by Graham Attwell

This is a video of a conference I spoke at in Dublin in April, organised by the Clondalkin Community Alcohol and Drugs Task Force. The conference followed the publication of a research report which said power has been removed from affected areas and centralised at government level, where the system is “utterly disconnected” from the needs of people and communities. The research team was led by Aileen O’Gorman, a senior lecturer in Alcohol and Drug Studies at the University of the West of Scotland, and formerly of UCD.

The report said austerity “exacerbated” the problem by cutting funding to education, health, housing and welfare supports, local drug task forces, as well as community and voluntary groups.

An article about the report in the Irish Examiner newspaper said:

The study, commissioned by the Clondalkin Drug and Alcohol Task Force, said that drug-related ‘harms’ consistently cluster in communities marked by poverty and social inequality.

“The origins of poverty and inequality do not arise from the actions of people or communities, they derive from the politics, policies and structural violence of the state,” said the report.

It said drug policy in Ireland has become focused on addressing “individual drug using behaviour” and drug-related crime rather than the underlying issues of poverty and inequality and even less attention is paid to the outcomes of policy.

“The austerity policies introduced in the wake of the great recession have exacerbated the existing structural deficiencies in our society by cutting funding to education, health, housing, and welfare supports and to the Drug and Alcohol Task Forces and community, voluntary and statutory services that support vulnerable groups,” the study said.

It said that policies have resulted in a “drawing back of power from communities” and a recentralisation of power within government administration.

The conference focused on Reclaiming Community Development as an Effective Response to Drug Harms, Policy Harms, Poverty and Inequality and my presentation was entitled ‘Measuring Outcomes and Demonstrating the Value of Community Development Approaches’.

Final Review of Learning Layers – Part One: The Event and the Arrangements

January 21st, 2017 by Pekka Kamarainen

This week our programme included the concluding event of our EU-funded Learning Layers (LL) project – the Final Review. Normally such an event is organised at the premises of the respective Directorate General of the European Commission – in our case DG Research, which is located in Luxembourg. However, after our Year 2 Review Meeting that building was demolished and DG Research moved to a temporary building. Therefore, the review meetings have also been organised in such a building or elsewhere. This gave rise to our proposal that the final review be organised at the premises of one of our application partner organisations – to give the Project Officer and the review panel a chance to get a more lively picture of the impact of our work. This proposal was accepted and we had a brief discussion on the remaining options. The construction sector training centre Bau-ABC Rostrup would have liked to host such an event, but this was not possible because in January their meeting rooms are fully booked for continuing vocational training courses. Therefore, our best option was to organise the event primarily at the Norddeutsches Zentrum für Nachhaltiges Bauen (NZNB – the North German Centre for Ecological Construction) in Verden, near Bremen. Below I try to give a picture of the arrangements and the agenda of the Review Meeting, and of how we made use of the spaces provided by the NZNB to present our work in a more dynamic and dialogue-oriented way.

Making appropriate use of the spaces of the NZNB

We came to the conclusion that we should organise the first day of the review meeting around two ‘exhibition spaces’ portraying our two sectoral pilots. In addition, we would present the work of the host organisation. Therefore, we located our activities in a workshop hall (“Panzerhalle”) and in the meeting rooms above the clay and strawbale construction hall. There we had a large meeting room, part of which we used for the two exhibition spaces. Having structured the main part of the agenda around these internal exhibitions and supporting presentations, we arranged for the review panel to visit briefly, during the lunch break, the permanent NZNB exhibition on ecological construction work in the main building. We also wanted to give them a brief presentation on the clay and strawbale building techniques and the courses organised in the workshop building.

Presenting our work with visual images, tool demonstrations and conversations

For the exhibition spaces of the two sectoral pilots we had some common content and then somewhat different settings:

a) As the common content we had a Mini-Poster Wall that presented all the Learning Toolbox (LTB) stacks that had been prepared for piloting or demonstration purposes.

b) For the Healthcare exhibition space we had the following contents and activities that were offered for free exploration:

  • Posters that had been used at Online Educa Berlin (2015) to present the tools piloted in the Healthcare sector;
  • Posters that had been used at the AMEE 2015 conference and related conferences to demonstrate the usability of the Learning Toolbox in Healthcare Education;
  • Games table to demonstrate further uses of the tools of the Healthcare sector in their original and spin-off contexts.

c) For the Construction exhibition space we had the following contents and spots that were offered as a ‘guided tour’:

  • Poster wall that portrayed the mutual relations of the Learning Layers pilot activities with 9+1 posters (and an additional poster for the spin-off project DigiProB in Continuing Vocational Training).
  • Spin-out table to present the (emerging) start-up companies that will take over the responsibility of some LL tools after the funding period (Learning Toolbox, AchSo, ZoP-tool).
  • Exploitation table for presenting follow-up projects (including LTB-pilots in Germany, Estonia, Spain, UK).

Giving visibility to our application partners and to the use of LTB

One of our major points was to engage our application partners in the ‘exhibition spaces’ and in the supporting presentation sessions. For this purpose we had made arrangements with Thomas Isselhard from the network for ecological construction work (Netzwerk Nachhaltiges Bauen) to present his ways of using the Learning Toolbox in construction work. Likewise, we had invited two full-time trainers (Lehrwerkmeister) from Bau-ABC to present their initiatives for using the LTB and their experiences of using it in apprentice training.

During the two preparatory days we inserted most of the content into the Learning Toolbox to make the two ‘exhibition areas’ accessible via LTB stacks.

– – –

I think this is enough on the advance planning and the preparatory measures that we took during the two preparatory days (Monday and Tuesday) this week. It is worth noting that we arranged the accommodation of our guests in Bremen (and transport between Verden and Bremen) so that the guests could also explore Bremen in the evenings. On the final day of the event we relocated the meeting to Bremen to make the travel arrangements easier. So, this was a brief overview of our preparations. In my three following blog posts I will give more information on our presentations and on the discussions.

More blogs to come …

Learning Layers in Leeds – Part One: Paving the way for the final run

September 27th, 2016 by Pekka Kamarainen

Last week our EU-funded Learning Layers (LL) project had its last joint project consortium meeting (before the final review meeting) in Leeds, hosted by Leeds University, the NHS and our software partner PinBellCom (which has since merged into the EMIS Group). This consortium meeting differed from many earlier ones because most of the work of the project had already been done, and quite a lot of strategic decisions concerning the final reporting had already been made. Therefore, we could concentrate on harvesting the most recent results and coordinating some preparatory processes for the final reporting. Yet this meeting had its salt and spices as well. In the first post I will give a brief overview of the meeting as a whole. In the second post I will focus on the picture that I/we gave of the construction sector pilot in some of the sessions.

Overview of the main sessions

After a quick assessment of the current phase of the project we started working in parallel groups and interim plenaries:

  1. In the sessions on evaluation studies we had parallel groups working on the evaluation studies that had been adjusted to the progress of the construction and healthcare pilots. Concerning the construction pilot, our colleagues from UIBK presented quantitative data and summarised the qualitative findings that have been discussed earlier on this blog. We discussed whether we could enrich that material with some last-minute interviews, but that remains to be decided at the local level.
  2. Regarding the integrated deliverable (a result-oriented website) we had common discussions on the structure, the current state of the main sections and the technical implementation. Then we had parallel groups on the impact cards, the ‘learning scenarios’ (or instances of change) and the ‘research, development and evaluation approaches’. In the group work we focused on the situation in the sectoral pilots and on the complementary relations between the impact cards (demonstrating particular impact), the scenarios or instances (interpreting the findings in a conceptual and future-oriented way) and the research approaches (presenting the contribution of the main research approaches represented in the project work).
  3. In a joint demonstration session Tamsin Treasure-Jones showed us how the Learning Toolbox had been used in an adapted participative “Barcamp” session at the AMEE (Association for Medical Education in Europe) conference in Barcelona. This example served as an inspiration and can be adapted for other research and development communities as well.
  4. In a practice session we rotated between different topic tables to prepare ‘marketing pitches’ conveying the key messages of our tools/infrastructures/impact cases/research approaches. Each table was managed by a moderator and the participants could take the role of presenter or listener. This helped us to get an overview and to concentrate on the core message of our presentations.
  5. In the elevator pitches session we then presented the pitches (a 20-second pitch to qualify as presenter and a 3-minute pitch to convey the message). In this session Pablo served as real-time rapporteur and colleagues from Leeds had invited ‘critical friends’ to give feedback. This session helped shift us from project-internal reporting to speaking to new audiences.
  6. In the concluding session we discussed the organisation of the review meeting, the time plan for remaining activities and some final dissemination activities.

Altogether we made good progress in getting a common picture of what we have achieved and how to present it. To be sure, there are several points still to be settled in a number of working meetings during the coming weeks. But the main thing is that we set the course towards achieving common results in the time available – and we are fully committed to making it happen. In the next post I will take a closer look at the work with the construction pilot in the Leeds meeting.

More blogs to come …


Interim reports on LL fieldwork in Bau-ABC – Part One: Evaluation talks and plans for field testing

September 22nd, 2015 by Pekka Kamarainen

In the beginning of September we made an important field visit, in the context of our EU-funded Learning Layers (LL) project, to our application partner organisation – the training centre Bau-ABC (see my blog post of 13.9.2015). On Friday some LL colleagues had the chance to make a follow-up visit to Bau-ABC, while the others were meeting at ITB with a visiting delegation from the Singapore Workforce Development Agency. Since I was involved in the meeting at ITB, I can only report on the visit on the basis of information from my colleague Lars Heinemann.

Update 2.10.2015: I published this post some time ago as a single blog entry. Now that I have had the chance to listen to the recordings of the interviews in Bau-ABC, I have come to the conclusion that it is worthwhile to discuss some points made by the Bau-ABC trainers in greater detail. Here again, I am also relying on first-hand information from Lars Heinemann.

The aim of the visit

The visit was planned quite some time ago as a field visit to get feedback data on the ongoing pilot testing with the Learning Toolbox (LTB). Since the LL teams of ITB and Bau-ABC could send only one participant to the LL consortium meeting in Toledo, our LL colleagues from the University of Innsbruck (UIBK), Stefan Thalmann and Markus Manhart, came to Bremen to have planning meetings with us and to make field visits. However, given the very recent previous field visit (with the newly published Beta version of the LTB), we felt that the evaluation talks came somewhat early. After all, the trainers had only just made their first experiences of creating their own stacks, pages and tiles in the LTB (to be used by other users).

Talks in Bau-ABC

The visitors (Lars, Stefan and Markus) were pleased to see that their talks with the Bau-ABC trainers Markus Pape (Zimmerer = carpenter) and Lothar Schoka (Brunnenbauer = borehole builder) were well-timed and informative. Both trainers had made further efforts to familiarise themselves with the LTB Beta version. They had also made concrete plans for engaging their apprentices later in the autumn as users of the LTB in their training projects. According to their information, the number of apprentices to be involved in such pilots would be around 100 in both trades. As an advance measure they had collected a list of volunteer users to start testing the LTB before the actual pilot.

In this respect they could both give informative reports on what is going on and what is to be expected in the near future. (We expect the UIBK colleagues to share recordings of these talks with ITB soon.)

In addition to their own experiences and plans for piloting, they had some urgent requests for the LTB developers. Some of these points had already been discussed with the developers, but now we got them from the trainers at the pilot site:

1) For the trainers it is important that they can send messages to groups and individuals.

2) For trainers and apprentices it is important to have a notification function that alerts the apprentices when new learning materials have been made accessible and informs the trainers when apprentices have accessed the information. Moreover, both parties should be notified of replies or questions on further information (a rough sketch of such a flow follows these points).

3) For trainers and apprentices it is important to have a commentary function that makes it possible to add questions or comments to texts that are used for instruction and/or documentation of learning processes.

4) At the moment the LTB has been designed for Android phones and tablets – which most of the apprentices use. Yet about one third are using iOS phones, so it is essential to move on to iOS versions or to find alternative solutions to involve them in the pilot testing.
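To make the notification request (point 2) more concrete, here is a minimal, purely hypothetical Python sketch – not the actual Learning Toolbox implementation, and all names are invented – of how such two-way alerts between trainers and apprentices might be modelled:

```python
from dataclasses import dataclass, field

@dataclass
class Notifier:
    """Tiny in-memory stand-in for a notification service (purely hypothetical)."""
    pending: list = field(default_factory=list)

    def notify(self, recipient: str, message: str) -> None:
        self.pending.append((recipient, message))

def publish_material(notifier, trainer, apprentices, title):
    # Point 2, first half: alert every apprentice that new material is available.
    for apprentice in apprentices:
        notifier.notify(apprentice, f"New material from {trainer}: {title}")

def material_opened(notifier, trainer, apprentice, title):
    # Point 2, second half: inform the trainer that an apprentice accessed it.
    notifier.notify(trainer, f"{apprentice} opened '{title}'")

notifier = Notifier()
publish_material(notifier, "trainer-pape", ["apprentice-01", "apprentice-02"], "Carpentry project brief")
material_opened(notifier, "trainer-pape", "apprentice-01", "Carpentry project brief")
print(notifier.pending)
```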

Update 2.10.2015: I have let my initial blog post stand as it was written before listening to the recordings – with one amendment. Now that I have access to the recordings, it is interesting to take a glimpse at some of the points made by the trainers and to relate them to our earlier interviews and discussions with them. As I see it, such an examination teaches us a lot about how the fieldwork of the LL project has progressed during the years of co-design and pilot activities.

More blogs to come …

Evaluation 2.0: How do we progress it?

October 11th, 2011 by Jenny Hughes

I have been in Brussels for the last two days – speaking at the 9th European Week of Regions and Cities organized by DG Regio and also taking the opportunity to join other sessions. My topic was Evaluation 2.0. I have been very encouraged by the positive feedback I’ve been getting all day, both face-to-face and through Twitter. I thought people would be generally resistant to the idea as it was fairly hard-hitting (and in fairness, some were horrified!) but far more have been interested and very positive, including quite a lot of Commission staff. However, the question now being asked by a number of them is “How do we progress this?” – meaning, specifically, in the context of the evaluation of Regional Policy and DG Regio intervention.

Evaluation 2.0 in Regional Policy evaluation
I don’t have any answers to this – in some ways, that’s not for me to decide! I have mostly used Evaluation 2.0 stuff in the evaluation of education projects, not regional policy, and my recent experience of the Cohesion Fund, ERDF, IPA or any of the structural funds is minimal. However, the ideas are generic and if people think there are some they could work with, that’s fine!

That said, here are some suggestions for moving things forward – some of them are mine, most have been mooted by various people who have come to talk to me today (and bought me lots of coffee!).

Suggestions for taking it forward

  • Set up a Twitter hashtag, #evaluation2.0. That’s easy, but I don’t know how much traffic there would be as yet!
  • Set up a webpage providing information and discussion around Evaluation 2.0. More difficult – who does that and who keeps it updated? Maybe, instead, it is worth feeding into the Evalsed site that DG Regio maintains, which currently provides information and support for their evaluators. I gather it is under review – a good opportunity to make it more interactive, to make more use of multimedia, and to give users space to create content as well as DG Regio!
  • Form a small working group or interest group – this could be formal or informal, stand-alone or tied to the existing evaluation network. Either way, it needs to be open and accessible to people who are interested in developing new ideas and trying some stuff out, rather than a representative ‘committee’.
  • Alternatively, set up an expert group to move some ideas forward.
  • Or how about a Diigo group?
  • Undertake some small-scale trials with specific tools – to see whether the ideas do cross over from the areas I work in to Regional Policy.
  • Run a couple of one-day training events on Evaluation 2.0 focusing on some real hands-on workshops for evaluators and evaluation unit staff rather than just on information giving.
  • Check out with people responsible for evaluation in other DGs whether there is an opportunity for some joint development (a novel idea!). Unlike other ‘perspectives’, Evaluation 2.0 is not tied to content or any particular theoretical approach.
  • Think about developing some mobile phone apps for evaluators and stakeholders around content specific issues – I can easily think of 5 or 6 possibilities to support both counterfactual, quantitative approaches and theory-based qualitative approaches. Although the ideas are generic, customizing the content means evaluators would have something concrete to work with rather than just ideas.
  • Produce an easy-to-use handbook on evaluation 2.0 for evaluators / evaluation units who want practical information on how to do it.
  • Ring fence a small amount of funding to support one-off explorations into innovative practice and new ideas around evaluation.
  • Encourage the evaluation unit to demonstrate leadership in new approaches – for example, try streaming a live internet radio programme around the theme of evaluation (cheap and easy!); set up a multi-user blog for people to post work in progress and interesting observations of ongoing projects using a range of media as well as text-based major reports; make some podcasts of interviews with key players in the evaluation of Regional Policy; set up a wiki around evaluation rather than having to drill down through the various Commission websites; try locating projects using GPS data so that we can all see where the action is taking place! Keep a twitter stream going around questions and issues – make use of crowd sourcing!
  • Advertise the next European Evaluation Society biennial conference, in Helsinki, October 1st – 5th 2012 “Evaluation in the networked society: New concepts, New challenges, New solutions” (There you go Bob, I just did!)
  • Broaden the idea of Evaluation 2.0 and maybe get rid of the catchphrase! We are already using the power of the semantic web in evaluation to mash open and linked data, for example. Should we now be talking about Evaluation 3.0? Or should we find another name – Technology Enhanced Evaluation? We could have TEE parties instead of conferences – Europe’s answer to the American far right ; )

P.S. Message to the large numbers of English delegates at the conference

When you left Heathrow yesterday to come to Brussels, I do hope you waved to the English Rugby team arriving home from the Rugby World Cup in New Zealand.

(Just as well this conference was not a week later or I’d have had to leave a similar message for the French delegates…)

What is Evaluation 2.0?

October 4th, 2011 by Graham Attwell

Graham Attwell interviews Jenny Hughes about Evaluation 2.0

Just what is Evaluation 2.0?

Evaluation 2.0 is a set of ideas about evaluation that Pontydysgu are developing. At its simplest, it’s about using social software at all stages of the evaluation process in order to make evaluation more open, more transparent and more accessible to a wider range of stakeholders. At a theoretical level, we are trying to push forward and build on Guba and Lincoln’s ideas around 4th generation evaluation which is a constructivist approach incorporating key ideas around negotiation, multiple realities and stakeholder engagement. But this is the first part of the journey – ultimately, I believe that e-technologies are going to revolutionise the way we think about and practice evaluation.

In what way do you think this is going to happen?

Firstly, the use of social media gives stakeholders a real voice – irrespective of where they are located.  Stakeholders can create and publish evaluation content. For example, in the past I might carry out some interviews as part of an evaluation. Sometimes I recorded it, sometimes I just made notes. Then I would try and interpret it and draw some conclusions about what it meant.  Now I set up a web page for each evaluation and I podcast the interviews using audio or video and put them on the site. (Obviously this has to be negotiated with the interviewee but so far, no one has raised any objections.) There is the usual comment box so any stakeholder with access to the site can respond to the interview, add their interpretations, agree or disagree with my conclusions and so on.

Secondly, I think it is challenging our perceptions of who evaluators are. Everyone is now an evaluator. Think of the software that you use every day for online shopping from Amazon or eBay or any big chain store. If I want to buy a particular product I check out what other people have said about it, how many of them said it and how many stars it has been given. These are called recommender systems and I think they will have a big impact on evaluation. We have moved from the paradigm of the ‘expert’ collecting and analyzing data into a world of crowd sourcing – harnessing the potential of mass data collection and interpretation.
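As a minimal illustration (the data and item names below are invented, and this is not tied to any particular platform’s API), this is roughly how crowd-sourced star ratings can be aggregated into the kind of summary a recommender-style evaluation might start from:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical data: stakeholders rate evaluation items on a 1-5 star scale,
# much as shoppers rate products on Amazon or eBay.
ratings = [
    ("workshop-materials", 5),
    ("workshop-materials", 4),
    ("final-report", 2),
    ("final-report", 3),
    ("workshop-materials", 5),
]

by_item = defaultdict(list)
for item, stars in ratings:
    by_item[item].append(stars)

# Summarise each item: how many people rated it and the average score.
for item, stars in sorted(by_item.items()):
    print(f"{item}: {len(stars)} ratings, average {mean(stars):.1f} stars")
```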

Thirdly, the explosion of Web 2.0 applications has provided us with a whole new range of evaluation tools that open up new methodological possibilities or make the old ones more efficient.  For example, if I am at the stage of formulating and refining the evaluation questions – I put it out as a call on Twitter.  It’s amazing how restricting evaluation questions to 140 characters can sharpen them up!

I did an evaluation of a community capacity building project in an inner city area recently and spent quite a long time before I went to the first meeting walking around the streets, checking out the community facilities, the state of the housing, local amenities and so on, to get a ‘feel’ for the area – except I did it on Google Earth and with street-view on Google maps.  There are about 20 or so other applications I use a lot in evaluation but maybe they will have to wait for another edition!

Fourthly, I think the potential of Web 2.0 changes the way we can visualize and present data.  Why are we still writing long and indigestible text-based evaluation reports? Increasingly clients are preferring short, sharp evaluation ‘articles’ on maybe one outcome of an evaluation which they can find on a ‘newsy’ evaluation webpage – with hyperlinks to more detailed information or raw data or back up evidence if they want to check it out.  We can also create ‘chunks’ of evaluation reporting and repurpose them in different ways for different stakeholders or they can be localized for different cultures – for example, I have started doing executive summaries as downloadable podcasts.  I think evaluation 2.0 is about creating a much wider range of evaluation products.

Following on from that, I think Evaluation 2.0 breaks down the formative-summative divide and notions of ‘the mid-term report’ or ‘the ex-ante report’.  Evaluation 2.0 is continuous, it is dynamic and it is interactive.  For example, I use Googledocs with all my clients – I add them as readers and editors on all the folders that relate to their evaluations. At any time of the day or night they can see work in progress and add their comments. I keep their evaluation website up to date so they get evaluation information as soon as it is available.

So do you think all evaluators will have to move down this road or will there always be a place for evaluators using more established methods?

Personally, I think massive change is inevitable. Apart from anything else, our clients of the future will be the digital natives – they will expect it.

There will always be a role for the evaluator but that role will be transformed and the skills will be different. I think a key job for the specialist evaluator will be designing the algorithms that underpin the evaluation. The evaluator will also need to be the creative director – they will need skills in informatics, in visualizing and presenting information, the creative skills to write blogs and wikis. They will need networking skills to set up and facilitate online communities of practice around different stakeholder groups and the ability to repurpose evaluation objects.

The rules of engagement are also changing – in the past you engaged with a client, now you engage with a community.  We also have to think how stakeholder created content might change our ideas about copyright, confidentiality, ownership, authorship.

So do you think evaluators as we know them will become extinct?!

Well, as Mark Halper said

“Dinosaurs were highly successful and lasted a long time. They never went away. They became smaller, faster, and more agile, and now we call them birds.”

Evaluation 2.0 – the Slidecast

August 2nd, 2011 by Graham Attwell

Late last year Jenny Hughes made a keynote presentation on Evaluation 2.0 for the UK Evaluation Society. Pretty quickly we were getting requests for the paper of the presentation and the presentation slides. The problem is that we have not yet got round to writing the paper. And Jen, like me, uses most of her canvas space on her slides for pictures, not bullet points. This makes the presentation much more attractive, but it is sometimes difficult to glean the meaning from the pictures alone.

So we decided we would make a slidecast of the presentation. But half way through, we realised it wasn’t working. Lacking an audience and just speaking to the slides, it came over as stilted and horribly dry. So we started again and changed the format. Rather than treating it as a straightforward presentation, Jen and I just chatted about the central ideas. I think it works pretty well.

We started from the question of what Web 2.0 is. Jen says: “At its simplest, it’s about using social software at all stages of the evaluation process in order to make it more open, more transparent and more accessible to a wider range of stakeholders.” But editing the slidecast I realised we had talked about a lot more than evaluation. This chat really deals with Web 2.0 and the different ways we are developing and sharing knowledge, the differences between expert knowledge and crowd-sourced knowledge, and the new roles for teachers, trainers and evaluators resulting from the changing uses of social media.

The impact of new technologies on teaching and learning

September 20th, 2010 by Graham Attwell

For a report that I am working on, I have been asked to assess the impact of new technologies on teaching and learning in the vocational education sector in the UK.

One major problem in judging the impact of new technologies on teaching and learning, and on pedagogical approaches to teaching and learning, is the need for metrics for judging such impact. It is relatively simple to survey the number of computers in a school, or the speed of an internet connection. It is also not impossible to count how many teachers are using a particular piece of technology. It is far harder to judge pedagogic change. One tool which could prove useful in this respect is the iCurriculum Framework (Barajas et al., 2004), developed by the European project of the same name. The framework was intended as a tool that educators can use to record the effects of their learners’ activities. It is based on seeing pedagogic and curricular activities along three dimensions – an Operational Curriculum, an Integrating Curriculum and a Transformational Curriculum. It is possible to approach pedagogies for using technologies for learning, for the same subject and for the same intended outcomes, on any one of those three dimensions (a toy sketch of recording activities against these dimensions follows the list below).

  • Operational Curriculum is learning to use the tools and technology effectively. Knowing how to word-process, how to edit a picture, enter data and make simple queries of an information system, save and load files and so on.
  • Integrating Curriculum is where the uses of technology are applied to current curricula and organisation of teaching and learning. This might be using an online library of visual material, using a virtual learning environment to deliver a course or part of a course. The nature of the subject and institution of learning is essentially the same, but technology is used for efficiency, motivation and effectiveness.
  • Transformational Curriculum is based on the notion that what we might know, and how, and when we come to know it is changed by the existence of the technologies we use and therefore the curriculum and organisation of teaching and learning needs to change to reflect this. (p 8)
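Purely as an illustrative sketch – the class and field names below are mine, not part of the iCurriculum Framework itself – one could imagine an educator recording learner activities against the three dimensions like this:

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    OPERATIONAL = "operational"            # learning to use the tools themselves
    INTEGRATING = "integrating"            # technology applied to the existing curriculum
    TRANSFORMATIONAL = "transformational"  # curriculum reshaped by the technology

@dataclass
class ActivityRecord:
    learner: str
    activity: str
    dimension: Dimension

# Example records an educator might log for the same subject and intended outcome.
records = [
    ActivityRecord("learner-01", "formatted a report in a word processor", Dimension.OPERATIONAL),
    ActivityRecord("learner-02", "followed a course unit in the VLE", Dimension.INTEGRATING),
]

# A simple tally shows where current practice sits along the three dimensions.
for dim in Dimension:
    count = sum(1 for r in records if r.dimension is dim)
    print(f"{dim.value}: {count} activities")
```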

In terms of general approaches suggested by research literature, most Further Education colleges in the UK are still approaching pedagogy and curriculum design from the standpoint of an operational curriculum, and although there are some examples of an integrating curriculum, there is little evidence of using technology for transformation.

Reference:

Barajas, M., Heinemann, L., Higueras, E., Kikis-Papakadis, K., Logofatu, B., Owen, M. et al. (2004). Guidelines for Emergent Competences at Schools, http://promitheas.iacm.forth.gr/i-curriculum/outputs.html

#PLE2010 update – the outcomes of the review process

May 2nd, 2010 by Graham Attwell

A further update on planning and preparations for the PLE2010 conference. We received 81 proposals, far more than we had expected. And whilst very welcome, this has generated a lot of work. Each proposal was assigned two reviewers from the conference Academic Committee. This has meant some members of the Committee being asked to review six papers, which is quite an effort for which we are truly grateful.

One of the main points made in feedback to us from the reviewers was that a 360-word abstract is too short to make a proper judgement. And indeed some submissions did not make full use of the 360 words. We produced criteria for the submissions which were used by some reviewers. Others disagreed with this approach. Stephen Downes, commenting on my last blog post about the conference, said:

  • the stated criteria, as listed in the post above, are actually longer than many of the abstract submissions. As such, the criteria were overkill for what was actually being evaluated.
  • the criteria do not reflect academic merit. They are more like a check-off list that a non-skilled intake worker could complete. The purpose of having academics do the review is that the academics can evaluate the work on its own merit, not against a check-off list.
  • the criteria reflect a specific theoretical perspective on the subject matter which is at odds with the subject matter. They reflect an instructivist perspective, and a theory-based (universalists, abstractivist) perspective. Personal learning environments are exactly the opposite of that.
  • In other words, it is not appropriate to ask academic reviewers to bring their expertise to the material, and to then neuter that expertise with an overly prescriptive statement of criteria.

On the whole I think I agree with Stephen. But I am still concerned with how we reach some common understandings or standards for reviewing, especially in a multi-disciplinary and multi national context.

Following the completion of the reviews, the conference organising committee met (via Skype) to discuss the outcomes of the process. We did not have time to properly consider the results of all 166 reviews, and in the end agreed to unconditionally accept any paper with an average score of two or more (reviewers were asked to score each submission on a scale ranging from plus three to minus three). That accounted for twenty-six of the proposals. Each of the remaining proposals was reconsidered by the seven members of the organising committee in the light of the feedback from the reviewers. In many cases we agreed with the reviews, in some cases we did not. Thirty of the proposals were accepted, but we have asked the proposers to resubmit their abstracts, feeling that improvements could be made in clarity and in explaining their ideas to potential participants at the conference.
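As a small worked illustration of that arithmetic (the proposals and scores below are invented, not the actual PLE2010 data), averaging reviewer scores on the minus three to plus three scale and accepting anything averaging two or more looks like this:

```python
from statistics import mean

# Invented reviewer scores, each on the -3 to +3 scale used for the proposals.
reviews = {
    "proposal-A": [3, 2],
    "proposal-B": [2, 1],
    "proposal-C": [-1, 0],
}

for proposal, scores in reviews.items():
    average = mean(scores)
    decision = "accept unconditionally" if average >= 2 else "reconsider individually"
    print(f"{proposal}: average {average:+.1f} -> {decision}")
```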

We referred nine of the proposals, in the main because whilst they seemed interesting we did not feel they had sufficiently addressed the theme of the conference, i.e. Personal Learning Environments. These proposers have been asked to resubmit their abstracts and we will review the proposals a second time. In a small number of cases we have recommended a change of format, particularly for research which is still at a conceptual stage and which we felt would be better presented as a short paper rather than a full proceedings paper. And, following the reviews, we did not accept five of the proposals. Once more the main reason was their failure to address the themes of the conference.

I am sure we will have upset some people through this process. But the review process was, if nothing else, rigorous. The meeting to discuss the outcomes lasted late into the evening and we were concerned, wherever possible, to be inclusive in our approach. We also decided not to use the automatic functionality of the EasyChair system for providing feedback on the proposals. The main reason for this was that we were very concerned that feedback should be helpful and constructive for all proposers. Whilst many of the reviews were very helpful in that respect, some were less so and thus we have edited those reviews.

Four quick thoughts on all this:

  • I am not sure that people spend enough time thinking about calls for papers. What are the themes a conference is trying to address? How does my work contribute towards those themes?
  • I wonder if many academics struggle with writing abstracts. I was surprised how many did not use the full 360 words in their proposals. Abstracts are difficult to write (at least I find them hard) and perhaps our 360-word limit constrained many. However, it was surprising how many were not really clear in focus.
  • I am still concerned with how we can develop common understandings and standards between reviewers. Maybe we need some sort of discourse process between reviewers.
  • The task of providing clear feedback and judgement about proposals whilst still providing constructive and helpful feedback to proposers is not easy. Once more, this may be something which needs to be addressed at a community level.