Archive for the ‘uncategorized’ Category

Reflections on the impact of the Learning Layers project – Part Three: The use of Learning Toolbox in new contexts

April 29th, 2020 by Pekka Kamarainen

With my two latest posts I have started a series of blogs that report on the discussions of former partners of the Learning Layers (LL) project on the impact of our work. As I have told earlier, the discussion started when I published a blog post on the use of Learning Toolbox (LTB) in the training centre Bau-ABC to support independent learning while the centre is closed. This triggered a discussion on how the digital toolset Learning Toolbox – a key result of our EU-funded R&D project – is being used in other contexts. And – as I also told earlier – this gave rise to the initiative of the leader of the Learning Layers consortium to collect such experiences and to start a joint reflection on the impact of our work. In the first post I gave an overview of the process of preparing a joint paper. In the second post I presented the main points that I and my co-author Gilbert Peffer made on the use of LTB to support vocational and workplace-based learning in the construction sector. In this post I try to give insights into the use of LTB in other contexts, based on spin-off innovations and on refocusing the use of the toolset. Firstly I will focus on the development of ePosters (powered by LTB) for different conferences. Secondly I will give a brief picture of the use of LTB for knowledge sharing in the healthcare sector.

Insights into the development of ePosters powered by LTB

Here I do not wish to repeat the picture of the evolution of the ePosters – a spin-off innovation of the LTB – as it has been delivered by the responsible co-authors. Instead, I will firstly give my impressions of the initial phase of this innovative use of LTB to support poster presenters at conferences. Then I will give a glimpse of how we presented the ePoster approach to the European Conference on Educational Research and to the VETNET network. Here I can refer to my blog posts of that time. Finally I will add some information on the current phase of the work with ePosters, as presented by the responsible authors for the joint paper on the impact of LL tools.

  • In October 2017 I became familiar with the breakthrough experience that the developers of the LTB and the coordinator of the healthcare pilot of the LL project had had with the development of ePosters for conferences. At the annual conference of medical educators (AMEE 2017) they had introduced ePosters (prepared as LTB stacks) as alternatives to traditional paper posters and to expensive digital posters. At that time I published an introductory blog post, mainly based on their texts and pictures. For me, this was a great start to be followed by others. Especially impressive was the use of poster cubicles to present mini-posters that provided links to the full ePosters. Another interesting format was the use of ePosters attached to Round Tables or Poster Arenas.

  • In 2018 we from ITB, together with the LTB developers and the coordinator of the VETNET network, took the initiative to bring the use of ePosters to the European Conference on Educational Research (ECER) 2018 in Bolzano/Bozen, Italy. We initiated a network project of the VETNET network (for research in vocational education and training) to serve as a pioneering showcase for the entire ECER community. In this context we invited all poster presenters in the VETNET programme to prepare ePosters, and the LTB developers provided instructions and tutoring for them. Finally, at the conference, we had the ePoster session and a special session to present the approach to other networks. This process was documented in two blog posts – on September 2nd and on September 11th – and in a detailed report for the European Educational Research Association. The LTB stacks for the ePosters can be found here; below are screenshots of the respective web page.

  • In the light of the above, the picture that the promoters of ePosters have presented now is amazing. The first pilot was with a large, international medical education conference in 2017. In 2018 ePosters were used at 6 conferences across Europe. In 2019 this number grew to 14, now including US conferences. The forecast for 2020 is that they will be used by more than 30 conferences, with growth in the US being particularly strong. The feedback from users and the number of returning customers suggest that the solution is valued by the stakeholders.

Insights into the use of LTB in the healthcare sector

Here I am relying on the information that has been provided by the coordinator of the healthcare pilot of the Learning Layers and by the former partners from the healthcare sector. Therefore, I do not want to go into details. However, it is interesting to see how the use of LTB has been repurposed to support knowledge sharing between healthcare services across a wide region. This is what the colleagues have told us about the use of LTB:

“LTB has been used to create stacks for each practice and thereby improve the accessibility of the practice reports as well as to enable the sharing of additional resources which could not be included in the main report due to space. The app has thus improved the range of information that can be shared, and links are also shared which allow users to read more in-depth into the topic areas. The use of LTB has also enabled the spread of information more widely, as the team suggested that the stack poster (a paper-based poster displaying the link to the stack and a QR code) should be displayed in the practice to allow any interested staff to access the stack and resources. The use of the stack also allows for all the information to be kept by interested staff in one central place, so previous reports and resources can be referred back to at any point. It can also be accessed via a personal mobile device, so gives the opportunity for users to access the information at the most convenient time for them, and without the need to have the paper report or to log in to a system.”

I guess this is enough about the parallel developments in the use of LTB after the end of the LL project, alongside the follow-up in the construction sector. In the final post of this series I will discuss some points that have supported the sustainability of the innovation and contributed to the wider use of the LTB.

More blogs to come …

 

AI, automation, the future of work and vocational education and training

February 17th, 2020 by Graham Attwell

Regular readers will know I am working on a project on AI and Vocational Education and Training (VET). We are looking both at the impact of AI and automation on work and occupations and at the use of AI for teaching and learning. Later in the year we will be organizing a MOOC around this: at the moment we are undertaking interviews with teachers, trainers, managers and developers (among others) in Italy, Greece, Lithuania, Germany and the UK.

The interviews are loosely structured around five questions:

  • What influence do you think AI and automation is going to have on occupations that you or your institution provide training for?
  • Do you think AI is going to affect approaches to teaching and learning? If so, could you tell us how?
  • Have you or your institution any projects based around AI? If so, could you tell us about them?
  • How can curricula be updated quickly enough to respond to the introduction of AI?
  • Do you think AI and automation will result in fewer jobs in the future, or will it generate new jobs? If so, what do you think the content of those jobs will be?

Of course it depends on the work role and interests of the interviewee as to which questions are most discussed. And rather than an interview, with the people I have talked with, it tends to be more of a discussion.

While the outcomes of this work will be published in a report later this spring, I will publish here some of the issues which have come up.

Last week I talked with Chris Percy, who describes himself as a business strategy consultant and economist.

Chris sees AI and technology as driving an increasing pace of change in how work is done. He says the model for vocational education is to attend college to get skills and enter a trade for ten or twenty years – albeit with refreshers and licences to update knowledge. This, he says, has been the model for the last 50 years, but it may not hold if knowledge is changing so fast. He is not an AI evangelist and thinks changes feed through more slowly. With this change, new models for vocational education and training are needed, although what that model might be is open. It could be to spend one year learning in every seven, or one day a week for three months every year.

The main issue for VET is not how to apply AI but how we structure jobs, Lifelong Learning and pedagogy.

One problem, at least in the UK, has been a reduction in the provision of lifelong learning. In this he sees a disconnect between policy and the needs of the economy. But it may also be that, if change is slower than the discourse suggests, it has just not had an impact yet. Tasks within a job are changing rather than jobs as a whole. We need to update knowledge for practices we do not yet have. A third possible explanation is that, although there are benefits from new technologies and work processes, the benefits from learning are not seen as important enough to justify providing new skills.

New ways of learning are needed – responsive learning based on AI could help here – but there is not enough demand to overcome inertia. The underpinning technologies are there but have not yet translated into use in schools or in retraining.

Relatively few jobs will disappear in their entirety – but a lot of logistics, front-of-store jobs, restaurants etc. will be transformed. It could be there will be a lower tier of services based on AI and automation and a higher tier with human provision. Regulators can inhibit the pace of change – which is uneven in different countries and cities, e.g. self-driving cars.

In most of the rest of the economy people will change as tasks change. For example, digital search in the legal industry has been done by students, interns and paralegals because someone has to do it; now, with AI supporting due diligence, students can progress faster to the more interesting parts of the work.

Chris thinks that although AI and automation will impact on jobs, global economic developments will still be a bigger influence on the future of work.

More from the interviews later this week. In the meantime, if you would like to contribute to the research – or just would like to contribute your ideas – please get in touch.

 

Exploring my Personal Learning Environment

March 3rd, 2019 by Graham Attwell


It has been a bit busy lately. I seem to be writing one report after another. Anyway, I got asked ages ago if I would make a contribution on Personal Learning Environments for a book to be published in Portugal. I couldn’t resist but of course didn’t get my act together until one week past the final deadline. But then, reading the instructions, I realised I had forgotten that what I had been invited to contribute was a short concept paper. Instead I wrote an activity sheet. Never mind. Here is the activity and you can have the concept piece tomorrow.

 

Title: Exploring my Personal Learning Environment

Objective

The objective of this activity is for participants to explore their own Personal Learning Environment and to reflect on how they learn.

Participants are encouraged to consider:

  • The different contexts in which they learn
  • The people from whom they learn – their Personal Learning Network
  • The ways they use technology in their learning
  • The objects which support their learning
  • The links between what they learn and how they use this learning in their practice
  • The reasons they participate in learning activities
  • What inspires them to reflect on learning in their everyday life
  • How they record their learning, who they share that learning with, and why

Target audience

The main target audience is adults. This includes those in full or part time education, those working or those presently unemployed.

Activity time

The activity can be customised according to available time. It could be undertaken in an hour but could also be extended as part of a half-day workshop.

Required features

Flexible space for people to work together and to draw posters. Large sheets of flipchart paper. A flipchart. Felt tip pens. A smartphone camera to record the results.


Schematic sequence of steps for activity

  1. An introduction from the facilitator to the idea of Personal Learning Environments, followed by a short discussion.
  2. An introduction to the activity to be undertaken.
  3. Working individually participants draw a view of their own Personal Learning Environments, including institutions, people, social networks and objects from which they learn.
  4. Short presentations of the posters by participants and questions by colleagues.
  5. Discussion and reflections on the outcomes.

Detailed description of steps

The introduction is critical in setting the context for the activity. Many people will conflate learning with formal education: the introduction needs to make clear we are thinking about all kinds of learning and all of the contexts in which learning takes place.

There needs to be no prescription on how they choose to illustrate their PLE. Some may draw elaborate pictures or diagrams, others may produce a more traditional list or tree diagram. In one workshop a participant chose to ‘play his PLE’ on a piano! A variety of different presentations enriches the activity.

While drawing the PLE is an individual activity, it is helpful if the working space encourages conversation and co-reflection during the activity.

In my experience, most participants are eager to explain their posters – however this can be time consuming. Sometimes I have introduced voting for the best poster – with a small prize.

The final reflection and discussion is perhaps the most important part of the activity in drawing out understandings of how we learn and how we can further develop our PLEs.

 

The problems of assessing competence

February 12th, 2018 by Graham Attwell

It was interesting to read Simon Reddy’s article in FE News, The Problem with Further Education and Apprenticeship Qualifications, lamenting the low standard of training in plumbing in the UK and the problems with the assessment of National Vocational Qualifications.

Simon reported from his research saying:

There were structural pressures on tutors to meet externally-imposed targets and, judging from the majority of tutors’ responses, the credibility of the assessment process was highly questionable.

Indeed, teachers across the three college sites in my study were equally sceptical about the quality of practical plumbing assessments.

Tutors in the study were unanimous in their judgements about college-based training and assessments failing to adequately represent the reality, problems and experiences of plumbers operating in the workplace.

In order to assess the deviation from the original NVQ rules, he said, “it is important to understand the work of Gilbert Jessup, who was the Architect of UK competence-based qualifications”.

Jessup (1991: 27) emphasised ‘the need for work experience to be a valid component of most training which leads to occupational competence’. Moreover, he asserted that occupational competence ‘leads to increased demands for demonstrations of competence in the workplace in order to collect valid evidence for assessment’.

As a representative of the Welsh Joint Education Committee, I worked closely with Gilbert Jessup in the early days of NVQs. Much (probably too much) of our time was taken up with debates on the nature of competence and how assessment could be organised. I even wrote several papers about it – sadly in the pre-digital age.

But I dug out some of that debate in a paper I wrote with Jenny Hughes for the European ICOVET project, which was looking at the accreditation of informal learning. The paper had the snappy title ‘The role and importance of informal competences in the process of acquisition and transfer of work skills. Validation of competencies – a review of reference models in the light of youth research: United Kingdom.’

In the introduction we explained the background:

Firstly, in contrast to most countries in continental Europe, the UK has long had a competence based education and training system. The competence based National Vocational Qualifications were introduced in the late 1980s in an attempt to reform and rationalise the myriad of different vocational qualifications on offer. NVQs were seen as separate from delivery systems – from courses and routes to attain competence. Accreditation regulations focused on sufficiency and validity of evidence. From the very early days of the NVQ system, accreditation of prior learning and achievement has been recognised as a legitimate route towards recognition of competence, although implementation of APL programmes has been more problematic. Thus, there are few formal barriers to access to assessment and accreditation of competences. That is not to say the process is unproblematic and this paper will explore some of the issues which have arisen through the implementation of competence based qualifications.

We went on to look at the issue of assessment:

The NVQ framework was based on the notion of occupational competence. The concept of competence has been a prominent, organising principle of the reformed system, but has been much criticised (see, for example, Raggatt & Williams 1999). The competence-based approach replaced the traditional vocational training that was based on the time served on skill formation to the required standard (such as apprenticeships). However, devising a satisfactory method of assessing occupational competence proved to be a contentious and challenging task.

Adults in employment who are seeking to gain an NVQ will need a trained and appointed NVQ assessor. Assessors are appointed by an approved Assessment Centre, and can be in-house employees or external. The assessor will usually help the candidate to identify their current competences, agree on the NVQ level they are aiming for, analyse what they need to learn, and choose activities which will allow them to learn what they need. The activities may include taking a course, or changing their work in some way in order to gain the required evidence of competence. The opportunity to participate in open or distance learning while continuing to work is also an option.

Assessment is normally through on-the-job observation and questioning. Candidates must have evidence of competence in the workplace to meet the NVQ standards, which can include the Accreditation of Prior Learning (APL). Assessors will test the candidates’ underpinning knowledge, understanding and work-based performance. The system is now intended to be flexible, enabling new ways of learning to be used immediately without having to take courses.

The system is characterised by modular-based components and criterion-referenced assessment. Bjornavald also argues that the NVQ framework is output-oriented and performance-based.

We outlined criticisms of the NVQ assessment process:

The NCVQ methods of assessing competence within the workplace were criticised for being too narrow and job-specific (Raggatt & Williams 1999). The initial NVQs were also derided for applying ‘task analysis’ methods of assessment that relied on observation of specific, job-related task performance. Critics of NVQs argued that assessment should not just focus on the specific skills that employers need, but should also encompass knowledge and understanding, and be more broadly based and flexible. As Bjornavald argues, ‘the UK experiences identify some of these difficulties balancing between too general and too specific descriptions and definitions of competence’. The NVQs were also widely perceived to be inferior qualifications within the ‘triple-track’ system, particularly in relation to academic qualifications (Wolf 1995; Raffe et al 2001; Raggatt 1999).

The initial problems with the NVQ framework were exacerbated by the lack of regulatory powers the NCVQ held (Evans, 2001). The system was criticized early on for inadequate accountability and supervision in implementation (Williams 1999), as well as appearing complex and poorly structured (Raffe et al 2001).

We later looked at systems for the Accreditation of Prior Learning (APL).

Currently the system relies heavily on the following basic assumptions: legitimacy is to be assured through the assumed match between the national vocational standards and competences gained at work. The involvement of industry in defining and setting up standards has been a crucial part of this struggle for acceptance. Validity is supposed to be assured through the linking and location of both training and assessment to the workplace. The intention is to strengthen the authenticity of both processes, avoiding simulated training and assessment situations where validity is threatened. Reliability is assured through detailed specifications of each single qualification (and module). Together with extensive training of the assessors, this is supposed to secure the consistency of assessments and eventually lead to an acceptable level of reliability.

A number of observers have argued that these assumptions are difficult to defend. When it comes to legitimacy, it is true that employers are represented in the above-mentioned leading bodies and standards councils, but several weaknesses of both a practical and fundamental character have appeared. Firstly, there are limits to what a relatively small group of employer representatives can contribute, often on the basis of scarce resources and limited time. Secondly, the more powerful and more technically knowledgeable organisations usually represent large companies with good training records and wield the greatest influence. Smaller, less influential organisations obtain less relevant results. Thirdly, disagreements in committees, irrespective of who is represented, are more easily resolved by inclusion than exclusion, inflating the scope of the qualifications. Generally speaking, there is a conflict of interest built into the national standards between the commitment to describe competences valid on a universal level and the commitment to create as specific and precise standards as possible. As to the questions of validity and reliability, our discussion touches upon drawing up the boundaries of the domain to be assessed and tested. High quality assessments depend on the existence of clear competence domains; validity and reliability depend on clear-cut definitions, domain-boundaries, domain-content and ways whereby this content can be expressed.

It’s a long time since I have looked at the evolution of National Vocational Qualifications and the issues of assessment. My guess is that the original focus on the validity of assessment was too difficult to implement in practice, especially given the number of competences. And the distinction between assessing competence and assessing underpinning knowledge was also problematic. It was easier to move to multiple-choice computerised testing, administered through colleges. If there was a need to assess practical competences, then once more it was much simpler to assess these in a ‘simulated’ workshop environment than through the original idea that competence would be assessed in the real workplace. At the same time, the system was too complicated. Instead of trusting workplace trainers to know whether an apprentice was competent, assessors were themselves required to follow a (competence-based) assessors’ course. That was never going to work in the real world, and neither were visiting external assessors going to deliver the validity Gilbert Jessup dreamed of.

If anyone would like a copy of the paper this comes from, just email me (or add a request in the comments below). Meanwhile I am going to try to find another paper I wrote with Jenny Hughes, looking at some of the more theoretical issues around assessment.

 

Data, expenditure and the quality of Higher Education

September 12th, 2017 by Graham Attwell

In this brave new data world, we seem to get daily reports on the latest statistics about education. It is not easy making sense of it all.

Times Higher Education reports on OECD’s latest Education at a Glance report, an annual snapshot of the state of education across the developed world, published on 12 September.

It shows spending per higher education student significantly falling behind the OECD average in a number of European countries such as Spain, Italy, Slovenia and Portugal, while even countries with reputations for strong university systems, such as Germany and Finland, are failing to keep pace with the US and UK.

But what does all this mean? Germany has significantly increased university places as a response to the crisis, seemingly without spending per student keeping pace. The UK has increased spending per student. The difference of course is that while higher education is basically free in Germany, the UK has some of the highest university tuition fees in the world. Andreas Schleicher from the OECD said that since there were no comparable data on learning outcomes for different countries, it was difficult to pinpoint whether the large per-student spends in some nations were actually improving quality.

However, according to the THE report, he added that results from the OECD’s international school testing programme – the Programme for International Student Assessment (Pisa) – showed “that there is essentially no relationship between spending per student and school performance once you get beyond a certain threshold in spending”, a point that most OECD countries had already passed.

Of course school performance does not necessarily equate with quality of teaching and learning. But it does suggest that even with the deluge of data we still do not understand how to judge quality – still less how to improve it.

Learning Analytics and the Peak of Inflated Expectations

January 15th, 2017 by Graham Attwell

Has Learning Analytics dropped off the peak of inflated expectations in Gartner’s hype cycle? According to Educause, ‘understanding the power of data’ is still there as a major trend in higher education, and Ed Tech reports a KPMG survey which found that 41 percent of universities were using data for forecasting and predictive analytics.

But whilst many universities are exploring how data can be used to improve retention and prevent drop outs, there seems little pretence any more that Learning Analytics has much to do with learning. The power of data has somehow got muddled up with Management Analytics, Performance Analytics and all kinds of other analytics – but the learning seems to have been lost. Data mining is great but it needs a perspective on just what we are trying to find out.
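To make the retention use case above concrete, here is a minimal sketch of the simplest form such analytics can take: flagging students whose recorded activity falls below a threshold so that tutors can follow up. This is purely illustrative – the field names, thresholds and data are invented for the example and do not describe any real university's system.

```python
# Toy drop-out risk flag: a hypothetical illustration of basic
# retention analytics, not any institution's actual method.

def at_risk(students, min_logins=5, min_submission_rate=0.6):
    """Return names of students whose activity suggests drop-out risk.

    A student is flagged if their recent logins fall below min_logins
    or their assignment submission rate falls below min_submission_rate.
    """
    flagged = []
    for s in students:
        submission_rate = s["submitted"] / s["assignments"]
        if (s["logins_last_month"] < min_logins
                or submission_rate < min_submission_rate):
            flagged.append(s["name"])
    return flagged

# Invented example cohort
cohort = [
    {"name": "A", "logins_last_month": 12, "assignments": 5, "submitted": 5},
    {"name": "B", "logins_last_month": 2,  "assignments": 5, "submitted": 4},
    {"name": "C", "logins_last_month": 9,  "assignments": 5, "submitted": 2},
]

print(at_risk(cohort))  # → ['B', 'C']
```

Real systems replace the fixed thresholds with predictive models trained on historical outcomes, which is exactly where the concerns in the rest of this post begin: the model can tell you who might leave, but not what they are learning.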

I don’t think Learning Analytics will go into the trough of despair. But I think that there are very real problems in working out how best we can use data – and particularly how we can use data to support learning. Learning Analytics needs to be more solidly grounded in what is already known about teaching and learning. Stakeholders, including teachers, learners and the wider community, need to be involved in the development and implementation of learning analytics tools. Overall, more evidence is needed to show which approaches work in practice and which do not.

Finally, we already know a great deal about formal learning in institutions, or at least by now we should do. Of course we need to work at making it better. But we know far less about informal learning and learning which takes place in everyday living and working environments. And that is where I ultimately see Learning Analytics making a big difference. Learning Analytics could potentially help us all to become self-directed learners and to achieve the learning goals that we set ourselves. But that is a long way off. Perhaps if Learning Analytics is falling off the peak of expectations, that will provide the space for longer-term, more clearly focused research and development.

 

Online disinhibition and the ethics of researching groups on Facebook

April 19th, 2016 by Graham Attwell

There seems to be a whole spate of papers, blogs and reports published lately around MOOCs, Learning Analytics and the use of Labour Market Information. One possible reason is that it takes some time for more considered research to be commissioned, written and published around emerging themes and technologies in teaching and learning. Anyway, I’ve spent an interesting time reading at least some of these latest offerings and will try to write up some notes on what (I think) they are saying and mean.

One report I particularly liked is ‘A murky business: navigating the ethics of educational research in Facebook groups’ by Tony Coughlan and Leigh-Anne Perryman. The article, published in the European Journal of Open, Distance and e-Learning, is based on a reflection on their own experiences of researching in Facebook. And as they point out, any consideration of ethical practices will almost inevitably run foul of Facebook’s Terms and Conditions of Service.

Notwithstanding that issue, they summarise the problems as “whether/how to gain informed consent in a public setting; the need to navigate online disinhibition and confessional activity; the need to address the ethical challenges involved in triangulating data collected from social media settings with data available from other sources; the need to consider the potential impact on individual research participants and entire online communities of reporting research findings, especially when published reports are open access; and, finally, the use of visual evidence and its anonymisation.”

Although obviously the use of social networks, and Facebook in particular, raises its own issues, many of the considerations are more widely applicable to Learning Analytics approaches, especially to the use of discourse analysis and Social Network Analysis. This discussion came up at the recent EmployID project review meeting. The project is developing a number of tools and approaches to Workplace Learning Analytics, and one idea was that we should attempt to develop a Code of Practice for Learning Analytics in the workplace, similar to the work by Jisc, who have published a Code of Practice for Learning Analytics in UK educational institutions.

As an aside, I particularly liked the section on ‘confessional activity’ and ‘online disinhibition’, based on work by Suler (2004), who identified six factors as prompting people to self-disclose online more frequently or intensely than they would in person:

  • Dissociative anonymity – the fact that ‘when people have the opportunity to separate their actions online from their in-person lifestyle and identity, they feel less vulnerable about self-disclosing and acting out’;

  • Invisibility – overlapping, but extending beyond anonymity, physical invisibility ‘amplifies the disinhibition effect’ as ‘people don’t have to worry about how they look or sound when they type a message’ nor about ‘how others look or sound in response to what they say’;

  • Asynchronicity – not having to immediately deal with someone else’s reaction to something you’ve said online;

  • Solipsistic introjection – the sense that one’s mind has become merged with the mind of the person with whom one is communicating online, leading to the creation of imagined ‘characters’ for these people and a consequent feeling that online communication is taking place in one’s head, again leading to disinhibition;

  • Dissociative imagination – a conscious or unconscious feeling that the imaginary characters ‘created’ through solipsistic introjection exist in a ‘make-believe dimension, separate and apart from the demands and responsibilities of the real world’ (Suler, 2004, p. 323);

  • The minimisation of authority (for people who do actually have some) due to the absence of visual cues such as dress, body language and environmental context, which can lead people to misbehave online.

Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326. Available from http://www.academia.edu/3658367/The_online_disinhibition_effect [Accessed 10 September 2014]

Rethinking blogging

November 12th, 2014 by Graham Attwell

 

I used to post on this blog almost every day. Lately I haven’t been posting much. I am not worrying too much about it, but have been thinking about why.

I think it is largely to do with changes in my work. In the past, I was primarily a researcher, working on all manner of reports and projects, mostly in the field of elearning and knowledge development. My primary mode of work was desk research: in other words I read a lot. I can remember twenty years ago when I first moved to Bremen in Germany I used to travel about once every four months to the University of Surrey at Guildford (which was the easiest UK university to get to from Gatwick airport). I would spend thirty pounds on a photocopying card, spend an entire day in the university archives and travel back with photocopies of 60 or 70 research papers. I kept these for years before I realised I never looked at them. By 2000 of course, access to research was moving to the web. One of the big changes this heralded was the arrival of grey literature. Interestingly this term which was much used at the time, seems to have gone out of fashion, as it has slowly become accepted that web based materials of all kinds have at least some validity in the research process. So called grey literature gave access to a wider range of thinking and ideas than could be gained from official journal papers alone, although the debate over how to measure quality is far from resolved.

And to bring this up to date, the emergence of Open Educational Resources, Open Journals and specialist networks like the excellent ResearchGate has increased the discoverability of research ideas and findings.

I used to enjoy the research work. And it was easy to blog. There would always be something in a paper, on a web site, in a network to comment on. I wrote a lot about Personal Learning Networks, a popular subject at the time, and through speaking at conferences and seminars got new ideas for more blog posts. But there were some frustrations to this work. Although we talked a lot about PLEs and the like, it was hard to see much evidence in practice. Our ideas were often just that: ideas which had at best limited evaluation and implementation in the field. Most frustratingly, few of the projects were

A tale of two conferences

September 19th, 2014 by Graham Attwell

In the first week of September, I attended two conferences – the Association for Learning Technology Conference (ALT-C) at Warwick University in the UK and the European Conference on Educational Research (ECER) hosted by Porto University in Portugal.

I guess there were some 500 people at ALT-C. Most seemed to be juggling two devices online at most times, and there were literally thousands of tweets using the ALT-C hashtag. The ECER conference was much bigger, with over 2600 registered delegates. I didn’t see too many online, and there were very few tweets using the ECER hashtag. It was suggested to me that this was because a single hashtag is too broad to encompass the wide range of topics covered in ECER’s different networks. But I don’t think that was the reason. Although for those of us working with technology online immersion has become a way of life, the culture of educational researchers has not yet embraced such an idea. Of course most – if not all – educational researchers are computer literate, and of course the internet is a key tool for accessing documents and for communication. But for most that is it.

A personal reality check

The challenges of open data: emerging technology to support learner journeys

May 8th, 2014 by Graham Attwell

It’s several years since I have been to the ALT-C conference in the UK. And I have missed the chance to catch up with friends and colleagues working in Technology Enhanced Learning in the UK. One reason I have not been going to the conference is that it usually takes place in September, the high season for conferences, and there always seem to be clashes with something else. The main reason is simple though – the cost. With a conference fee of something over £500, excluding accommodation and travel, without a sponsor it is pretty hard to justify so much expenditure. The saddest thing about that cost is that I suspect it excludes many young and emerging researchers, unable to meet the fee from their own pockets and with institutions increasingly limiting conference expenditure. My daughter tells me that, albeit in a different field, her university provides her conference fees of just £500 a year!

Anyway, this year I am lucky enough to have a project to pay and have submitted, together with my colleagues working on the project, the following abstract. You can find out more about the LMIforAll project at www.lmiforall.org.uk

The challenges of open data: emerging technology to support learner journeys

Authors: Attwell, G., Barnes, S-A., Bimrose, J., Elferink, R., Rustemeier, P. & Wilson, R.

Abstract 

People make important decisions about their participation in the labour market every year. This extends from pupils in schools, to students in Further and Higher education institutions and individuals at every stage of their career and learning journeys. Whether these individuals are in transition from education and/or training, in employment and wishing to up-skill, re-skill or change their career, or whether they are outside the labour market wishing to re-enter, high quality and impartial labour market information (LMI) is crucial to effective career decision-making. LMI is at the heart of UK Government reforms of careers service provision.

Linking and opening up careers focused LMI to optimise access to, and use of, core national data sources is one approach to improving that provision, as well as supporting the Open Data policy agenda (see HM Government, 2012). Careers focused LMI can be used to help people make better decisions about learning and work and to improve the efficiency of labour markets by helping match supply with demand and helping institutions plan future course provision. A major project, funded by the UK Commission for Employment and Skills, is underway, led by a team of data experts at the Institute for Employment Research (University of Warwick) with developers and technologists from Pontydysgu and Raycom, designing, developing and delivering a careers LMI web portal known as LMI for All.

The presentation will focus on the challenge of collaborating and collecting evidence at scale between institutions, and on the social and technological design and development of the database. The database is accessed through an open API, which will be explored during the presentation. Through open competition, developers, including students in FE, have been encouraged to develop their own applications based on the data. Early adopters have built targeted applications and websites that present LMI in a more engaging way, aimed at specific audiences with contrasting needs. The web portal is innovative in that it seeks to link and open up careers focused LMI with the intention of optimising access to, and use of, core national data sources that can support individuals in making better decisions about learning and work. It has already won an award from the Open Data Institute. The presentation will highlight some of the big data and technological challenges the project has addressed. It will also look at how to organise collaboration between institutions and organisations in sharing data to provide new services in education and training. Targeted participants include developers and stakeholders from a range of educational and learning settings. The session will be interactive, with participants able to test out the API, provide feedback and view applications.
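To give a flavour of what “accessed through an open API” means for a developer, here is a minimal Python sketch. The base URL and the occupation-search endpoint name used here are illustrative assumptions, not taken from the abstract; anyone building against LMI for All should check the project’s own API documentation for the real paths and parameters:

```python
import urllib.parse
import urllib.request

# Assumed base URL and endpoint name - illustrative only; consult
# the LMI for All documentation for the actual API paths.
BASE = "http://api.lmiforall.org.uk/api/v1"

def soc_search_url(query: str) -> str:
    """Build the URL for a (hypothetical) occupation-search call."""
    return BASE + "/soc/search?" + urllib.parse.urlencode({"q": query})

def soc_search(query: str) -> bytes:
    """Fetch the raw JSON response for a query (requires network access)."""
    with urllib.request.urlopen(soc_search_url(query)) as resp:
        return resp.read()
```

An application of the kind described in the abstract – say, an FE careers app – would call something like `soc_search("plumber")`, parse the returned JSON, and present the labour market information in a form suited to its audience.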

Reference

HM Government (2012). Open Data White Paper: Unleashing the Potential. Norwich: TSO.
