Archive for the ‘technology’ Category

The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and Training system

March 2nd, 2020 by Graham Attwell


I am speaking at a seminar on Vocational Education and Training’s Role in Business Innovation at the Ramon Areces Foundation in Madrid tomorrow. The title of my presentation is ‘The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system in Valencia’ which is really much too long for a title and I have much too much to say for my allotted 20 minutes.

Anyway, this is what I told them I was going to talk about:
The presentation looks at the future of work, linked to the challenges of Artificial Intelligence, Automation and the new Green Economy. It considers and discusses the various predictions on future jobs and occupations from bodies including CEDEFOP, the OECD and the World Bank. It concludes that although some new jobs will be created and some occupations will be displaced by new technologies, the greatest impact will be in terms of the tasks performed within jobs. It further discusses future skills needs, including the need for higher level cognitive competences as well as the demand for so-called lower skilled work in services and the caring professions.
It considers the significance of these changes for vocational education and training, including the need for new curricula, and increased provision of lifelong learning and retraining for those affected by the changing labour market.
Artificial Intelligence may also play an important role in the organisation and delivery of vocational education and training. This includes the use of technologies such as machine learning and Natural Language Processing for learner engagement, recruitment and support, Learning Analytics and ‘nudge learning’ through a Learning Record Store, and the creation and delivery of learning content. It provides examples such as the use of chatbots in vocational education and training schools and colleges. It is suggested that the use of AI technologies can allow a move from summative assessment to formative assessment. The use of these technologies will reduce the administrative load for teachers and trainers and allow them to focus on coaching, particularly benefiting those at the top and lower ends of the student cohort.
To benefit from this potential will require new and enhanced continuing professional development for teachers and trainers. Finally, the presentation considers what this signifies for the future of the Dual VET system in Spain, looking at findings from both European projects and research undertaken into Dual training in Valencia.
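The abstract above mentions ‘nudge learning’ through a Learning Record Store. As a rough technical illustration, here is a minimal sketch of recording a learning event as an xAPI statement – the kind of data such a service would analyse. The endpoint, credentials, learner and activity names are all hypothetical; this is my own sketch, not something from the presentation.

    import requests

    # Hypothetical Learning Record Store endpoint and credentials (illustrative only).
    LRS_ENDPOINT = "https://lrs.example.org/xAPI/statements"
    AUTH = ("lrs_user", "lrs_password")

    # An xAPI statement: who did what to which learning activity.
    statement = {
        "actor": {"mbox": "mailto:apprentice@example.org", "name": "Example Apprentice"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-GB": "completed"},
        },
        "object": {
            "id": "https://vet-college.example.org/activities/welding-module-3",
            "definition": {"name": {"en-GB": "Welding module 3"}},
        },
    }

    # Record the event in the LRS; a 'nudge' service could later query such statements.
    response = requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    print(response.status_code)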
And I will report back here after the event.

Artificial Intelligence, ethics and education

January 2nd, 2020 by Graham Attwell

I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which is constantly appearing is the issue of ethics. Apart from UK universities’ requirements for ethical approval of research projects (more about this in a future post), the issue of ethics rarely appears in education as a focus for debate. Yet it is all over the discussion of AI and how we can or should use AI in education.

There is an interesting (and long) blog post – ‘The Invention of “Ethical AI”’ – recently published by Rodrigo Ochigame on the Intercept web site.

Ochigame worked as a graduate student researcher in the AI ethics group at the Media Lab led by Joichi Ito, the former director of the MIT Media Lab. He left in August last year, immediately after Ito published his initial “apology” regarding his ties to Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito’s outside venture funds.

The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read.

The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics

The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

This included working on

the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).

it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.

I am not opposed to the emphasis being placed on ethics in AI and education, and the debates and practices around Learning Analytics show the need to think clearly about how we use technology. But we have to be careful that, firstly, we do not just end up paying lip service to ethics and, secondly, that academic research does not become a cover for the practices of the Ed tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.

 

Is this the right way to use machine learning in education?

September 2nd, 2019 by Graham Attwell

An article, ‘Predicting Employment through Machine Learning’, by Linsey S. Hugo on the National Association of Colleges and Employers web site, confirms some of my worries about the use of machine learning in education.

The article presents a scenario which, it is said, “illustrates the role that machine learning, a form of predictive analytics, can play in supporting student career outcomes.” It is based on a recent study at Ohio University (OHIO) which leveraged machine learning to forecast successful job offers before graduation with 87 percent accuracy. “The study used data from first-destination surveys and registrar reports for undergraduate business school graduates from the 2016-2017 and 2017-2018 academic years. The study included data from 846 students for which outcomes were known; these data were then used in predicting outcomes for 212 students.”

A key step in the project was “identifying employability signals”, based on the idea that “it is well-recognized that employers desire particular skills from undergraduate students, such as a strong work ethic, critical thinking, adept communication, and teamwork.” These signals were adapted as proxies for the “well recognised” skills.

The data were used to develop numerous machine learning models, from commonly recognized methodologies, such as logistic regression, to advanced, non-linear models, such as a support-vector machine. Following the development of the models, new student data points were added to determine if the model could predict those students’ employment status at graduation. It correctly predicted that 107 students would be employed at graduation and 78 students would not be employed at graduation—185 correct predictions out of 212 student records, an 87 percent accuracy rate.
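As a rough illustration of the kind of model comparison described above, here is a minimal sketch. It is mine, not the study’s code; the data file and feature names are hypothetical stand-ins for the “employability signals”.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Hypothetical first-destination data; the columns stand in for "employability signals".
    df = pd.read_csv("graduate_outcomes.csv")
    X = df[["had_internship", "major_code", "cocurricular_count", "gpa"]]
    y = df["employed_at_graduation"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Compare a commonly recognised linear model with a non-linear one, as the study describes.
    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "support-vector machine": SVC(kernel="rbf"),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: {accuracy:.2f}")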

Additionally, this research assessed sensitivity, identifying which input variables were most predictive. In this study, internships were the most predictive variable, followed by specific majors and then co-curricular activities.
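The article does not say exactly how this sensitivity analysis was computed, but one common way of assessing which inputs a fitted model leans on most is permutation importance. A minimal sketch, reusing the hypothetical model and test data from the snippet above:

    from sklearn.inspection import permutation_importance

    # Shuffle each input column in turn and measure how much the model's score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
    ranked = sorted(zip(X_test.columns, result.importances_mean), key=lambda p: -p[1])
    for feature, importance in ranked:
        print(f"{feature}: {importance:.3f}")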

As in many learning analytics applications, the data could then be used as a basis for interventions to support students’ employability on graduation. If they had not already undertaken a summer internship then they could be supported in doing so, and so on.

Now on the one hand this is an impressive development of learning analytics to support overworked careers advisers and to improve the chances of graduates finding a job. Also, the detailed testing of different machine learning and AI approaches is both exemplary and unusually well documented.

However, I still find myself uneasy with the project. Firstly, it reduces the purpose of degree level education to employment. Secondly, it accepts that employers call the shots, through proxies based on unquestioned and unchallenged “well recognised skills” demanded by employers. It may be “well recognised” that employers are biased against certain social groups or have a preference for upper class students. Should this be incorporated in the algorithm? Thirdly, it places responsibility for employability on the individual students, rather than looking more closely at societal factors in employment. It is also worth noting that participation in unpaid internships is an increasing factor in employment in the UK: fairly obviously, the financial ability to undertake such unpaid work is the preserve of the more wealthy.

And suppose that all students are assisted in achieving the “predictive input variables”. Does that mean they would all achieve employment on graduation? Graduate unemployment is not only predicated on individual student achievement (whatever variables are taken into account) but also on the availability of graduate jobs. In the UK many graduates are employed in what are classified as non-graduate jobs (the classification system is something I will return to in another blog). But is this because they fail to develop their employability signals or simply because there are not enough jobs?

Having said all this, I remain optimistic about the role of learning analytics and AI in education and in careers guidance. But there are many issues to be discussed and pitfalls to overcome.

 

Reading on screen and on paper

September 1st, 2019 by Graham Attwell

Do you read books and papers on screen, or do you prefer paper? I am conflicted. I used to have an old Kindle but gave it up because I am no fan of Amazon. And I used to read books firstly on an iPad and latterly on a Tesco Hudl tablet – both now sadly deceased.

Like many (at least if the sales figures are to be believed) I have returned to reading books on paper, although I read a lot of papers and such like on my computer, only occasionally being bothered to print them out. But is preferring physical books a cultural feel-good factor, or does it really make a difference to comprehension and learning?

An article in the Hechinger Report reports on research by Virginia Clinton, an Assistant Professor at the University of North Dakota, who “compiled results from 33 high-quality studies that tested students’ comprehension after they were randomly assigned to read on a screen or on paper and found that her students might be right.”

The studies showed that students of all ages, from elementary school to college, tend to absorb more when they’re reading on paper than on screens, particularly when it comes to nonfiction material.

However, the benefit was small – a little more than a fifth of a standard deviation – and there is an important caveat in that the studies Clinton included in her analysis didn’t allow students to use the add-on tools that digital texts can potentially offer.

My feeling is that this is a case of horses for courses. Work undertaken by Pontydysgu suggested that ebooks had an important motivational aspect for slow-to-learn readers in primary school. Not only could they look up the meaning of different words, but when they had read for a certain amount of time they were allowed to listen to the rest of the story on the audio transcription. And there is little doubt that e-books offer a cost effective way of providing access to books for learners.

But it would be nice to see some further well designed research in this area.

 

AI and education

February 6th, 2019 by Graham Attwell

I fear you are going to be seeing this headline quite a bit in coming months. And like everyone else I am getting excited and worried about the possibilities of AI for learning – and less so for AI in education management.

Anyway, here is the promise from an EU Horizon 2020 project looking mainly at ethics in AI. As an aside, while lots of people seem to be looking at ethics, which of course is very welcome, I see less research into the potentials and possibilities of AI (more to follow).

The SHERPA consortium is a group consisting of 11 members from six European countries whose mission is to understand how the combination of artificial intelligence and big data analytics will impact ethics and human rights issues today, and in the future.

One of the first tasks of F-Secure (a partner in the project) will be to study security issues, dangers, and implications of the use of data analytics and artificial intelligence, including applications in the cyber security domain. This research project will examine:

  • ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
  • ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks)
  • how artificial intelligence and data analysis methodologies might be used for malicious purposes
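To make the second of these points a little more concrete, here is a minimal sketch of one well-known adversarial technique, the Fast Gradient Sign Method. The example is mine rather than the project’s, and it assumes `model`, `images` and `labels` are a trained PyTorch classifier and a batch of inputs scaled to the [0, 1] range.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model: torch.nn.Module,
                    images: torch.Tensor,
                    labels: torch.Tensor,
                    epsilon: float = 0.03) -> torch.Tensor:
        """Nudge each input a small step in the direction that most increases the loss."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # The perturbation is barely visible but can flip the model's prediction.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()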

What is an algorithm?

September 3rd, 2018 by Graham Attwell

There was an excellent article by Andrew Smith in the Guardian newspaper last week. ‘Franken-algorithms: the deadly consequences of unpredictable code’ examines issues with our new algorithmic reality and the “growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on.” “Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice”, Smith says.

I was particularly interested in the changing understandings of what an algorithm is.

In the original understanding of an algorithm, says Andrew Smith, “an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing. If a user claims to be 18, allow them into the website; if not, print “Sorry, you must be 18 to enter”. At core, computer programs are bundles of such algorithms.” However, “Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”).”
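By way of contrast with the newer, looser meaning, the original sense of an algorithm is easy to write down in full. A minimal rendering of the age-check rule Smith uses as his example:

    def check_age(claimed_age: int) -> str:
        # A fixed, fully inspectable rule: if a happens, do b; if not, do c.
        if claimed_age >= 18:
            return "Welcome to the website"
        else:
            return "Sorry, you must be 18 to enter"

    print(check_age(21))  # Welcome to the website
    print(check_age(16))  # Sorry, you must be 18 to enter

Every branch of a rule like that can be read and tested in full.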

And this, of course, is a problem, especially where algorithms, even if published, are not in the least transparent and, with machine learning, are constantly evolving.

NameCoach: software which doesn’t suck

August 14th, 2018 by Graham Attwell

It is remarkable how many software applications are released which seem a) overcomplicated and/or b) to have little if any purpose. So this morning, while idly browsing WONKHE while trying to wake myself up, I was much taken to stumble on an article singing the praises of NameCoach. Paul Greatrix, registrar of the University of Nottingham, says:

As anyone who has been involved in graduations knows though name reading is one of the more challenging elements of the ceremony and can cause some distress for graduates and their families if it goes horribly wrong (as it occasionally does). But now, for heads of school, deans, pro-vice-chancellors and other name readers, who have to work so hard to prepare for graduation (and to whom I remain eternally grateful) an end to pronunciation misery is at hand. ‘NameCoach’, which was developed by Stanford University graduates, provides a means of collecting correct pronunciations for name readers so they are sure to get it right first time.

When I went to university it was considered naff to collect your degree in person. But as I understand it (from bitter experience of sitting through several of these things) it is now a central part of the student experience. True, the software doesn’t do a lot. But what it does fulfils a useful function for some people who can’t avoid academic ceremonies. And you can’t say that for many applications.

Robots to help learning

August 6th, 2018 by Graham Attwell

The TES reports that a project which uses robots to help children in hospital take part in lessons and return to school has received funding from the UK Department for Education.

TES says “The robot-based project will be led by medical AP provider Hospital and Outreach Education, backed by £544,143 of government money.

Under the scheme, 90 “tele-visual” robots will be placed in schools and AP providers around the country to allow virtual lessons.

The robot, called AV1, acts as an avatar for children with long-term illnesses so they can take part in class and communicate with friends.

Controlling the robot remotely via an iPad, the child can see and hear their teacher and classmates, rotating the robot’s head to get a 360-degree view of the class.

It is hoped the scheme will help children in hospital to feel less isolated and return to school more smoothly.”

Living in an Algorithmic World

May 4th, 2018 by Graham Attwell

This video is from Danah Boyd’s opening keynote for the re:publica 18 conference. Although it is an hour long it is well worth watching. Danah says “Algorithmic technologies that rely on data don’t necessarily support a social world that many of us want to live in. We must grapple with the biases embedded in and manipulation of these systems, particularly when so many parts of society are dependent on sociotechnical systems.” That goes for education just as much as any other part of the social world.

What happens when the optic fibre cable breaks?

January 31st, 2018 by Graham Attwell

Two weeks ago, I bought a new television. The old one had served well for 15 years but pre-dated the internet integration that we all take for granted these days. I carefully untangled the cables joining the various boxes with their flashing lights and plugged them back in again.

The result – nothing worked. Long telephone calls to Movistar – the internet provider – basically elicited the advice to unplug cables and plug them in again, to no avail. After two days a technician finally arrived and quickly diagnosed the problem. The cable connecting the fibre optic cable to the internet router had broken. Five minutes and it was repaired, and he left warning us that these cables were very fragile and that we should be careful when doing the cleaning.

Having only the previous week congratulated myself on my lack of addiction to mobile devices and social networks, I found myself at a complete loss without the internet. No radio, no music, no television, no email (and my mobile would not pick up the emails as I had not registered for two factor authentication which required a computer with an internet connection!), no web. I ended up sitting in a local café to read the online newspapers and get my emails.

The major point for me is how dependent we are becoming on an internet connection. Thank goodness that I have not been tempted by the internet of things. OK, many of these devices run over the mobile network and so would still work. Nevertheless, I am not convinced that having the lights, heating, fridge, door lock, air con, coffee machine and toaster connected to the internet is a good thing. When I bought a new toaster last year the salesperson tried hard to convince me of the benefits of a machine with an internet connection (“the internet of things is the way of the future, sir,” he said). It might be, but what happens when the optic fibre cable breaks?
