Archive for the ‘technologies’ Category

What is Machine Learning

January 20th, 2021 by Graham Attwell

I am copying this from Stephen Downes' ever-informative OLDaily newsletter digest. It features an article entitled "What is machine learning? – A beginner's guide", posted on the FutureLearn website.

This is quite a good introduction to machine learning: if you don't know what it is and would like a quick, no-nonsense introduction, this is it. Machine learning is described as "the science of getting computers to learn automatically." It's a type of artificial intelligence, which means essentially that machine learning applications are software systems that "operate in an intentional, intelligent, and adaptive manner." The third point is the most important, because it means they can change their programming based on experience and changing circumstances. The article describes some types of machine learning systems and outlines some applications in the field. It's FutureLearn, so at the end it recommends some course tracks for people interested in making this a career, and, just to dangle a carrot, the web page lets you know the median base salary and number of job openings for the programme in question.
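The article itself contains no code, but the core idea of "learning automatically" is easy to show: a model infers a rule from labelled examples rather than being explicitly programmed with one. Here is a minimal sketch using scikit-learn and its bundled iris dataset, purely by way of illustration:

```python
# A minimal illustration of "learning automatically": the classifier
# infers a decision rule from labelled examples instead of being
# programmed with one. Requires scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)          # flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)                # the "learning" happens here
print(f"accuracy on unseen flowers: {model.score(X_test, y_test):.2f}")
```

The classifier is never told what distinguishes the species; it infers that from the labelled examples, which is all "learning" means here.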

Taccle AI project – Interviews

July 22nd, 2020 by Graham Attwell

As part of the Taccle AI project, which looks at the impact of AI on vocational education and training in Europe, we have undertaken interviews with managers, teachers, trainers and developers in five European countries (the report of the interviews, and of an accompanying literature review, will be published next week). One of the interviews I conducted was with Aftab Hussein, the ILT manager at Bolton College in the north west of England. Aftab describes himself on Twitter (@Aftab_Hussein) as "exploring the use of campus digital assistants and the computer assisted assessment of open-ended questions."

Ada, Bolton College's campus digital assistant, has been supporting student enquiries about college services and their studies since April 2017. In September 2020, the college is launching a new crowdsourcing project which seeks to teach Ada about subject topics, and it is looking for teachers to help teach Ada about their subjects.

According to Aftab: "Teachers will be able to set up questions that students typically ask about subject topics and they will have the opportunity to compose answers against each of these questions. No coding experience is required to set up questions and answers. Students of all ages will have access to a website where they will be able to select a subject chatbot and ask it questions. Ada will respond with answers that incorporate the use of text, images, links to resources and embedded videos.

The service will be free to use by teachers and students.”

If you are interested in supporting the project, complete the online Google form.
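Bolton College has not published Ada's internals, so the following is only a sketch of how a retrieval-style subject chatbot of this kind is often built: the student's question is matched against the teacher-authored questions, and the answer attached to the closest match is returned. The Q&A pairs below are hypothetical:

```python
# Toy sketch of a retrieval-style FAQ chatbot: match the student's
# question to the closest teacher-authored question and return its
# answer. Hypothetical Q&A pairs; not Bolton College's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "When is the biology coursework deadline?": "Friday 9 October, 5pm.",
    "What is photosynthesis?": "The process by which plants convert light into chemical energy.",
}
questions = list(faq)
vectorizer = TfidfVectorizer().fit(questions)

def ask(student_question: str) -> str:
    sims = cosine_similarity(
        vectorizer.transform([student_question]),
        vectorizer.transform(questions))[0]
    best = sims.argmax()
    # Below a similarity threshold, admit ignorance rather than guess.
    if sims[best] < 0.3:
        return "Sorry, I don't know that one yet."
    return faq[questions[best]]

print(ask("what's photosynthesis"))
```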

Artificial Intelligence for and in Vocational Education and Training

June 30th, 2020 by Graham Attwell

Last week the Taccle AI project organised a workshop at the European Distance Education Network (EDEN) Annual Conference. The conference, which had been scheduled to be held in Romania, was moved online due to the Covid-19 pandemic. There were four short presentations followed by an online discussion.

Graham Attwell introduced the workshop and explained the aims of the Taccle AI project. "In the next years," he said, "AI will change learning, teaching, and education. The speed of technological change will be very fast, and it will create high pressure to transform educational practices, institutions, and policies." He was followed by Vidmantas Tulys, who focused on AI and human work, putting forward five scenarios for the future of work in the light of AI and the fourth industrial revolution. Ludger Deitmer looked at the changes in the mechatronics occupation due to the impact of AI. He examined how training was being redesigned to meet new curriculum and occupational needs and how AI was being introduced into the curriculum. Finally, Sofia Roppertz focused on AI for VET, exploring how AI can support access to education, collaborative environments and intelligent tutoring systems to support teachers and trainers.

Stray thoughts on teaching and learning in the COVID 19 lockdowns

June 10th, 2020 by Graham Attwell

I must be one of the few ed tech bloggers who has not published anything on the move to online learning during the COVID-19 lockdowns. Not that I haven't thought about it (I even started several posts). However, it is difficult to form an overall impression of what has happened and what is happening (although I am sure there will be many, many research papers and reports in the future), and from talking with people in perhaps six or seven countries over the past few weeks, the messages seem contradictory.

So, instead of trying to write anything coherent, here are a few stray and necessarily impressionistic thoughts (in no particular order), which I will update in the future.

Firstly, many teachers seem to have coped remarkably well in the great move to online. Perhaps we have overstated the lack of training for teachers. Some I talked to were stressed, but all seemed to cope in one way or another.

However, digital exclusion has reared its ugly head in a big way. Lack of bandwidth and lack of computers have prevented many from participating in online learning. Surely it is time that internet connectivity is recognised as key public infrastructure (as the UK Labour Party proposed in its 2019 manifesto). It also needs recognising that access to a computer should be a key provision of schools and education services. Access to space in which to learn is another issue – and not so easy to solve in a lockdown. But after restrictions are lifted, it needs remembering that libraries can play an important role for those whose living space is not conducive to learning.

One thing that has become very clear is the economic and social role schools play in providing childcare. Hence the pressure from the UK government to open primary schools despite it being blatantly obvious that such a move was ill prepared and premature. Perhaps the provision of childcare should be a wider service than one provided by education alone. And maybe it has become such a big issue in the UK because children start school at a very young age (compared to other countries in Europe) and also have a relatively long school day.

There is a big debate going on in most countries about what universities will look like in the autumn. I think this raises wider questions about the whole purpose and role of universities in society. At least in theory, it should be possible for universities to continue with online learning. But teaching and learning is just one role for universities. With the move to mass higher education in many countries, going to university has become a rite of passage: hence, in the UK, the weight attached to the student satisfaction survey and the emphasis placed on social activities, sports and so on. And this is a great deal of what students are paying for. Fees in UK universities are now £9,000 a year. The feeling is that many prospective students will not pay that without the full face-to-face student experience (although I doubt many will miss the full face-to-face lectures). I also wonder how many younger people will start to realise that it is possible to get an extremely good online education for free: one thing the lockdown has brought is a blossoming of online seminars, symposia, conferences and, to a lesser extent, workshops.

Which brings me to the vexed subject of pedagogy. Of course it is easy to say that, for all the affordances of Zoom (and whoever would have predicted its popularity and use as an educational technology platform), all we have seen is lectures being delivered online: online teaching, not online learning. I am not sure this is a good dichotomy to make. Of course a sudden, unplanned, forced rush to online provision is probably not the greatest way to do things. But there seems plenty of anecdotal evidence that ed-tech support facilitated some excellent online provision (mention also needs to be made of the many resources for teachers made available over the internet). Of course we need to stop thinking about how we can reproduce traditional face-to-face approaches to teaching and learning online and start designing for creative online learning. But hopefully there is enough impetus now for this to happen.

More thoughts to follow in another post – and hopefully I can get some coherent ideas out of all of this.

Ethics in AI and Education

June 10th, 2020 by Graham Attwell

The news that IBM is pulling out of the facial recognition market and is calling for "a national dialogue" on the technology's use in law enforcement has highlighted the ethical concerns around AI-powered technology. But the issue is not confined to policing: it is also a growing concern in education. This post is based on a section of a forthcoming publication on the use of Artificial Intelligence in Vocational Education and Training, produced by the Taccle AI Erasmus Plus project.

Much concern has been expressed over the dangers and ethics of Artificial Intelligence both in general and specifically in education.

The European Commission (2020) has raised the following general issues (Naughton, 2020):

  • human agency and oversight
  • privacy and governance
  • diversity, non-discrimination and fairness
  • societal wellbeing
  • accountability
  • transparency
  • trustworthiness

However, John Naughton (2020), a technology journalist from the UK Open University, says "the discourse is invariably three parts generalities, two parts virtue-signalling." He points to the work of David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society, who in January 2020 published an article in the Harvard Data Science Review on the question "Should we trust algorithms?", arguing that it is trustworthiness rather than trust we should be focusing on. He suggests a set of seven questions one should ask about any algorithm (question 5 is illustrated with a short sketch after the list):

  1. Is it any good when tried in new parts of the real world?
  2. Would something simpler, and more transparent and robust, be just as good?
  3. Could I explain how it works (in general) to anyone who is interested?
  4. Could I explain to an individual how it reached its conclusion in their particular case?
  5. Does it know when it is on shaky ground, and can it acknowledge uncertainty?
  6. Do people use it appropriately, with the right level of scepticism?
  7. Does it actually help in practice?
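Question 5 is the one most directly testable in code: a system "knows when it is on shaky ground" if it can abstain rather than guess. A minimal sketch, assuming scikit-learn and an illustrative confidence threshold of 0.8:

```python
# Sketch of Spiegelhalter's question 5: a classifier that abstains
# instead of guessing when its predicted probability is too low.
# The 0.8 threshold is an illustrative choice, not a recommendation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)

confident = proba.max(axis=1) >= 0.8        # only answer when fairly sure
preds = proba.argmax(axis=1)
print(f"answered {confident.mean():.0%} of cases; "
      f"accuracy on those: {(preds[confident] == y_test[confident]).mean():.0%}")
```

The trade-off is explicit: the higher the threshold, the fewer cases the system answers, but the more trustworthy those answers become.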

Many of the concerns around the use of AI in education have already been aired in research on Learning Analytics. These include issues of bias, transparency and data ownership. They also include problematic questions around the surveillance of students, and around whether or not it is ethical that students should be told they are falling behind – or indeed ahead – in their work.

The EU working group on AI in Education has identified the following issues:

  • AI can easily scale up and automate bad pedagogical practices
  • AI may generate stereotyped models of students' profiles and behaviours, and automated grading
  • Need for big data on student learning (privacy, security and ownership of data are crucial)
  • Skills for AI and implications of AI for systems requirements
  • Need for policy makers to understand the basics of ethical AI.

Furthermore, it has been noted that AI for education is a spillover from other areas and not purpose built for education. Experts tend to be concentrated in the private sector and may not be sufficiently aware of the requirements in the education sector.

A further and even more troubling concern is the increasing influence and lobbying of large, often multinational, technology companies who are attempting to 'disrupt' public education systems. Audrey Watters (2019), who is publishing a book on the history of "teaching machines", says her concern "is not that 'artificial intelligence' will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans — intellectually, emotionally, occupationally — will be reduced to machines." "Perhaps nothing," she says, "has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation." She quotes the historian Robert Gordon, who asserts that new technologies are incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not — despite all the stories we hear — necessarily restructure our world.

There has been considerable debate and unease around the AI based “Smart Classroom Behaviour Management System” in use in schools in China since 2017. The system uses technology to monitor students’ facial expressions, scanning learners every 30 seconds and determining if they are happy, confused, angry, surprised, fearful or disgusted. It provides real time feedback to teachers about what emotions learners are experiencing. Facial monitoring systems are also being used in the USA. Some commentators have likened these systems to digital surveillance.

A publication entitled "Systematic review of research on artificial intelligence applications in higher education – where are the educators?" (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), which reviewed 146 of 2,656 identified publications, concluded that there was a lack of critical reflection on risks and challenges, a weak connection to pedagogical theories, and a need for an exploration of ethical and educational approaches. Martin Weller (2020) says educational technologists are increasingly questioning the impacts of technology on learner and scholarly practice, as well as the long-term implications for education in general. Neil Selwyn (2014) says "the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique."

Martin Weller (2020) is concerned at “the invasive uses of technologies, many of which are co-opted into education, which highlights the importance of developing an understanding of how data is used.”

Audrey Watters (2018) has compiled a list of the nefarious social and political uses, or connections, of educational technology – whether designed specifically for education or co-opted for educational purposes. She draws particular attention to the use of AI to de-professionalise teachers. And Mike Caulfield (2016), while acknowledging the positive impact of the web and related technologies, argues that "to do justice to the possibilities means we must take the downsides of these environments seriously and address them."

References

Caulfield, M. (2016). Announcing the digital polarization initiative, an open pedagogy project [Blog post]. Hapgood. Retrieved from https://hapgood.us/2016/12/07/announcing-the-digital-polarization-initiative-an-open-pedagogy-joint/

European Commission (2020). White Paper on Artificial Intelligence – A European approach to excellence and trust. Luxembourg: Publications Office of the European Union.

Gordon, R. J. (2016). The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War. Princeton, NJ: Princeton University Press.

Naughton, J. (2020). The real test of an AI machine is when it can admit to not knowing something. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/feb/22/test-of-ai-is-when-machine-can-admit-to-not-knowing-something

Spiegelhalter, D. (2020). Should we trust algorithms? Harvard Data Science Review. Retrieved from https://hdsr.mitpress.mit.edu/pub/56lnenzj

Watters, A. (2019). Ed-Tech Agitprop. Hack Education. Retrieved from http://hackeducation.com/2019/11/28/ed-tech-agitprop

Weller, M. (2020). 25 Years of Ed Tech. Athabasca University: AU Press.

The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system

March 2nd, 2020 by Graham Attwell


I am speaking at a seminar on Vocational Education and Training’s Role in Business Innovation at the Ramon Areces Foundation in Madrid tomorrow. The title of my presentation is ‘The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system in Valencia’ which is really much too long for a title and I have much too much to say for my allotted 20 minutes.

Anyway, this is what I told them I was going to talk about:
The presentation looks at the future of work, linked to the challenges of Artificial Intelligence, automation and the new green economy. It considers and discusses the various predictions on future jobs and occupations from bodies including CEDEFOP, the OECD and the World Bank. It concludes that although new jobs will be created and some occupations will be displaced by new technologies, the greatest impact will be on the tasks performed within jobs. It further discusses future skills needs, including the need for higher-level cognitive competences as well as the demand for so-called lower-skilled work in services and the caring professions.
It considers the significance of these changes for vocational education and training, including the need for new curricula and increased provision of lifelong learning and retraining for those affected by the changing labour market.
Artificial Intelligence may also play an important role in the organisation and delivery of vocational education and training. This includes the use of technologies such as machine learning and Natural Language Processing for learner engagement, recruitment and support; learning analytics and 'nudge learning' through a Learning Record Store; and the creation and delivery of learning content. It provides examples such as the use of chatbots in vocational education and training schools and colleges. It is suggested that the use of AI technologies can allow a move from summative assessment to formative assessment. The use of these technologies will reduce the administrative load for teachers and trainers and allow them to focus on coaching, particularly benefiting those at the top and lower end of the student cohort.
To benefit from this potential will require new and enhanced continuing professional development for teachers and trainers. Finally, the presentation considers what this signifies for the future of the Dual VET system in Spain, looking at findings from both European projects and research undertaken into Dual training in Valencia.
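To make the "nudge learning through a Learning Record Store" point concrete: an LRS collects xAPI statements – actor-verb-object records of learning events – which analytics or nudge engines can later query. A minimal sketch of logging one such event follows; the endpoint URL, credentials and activity are placeholders, not a real service:

```python
# Minimal sketch of logging a learning event to a Learning Record Store
# via the standard xAPI statements endpoint. The URL, credentials and
# activity identifiers below are placeholders.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-GB": "completed"}},
    "object": {"id": "https://example.org/activities/mechatronics-unit-3",
               "definition": {"name": {"en-GB": "Mechatronics unit 3"}}},
}

resp = requests.post(
    "https://lrs.example.org/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    auth=("key", "secret"),                      # placeholder credentials
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()   # the LRS returns the new statement id on success
```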
And I will report back here after the event.

Does AI mean we no longer need subject knowledge?

January 15th, 2020 by Graham Attwell

I am a little bemused by the approach to knowledge taken by many of those writing about Artificial Intelligence in education. The recently released Open University innovation report, Innovating Pedagogy, is typical in this respect.

“Helping students learn how to live effectively in a world increasingly impacted by AI also requires a pedagogy”, they say, “that, rather than focusing on what computers are good at (e.g. knowledge acquisition), puts more emphasis on the skills that make humans uniquely human (e.g. critical thinking, communication, collaboration and creativity) – skills in which computers remain weak.”

I have nothing against critical thinking, collaboration or creativity, although I think these are hard subjects to teach. But I find it curious that knowledge is being downplayed on the grounds that computers are good at it. Books have become very good at knowledge over the years, but that doesn't mean humans have abandoned it to the books. What is striking, though, is the failure to distinguish between abstracted and applied knowledge. Computers are very good at producing (and using) information and data. But they are not nearly as good at applying that knowledge in real-world interactions. Computers (in the form of robots) will struggle to open a door. Computers may know all about the latest hairstyles, but I very much doubt that we will be trusting them to cut our hair in the near future. But of course, the skills I am talking about here are vocational skills – not the skills that universities are used to teaching.

As opposed to the emergent Anglo-Saxon discourse around "the skills that make humans uniquely human", in Germany the focus on Industry 4.0 is leading to an alternative idea: AI and automation are seen as requiring new and higher levels of vocational knowledge and skills in areas such as the preventative maintenance of automated production machinery. This seems to me a far more promising area of development. The problem, I suspect, for education researchers in the UK is that they have to start thinking about education outside the sometimes rarefied world of the university.

Equally, I do not agree with the report's assertion that most AI applications for education are student-facing and are designed to replace some existing teacher tasks. "If this continues", they say, "while in the short run it might relieve some teacher burdens, it will inevitably lead to teachers becoming side-lined or deprofessionalised. In this possible AI-driven future, teachers will only be in classrooms to facilitate the AI to do the 'actual' teaching."

The reality is that there are an increasing number of AI applications which assist teachers rather than replace them – and that allow teachers to get on with their real job of teaching and supporting learning, rather than undertaking an onerous workload of admin. There is no evidence of the inevitability of teachers being either sidelined or deprofessionalised. And those experiments from Silicon Valley trying to 'disrupt' education through a move to purely online, algorithm-driven learning have generally been a big failure.

Artificial Intelligence, ethics and education

January 2nd, 2020 by Graham Attwell

I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which is constantly appearing is the issue of ethics. Apart from UK universities' requirements for ethical approval of research projects (more about this in a future post), the issue of ethics rarely appears as a focus for debate in education. Yet it is all over the discussion of AI and how we can or should use AI in education.

There is an interesting (and long) blog post – 'The Invention of "Ethical AI"' – recently published by Rodrigo Ochigame on The Intercept website.

Ochigame worked as a graduate student researcher in the AI ethics group at the MIT Media Lab, led by the Lab's former director, Joichi Ito. He left in August last year, immediately after Ito published his initial "apology" regarding his ties to Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito's outside venture funds.

The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read:

The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics

The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

This included working on

the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).

it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.

I am not opposed to the emphasis being placed on ethics in AI and education, and the debates and practices around Learning Analytics show the need to think clearly about how we use technology. But we have to be careful, firstly, that we do not just end up paying lip service to ethics, and secondly, that academic research does not become a cover for the practices of the ed-tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me, the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.

AI needs diversity

November 6th, 2019 by Graham Attwell

As promised, another AI post. One of the issues we are looking at in our project on AI and education is ethics. It seems to me that the tech companies have set up all kinds of ethical frameworks, but I am not sure about the ethics: they seem to be trying to allay fears that the robots will take over. That is not a fear I share; I am far more worried about what humans will do with AI. In that respect I very much like this TEDxWarwick talk by Kriti Sharma.

She says AI algorithms make important decisions about you all the time – like how much you should pay for car insurance, or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms. I wonder, too, how much the lack of diversity in educational technology is holding back opportunities for learning.
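Sharma's point can be made concrete with a simple audit: compare a model's positive-decision rates across demographic groups. The sketch below uses entirely synthetic "hiring" data in which one group was historically held to a harsher bar, and shows the model faithfully reproducing that bias:

```python
# Toy audit for the kind of bias Sharma describes: a model trained on
# biased historical decisions reproduces the gap. Entirely synthetic
# data, purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)               # two demographic groups, 0 and 1
skill = rng.normal(0, 1, n)                 # the legitimate signal
# Biased historical labels: group 1 was held to a harsher bar.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])         # group membership leaks into features
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate {preds[group == g].mean():.0%}")
```

Nothing in the model is malicious; it has simply learned the bias present in its training labels, which is exactly the warning.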

AI, education and training and the future of work

November 5th, 2019 by Graham Attwell

Last week saw the first meeting of a new Erasmus Plus project entitled 'Improving skills and competences of VET teachers and trainers in the age of Artificial Intelligence'. The project, led by the University of Bremen, has partners from the UK (Pontydysgu), Lithuania, Greece and Italy.

Kick-off meetings are usually rather dull, with an understandable emphasis on rules and regulations, reporting and so on. Not this one. Everyone came prepared with ideas of their own on how we can address such a broad and important subject. And, to our collective surprise I think, we had a remarkable degree of agreement on ways forward. I will write more about this (much more) in the coming days. For the moment, here is my opening presentation to the project. A lot of the ideas come from the excellent book "Artificial Intelligence in Education: Promises and Implications for Teaching and Learning" by the Center for Curriculum Redesign, which, as the website promises, "immerses the reader in a discussion on what to teach students in the era of AI and examines how AI is already demanding much needed updates to the school curriculum, including modernizing its content, focusing on core concepts, and embedding interdisciplinary themes and competencies with the end goal of making learning more enjoyable and useful in students' lives. The second part of the book dives into the history of AI in education, its techniques and applications – including the way AI can help teachers be more effective, and finishes on a reflection about the social aspects of AI. This book is a must-read for educators and policy-makers who want to prepare schools to face the uncertainties of the future and keep them relevant."
