Archive for the ‘#AIinEd’ Category

Does only AI make a good university?

December 10th, 2020 by Graham Attwell

Plans for a dedicated AI university campus in Bremen are causing controversy. The plan is backed by a consortium of the German software giant SAP, the Chinese IT firm Neusoft and the German Research Center for Artificial Intelligence.

The consortium intends to take over Jacobs University, a private university in the north of the city founded in 1999. The university has until now been funded by the Jacobs Foundation, which has announced it is ceasing its financial support. The university has always been controversial, with questions raised over the need for a private institution competing with the state-funded University of Bremen.

Times Higher Education (THE) reports that the Bremen government hopes a dedicated AI campus will boost the local economy, attracting skilled workers and companies, although the city and the firms involved have not yet drawn up detailed plans.

But there are fears from academics at Jacobs that the university, which sells itself on its interdisciplinary campus, will end up with too narrow a focus.

“AI is not necessarily a bad thing,” said Professor Bau. “The big question is: only AI, does that actually make a good university? Universities and university education are something that does not have a small focus.”

Developing Competences for the AI Era

December 1st, 2020 by Graham Attwell

UNESCO are increasingly active in the development of the use of Artificial Intelligence in Education. Next week – 7 and 8 December – they are hosting an International Forum on AI and the Futures of Education (AIED). The theme of the Forum is Developing Competencies for the AI Era. The Forum will bring together education and technology experts from around the world to discuss AI skills for the futures of education and AI as a common good for education. UNESCO say participants will share policies and practices in defining the competencies required in the AI era, and examine strategies to prepare all people to live and work with AI effectively.

“The concept of futures in the plural is used to recognize that there is a rich diversity of ways of knowing and being around the world. The plural form also acknowledges that there are multiple dimensions to the future and that there will likely be various desirable and undesirable futures – all of which will vary greatly depending on who you are and where you stand. Rather than attempting to chart a single future, looking at futures in the plural validates multiple possible and desirable futures of humanity on our shared planet.

The Futures of Education initiative aims to generate discussion and action on the role of education, knowledge and learning in view of the predicted, possible and preferred futures. Such re-visioning of knowledge, education and learning is more relevant than ever. Accelerated technology transformations over recent years, in particular in the field of Artificial Intelligence, and their rapid deployment in work, life and learning have profound implications for the futures of education. The Forum will provide an opportunity to discuss these implications and the transformative potential of AI on education.”

Attendance is free and participants can register at https://aiedforum.org/#/home.

FutureLearn team up with Microsoft for online AI course

November 18th, 2020 by Graham Attwell

As many of you will know, FutureLearn is the UK Open University's MOOC arm, run in conjunction with an international consortium of universities. But, I guess like everyone else, FutureLearn is under pressure to make some money. Its first attempt was offering paid-for certificates for course completion. Another has been to persuade people to sign up for an annual subscription, which keeps courses open to them for a year.

The latest is to partner with industry for courses providing micro-accreditation, in some cases industry recognised. So in December FutureLearn is launching "Artificial Intelligence on Microsoft Azure: Machine Learning and Python Basics", created by CloudSwyft in conjunction with Microsoft. "On this microcredential", they say, "you'll address this challenge by developing key AI skills that can serve as the first steps towards becoming an AI engineer, business analyst, or AI professional." And: "Yes. If you successfully complete this microcredential, you'll receive a voucher to sit a separate exam to earn the Microsoft Azure AI Fundamentals (AI-900) and Microsoft Azure AI Engineer Associate (AI-100) certification."

Why would FutureLearn be giving away vouchers for sitting Microsoft exams? It could be because the 15-week course costs €584 to enrol. Much as I like microcredentials, this seems a long way from FutureLearn's past MOOCs, which were free to participate in. And if, as the course information claims, "artificial intelligence skills are frequently listed among the most in-demand workplace skills in the current and future job market, as organisations seek to harness AI to revolutionise their operations" and "employers are faced with a shortfall of qualified candidates", surely this is an area where public education and training services should be providing online courses, rather than restricting access to those who can afford to pay for learning new skills.

 

Data Driven Science

September 29th, 2020 by Graham Attwell

This diagram is from a tweet by Data Driven Science (@DrivenScience).

Artificial Intelligence, they say, is the broad discipline of creating intelligent machines.

Machine Learning refers to systems that can learn from experience.

Deep Learning refers to systems that learn from experience on very large data sets.
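
To make that nesting concrete, here is a minimal sketch in plain Python (my own illustration, not from the tweet): hand-coded rules are "AI" in the broadest sense, machine learning replaces the rules with parameters learned from example data, and deep learning applies the same learn-from-data idea with deep neural networks and much larger data sets.

```python
# 1. "AI" in the broadest sense: behaviour can simply be hand-coded rules.
def rule_based_spam_filter(message: str) -> bool:
    return "win a prize" in message.lower()

# 2. Machine learning: the behaviour is learned from experience (data).
#    Here, a tiny least-squares fit of y = w * x learned from examples.
def fit_weight(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

examples_x = [1.0, 2.0, 3.0, 4.0]
examples_y = [2.1, 3.9, 6.2, 7.8]          # roughly y = 2x, with noise
w = fit_weight(examples_x, examples_y)      # "experience" becomes a parameter

print(rule_based_spam_filter("Win a prize now!"))   # True, but it never improves
print(round(w * 5.0, 2))                            # about 10: a learned prediction

# 3. Deep learning follows the same learn-from-data pattern, but with
#    many-layered neural networks and far larger data sets than this toy.
```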

The State of Data 2020

September 28th, 2020 by Graham Attwell

One result of the Covid-19 pandemic is that there now seem to be free online events almost every day. This week is no exception, and this conference looks great. I can't make all of it – too many other meetings – but I hope to dip in and out (another advantage of online conferences).

On Tuesday September 29 and Wednesday September 30, 2020 the State of Data event will bring together researchers, practitioners, and anyone with an interest in why data matters in state education in England.

You can register if you want to use the calendar functionality and accept Hopin's privacy terms, to watch the sessions as they go live. Or simply watch in your own time after the event, without registering, via the links below.

Between algorithmic fairness in exam moderation and the rush to remote learning in response to the COVID-19 pandemic, 2020 has raised questions about children's digital rights like never before in England's education system. The conference, organised by defenddigitalme, is a call to action.

The organisers have a vision of a rights-respecting environment in the state education sector in England, and want to help build a future of safe, fair and transparent use of data across the public sector. The event will coincide with the launch of their report, The State of Data 2020: mapping the data landscape in England's state education system.

There is a range of content and discussion for practitioners in education and data protection, senior leadership and DPOs, local authority staff, developers, vendors and the edTech community, academics and activists, policy advisors and politicians – they say they want to create opportunities for questions and answers across silos. As the conference website says: "We need to start a conversation about changing policy and practice when it comes to children's data rights in education."

Racial bias in algorithms

September 25th, 2020 by Graham Attwell

From the UK Open Data Institute’s Week in Data newsletter

This week, Twitter apologised for racial bias within its image-cropping algorithm. The feature is designed to automatically crop images to highlight focal points – including faces. But, Twitter users discovered that, in practice, white faces were focused on, and black faces were cropped out. And, Twitter isn’t the only platform struggling with its algorithm – YouTube has also announced plans to bring back higher levels of human moderation for removing content, after its AI-centred approach resulted in over-censorship, with videos being removed at far higher rates than with human moderators.
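
For readers wondering how a cropping feature can end up biased at all, here is a deliberately simplified, hypothetical sketch of saliency-based cropping: score every pixel, then crop a window centred on the highest-scoring point. This is not Twitter's actual system; the local-contrast score below merely stands in for the output of a learned saliency model, and the bias arises when that learned model systematically scores some faces lower than others.

```python
def local_contrast(img, y, x):
    """Saliency stand-in: how much a pixel differs from its neighbours."""
    h, w = len(img), len(img[0])
    neighbours = [img[j][i]
                  for j in range(max(0, y - 1), min(h, y + 2))
                  for i in range(max(0, x - 1), min(w, x + 2))]
    return abs(img[y][x] - sum(neighbours) / len(neighbours))

def crop_to_most_salient(img, crop_h, crop_w):
    """Crop a window centred on the most 'salient' pixel."""
    h, w = len(img), len(img[0])
    best_y, best_x = max(((y, x) for y in range(h) for x in range(w)),
                         key=lambda p: local_contrast(img, *p))
    top = min(max(0, best_y - crop_h // 2), h - crop_h)
    left = min(max(0, best_x - crop_w // 2), w - crop_w)
    return [row[left:left + crop_w] for row in img[top:top + crop_h]]

# Toy 6x6 grayscale "image": a bright feature in the bottom-right wins the crop.
image = [[10] * 6 for _ in range(6)]
image[4][4] = 200
print(crop_to_most_salient(image, 3, 3))
# [[10, 10, 10], [10, 200, 10], [10, 10, 10]] -- the crop follows the "salient" spot.

# The bias problem: if the scoring function is a model trained on data in which
# some faces are under-represented, it can systematically assign them lower
# saliency, so they fall outside the chosen crop window.
```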

Accountability and algorithmic systems

September 3rd, 2020 by Graham Attwell

There seems to be a growing awareness of the use of, and problems with, algorithms – at least in the UK, where what Boris Johnson called "a rogue algorithm" caused chaos in students' exam results. It is becoming very apparent that there needs to be far more transparency about what algorithms are being designed to do.

Writing in Social Europe, Christina Colclough says: "Algorithmic systems are a new front line for unions as well as a challenge to workers' rights to autonomy." She draws attention to the increasing surveillance and monitoring of workers at home and in the workplace. She says strong trade union responses are immediately required to balance out the power asymmetry between bosses and workers and to safeguard workers' privacy and human rights. She also says that improvements to collective agreements, as well as to regulatory environments, are urgently needed.

Perhaps her most important argument is about the use of algorithms:

Shop stewards must be party to the ex-ante and, importantly, the ex-post evaluations of an algorithmic system. Is it fulfilling its purpose? Is it biased? If so, how can the parties mitigate this bias? What are the negotiated trade-offs? Is the system in compliance with laws and regulations? Both the predicted and realised outcomes must be logged for future reference. This model will serve to hold management accountable for the use of algorithmic systems and the steps they will take to reduce or, better, eradicate bias and discrimination.

Christina Colclough believes the governance of algorithmic systems will require new structures, union capacity-building and management transparency. I can't disagree with that. But what is also needed is a greater understanding of the use of AI and algorithms – for good and for bad. This means an education campaign – in trade unions but also for the wider public – to ensure that developments are for the good and not just another step in the progress of Surveillance Capitalism.

Algorithmic bias explained

August 27th, 2020 by Graham Attwell

Yesterday, the UK Prime Minister blamed last week's fiasco with public examinations on a "mutant algorithm". This video by the Institute for Public Policy Research provides a more rational view on why algorithms can go wrong. Algorithms, they say, risk magnifying human bias and error on an unprecedented scale. Rachel Statham explains how they work and why we have to ensure they don't perpetuate historic forms of discrimination.
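
As a thought experiment of my own (not the IPPR's, and not the actual moderation model), the toy sketch below shows one way historical data can override individuals: if students are simply mapped, by rank, onto their school's past grade distribution, a strong student at a historically weak school is marked down regardless of what their teachers predicted.

```python
def moderate_grades(teacher_predictions, historical_grades):
    """Map ranked students onto the school's historical grade distribution."""
    # Rank this year's students by teacher prediction (best first).
    ranked = sorted(teacher_predictions.items(), key=lambda kv: -kv[1])
    # Sort the school's past grades best-first and hand them out by rank.
    past = sorted(historical_grades, reverse=True)
    return {name: past[i] for i, (name, _) in enumerate(ranked)}

teacher_predictions = {"Asha": 9, "Ben": 8, "Cai": 6}   # a strong cohort this year
historical_grades = [7, 5, 4]                           # weak results in past years

print(moderate_grades(teacher_predictions, historical_grades))
# {'Asha': 7, 'Ben': 5, 'Cai': 4} -- the history, not the students, sets the grades.
```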

New report on Artificial Intelligence in Vocational Education and Training

July 31st, 2020 by Angela Rees

The Taccle AI project has launched its 74-page report exploring the use of AI in policy, process and practice in VET. For VET teachers and trainers, there are many possible uses of AI, including new opportunities for adapting learning content based on students' needs, new processes for assessment, analysing possible bottlenecks in learners' domain understanding and…


Digitalisation, Artificial Intelligence and Vocational Occupations and Skills

July 27th, 2020 by Graham Attwell


The Taccle AI project on Artificial Intelligence and Vocational Education and Training has published a preprint version of a paper which has been submitted for publication to the VET network of the European Educational Research Association.

The paper, entitled Digitalisation, Artificial Intelligence and Vocational Occupations and Skills: What are the needs for training Teachers and Trainers, seeks to explore the impact AI and automation have on vocational occupations and skills and to examine what that means for teachers and trainers in VET. It looks at how AI can be used to shape learning and teaching processes, through, for example, digital assistants which support teachers. It also focuses on the transformative power of AI, which promises profound changes in employment and work tasks. The paper is based on research being undertaken through the EU Erasmus+ Taccle AI project. It presents the results of an extensive literature review and of interviews with VET managers, teachers and AI experts in five countries. It asks whether machines will complement or replace humans in the workplace before going on to look at developments in using AI for teaching and learning in VET. Finally, it proposes extensions to the EU DigCompEdu Framework for training teachers and trainers in using technology.

The paper can be downloaded here.
