Archive for the ‘technology’ Category

People and Machines

March 16th, 2021 by Graham Attwell

One of the results of the rapid deployment of Artificial Intelligence is an increased focus on the relation between humans and machines.

The Economist has published a podcast of an interview with the Nobel prize-winning author Kazuo Ishiguro, asking what his new book “Klara and the Sun” reveals about people’s relationship with machines. They say “he argues that people’s relationship to machines will eventually change the way they think of themselves as individuals.”

And the University of Westminster Press has published a new book, Marx and Digital Machines: Alienation, Technology, Capitalism, by Mike Healy. The book, they say, explores the fundamental contradiction at the heart of the digital environment: “technology offers all manner of promises, yet habitually fails to deliver. This failure often arises from numerous problems: the proficiency of the technology or end-user, policy failure at various levels, or a combination of these. Solutions such as better technology and more effective end-user education are often put into place to solve these failures.”

Mike Healy argues that such approaches are inherently faulty, drawing on qualitative research informed by Marx’s theory of alienation.

The book, which is distributed under the terms of the Creative Commons Attribution + Noncommercial + NoDerivatives 4.0 license with copyright retained by the author(s), is available for sale in paperback format or for free download in a variety of digital formats.

What is Machine Learning?

January 20th, 2021 by Graham Attwell


I am copying this from Stephen Downes’ ever informative OLDaily newsletter digest. It features an article entitled What is machine learning? – A beginner’s guide posted on the FutureLearn website.

This is quite a good introduction to machine learning. If you don’t know what it is and would like a quick no-nonsense introduction, this is it. Machine learning is depicted “as the science of getting computers to learn automatically.” It is a type of artificial intelligence, meaning essentially that machine learning systems are software systems that “operate in an intentional, intelligent, and adaptive manner.” The third point is the most important, because it means they can change their programming based on experience and changing circumstances. The article describes some types of machine learning systems and outlines some applications in the field. It’s FutureLearn, so at the end it recommends some course tracks for people interested in making this a career, and, just to dangle a carrot, the web page lets you know the median base salary and number of job openings for the programme in question.
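The phrase “learn automatically” is easier to grasp with a concrete example. Here is a minimal sketch (in Python, using the scikit-learn library; the data are invented for illustration): the model is not given a rule for passing exams, it infers one from labelled examples.

```python
# Minimal supervised learning: the model infers a decision rule from
# labelled examples rather than being explicitly programmed with one.
# The data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each example: [hours_studied, classes_attended]; label 1 = passed.
X = [[2, 3], [1, 1], [8, 9], [7, 8], [3, 2], [9, 10]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # "learning" = fitting parameters to the examples

print(model.predict([[6, 7]]))        # prediction for an unseen student
print(model.predict_proba([[6, 7]]))  # with estimated probabilities
```

Retraining on new data changes the fitted parameters, which is the sense in which such systems “change their programming based on experience.”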

AI and Edge computing

January 7th, 2021 by Graham Attwell

A recent MIT Technology Review Insights report covers a survey of 301 business and technology leaders about their current use, and planned future use, of Artificial Intelligence. The survey confirms that the deployment of AI is increasing, not only in large companies but also in SMEs. It also points to the emergence of what is known as edge computing: using a variety of devices closer to the point of use than cloud computing allows, capable of near real-time processing.

Of those surveyed, 38% report that their AI investment plans are unchanged as a result of the pandemic, and 32% indicate the crisis has accelerated their plans. The percentages of unchanged and revved-up AI plans are greater at organizations that already had an AI strategy in place.

AI is not a new addition to the corporate technology arsenal: 62% of survey respondents are using AI technologies. Respondents from larger organizations (those with more than $500 million in annual revenue) have higher deployment rates, at nearly 80%. Small organizations (with less than $5 million in revenue) are at 58%, slightly below the average.

Cloud-based AI also allows organizations to operate in an ecosystem of collaborators that includes application developers, analytics companies, and customers themselves.

But while the cloud provides significant AI-fueled advantages for organizations, an increasing number of applications have to make use of the infrastructural capabilities of the “edge,” the intermediary computing layer between the cloud and the devices that need computational power.
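One way to picture that intermediary layer is as a routing decision: keep inference close to the device when near real-time responses are needed, and defer to the cloud when a heavier model is worth the round trip. The sketch below is purely illustrative; local_model and cloud_model are hypothetical stand-ins for an on-device model and a cloud-hosted one, not any particular product’s API.

```python
# Illustrative only: 'local_model' and 'cloud_model' are hypothetical
# stand-ins for a small on-device model and a large cloud-hosted one.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def local_model(frame: bytes) -> Prediction:
    # A small quantised model running on the edge device (simulated).
    return Prediction("anomaly", 0.72)

def cloud_model(frame: bytes) -> Prediction:
    # A larger model reached over the network (simulated).
    return Prediction("anomaly", 0.97)

def classify(frame: bytes, latency_budget_ms: float) -> Prediction:
    # Near-real-time paths stay on the edge; otherwise use the cloud.
    if latency_budget_ms < 50:
        return local_model(frame)
    return cloud_model(frame)

print(classify(b"...", latency_budget_ms=20))   # handled at the edge
print(classify(b"...", latency_budget_ms=500))  # worth the cloud round trip
```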

Asked to rank the opportunities that AI provides them, respondents identify AI-enabled insight as the most important. Real-time decision-making is the biggest opportunity, regardless of an organization’s size: AI’s use in fast, effective decision-making is the top-ranked priority for large and small organizations alike.

For small ones, though, it is tied to the need to use AI as a competitive differentiator.

Again, the need for real-time data or predictive tools is a requirement that could drive demand for edge-based AI resources.

Survey respondents indicate that AI is being used to enhance current and future performance and operational efficiencies: research and development is, by a large margin, the most common current use of AI, with 53% of respondents integrating AI-based analytics into their product and service development processes. Anomaly detection and cybersecurity are the next-most-deployed AI applications.

Large organizations have additional priorities: 54% report heavy use of robotic process automation to streamline business processes traditionally done by humans, and 41% use AI in sales and business forecasting. For organizations with AI strategies, 40% rely on robotic process automation, and 42% use AI to estimate future sales.

A focus on both discrete skills and broader human skills

June 17th, 2020 by Graham Attwell

There is an interesting article by Allison Dulin Salisbury in Forbes magazine this morning. The article says that the Covid-19 pandemic is speeding the digital transformation of business, driven by AI and automation, and quotes MIT economist David Autor calling it an “automation forcing event.”

The combined forces of automation and dramatically altered demand are giving rise to a labor market “riptide” in which some sectors of the economy are seeing mass layoffs while others, like healthcare and tech, are still desperate for talent. Against that backdrop, education and training systems are underfunded and ill equipped to meet the demands of a more complex labor market and the shifting demographics of students.

And from the evidence of the last recession, it appears likely that lower-paid and lower-skilled workers will have the jobs most at risk.

However, if the analysis of the problem is correct, the answers proposed leave room for doubt. The article says: “The past few years have seen a flourishing of high-quality, low-cost training and education programs, many of them online. They are laser-focused on the needs of working learners.” Maybe so in the USA, but in Europe I have yet to see the emergence of flourishing, laser-focused online learning programmes. And there is plenty of evidence to suggest that online programmes such as MOOCs have more often been focused on the needs of skilled and higher-paid workers.

Neither is the appeal to stakeholder capitalism, and for the involvement of employers in the provision of training, likely to result in big change. More interesting is the call for “investment in practices that help workers identify what career they want before they start an education program,” and to “align training to the competencies required to land a good first job.” This, the article says, “means a focus on both discrete skills and broader ‘human skills,’ like communication and problem-solving, that actually become more marketable amid automation.”

Despite reservations, the argument is moving in the right direction. Put simply, the coronavirus has on its own caused massive unemployment, with the effect likely to be magnified by a speed-up in automation and the use of AI. This requires the development of large-scale training programmes, both for unemployed young people and for lower-skilled workers whose jobs are threatened. Fairly obviously, the use of technology can help in providing such programmes. Nesta in the UK is already looking at developments in this direction. It will be interesting to see what national governments and the European Union will do now to boost training as a response to the crisis.

Stray thoughts on teaching and learning in the COVID 19 lockdowns

June 10th, 2020 by Graham Attwell

I must be one of the few ed-tech bloggers who has not published anything on the move to online learning during the COVID-19 lockdowns. Not that I haven’t thought about it (I even started several posts). However, it is difficult to gauge an overall impression of what has happened and what is happening (although I am sure there will be many, many research papers and reports in the future), and from talking with people in perhaps six or seven countries over the past few weeks, there seem to be contradictory messages.

So, instead of trying to write anything coherent, here are a few stray and necessarily impressionistic thoughts (in no particular order), which I will update in the future.

Firstly, many teachers seem to have coped remarkably well in the great move to online. Perhaps we have overstated the lack of training for teachers. Some I talked to were stressed, but all seemed to cope in one way or another.

However, digital exclusion has reared its ugly head in a big way. Lack of bandwidth and lack of computers have prevented many from participating in online learning. Surely it is time that internet connectivity is recognised as key public infrastructure (as the UK Labour Party proposed in their 2019 manifesto). It also needs recognising that access to a computer should be a key provision of schools and education services. Access to space in which to learn is another issue, and not so easy to solve in a lockdown. But after restrictions are lifted, it needs remembering that libraries can play an important role for those whose living space is not conducive to learning.

One thing that has become very clear is the economic and social role schools play in providing childcare. Hence the pressure from the UK government to open primary schools, despite it being blatantly obvious that such a move was ill prepared and premature. I wonder whether the provision of childcare should be a wider service than one provided by education alone. And maybe it has become such a big issue in the UK because children start school at a very young age (compared to other countries in Europe) and also have a relatively long school day.

There is a big debate going on in most countries about what universities will look like in the autumn. I think this raises wider questions about the whole purpose and role of universities in society. At least in theory, it should be possible for universities to continue with online learning. But teaching and learning is just one role for universities. With the move to mass higher education in many countries, going to university has become a rite of passage. Hence, in the UK, the weight attached to the student satisfaction survey and the emphasis placed on social activities, sports and so on. And this is a great deal of what the students are paying for. Fees in UK universities are now £9,000 a year. The feeling is that many prospective students will not pay that without the full face-to-face student experience (although I doubt many will miss the full face-to-face lectures). I also wonder how many younger people will start to realise that it is possible to get an extremely good online education for free; one feature of the lockdown has been the blossoming of online seminars, symposia, conferences and, to a lesser extent, workshops.

Which brings me to the vexed subject of pedagogy. It is easy to say that, despite the full affordances of Zoom (and whoever would have predicted its popularity and use as an educational technology platform), all we have seen is lectures being delivered online: online teaching, not online learning. I am not sure this is a good dichotomy to make. Of course, a sudden, unplanned, forced rush to online provision is probably not the greatest way to do things. But there seems plenty of anecdotal evidence that ed-tech support facilitated some excellent online provision (mention also needs to be made of the many resources for teachers made available over the internet). Of course we need to stop thinking about how we can reproduce traditional face-to-face approaches to teaching and learning online, and start designing for creative online learning. Hopefully there is enough impetus now for this to happen.

More thoughts to follow in another post; hopefully I can get some coherent ideas out of all of this.

Ethics in AI and Education

June 10th, 2020 by Graham Attwell

The news that IBM is pulling out of the facial recognition market and is calling for “a national dialogue” on the technology’s use in law enforcement has highlighted the ethical concerns around AI powered technology. But the issue is not just confined to policing: it is also a growing concern in education. This post is based on a section in a forthcoming publication on the use of Artificial Intelligence in Vocational Education and Training, produced by the Taccle AI Erasmus Plus project.

Much concern has been expressed over the dangers and ethics of Artificial Intelligence both in general and specifically in education.

The European Commission (2020) has raised the following general issues (Naughton, 2020):

  • human agency and oversight
  • privacy and governance
  • diversity
  • non-discrimination and fairness
  • societal wellbeing
  • accountability
  • transparency
  • trustworthiness

However, John Naughton (2020), a technology journalist from the UK Open University, says “the discourse is invariably three parts generalities, two parts virtue-signalling.” He points to the work of David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society, who in January 2020 published an article in the Harvard Data Science Review on the question “Should we trust algorithms?”, arguing that it is trustworthiness rather than trust we should be focusing on. He suggests a set of seven questions one should ask about any algorithm:

  1. Is it any good when tried in new parts of the real world?
  2. Would something simpler, and more transparent and robust, be just as good?
  3. Could I explain how it works (in general) to anyone who is interested?
  4. Could I explain to an individual how it reached its conclusion in their particular case?
  5. Does it know when it is on shaky ground, and can it acknowledge uncertainty?
  6. Do people use it appropriately, with the right level of scepticism?
  7. Does it actually help in practice?
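Question 5 is perhaps the most directly translatable into code. A minimal sketch of an algorithm that can “acknowledge uncertainty” (Python; the data and the 0.8 threshold are invented for illustration) is a classifier that abstains when its estimated probability is too close to chance:

```python
# A classifier that abstains rather than guessing when its estimated
# probability is too low. Data and threshold are illustrative only.
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [7], [8], [9]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

def predict_or_abstain(x, threshold=0.8):
    proba = model.predict_proba([x])[0]
    if proba.max() < threshold:
        return "don't know"          # acknowledge the shaky ground
    return int(proba.argmax())

print(predict_or_abstain([1]))   # far from the boundary: confident 0
print(predict_or_abstain([5]))   # near the boundary: "don't know"
```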

Many of the concerns around the use of AI in education have already been aired in research around Learning Analytics. These include issues of bias, transparency and data ownership. They also include problematic questions around the surveillance of students, and around whether it is ethical to tell students that they are falling behind, or indeed ahead, in their work.

The EU working group on AI in Education has identified the following issues:

  • AI can easily scale up and automate bad pedagogical practices
  • AI may generate stereotyped models of student profiles and behaviours, and automatic grading
  • Need for big data on student learning (privacy, security and ownership of data are crucial)
  • Skills for AI and implications of AI for systems requirements
  • Need for policy makers to understand the basics of ethical AI.

Furthermore, it has been noted that AI for education is a spillover from other areas and not purpose built for education. Experts tend to be concentrated in the private sector and may not be sufficiently aware of the requirements in the education sector.

A further and even more troubling concern is the increasing influence and lobbying of large, often multinational, technology companies who are attempting to ‘disrupt’ public education systems. Audrey Watters (2019), who is publishing a book on the history of “teaching machines”, says her concern “is not that ‘artificial intelligence’ will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans — intellectually, emotionally, occupationally — will be reduced to machines.” “Perhaps nothing,” she says, “has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.” She quotes the historian Robert Gordon, who asserts that new technologies are incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not, despite all the stories we hear, necessarily restructure our world.

There has been considerable debate and unease around the AI based “Smart Classroom Behaviour Management System” in use in schools in China since 2017. The system uses technology to monitor students’ facial expressions, scanning learners every 30 seconds and determining if they are happy, confused, angry, surprised, fearful or disgusted. It provides real time feedback to teachers about what emotions learners are experiencing. Facial monitoring systems are also being used in the USA. Some commentators have likened these systems to digital surveillance.

A publication entitled “Systematic review of research on artificial intelligence applications in higher education – where are the educators?” (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), which reviewed 146 of 2,656 identified publications, concluded that there was a lack of critical reflection on risks and challenges, a weak connection to pedagogical theories, and a need for an exploration of ethical and educational approaches. Martin Weller (2020) says educational technologists are increasingly questioning the impacts of technology on learner and scholarly practice, as well as the long-term implications for education in general. Neil Selwyn (2014) says “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique.”

Martin Weller (2020) is concerned at “the invasive uses of technologies, many of which are co-opted into education, which highlights the importance of developing an understanding of how data is used.”

Audrey Watters (2018) has compiled a list of the nefarious social and political uses or connections of educational technology, whether technology designed specifically for education or co-opted for educational purposes. She draws particular attention to the use of AI to de-professionalise teachers. And Mike Caulfield (2016), while acknowledging the positive impact of the web and related technologies, argues that “to do justice to the possibilities means we must take the downsides of these environments seriously and address them.”

References

Caulfield, M. (2016). Announcing the digital polarization initiative, an open pedagogy project [Blog post]. Hapgood. Retrieved from https://hapgood.us/2016/12/07/announcing-the-digital-polarization-initiative-an-open-pedagogy-joint/

European Commission (2020). White Paper on Artificial Intelligence – A European approach to excellence and trust. Luxembourg: Publications Office of the European Union.

Gordon, R. J. (2016). The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War. Princeton University Press.

Naughton, J. (2020). The real test of an AI machine is when it can admit to not knowing something. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/feb/22/test-of-ai-is-when-machine-can-admit-to-not-knowing-something

Spiegelhalter, D. (2020). Should we trust algorithms? Harvard Data Science Review. Retrieved from https://hdsr.mitpress.mit.edu/pub/56lnenzj

Watters, A. (2019). Ed-Tech Agitprop. Hack Education. Retrieved from http://hackeducation.com/2019/11/28/ed-tech-agitprop

Weller, M. (2020). 25 Years of Ed Tech. Athabasca University: AU Press.

CareerChat Bot

May 7th, 2020 by Graham Attwell

Pontydysgu is very happy to be part of a consortium, led by DMH Associates, selected as a finalist for the CareerTech Challenge Prize!

The project is called CareerChat, and the ‘pitch’ video above explains the ideas behind the project. CareerChat is a chatbot providing a personalised, guided career journey experience for working adults aged 24 to 65 in low-skilled jobs in three major cities: Bristol, Derby and Newcastle. It offers informed, friendly and flexible, high-quality local contextual and national labour market information, including specific course/training opportunities and job vacancies, to support adults within ‘at risk’ sectors and occupations.

CareerChat incorporates advanced AI technologies, database applications and Natural Language Processing, and can be accessed on computers, mobile phones and other devices. It allows users to reflect, explore, find out about and identify pathways, and to access new training and work opportunities.
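The project’s actual code is not public here, but the general shape of such a guidance chatbot, matching a user’s message to an intent and filling the reply from local labour market information, can be sketched in a few lines. Everything below (the intents, the data, the wording) is invented for illustration and is not CareerChat’s implementation, which uses far richer NLP.

```python
# A toy intent-matching loop, illustrative only: a real guidance
# chatbot's NLP pipeline is far richer, but the shape is similar.
INTENTS = {
    "retrain": ["course", "training", "learn", "qualification"],
    "vacancies": ["job", "vacancy", "hiring", "work"],
}

# Stand-in for a local labour market information lookup.
LMI = {
    ("retrain", "Bristol"): "Local colleges list digital skills courses.",
    ("vacancies", "Bristol"): "Health and logistics employers are hiring.",
}

def reply(message: str, city: str) -> str:
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return LMI.get((intent, city), "Let me look into that for you.")
    return "Could you tell me a bit more about what you're looking for?"

print(reply("I want training in new skills", "Bristol"))
print(reply("Are there any job openings near me?", "Bristol"))
```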

Nesta is delivering the CareerTech Challenge in partnership with the Department for Education as part of their National Retraining Scheme:

  • Nesta research suggests that more than six million people in the UK are currently employed in occupations that are likely to radically change or entirely disappear by 2030 due to automation, population ageing, urbanisation and the rise of the green economy.
  • In the nearer-term, the coronavirus crisis has intensified the importance of this problem. Recent warnings suggest that a prolonged lockdown could result in 6.5 million people losing their jobs. [1] Of these workers, nearly 80% do not have a university degree. [2]
  • The solutions being funded through the CareerTech Challenge are designed to support people who will be hit the hardest by an insecure job market over the coming years. This includes those without a degree, and working in sectors such as retail, manufacturing, construction and transport.

You can find out more information about the programme here: https://www.nesta.org.uk/project/careertech-challenge/, and email Graham Attwell directly if you would like to know more about the CareerChat project.

The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system

March 2nd, 2020 by Graham Attwell


I am speaking at a seminar on Vocational Education and Training’s Role in Business Innovation at the Ramon Areces Foundation in Madrid tomorrow. The title of my presentation is ‘The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system in Valencia’, which is really much too long for a title, and I have much too much to say for my allotted 20 minutes.

Anyway, this is what I told them I was going to talk about:
The presentation looks at the future of work, linked to the challenges of Artificial Intelligence, automation and the new green economy. It considers and discusses the various predictions on future jobs and occupations from bodies including CEDEFOP, the OECD and the World Bank. It concludes that although some new jobs will be created and some occupations will be displaced by new technologies, the greatest impact will be in terms of the tasks performed within jobs. It further discusses future skills needs, including the need for higher-level cognitive competences as well as the demand for so-called lower-skilled work in services and the caring professions.
It considers the significance of these changes for vocational education and training, including the need for new curricula, and increased provision of lifelong learning and retraining for those affected by the changing labour market.
Artificial Intelligence may also play an important role in the organisation and delivery of vocational education and training. This includes the use of technologies such as machine learning and Natural Language Processing for learner engagement, recruitment and support, Learning Analytics and ‘nudge learning’ through a Learning Record Store, and the creation and delivery of learning content. It provides examples such as the use of chatbots in vocational education and training schools and colleges. It is suggested that the use of AI technologies can allow a move from summative assessment to formative assessment. The use of these technologies will reduce the administrative load for teachers and trainers and allow them to focus on coaching, particularly benefiting those at the top and lower end of the student cohort.
To benefit from this potential will require new and enhanced continuing professional development for teachers and trainers. Finally, the presentation considers what this signifies for the future of the Dual VET system in Spain, looking at findings from both European projects and research undertaken into Dual training in Valencia.
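As an aside, the Learning Record Store mentioned above is typically an xAPI (Experience API) store: learning activity arrives as JSON “statements” of the form actor-verb-object, and nudges can then be driven by queries over those records. A minimal sketch of sending one statement follows; the endpoint, credentials and activity IDs are placeholders, not a real service.

```python
# Minimal xAPI statement POST to a Learning Record Store.
# The endpoint, credentials and IDs below are placeholders.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/welding-module-3",
        "definition": {"name": {"en-GB": "Welding module 3"}},
    },
}

response = requests.post(
    "https://lrs.example.com/xAPI/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),           # placeholder credentials
)
print(response.status_code)  # 200 with statement IDs on success
```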
And I will report back here after the event.

Artificial Intelligence, ethics and education

January 2nd, 2020 by Graham Attwell

I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which constantly appears is the issue of ethics. Apart from UK universities’ requirements for ethical approval of research projects (more about this in a future post), the issue of ethics rarely appears in education as a focus for debate. Yet it is all over the discussion of AI and how we can or should use it in education.

There is an interesting (and long) blog post, ‘The Invention of “Ethical AI”’, recently published by Rodrigo Ochigame on the Intercept website.

Ochigame worked as a graduate student researcher in the AI ethics group at the MIT Media Lab led by Joichi Ito, the Lab’s former director. He left in August last year, immediately after Ito published his initial “apology” regarding his ties to Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito’s outside venture funds.

The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read.

The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics

The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

This included working on

the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).

it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.

I am not opposed to the emphasis being placed on ethics in AI and education, and the debate and practices around Learning Analytics show the need to think clearly about how we use technology. But we have to be careful, firstly, that we do not just end up paying lip service to ethics and, secondly, that academic research does not become a cover for the practices of the ed-tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.

 

Is this the right way to use machine learning in education?

September 2nd, 2019 by Graham Attwell

An article, ‘Predicting Employment through Machine Learning‘ by Linsey S. Hugo on the National Association of Colleges and Employers website, confirms some of my worries about the use of machine learning in education.

The article presents a scenario which, it is said, “illustrates the role that machine learning, a form of predictive analytics, can play in supporting student career outcomes.” It is based on a recent study at Ohio University (OHIO) which leveraged machine learning to forecast successful job offers before graduation with 87 percent accuracy. “The study used data from first-destination surveys and registrar reports for undergraduate business school graduates from the 2016-2017 and 2017-2018 academic years. The study included data from 846 students for which outcomes were known; these data were then used in predicting outcomes for 212 students.”

A key step in the project was “identifying employability signals”, based on the idea that “it is well-recognized that employers desire particular skills from undergraduate students, such as a strong work ethic, critical thinking, adept communication, and teamwork.” These signals were adapted as proxies for the “well recognised” skills.

The data were used to develop numerous machine learning models, from commonly recognized methodologies, such as logistic regression, to advanced, non-linear models, such as a support-vector machine. Following the development of the models, new student data points were added to determine if the model could predict those students’ employment status at graduation. It correctly predicted that 107 students would be employed at graduation and 78 students would not be employed at graduation—185 correct predictions out of 212 student records, an 87 percent accuracy rate.

Additionally, this research assessed sensitivity, identifying which input variables were most predictive. In this study, internships were the most predictive variable, followed by specific majors and then co-curricular activities.
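The workflow described, training on students with known outcomes, scoring a held-out group, then inspecting which inputs matter most, is straightforward to sketch. The code below uses synthetic data and invented feature names; it reproduces the shape of the OHIO study, not its data, models or results.

```python
# Shape of the described workflow on synthetic data: train on known
# outcomes, predict for new students, then rank the input variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 846 + 212
# Invented proxies: internship (0/1), major group (0-3), activities count.
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(0, 4, n),
    rng.integers(0, 5, n),
])
# Synthetic outcome, loosely driven by the internship flag.
y = (X[:, 0] + rng.random(n) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=212, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# Coefficient magnitudes give a rough sensitivity ranking.
for name, coef in zip(["internship", "major", "activities"], model.coef_[0]):
    print(name, round(coef, 2))
```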

As in many learning analytics applications, the data could then be used as a basis for interventions to support students’ employability on graduation. If they had not already undertaken a summer internship, they could be supported in doing so, and so on.

Now, on the one hand, this is an impressive development of learning analytics to support overworked careers advisers and to improve the chances of graduates finding a job. Also, the detailed testing of different machine learning and AI approaches is both exemplary and unusually well documented.

However, I still find myself uneasy with the project. Firstly, it reduces the purpose of degree-level education to employment. Secondly, it accepts that employers call the shots, through proxies based on unquestioned and unchallenged “well recognised” skills demanded by employers. It may be “well recognised” that employers are biased against certain social groups or have a preference for upper-class students: should this be incorporated in the algorithm? Thirdly, it places responsibility for employability on individual students, rather than looking more closely at societal factors in employment. Participation in unpaid internships is also an increasing factor in employment in the UK: fairly obviously, the financial ability to undertake such unpaid work is the preserve of the more wealthy. And suppose that all students were assisted in achieving the most predictive input variables: does that mean they would all achieve employment on graduation? Graduate unemployment is not only predicated on individual student achievement (whatever variables are taken into account) but also on the availability of graduate jobs. In the UK many graduates are employed in what are classified as non-graduate jobs (the classification system is something I will return to in another blog post). But is this because they fail to develop their employability signals, or simply because there are not enough jobs?

Having said all this, I remain optimistic about the role of learning analytics and AI in education and in careers guidance. But there are many issues to be discussed and pitfalls to overcome.

 
