Archive for the ‘technology’ Category

A focus on both discrete skills and broader human skills

June 17th, 2020 by Graham Attwell

There is an interesting article by Allison Dulin Salisbury in Forbes magazine this morning. The article says that the Covid-19 pandemic is speeding the digital transformation of business, driven by AI and automation, and quotes MIT economist David Autor calling it an “automation forcing event.”

The combined forces of automation and dramatically altered demand are giving rise to a labor market “riptide” in which some sectors of the economy are seeing mass layoffs while others, like healthcare and tech, are still desperate for talent. Against that backdrop, education and training systems are underfunded and ill equipped to meet the demands of a more complex labor market and the shifting demographics of students.

And from the evidence of the last recession, it appears likely that lower paid and lower skilled workers will be those whose jobs are most at risk.

However, if the analysis of the problem is correct, the answers proposed leave room for doubt. The article says: “The past few years have seen a flourishing of high-quality, low-cost training and education programs, many of them online. They are laser-focused on the needs of working learners.” Maybe so in the USA, but in Europe I have yet to see the emergence of flourishing, laser-focused online learning programmes. And there is plenty of evidence to suggest that online programmes such as MOOCs have more often been focused on the needs of skilled and higher paid workers.

Neither is the appeal to stakeholder capitalism and for the involvement of employers in the provision of training likely to result in big change. More interesting is the call for “investment in practices that help workers identify what career they want before they start an education program,” and to “align training to the competencies required to land a good first job.” This, the article says “means a focus on both discrete skills and broader “human skills,” like communication and problem-solving, that actually become more marketable amid automation.”

Despite reservations, the argument is moving in the right direction. Put simply, the coronavirus has on its own caused massive unemployment, with the effect likely to be magnified by a speed-up in automation and the use of AI. This requires the development of large-scale training programmes, both for unemployed young people and for lower skilled workers whose jobs are threatened. Fairly obviously, the use of technology can help in providing such programmes. Nesta in the UK is already looking at developments in this direction. It will be interesting to see what national governments and the European Union will do now to boost training as a response to the crisis.


Stray thoughts on teaching and learning in the COVID-19 lockdowns

June 10th, 2020 by Graham Attwell

I must be one of the few ed tech bloggers who has not published anything on the move to online during the COVID-19 lockdowns. Not that I haven’t thought about it (I even started several posts). However, it is difficult to gauge an overall impression of what has happened and what is happening (although I am sure there will be many, many research papers and reports in the future), and from talking with people in perhaps six or seven countries in the past few weeks, there seem to be contradictory messages.

So, instead of trying to write anything coherent here are a few stray and necessarily impressionistic thoughts (in no particular order) which I will update in the future.

Firstly, many teachers seem to have coped remarkably well in the great move to online. Perhaps we have overstated the lack of training for teachers. Some I talked to were stressed, but all seemed to cope in one way or another.

However, digital exclusion has reared its ugly head in a big way. Lack of bandwidth and lack of computers have prevented many from participating in online learning. Surely it is time now that internet connectivity was recognised as key public infrastructure (as the UK Labour Party proposed in their 2019 manifesto). It also needs recognising that access to a computer should be a key provision of schools and education services. Access to space in which to learn is another issue – and not so easy to solve in a lockdown. But after restrictions are lifted it needs remembering that libraries can play an important role for those whose living space is not conducive to learning.

One thing that has become very clear is the economic and social role schools play in providing childcare. Hence the pressure from the UK government to open primary schools despite it being blatantly obvious that such a move was ill prepared and premature. I wonder whether the provision of childcare should be a wider service than one provided by education alone. And maybe it has become such a big issue in the UK because children start school at a very young age (compared to other countries in Europe) and also have a relatively long school day.

There is a big debate going on in most countries about what universities will look like in the autumn. I think this raises wider questions about the whole purpose and role of universities in society. At least in theory, it should be possible for universities to continue with online learning. But teaching and learning is just one role for universities. With the move to mass higher education in many countries, going to university has become a rite of passage. Hence, in the UK, the weight attached to the student satisfaction survey and the emphasis placed on social activities, sports and so on. And this is a great deal of what students are paying for. Fees in UK universities are now £9,000 a year. The feeling is that many prospective students will not pay that without the full face to face student experience (although I doubt many will miss the full face to face lectures). I also wonder how many younger people will start to realise that it is possible to get an extremely good online education for free – one feature of the lockdown has been the blossoming of online seminars, symposia, conferences and, to a lesser extent, workshops.

Which brings me to the vexed subject of pedagogy. It is easy to say that, despite the full affordances of Zoom (and whoever would have predicted its popularity and use as an educational technology platform), all we have seen is lectures being delivered online: online teaching, not online learning. I am not sure this is a good dichotomy to make. Of course a sudden, unplanned, forced rush to online provision is probably not the greatest way to do things. But there seems plenty of anecdotal evidence that ed-tech support facilitated some excellent online provision (mention also needs to be made of the many resources for teachers made available over the internet). Of course we need to stop thinking about how we can reproduce traditional face to face approaches to teaching and learning online and start designing for creative online learning. But hopefully there is enough impetus now for this to happen.

More thoughts to follow in another post – hopefully I can get some coherent ideas out of all of this.


Ethics in AI and Education

June 10th, 2020 by Graham Attwell

The news that IBM is pulling out of the facial recognition market and is calling for “a national dialogue” on the technology’s use in law enforcement has highlighted the ethical concerns around AI-powered technology. But the issue is not just confined to policing: it is also a growing concern in education. This post is based on a section in a forthcoming publication on the use of Artificial Intelligence in Vocational Education and Training, produced by the Taccle AI Erasmus Plus project.

Much concern has been expressed over the dangers and ethics of Artificial Intelligence both in general and specifically in education.

The European Commission (2020) has raised the following general issues (Naughton, 2020):

  • human agency and oversight
  • privacy and governance
  • diversity
  • non-discrimination and fairness
  • societal wellbeing
  • accountability
  • transparency
  • trustworthiness

However, John Naughton (2020), a technology journalist from the UK Open University, says “the discourse is invariably three parts generalities, two parts virtue-signalling.” He points to the work of David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society, who in January 2020 published an article in the Harvard Data Science Review on the question “Should we trust algorithms?”, arguing that it is trustworthiness rather than trust we should be focusing on. He suggests a set of seven questions one should ask about any algorithm:

  1. Is it any good when tried in new parts of the real world?
  2. Would something simpler, and more transparent and robust, be just as good?
  3. Could I explain how it works (in general) to anyone who is interested?
  4. Could I explain to an individual how it reached its conclusion in their particular case?
  5. Does it know when it is on shaky ground, and can it acknowledge uncertainty?
  6. Do people use it appropriately, with the right level of scepticism?
  7. Does it actually help in practice?

Many of the concerns around the use of AI in education have already been aired in research around Learning Analytics. These include issues of bias, transparency and data ownership. They also include the surveillance of students, and problematic questions around whether or not it is ethical to tell students that they are falling behind – or indeed ahead – in their work.

The EU working group on AI in Education has identified the following issues:

  • AI can easily scale up and automate bad pedagogical practices
  • AI may generate stereotyped models of student profiles and behaviours, and automatic grading
  • Need for big data on student learning (privacy, security and ownership of data are crucial)
  • Skills for AI and implications of AI for systems requirements
  • Need for policy makers to understand the basics of ethical AI.

Furthermore, it has been noted that AI for education is a spillover from other areas and not purpose-built for education. Experts tend to be concentrated in the private sector and may not be sufficiently aware of the requirements of the education sector.

A further and even more troubling concern is the increasing influence and lobbying of large, often multinational, technology companies who are attempting to ‘disrupt’ public education systems. Audrey Watters (2019), who is publishing a book on the history of “teaching machines”, says her concern “is not that ‘artificial intelligence’ will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans — intellectually, emotionally, occupationally — will be reduced to machines.” “Perhaps nothing,” she says, “has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.” She quotes the historian Robert Gordon, who asserts that today’s new technologies are incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not — despite all the stories we hear — necessarily restructure our world.

There has been considerable debate and unease around the AI based “Smart Classroom Behaviour Management System” in use in schools in China since 2017. The system uses technology to monitor students’ facial expressions, scanning learners every 30 seconds and determining if they are happy, confused, angry, surprised, fearful or disgusted. It provides real time feedback to teachers about what emotions learners are experiencing. Facial monitoring systems are also being used in the USA. Some commentators have likened these systems to digital surveillance.

A publication entitled “Systematic review of research on artificial intelligence applications in higher education – where are the educators?” (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), which reviewed 146 of 2,656 identified publications, concluded that there was a lack of critical reflection on risks and challenges. Furthermore, there was a weak connection to pedagogical theories and a need for an exploration of ethical and educational approaches. Martin Weller (2020) says educational technologists are increasingly questioning the impacts of technology on learner and scholarly practice, as well as the long-term implications for education in general. Neil Selwyn (2014) says “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique.”

Martin Weller (2020) is concerned at “the invasive uses of technologies, many of which are co-opted into education, which highlights the importance of developing an understanding of how data is used.”

Audrey Watters (2018) has compiled a list of the nefarious social and political uses or connections of educational technology, whether technology designed specifically for education or co-opted for educational purposes. She draws particular attention to the use of AI to de-professionalise teachers. And Mike Caulfield (2016), while acknowledging the positive impact of the web and related technologies, argues that “to do justice to the possibilities means we must take the downsides of these environments seriously and address them.”

References

Caulfield, M. (2016). Announcing the digital polarization initiative, an open pedagogy project [Blog post]. Hapgood. Retrieved from https://hapgood.us/2016/12/07/announcing-the-digital-polarization-initiative-an-open-pedagogy-joint/

European Commission (2020). White Paper on Artificial Intelligence – A European approach to excellence and trust. Luxembourg: Publications Office of the European Union.

Gordon, R. J. (2016). The Rise and Fall of American Growth – The U.S. Standard of Living Since the Civil War. Princeton University Press.

Naughton, J. (2020). The real test of an AI machine is when it can admit to not knowing something. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/feb/22/test-of-ai-is-when-machine-can-admit-to-not-knowing-something

Spiegelhalter, D. (2020). Should We Trust Algorithms? Harvard Data Science Review. Retrieved 27 February 2020 from https://hdsr.mitpress.mit.edu/pub/56lnenzj

Watters, A. (2019). Ed-Tech Agitprop. Hack Education. Retrieved 27 February 2020 from http://hackeducation.com/2019/11/28/ed-tech-agitprop

Weller, M. (2020). 25 Years of Ed Tech. Athabasca University: AU Press.


CareerChat Bot

May 7th, 2020 by Graham Attwell

Pontydysgu is very happy to be part of a consortium, led by DMH Associates, selected as a finalist for the CareerTech Challenge Prize!

The project is called CareerChat and the ‘pitch’ video above explains the ideas behind the project. CareerChat is a chatbot providing a personalised, guided career journey experience for working adults aged 24 to 65 in low skilled jobs in three major cities: Bristol, Derby and Newcastle. It offers informed, friendly and flexible high-quality local, contextual and national labour market information, including specific course and training opportunities and job vacancies, to support adults within ‘at risk’ sectors and occupations.

CareerChat incorporates advanced AI technologies, database applications and Natural Language Processing and can be accessed on computers, mobile phones and devices. It allows users to reflect, explore, find out and identify pathways and access to new training and work opportunities.
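For the technically curious, here is a minimal sketch of the basic NLP pattern such a chatbot rests on: map a free-text message onto the nearest known ‘intent’ and return a guided reply. To be clear, this is not the actual CareerChat implementation, whose internals are not public – the intents, example utterances and replies below are invented purely for illustration.

```python
# A toy intent-matching chatbot: TF-IDF vectors plus cosine similarity.
# Everything here (intents, utterances, replies) is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

REPLIES = {
    "find_course": "Here are local courses that match your interests...",
    "job_vacancies": "These vacancies near you are recruiting now...",
    "career_change": "Let's look at roles your current skills transfer to...",
}
EXAMPLES = {
    "find_course": ["I want to learn new skills", "what training is available"],
    "job_vacancies": ["show me jobs near me", "who is hiring in Bristol"],
    "career_change": ["my job is at risk", "I want to change career"],
}

# Flatten the example utterances, remembering which intent each belongs to.
corpus, labels = [], []
for intent, utterances in EXAMPLES.items():
    corpus.extend(utterances)
    labels.extend([intent] * len(utterances))

vectoriser = TfidfVectorizer().fit(corpus)
example_vectors = vectoriser.transform(corpus)

def reply(message: str) -> str:
    """Return the canned reply for the intent closest to the user's message."""
    scores = cosine_similarity(vectoriser.transform([message]), example_vectors)
    return REPLIES[labels[scores.argmax()]]

print(reply("are there any jobs going in Derby?"))  # -> job_vacancies reply
```

A production system would add dialogue state, fallback handling for unmatched messages and a properly trained language understanding model, but the match-intent-then-respond loop is the core of it.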

Nesta is delivering the CareerTech Challenge in partnership with the Department for Education as part of their National Retraining Scheme.

  • Nesta research suggests that more than six million people in the UK are currently employed in occupations that are likely to radically change or entirely disappear by 2030 due to automation, population aging, urbanisation and the rise of the green economy.
  • In the nearer-term, the coronavirus crisis has intensified the importance of this problem. Recent warnings suggest that a prolonged lockdown could result in 6.5 million people losing their jobs. [1] Of these workers, nearly 80% do not have a university degree. [2]
  • The solutions being funded through the CareerTech Challenge are designed to support people who will be hit the hardest by an insecure job market over the coming years. This includes those without a degree, and working in sectors such as retail, manufacturing, construction and transport.

You can find out more information about the programme here: https://www.nesta.org.uk/project/careertech-challenge/ and email Graham Attwell directly if you would like to know more about the CareerChat project.


The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system

March 2nd, 2020 by Graham Attwell


I am speaking at a seminar on Vocational Education and Training’s Role in Business Innovation at the Ramon Areces Foundation in Madrid tomorrow. The title of my presentation is ‘The future of work, Artificial Intelligence and automation: Innovation and the Dual Vocational Education and training system in Valencia’ which is really much too long for a title and I have much too much to say for my allotted 20 minutes.

Anyway, this is what I told them I was going to talk about:

The presentation looks at the future of work, linked to the challenges of Artificial Intelligence, automation and the new green economy. It considers and discusses the various predictions on future jobs and occupations from bodies including CEDEFOP, the OECD and the World Bank. It concludes that although new jobs will be created and some occupations will be displaced by new technologies, the greatest impact will be in terms of the tasks performed within jobs. It further discusses future skills needs, including the need for higher level cognitive competences as well as the demand for so-called lower skilled work in services and the caring professions.

It considers the significance of these changes for vocational education and training, including the need for new curricula and increased provision of lifelong learning and retraining for those affected by the changing labour market.

Artificial Intelligence may also play an important role in the organisation and delivery of vocational education and training. This includes the use of technologies such as machine learning and Natural Language Processing for learner engagement, recruitment and support, Learning Analytics and ‘nudge learning’ through a Learning Record Store, and the creation and delivery of learning content. It provides examples such as the use of chatbots in vocational education and training schools and colleges. It is suggested that the use of AI technologies can allow a move from summative assessment to formative assessment. The use of these technologies will reduce the administrative load for teachers and trainers and allow them to focus on coaching, particularly benefiting those at the top and lower end of the student cohort.

To benefit from this potential will require new and enhanced continuing professional development for teachers and trainers. Finally, the presentation considers what this signifies for the future of the Dual VET system in Spain, looking at findings from both European projects and research undertaken into Dual training in Valencia.

And I will report back here after the event.

Artificial Intelligence, ethics and education

January 2nd, 2020 by Graham Attwell

I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which is constantly appearing is the issue of ethics. Apart from UK universities’ requirements for ethical approval of research projects (more about this in a future post), the issue of ethics rarely appears in education as a focus for debate. Yet it is all over the discussion of AI and how we can or should use AI in education.

There is an interesting (and long) blog post – ‘The Invention of “Ethical AI”’ – recently published by Rodrigo Ochigame on the Intercept website.

Ochigame worked as a graduate student researcher in the AI ethics group at the Media Lab led by Joichi Ito, the former director of the MIT Media Lab. He left in August last year, immediately after Ito published his initial “apology” regarding his ties to Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito’s outside venture funds.

The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read.

The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics.

The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

This included working on

the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).

it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.

I am not opposed to the emphasis being placed on ethics in AI and education, and the debate and practices around Learning Analytics show the need to think clearly about how we use technology. But we have to be careful, firstly, that we do not just end up paying lip service to ethics and, secondly, that academic research does not become a cover for the practices of the ed tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.


Is this the right way to use machine learning in education?

September 2nd, 2019 by Graham Attwell

An article ‘Predicting Employment through Machine Learning‘ by Linsey S. Hugo on the National Association of Colleges and Employers website confirms some of my worries about the use of machine learning in education.

The article presents a scenario which, it is said, “illustrates the role that machine learning, a form of predictive analytics, can play in supporting student career outcomes.” It is based on a recent study at Ohio University (OHIO) which leveraged machine learning to forecast successful job offers before graduation with 87 percent accuracy. “The study used data from first-destination surveys and registrar reports for undergraduate business school graduates from the 2016-2017 and 2017-2018 academic years. The study included data from 846 students for which outcomes were known; these data were then used in predicting outcomes for 212 students.”

A key step in the project was “identifying employability signals”, based on the idea that “it is well-recognized that employers desire particular skills from undergraduate students, such as a strong work ethic, critical thinking, adept communication, and teamwork.” These signals were adapted as proxies for the “well recognised” skills.

The data were used to develop numerous machine learning models, from commonly recognized methodologies, such as logistic regression, to advanced, non-linear models, such as a support-vector machine. Following the development of the models, new student data points were added to determine if the model could predict those students’ employment status at graduation. It correctly predicted that 107 students would be employed at graduation and 78 students would not be employed at graduation—185 correct predictions out of 212 student records, an 87 percent accuracy rate.

Additionally, this research assessed sensitivity, identifying which input variables were most predictive. In this study, internships were the most predictive variable, followed by specific majors and then co-curricular activities.
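For readers who want to see the shape of such a pipeline, here is a minimal sketch using scikit-learn. It is not the study’s code: the feature names and synthetic data below are invented stand-ins for the study’s “employability signals”, but the steps – train on students with known outcomes, predict for held-out students, inspect which inputs carry most weight – mirror those described above.

```python
# Sketch of an employment-prediction pipeline on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 846 + 212  # roughly the study's known-outcome plus held-out counts

# Hypothetical proxies for "employability signals".
X = np.column_stack([
    rng.integers(0, 2, n),  # internship completed (0/1)
    rng.integers(0, 5, n),  # major, as a simple categorical code
    rng.poisson(2, n),      # number of co-curricular activities
])
# Synthetic ground truth in which internships dominate, echoing the
# study's finding that they were the most predictive variable.
logits = 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=212, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# A crude sensitivity check: larger absolute coefficients = more predictive.
for name, coef in zip(["internship", "major", "co-curricular"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```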

As in many learning analytics applications, the data could then be used as a basis for interventions to support students’ employability on graduation. If they had not already undertaken a summer internship then they could be supported in doing so, and so on.

Now on the one hand this is an impressive application of learning analytics to support overworked careers advisers and to improve the chances of graduates finding a job. Also, the detailed testing of different machine learning and AI approaches is both exemplary and unusually well documented.

However, I still find myself uneasy with the project. Firstly, it reduces the purpose of degree level education to employment. Secondly, it accepts that employers call the shots, through proxies based on unquestioned and unchallenged “well recognised skills” demanded by employers. It may be “well recognised” that employers are biased against certain social groups or have a preference for upper class students. Should this be incorporated in the algorithm? Thirdly, it places responsibility for employability on the individual students, rather than looking more closely at societal factors in employment. It is also noted that participation in unpaid internships is an increasing factor in employment in the UK: fairly obviously, the financial ability to undertake such unpaid work is the preserve of the more wealthy. And suppose that all students are assisted in achieving the “predictive input variables”. Does that mean they would all achieve employment on graduation? Graduate unemployment is not only predicated on individual student achievement (whatever variables are taken into account) but also on the availability of graduate jobs. In the UK many graduates are employed in what are classified as non graduate jobs (the classification system is something I will return to in another blog post). But is this because they fail to develop their employability signals or simply because there are not enough jobs?

Having said all this, I remain optimistic about the role of learning analytics and AI in education and in careers guidance. But there are many issues to be discussed and pitfalls to overcome.


Reading on screen and on paper

September 1st, 2019 by Graham Attwell

Do you read books and papers on screen, or do you prefer paper? I am conflicted. I used to have an old Kindle but gave it up because I am no fan of Amazon. And I used to read books on, firstly, an iPad and, latterly, a Tesco Hudl tablet – both now sadly deceased.

Like many (at least if the sales figures are to be believed) I have returned to reading books on paper, although I read a lot of papers and suchlike on my computer, only occasionally being bothered to print them out. But is preferring physical books a cultural feel-good factor, or does it really make a difference to comprehension and learning?

An article in the Hechinger Report describes research by Virginia Clinton, an assistant professor at the University of North Dakota, who “compiled results from 33 high-quality studies that tested students’ comprehension after they were randomly assigned to read on a screen or on paper and found that her students might be right.”

The studies showed that students of all ages, from elementary school to college, tend to absorb more when they’re reading on paper than on screens, particularly when it comes to nonfiction material.

However, the benefit was small – a little more than a fifth of a standard deviation – and there is an important caveat: the studies that Clinton included in her analysis didn’t allow students to use the add-on tools that digital texts can potentially offer.
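For readers unfamiliar with the measure, “a fifth of a standard deviation” refers to a standardised mean difference (often reported as Cohen’s d), which is conventionally read as a small effect:

```latex
d = \frac{\bar{x}_{\text{paper}} - \bar{x}_{\text{screen}}}{s_{\text{pooled}}} \approx 0.2
```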

My feeling is that this is a case of horses for courses. Work undertaken by Pontydysgu suggested that ebooks had an important motivational aspect for slow to learn readers in primary school. Not only could they look up the meaning of different words, but when they had read for a certain amount of time they were allowed to listen to the rest of the story on the audio transcription. And there is little doubt that e-books offer a cost effective way of providing access to books for learners.

But it would be nice to see some further well designed research in this area.


AI and education

February 6th, 2019 by Graham Attwell

I fear you are going to be seeing this headline quite a bit in coming months. And like everyone else I am getting excited and worried about the possibilities of AI for learning – and less so for AI in education management.

Anyway, here is the promise from an EU Horizon 2020 project looking mainly at ethics in AI. As an aside, while lots of people seem to be looking at ethics, which of course is very welcome, I see less research into the potentials and possibilities of AI (more to follow).

The SHERPA consortium is a group of 11 members from six European countries whose mission is to understand how the combination of artificial intelligence and big data analytics will impact ethics and human rights issues today and in the future.

One of F-Secure’s (a partner in the project) first tasks will be to study security issues, dangers, and implications of the use of data analytics and artificial intelligence, including applications in the cyber security domain. This research project will examine:

  • ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
  • ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks) – a minimal sketch of this follows below
  • how artificial intelligence and data analysis methodologies might be used for malicious purposes
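To make the second bullet concrete, here is a minimal sketch of what “adversarially attacked” means: for a toy linear classifier, nudging every input feature a small step in the direction of the weights flips the model’s decision while barely changing the input (the same idea, applied via gradients, underlies attacks on neural networks). The model and numbers are invented for illustration.

```python
# Flipping a toy linear classifier's decision with a bounded perturbation.
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # weights of a toy linear classifier
b = -0.2
x = np.array([0.3, 0.4, 0.9])   # an input the model classifies as negative

def predict(x):
    return np.sign(w @ x + b)

print(predict(x))                 # -1.0: the original decision

# Move each feature a small step in the direction that raises the score
# (an FGSM-style perturbation, which is exact for a linear model).
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)
print(np.abs(x_adv - x).max())    # 0.2: the change is bounded by epsilon
print(predict(x_adv))             # +1.0: the decision has flipped
```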

What is an algorithm?

September 3rd, 2018 by Graham Attwell

There was an excellent article by Andrew Smith in the Guardian newspaper last week. ‘Franken-algorithms: the deadly consequences of unpredictable code’ examines issues with our new algorithmic reality and the “growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on.” “Between the ‘dumb’ fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice,” Smith says.

I was particularly interested in the changing understandings of what an algorithm is.

In the original understanding of an algorithm, says Andrew Smith, “an algorithm is a small, simple thing; a rule used to automate the treatment of a piece of data. If a happens, then do b; if not, then do c. This is the “if/then/else” logic of classical computing. If a user claims to be 18, allow them into the website; if not, print “Sorry, you must be 18 to enter”. At core, computer programs are bundles of such algorithms.” However, “Recent years have seen a more portentous and ambiguous meaning emerge, with the word “algorithm” taken to mean any large, complex decision-making software system; any means of taking an array of input – of data – and assessing it quickly, according to a given set of criteria (or “rules”).”
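Smith’s “small, simple thing” sense of the word translates directly into a few lines of code. Here is the age-gate rule from the quote in classical if/then/else form (a sketch, with the refusal message taken from the quote itself):

```python
# The "if a then b, else c" algorithm from the quote, as running code.
def age_gate(claimed_age: int) -> str:
    if claimed_age >= 18:  # if a happens...
        return "Welcome."  # ...then do b
    else:                  # if not...
        return "Sorry, you must be 18 to enter"  # ...then do c

print(age_gate(21))
print(age_gate(16))
```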

And this, of course, is a problem, especially where algorithms, even if published, are not in the least transparent and, with machine learning, are constantly evolving.
