Archive for the ‘AI’ Category

AI, automation, the future of work and vocational education and training

February 17th, 2020 by Graham Attwell

Regular readers will know I am working on a project on AI and Vocational Education and Training (VET). We are looking both at the impact of AI and automation on work and occupations and at the use of AI for teaching and learning. Later in the year we will be organizing a MOOC around this: at the moment we are undertaking interviews with teachers, trainers, managers and developers (among others) in Italy, Greece, Lithuania, Germany and the UK.

The interviews are loosely structured around five questions:

  • What influence do you think AI and automation is going to have on occupations that you or your institution provide training for?
  • Do you think AI is going to affect approaches to teaching and learning? If so, could you tell us how?
  • Have you or your institution any projects based around AI? If so, could you tell us about them?
  • How can curricula be updated quickly enough to respond to the introduction of AI?
  • Do you think AI and automation will result in fewer jobs in the future, or will it generate new jobs? If so, what do you think the content of those jobs will be?

Of course, which questions are most discussed depends on the work role and interests of the interviewee. And rather than interviews, the conversations I have had tend to be more like discussions.

While the outcomes of this work will be published in a report later this spring, I will publish here some of the issues which have come up.

Last week I talked with Chris Percy, who describes himself as a business strategy consultant and economist.

Chris sees AI and technology as driving an increasing pace of change in how work is done. The model for vocational education, he says, is to attend college to get skills and then enter a trade for ten or twenty years – albeit with refreshers and licenses to update knowledge. This has been the model for the last 50 years, but it may not hold if knowledge is changing so fast. He is not an AI evangelist and thinks changes feed through more slowly. With this change, new models for vocational education and training are needed, although what that model might be is open. It could be to spend one year learning in every seven, or one day a week for three months every year.

The main issue for VET is not how to apply AI but how we structure jobs, Lifelong Learning and pedagogy.

One problem, at least in the UK, has been a reduction in the provision of lifelong learning. In this he sees a disconnect between policy and the needs of the economy. But it may also be that if change is slower than the discourse suggests, it has just not impacted yet: tasks within a job are changing rather than jobs as a whole, and we need to update knowledge for practices we do not yet have. A third possible explanation is that although there are benefits from new technologies and work processes, the benefits from learning are not seen as important enough to justify providing new skills.

New ways of learning are needed – responsive learning based on AI could help here – but there is not enough demand to overcome inertia. The underpinning technologies are there but have not yet translated into schools to benefit retraining.

Relatively few jobs will disappear in their entirety – but a lot of logistics, front-of-store jobs, restaurants etc. will be transformed. It could be that there will be a lower tier of services based on AI and automation and a higher tier with human provision. Regulators can inhibit the pace of change – which is uneven across different countries and cities, e.g. self-driving cars.

In most of the rest of the economy, people will change as tasks change. For example, digital search in the legal industry has been done by students, interns and paralegals because someone has to do it – now that due diligence is AI enabled, students can progress faster to the more interesting parts of the work.

Chris thinks that although AI and automation will impact on jobs, global economic developments will still be a bigger influence on the future of work.

More from the interviews later this week. In the meantime, if you would like to contribute to the research – or just would like to contribute your ideas – please get in touch.

Changing the role of Assessment

February 11th, 2020 by Graham Attwell

Formative assessment should play a key role in all education, and particularly in vocational education and training. It can give vital feedback to learners and guidance on the next steps of their learning journey. It can also help teachers to know what is effective and what is not, and where the gaps are, and help in planning learning interventions.

Yet all too often it does not. Assessment is too often seen at best as something to overcome and at worst as a stress-inducing nightmare. With new regulations in England requiring students in further education to pass tests in English and Mathematics, students are condemned to endlessly retaking the same exams regardless of their achievement in vocational subjects.

For all these reasons a new report published by Jisc today is very welcome.

Jisc say:

Existing and emerging technologies are starting to play a role in changing assessment and could help address these issues, both today and looking further ahead into the future, to make assessment smarter, faster, fairer and more effective.

The report sets five targets for the next five years to progress assessment towards being more authentic, accessible, appropriately automated, continuous and secure.

  • Authentic: assessments designed to prepare students for what they do next, using technology they will use in their careers

  • Accessible: assessments designed with an accessibility-first principle

  • Appropriately automated: a balance found of automated and human marking to deliver maximum benefit to students

  • Continuous: assessment data used to explore opportunities for continuous assessment to improve the learning experience

  • Secure: authorship detection and biometric authentication adopted for identification and remote proctoring

The report: ‘The future of assessment: five principles, five targets for 2025’ can be downloaded from the Jisc website.

 

Good jobs, bad jobs, skills and gender

February 3rd, 2020 by Graham Attwell

I have written before about the issues of interpreting and making sense of labour market data, and the difference between Labour Market Information and Labour Market Intelligence.

This is exposed dramatically in the article in Social Europe by German Bender entitled ‘The myth of job polarisation may fuel populism’. As German explains “It has become conventional wisdom since the turn of the century that labour markets are rapidly becoming polarised in many western countries. The share of medium-skilled jobs is said to be shrinking, while low- and high-skilled jobs are growing in proportion.” But as German points out: “In a research report published last May by the Stockholm-based think tank Arena Idé, Michael Tåhlin, professor of sociology at the Swedish Institute for Social Research, found no job polarisation—rather, a continuous upgrading of the labour market.”

German goes on to explain:

The main reason is that the research, as is to be expected from studies rooted in economics, has used wages as a proxy for skills: low-paying jobs are taken to be low-skilled jobs and so on. But there are direct ways of measuring skill demands in jobs, and Arena Idé’s report is based on a measure commonly used in sociology—educational requirements as classified by the International Labour Organization’s ISCO (International Standard Classification of Occupations) scheme. Using this methodology to analyse the change in skill composition yields strikingly different results for the middle of the skill distribution.

The study found that while jobs with relatively low skill demands but relatively high wages—such as factory and warehouse workers, postal staff and truck drivers—have diminished, others with the same or slightly higher skill demands but lower wages—nursing assistants, personal-care workers, cooks and kindergarten teachers—have increased.

The reason is that the former jobs are male-dominated, whilst the jobs which have grown have a majority of female workers. Research in most countries has shown that jobs in which women are the majority are lower paid than male-dominated jobs, regardless of skill levels.

“Put simply”, says German: “wages are a problematic way to measure skills, since they clearly reflect the discrimination toward women prevalent in most, if not all, labour markets across the world.”
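The proxy problem can be made concrete with a small, purely illustrative calculation (all occupations, wages and figures here are invented for the example, not taken from the Arena Idé study): the same occupation can land in a different "skill band" depending on whether skill is inferred from wages or read from an ISCO-style educational requirement.

```python
# Toy illustration: classifying occupations as low/middle/high "skill"
# by wage proxy vs. by an ISCO-style skill level can tell different
# stories about the middle of the labour market.

# (occupation, median_wage, isco_skill_level 1-4, employment_change_%)
occupations = [
    ("warehouse worker",     28000, 2, -15),
    ("truck driver",         30000, 2, -10),
    ("nursing assistant",    22000, 2, +20),
    ("personal care worker", 21000, 2, +25),
    ("software engineer",    55000, 4, +30),
    ("cleaner",              18000, 1,  -5),
]

def wage_band(wage):
    # Wage as a proxy for skill; thresholds are arbitrary illustration values.
    return "low" if wage < 25000 else "middle" if wage < 45000 else "high"

def skill_band(level):
    # ISCO-style skill level as a direct measure of educational requirements.
    return {1: "low", 2: "middle", 3: "middle", 4: "high"}[level]

for name, wage, level, change in occupations:
    print(f"{name:22s} by-wage={wage_band(wage):6s} "
          f"by-skill={skill_band(level):6s} change={change:+d}%")
```

On these invented numbers, growth in nursing assistants and care workers looks like growth at the *bottom* when skill is proxied by wages, but like growth in the *middle* when measured by educational requirements, which is exactly the divergence the Arena Idé report describes.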

A further review of two British studies from 2012 and 2013, showed a change in the composition, but not the volume, of intermediate-level jobs. “Perhaps the most important conclusion”, German says “was that ‘the evidence shows that intermediate-level jobs will remain, though they are changing in nature’.”

The implications of this interpretation of the data are profound. If lower- and medium-skilled jobs are believed to be declining, there is little incentive to invest in vocational education and training for those occupations. Furthermore, young people may be put off entering such careers, and careers advisers may further mislead school leavers.

There has been a trend in many European countries towards higher-level apprenticeships, rather than providing training with the skills needed to enter such medium-skilled jobs. But even a focus on skills, rather than wages, may be misleading. It is interesting that jobs such as social care and teaching appear more resistant to automation and job replacement from technologies such as Artificial Intelligence. But those who argue that we should be teaching so-called soft skills such as team building, empathy and communication are talking about the very skills increasingly demanded in the female-dominated low- and middle-skilled occupations. It may be that we need not only to look again at how we move away from wages as a proxy for skills, but also at how we measure skills.

German references research by Daniel Oesch and Giorgio Piccitto, who studied occupational change in Germany, Spain, Sweden and the UK from 1992 to 2015, characterising good and bad jobs according to four alternative indicators: earnings, education, prestige and job satisfaction.

They concluded that occupations with high job quality showed by far the strongest job growth, whereas occupations with low job quality showed weak growth, regardless of the indicator used.

Does AI mean we no longer need subject knowledge?

January 15th, 2020 by Graham Attwell

I am a little bemused by the approach of many of those writing about Artificial Intelligence in education to knowledge. The recently released Open University Innovation Report, Innovating Pedagogy, is typical in that respect.

“Helping students learn how to live effectively in a world increasingly impacted by AI also requires a pedagogy”, they say, “that, rather than focusing on what computers are good at (e.g. knowledge acquisition), puts more emphasis on the skills that make humans uniquely human (e.g. critical thinking, communication, collaboration and creativity) – skills in which computers remain weak.”

I have nothing against critical thinking, collaboration or creativity, although I think these are hard subjects to teach. But I find it curious that knowledge is being downplayed on the grounds that computers are good at it. Books have become very good at knowledge over the years but it doesn’t mean that humans have abandoned it to the books. What is striking though is the failure to distinguish between abstracted and applied knowledge. Computers are very good at producing (and using) information and data. But they are not nearly as good at applying that knowledge in real world interactions. Computers (in the form of robots) will struggle to open a door. Computers may know all about the latest hair styles but I very much doubt that we will be trusting them to cut our hair in the near future. But of course, the skills I am talking about here are vocational skills – not the skills that universities are used to teaching.

In contrast to the emergent Anglo-Saxon discourse around “the skills that make humans uniquely human”, in Germany the focus on Industry 4.0 is leading to an alternative idea: AI and automation are seen as requiring new and higher levels of vocational knowledge and skills in areas such as, for example, the preventative maintenance of automated production machinery. This seems to me a far more promising area of development. The problem, I suspect, for education researchers in the UK is that they have to start thinking about education outside the sometimes rarefied world of the university.

Equally, I do not agree with the report’s assertion that most AI applications for education are student-facing and are designed to replace some existing teacher tasks. “If this continues”, they say “while in the short run it might relieve some teacher burdens, it will inevitably lead to teachers becoming side-lined or deprofessionalised. In this possible AI-driven future, teachers will only be in classrooms to facilitate the AI to do the ‘actual’ teaching.”

The reality is that there are an increasing number of AI applications which assist teachers rather than replace them – and that allow teachers to get on with their real job of teaching and supporting learning, rather than undertaking an onerous workload of admin. There is no evidence of the inevitability of teachers being either sidelined or deprofessionalised. And those experiments from Silicon Valley trying to ‘disrupt’ education by a move to purely online and algorithm-driven learning have generally been a big failure.

Artificial Intelligence, ethics and education

January 2nd, 2020 by Graham Attwell

I guess we are going to be hearing a lot about AI in education in the next year. As regular readers will know, I am working on a European Commission Erasmus Plus project on Artificial Intelligence and Vocational Education and Training. One subject which is constantly appearing is the issue of ethics. Apart from UK universities’ requirements for ethical approval of research projects (more about this in a future post), the issue of ethics rarely appears in education as a focus for debate. Yet it is all over the discussion of AI and how we can or should use AI in education.

There is an interesting (and long) blog post – ‘The Invention of “Ethical AI”’ – recently published by Rodrigo Ochigame on the Intercept website.

Ochigame worked as a graduate student researcher in Joichi Ito’s group on AI ethics at the MIT Media Lab, of which Ito was formerly director. He left in August last year, immediately after Ito published his initial “apology” regarding his ties to Epstein, in which he acknowledged accepting money from the disgraced financier both for the Media Lab and for Ito’s outside venture funds.

The quotes below provide an outline of his argument, although for anyone interested in this field the article merits a full read.

The emergence of this field is a recent phenomenon, as past AI researchers had been largely uninterested in the study of ethics

The discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.

This included working on

the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”

corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).

it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

the corporate lobby’s effort to shape academic research was extremely successful. There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.

I am not opposed to the emphasis being placed on ethics in AI and education, and the debate and practices on Learning Analytics show the need to think clearly about how we use technology. But we have to be careful, firstly, that we do not just end up paying lip service to ethics and, secondly, that academic research does not become a cover for the practices of the ed-tech industry. Moreover, I think we need a clearer understanding of just what we mean when we talk about ethics in the educational context. For me, the two biggest ethical issues are the failure to provide education for all and the gross inequalities in educational provision based on things like class and gender.

 

Readings on AI and Education

November 18th, 2019 by Graham Attwell

In an early activity in our new project on Artificial Intelligence in Vocational Education and Training, we are undertaking a literature review. Although there seems to be little about AI and VET, the issue of AI in education is this year’s hot trend. Of course, there seems to be more talk than actual practice. Anyway, here is a quick summary (just notes really) of things I stumbled on last week.

Perhaps most interesting was an online webinar organised by the European Distance Education Network (EDEN) as part of European Distance Learning Week. According to the online platform there were 49 of us present, and four presentations. Sadly the recording is not yet available, but I will link to it once it is online. What was most interesting was that almost everyone who spoke – and I recognised quite a few prominent researchers in the contributions – was pretty much opposed to AI. Too dangerous, no benefit, just hype, developers with no idea about learning, etc. Really only one speaker, Alexandra Cristea from Durham University, could see potential.

I found the following publication by her. Demographic Indicators Influencing Learning Activities in MOOCs: Learning Analytics of FutureLearn Courses (PDF), by Alexandra I. Cristea and Lei Shi from Liverpool University, looks at pre-course survey data and online learner interaction data collected from two MOOCs delivered by the University of Warwick in 2015, 2016 and 2017. The data is used to explore how learner demographic indicators may influence learner activities. Recommendations for educational information system development and instructional design, especially when a course attracts a diverse group of learners, are provided.

Meanwhile in the UK, NESTA are continuing to promote AI. However, they too emphasise ethical issues with the use of the technology. In ‘Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges’ they say:

Although challenges for the ethical and responsible use of artificial intelligence and the sharing of data are common to many sectors, schools and colleges present a distinct combination of properties and considerations. The sharing of data needs to be governed in a manner that realises benefit for the public, and AIEd must be used ethically and responsibly.

AIEd’s potential and risks are reflected in the views of parents. 61% of parents anticipate that AI will be fairly or very important to the classroom of the near future. However, many are fairly or very concerned about consequences of determinism (77%), accountability (77%) and privacy and security (73%).

Finally, I had a look at the X5GON project website. X5GON is a large-scale European research project, bringing together a number of leading European universities. It appears to be developing AI-driven tools, particularly focused on Open Educational Resources. The project website says:

This new AI-driven platform will deliver OER content from everywhere, for the students’ need at the right time and place. This learning and development solution will use the following solutions to accomplish this goal:

  • Aggregation: It will gather relevant content in one place, from the project’s case studies as well as external providers and other preferred resources.
  • Curation: AI and machine learning will be key to curating relevant and contextual content for students at the right time and point of need.
  • Personalization: It will make increasingly personalized recommendations for learning content to suit students’ needs, based on the analysis of relevant factors.
  • Creation: Large, small and medium-sized universities have tacit knowledge that can be unlocked and re-used. This approach will allow any organization to release and build their own content libraries quickly and conveniently to share with the world and vice versa.
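As a rough illustration of the “Curation” and “Personalization” ideas above (this is my own sketch, not X5GON’s actual algorithm; the resources, tags and learner profile are invented), a minimal recommender can rank learning resources by the overlap between a learner’s interest profile and each resource’s topic tags:

```python
# Minimal sketch of tag-based OER recommendation: score each resource
# by Jaccard similarity between the learner's interest tags and the
# resource's topic tags, then return the best matches.

resources = {
    "Intro to machine learning": {"ai", "python", "beginner"},
    "Advanced robotics":         {"ai", "robotics", "advanced"},
    "Welding basics":            {"manufacturing", "beginner"},
}

def recommend(learner_tags, resources, top_n=2):
    # Jaccard similarity: |intersection| / |union| of tag sets.
    def score(tags):
        return len(learner_tags & tags) / len(learner_tags | tags)
    ranked = sorted(resources, key=lambda r: score(resources[r]), reverse=True)
    return ranked[:top_n]

print(recommend({"ai", "beginner"}, resources))
```

A production system would of course use far richer signals (interaction history, content analysis, cross-lingual features), but the basic shape – profile, similarity score, ranked list – is the same.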

I’ll keep writing up my findings, in the form of notes on this site. And if anyone has any recommendations of what else I should be reading please add in the comments below.

AI needs diversity

November 6th, 2019 by Graham Attwell

As promised, another AI post. One of the issues we are looking at in our project on AI and education is that of ethics. It seems to me that the tech companies have set up all kinds of ethical frameworks, but I am not sure about the ethics! They seem to be trying to allay fears that the robots will take over: this is not a fear I share. I am far more worried about what humans will do with AI. In that respect I very much like this TEDxWarwick talk by Kriti Sharma.

She says AI algorithms make important decisions about you all the time — like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms. I wonder, too, how much the lack of diversity in educational technology is holding back opportunities for learning.

AI, education and training and the future of work

November 5th, 2019 by Graham Attwell

Last week was the first meeting of a new Erasmus Plus project entitled ‘Improving skills and competences of VET teachers and trainers in the age of Artificial Intelligence’. The project, led by the University of Bremen, has partners from the UK (Pontydysgu), Lithuania, Greece and Italy.

Kick-off meetings are usually rather dull – with an understandable emphasis on rules and regulations, reporting and so on. Not this one. Everyone came prepared with ideas of their own on how we can address such a broad and important subject. And, to our collective surprise I think, we had a remarkable degree of agreement on ways forward. I will write more about this (much more) in the coming days. For the moment here is my opening presentation to the project. A lot of the ideas come from the excellent book, “Artificial Intelligence in Education, Promises and Implications for Teaching and Learning” by the Center for Curriculum Redesign which, as the website promises, “immerses the reader in a discussion on what to teach students in the era of AI and examines how AI is already demanding much needed updates to the school curriculum, including modernizing its content, focusing on core concepts, and embedding interdisciplinary themes and competencies with the end goal of making learning more enjoyable and useful in students’ lives. The second part of the book dives into the history of AI in education, its techniques and applications –including the way AI can help teachers be more effective, and finishes on a reflection about the social aspects of AI. This book is a must-read for educators and policy-makers who want to prepare schools to face the uncertainties of the future and keep them relevant.”

Improving the skills and competences of VET teachers and trainers in the age of Artificial Intelligence

October 11th, 2019 by Graham Attwell

Pontydysgu are partners in a new project on Artificial Intelligence and Vocational Education and Training, starting this month. The project will last for two years and is funded through the EU Erasmus Plus programme. It is coordinated by the Institut Technik und Bildung at the University of Bremen and includes partners from Greece, Italy and Lithuania.

Below is a description of the project. There is also a short form to sign up for project newsletters and if your organisation is interested, to join the project as associate partners.

Artificial Intelligence (AI) can be defined as a computer system that has been designed to interact with the world in ways we think of as human and intelligent. Ample data, cheap computing and AI algorithms mean technology can learn very quickly. The transformative power of AI cuts across all economic and social sectors, including education. UNESCO says AI has the potential to accelerate the process of achieving the global education goals through reducing barriers to accessing learning, automating management processes, and optimizing methods in order to improve learning outcomes. Education will be profoundly transformed by AI. Teaching tools, ways of learning, access to knowledge, and teacher training will be revolutionized.

A recent European Joint Research Council policy foresight report suggests that “in the next years AI will change learning, teaching, and education. The speed of technological change will be very fast, and it will create high pressure to transform educational practices, institutions, and policies.” They say it is therefore important to understand the potential impact of AI on learning, teaching, and education, as well as on policy development.

AI is particularly important for vocational education and training as it promises profound changes in employment and work tasks. There have been a series of reports attempting to predict the future impact of AI on employment, producing varying estimates of the number of jobs vulnerable to automation as well as of new jobs which will be created. But the greatest implications for VET lie in the changing tasks and roles within jobs, requiring changes in initial and continuing training, for those in work as well as those seeking employment. Cooperative robotics offers new work designs and job scenarios for occupations, avoiding repetitive work tasks. This will require changes in existing VET content, new programmes such as the design of AI systems in different sectors, and adaptation to new ways of cooperative work with AI.

If teachers are to prepare young people for this new world of work, and to excite young people to engage with careers in designing and building future AI ecosystems, then VET teachers and trainers themselves require training to understand the impact of AI and the new needs of their students. There is an urgent need for young people to be equipped with knowledge about AI, meaning the need for educators to be similarly equipped is imperative. This requires cooperation between policy makers, organisations involved in teacher training, vocational schools and occupational sector organisations, including social partners.

For VET teachers and trainers there are many possible uses of AI, including new opportunities for adapting learning content based on students’ needs, new processes for assessment, analysing possible bottlenecks in learners’ domain understanding and improving guidance for learners. AI systems can provide diagnostic data to learners so that they can reflect on their metacognitive approaches and areas in need of development. New pedagogical possibilities include learning companions based on affective computing and emotion AI. AI systems can help in interpreting activities undertaken in VET, linking theoretical and practice-based learning.

AI can be a key technology in the modernisation of VET by providing new opportunities for adapting learning content based on students’ needs, new processes for assessment, analysing possible bottlenecks in learners’ domain understanding and improving guidance for learners. The project will promote open, innovative methods and pedagogies and develop learning materials, tools and actions in the form of Open Educational Resources that support the effective use of Information and Communication Technologies (ICT) to provide initial training and continued professional development for VET teachers and trainers in Artificial Intelligence. The project will extend the European Framework for the Digital Competence of Educators, a reference framework tool for implementing regional and national tools and training programmes, to include AI.

The project will seek to support VET teachers and trainers in extending and adapting open curriculum models for incorporating AI in vocational education and training. Furthermore, the project will develop an Open Massive Open Online Course in AI in education in English and German, open to all teachers and trainers in VET in Europe. The course materials will be freely available for other organisations to use for professional development.

The realisation of the potential of AI for VET requires the involvement of European teachers and trainers in designing solutions to the key educational challenges facing VET. Technologists alone cannot design effective AI solutions. The implications of AI for the VET curriculum and for teaching and training in schools and the workplace are profound, and educators must engage in discussing what needs to change as a matter of urgency.

An Ethics of Artificial Intelligence Curriculum for Middle School Students

September 23rd, 2019 by Graham Attwell

With all the bad news emanating from MIT Media Lab in the last few weeks it is good to have something positive to report. MIT have released ‘An Ethics of Artificial Intelligence Curriculum for Middle School Students’, created by Blakeley H. Payne with support from the MIT Media Lab Personal Robots Group, directed by Cynthia Breazeal, under a Creative Commons CC-BY-NC license. This license allows you to remix, tweak, and build upon these materials non-commercially, as long as you include acknowledgement to the creators. Derivative works should include acknowledgement but do not have to be licensed as CC-BY-NC.

Details of the curriculum can be found in a Google docs document which they say includes a set of activities, teacher guides, assessments, materials, and more to assist educators in teaching about the ethics of artificial intelligence. These activities were developed at the MIT Media Lab to meet a growing need for children to understand artificial intelligence, its impact on society, and how they might shape the future of AI.

The curriculum was designed and tested for middle school students (approximately grades 5 to 8). Most activities are unplugged and require only the materials included in this document, although unplugged modifications are suggested for the activities which require computer access.

Pontydysgu are partners in a new project working with vocational teachers and trainers around the impact of AI on their work. Although this curriculum was designed for middle school students, a quick look suggests much of it can be adapted for our users.
