Archive for the ‘AI’ Category

European Union, AI and data strategy

July 9th, 2020 by Graham Attwell
[Image: geralt (CC0), Pixabay]

Miapetra Kumpula-Natri is the rapporteur for the European Parliament industry committee’s own-initiative report on data strategy and a standing rapporteur on the World Trade Organization e-commerce negotiations in the Parliament’s international trade committee.

Writing in Social Europe she says:

Building a human-centric data economy and human-centric artificial intelligence starts from the user. First, we need trust. We need to demystify the data economy and AI: people tend to avoid, resist or even fear developments they do not fully understand.

Education plays a crucial role in shaping this understanding and in making digitalisation inclusive. Although better services—such as services used remotely—make life easier also outside cities, the benefits of digitalisation have so far mostly accrued to an educated fragment of citizens in urban metropoles and one of the biggest obstacles to the digital shift is lack of awareness of new possibilities and skills.

Kumpula-Natri draws attention to the Finnish-developed, free online course, ‘Elements of AI’. This started as a course for students at the University of Helsinki but has extended its reach to over 1 per cent of Finnish citizens.

Kumpula-Natri points out that in the Nordic countries the majority of participants on the ‘Elements of AI’ course are female, and in the rest of the world the proportion exceeds 40 per cent—more than three times the average ratio of women working in the technology sector. She says that after the course had been running in Finland for a while, the number of women applying to study computer science at the University of Helsinki increased by 80 per cent.

Learning about surveillance

July 3rd, 2020 by Graham Attwell
[Image: GDJ (CC0), Pixabay]

I found this on the Social Media Collective website. The Social Media Collective is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York.

Yesterday the Wayne County Prosecutor publicly apologized to the first American known to be wrongfully arrested by a facial recognition algorithm: a black man arrested earlier this year by the Detroit Police. The statement cited the unreliability of software, especially as applied to people of color.

With this context in mind, some university and high school instructors teaching about technology may be interested in engaging with the Black Lives Matter protests by teaching about computing, race, and surveillance.

I’m delighted that thanks to the generosity of Tawana Petty and others, ESC can share a module on this topic developed for an online course. You are free to make use of it in your own teaching, or you might just find the materials interesting (or shocking).

The lesson consists of a case study of Detroit’s Project Green Light, a new city-wide police surveillance system that involves automated facial recognition, real-time police monitoring, very-high-resolution imagery, cameras indoors on private property, a paid priority response system, a public/private partnership, and other distinctive features. The system has allegedly been deployed to target peaceful Black Lives Matter protesters.

Here is the lesson:

Race, Policing, and Detroit’s Project Green Light

Artificial Intelligence for and in Vocational Education and Training

June 30th, 2020 by Graham Attwell

Last week the Taccle AI project organised a workshop at the European Distance Education Network (EDEN) Annual Conference. The conference, which had been scheduled to be held in Romania, was moved online due to the Covid-19 pandemic. There were four short presentations followed by an online discussion.

Graham Attwell introduced the workshop and explained the aims of the Taccle AI project. In the next years, he said, “AI will change learning, teaching, and education. The speed of technological change will be very fast, and it will create high pressure to transform educational practices, institutions, and policies.” He was followed by Vidmantas Tulys, who focused on AI and human work and put forward five scenarios for the future of work in the light of AI and the fourth industrial revolution. Ludger Deitmer looked at the changes in the mechatronics occupation due to the impact of AI. He examined how training was being redesigned to meet new curriculum and occupational needs and how AI was being introduced into the curriculum. Finally, Sofia Roppertz focused on AI for VET, exploring how AI can support access to education, collaborative environments and intelligent tutoring systems to support teachers and trainers.

AI cloud computing to support formative assessment in vocational education and training

June 30th, 2020 by Graham Attwell

[Image: geralt (CC0), Pixabay]

I have written before about the great work being done around AI by Bolton College in the UK, and particularly their ADA chatbot.

One of my main interests in the use of AI in vocational education and training is its potential for freeing up teachers to provide more personalised learning support, both for students who are struggling and for more advanced students. At the moment, too many teachers are forced by their workloads to teach to the middle.

My second big hope is around assessment. Vocational students need, I think, regular feedback, and that can come from formative assessment. However, at present teachers do not have the time to prepare and provide feedback on regular formative assessments. With AI this becomes possible.

Bolton College previously received VocTech seed funding to prove the concept of using Artificial Intelligence (AI) to analyse short and long form answers and to demonstrate that real-time feedback can be offered to vocational learners as they respond to online open-ended formative assessment tasks.

Their FirstPass tool provided an initial introduction to AI cloud computing technologies which are able to support vocational students and their teachers with open-ended formative assessment tasks.

Now, according to Ufi, who provide the VocTech funding, a new project “will provide further development of FirstPass to ensure that it is effective and robust in use and can demonstrably improve the teaching, learning and assessment experience of vocational learners. It will provide teachers with a richer medium for assessing students due to its ability to pose open-ended questions that can be automatically analysed and assessed by a computer, giving students real-time feedback and the opportunity to qualify and clarify their responses.”
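To give a concrete sense of the kind of technique involved, the sketch below shows one simple way real-time feedback on free-text answers can be produced: the student’s answer is compared with a teacher’s model answer using sentence embeddings. This is not the FirstPass implementation; the library (sentence-transformers), the model name, the example answers and the thresholds are all assumptions used purely for illustration.

```python
# Minimal sketch of automated feedback on an open-ended formative assessment task.
# NOT the FirstPass implementation: library, model name and thresholds are
# illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

MODEL_ANSWER = (
    "A fuse protects a circuit by melting and breaking the circuit "
    "when the current exceeds its rated value."
)

def give_feedback(student_answer: str) -> str:
    """Compare a student's answer with the model answer and return simple feedback."""
    embeddings = model.encode([student_answer, MODEL_ANSWER], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

    if similarity > 0.75:   # thresholds would need tuning against teacher-marked answers
        return "Good answer - you have covered the key idea."
    if similarity > 0.45:
        return "You are on the right track; can you say what happens to the circuit?"
    return "Have another look at what a fuse is for, then try again."

print(give_feedback("The fuse melts if too much current flows, so the circuit is broken."))
```

In practice a tool like this would also need to handle spelling, multi-part answers and subject-specific vocabulary, which is where the real development effort lies.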

Ethics in AI and Education

June 10th, 2020 by Graham Attwell
industry, industry 4, web

geralt (CC0), Pixabay

The news that IBM is pulling out of the facial recognition market and is calling for “a national dialogue” on the technology’s use in law enforcement has highlighted the ethical concerns around AI powered technology. But the issue is not just confined to policing: it is also a growing concern in education. This post is based on a section in a forthcoming publication on the use of Artificial Intelligence in Vocational Education and Training, produced by the Taccle AI Erasmus Plus project.

Much concern has been expressed over the dangers and ethics of Artificial Intelligence both in general and specifically in education.

The European Commission (2020) has raised the following general issues (Naughton, 2020):

  • human agency and oversight
  • privacy and governance
  • diversity
  • non-discrimination and fairness
  • societal wellbeing
  • accountability
  • transparency
  • trustworthiness

However, John Naughton (2020), a technology journalist from the UK Open University, says “the discourse is invariably three parts generalities, two parts virtue-signalling.” He points to the work of David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society who in January 2020 published an article in the Harvard Data Science Review on the question “Should we trust algorithms?” saying that it is trustworthiness rather than trust we should be focusing on. He suggests a set of seven questions one should ask about any algorithm.

  1. Is it any good when tried in new parts of the real world?
  2. Would something simpler, and more transparent and robust, be just as good?
  3. Could I explain how it works (in general) to anyone who is interested?
  4. Could I explain to an individual how it reached its conclusion in their particular case?
  5. Does it know when it is on shaky ground, and can it acknowledge uncertainty?
  6. Do people use it appropriately, with the right level of scepticism?
  7. Does it actually help in practice?

Many of the concerns around the use of AI in education have already been aired in research around Learning Analytics. These include issues of bias, transparency and data ownership. They also include problematic questions around the surveillance of students, and around whether it is ethical to tell students that they are falling behind, or indeed ahead, in their work.

The EU working group on AI in Education has identified the following issues:

  • AI can easily scale up and automate bad pedagogical practices
  • AI may generate stereotyped models of student profiles and behaviours, as well as automatic grading
  • Need for big data on student learning (privacy, security and ownership of data are crucial)
  • Skills for AI and implications of AI for systems requirements
  • Need for policy makers to understand the basics of ethical AI.

Furthermore, it has been noted that AI for education is a spillover from other areas and not purpose built for education. Experts tend to be concentrated in the private sector and may not be sufficiently aware of the requirements in the education sector.

A further and even more troubling concern is the increasing influence and lobbying of large, often multinational, technology companies who are attempting to ‘disrupt’ public education systems. Audrey Watters (2019), who is publishing a book on the history of “teaching machines”, says her concern “is not that ‘artificial intelligence’ will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans — intellectually, emotionally, occupationally — will be reduced to machines.” “Perhaps nothing,” she says, “has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.” She quotes the economist Robert Gordon, who argues that new technologies are incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not — despite all the stories we hear — necessarily restructure our world.

There has been considerable debate and unease around the AI based “Smart Classroom Behaviour Management System” in use in schools in China since 2017. The system uses technology to monitor students’ facial expressions, scanning learners every 30 seconds and determining if they are happy, confused, angry, surprised, fearful or disgusted. It provides real time feedback to teachers about what emotions learners are experiencing. Facial monitoring systems are also being used in the USA. Some commentators have likened these systems to digital surveillance.

A publication entitled “Systematic review of research on artificial intelligence applications in higher education – where are the educators?” (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), which reviewed 146 out of 2,656 identified publications, concluded that there was a lack of critical reflection on risks and challenges. Furthermore, there was a weak connection to pedagogical theories and a need for an exploration of ethical and educational approaches. Martin Weller (2020) says educational technologists are increasingly questioning the impacts of technology on learner and scholarly practice, as well as the long-term implications for education in general. Neil Selwyn (2014) says “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique.”

Martin Weller (2020) is concerned at “the invasive uses of technologies, many of which are co-opted into education, which highlights the importance of developing an understanding of how data is used.”

Audrey Watters (2018) has compiled a list of the nefarious social and political uses or connections of educational technology, either technology designed for education specifically or co-opted into educational purposes. She draws particular attention to the use of AI to de-professionalise teachers. And Mike Caulfield (2016) in acknowledging the positive impact of the web and related technologies argues that “to do justice to the possibilities means we must take the downsides of these environments seriously and address them.”

References

Caulfield, M. (2016). Announcing the digital polarization initiative, an open pedagogy project [Blog post]. Hapgood. Retrieved from https://hapgood.us/2016/12/07/announcing-the-digital-polarization-initiative-an-open-pedagogy-joint/

European Commission (2020). White Paper on Artificial Intelligence – A European approach to excellence and trust. Luxembourg: Publications Office of the European Union.

Gordon, R. J. (2016). The Rise and Fall of American Growth – The U.S. Standard of Living Since the Civil War. Princeton University Press.

Naughton, J. (2020). The real test of an AI machine is when it can admit to not knowing something. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/feb/22/test-of-ai-is-when-machine-can-admit-to-not-knowing-something

Spiegelhalter, D. (2020). Should We Trust Algorithms? Harvard Data Science Review. Retrieved 27.02.2020 from https://hdsr.mitpress.mit.edu/pub/56lnenzj

Watters, A. (2019). Ed-Tech Agitprop. Retrieved 27.02.2020 from http://hackeducation.com/2019/11/28/ed-tech-agitprop

Weller, M. (2020). 25 Years of Ed Tech. Athabasca University: AU Press.

Using AI in a German VET School

June 3rd, 2020 by Graham Attwell

This post by Sophia Roppertz and Ludger Deitmer is part of the Taccle AI project on ‘Improving the Skills and Competences of VET teachers and trainers in the age of Artificial Intelligence’. It describes what is called a ‘Deep Reinforcement Learning’ project in a German vocational education and training school.

The topic of the project was Deep Reinforcement Learning: preparing the topic of ‘artificial intelligence’ and implementing an agent in the game ‘Sonic the Hedgehog’. Sonic is a computer game series from the Japanese publisher Sega. The classic main instalments are characterised by fast 2D platforming (‘jump ’n’ run’) passages, in which you steer the blue character Sonic the Hedgehog through so-called ‘zones’, each divided into individual ‘acts’. In all Sonic games rings are collected; the main character loses them when touching an opponent, and if he is hit while carrying no rings he loses an extra life. In the classic main games, once all extra lives and continues have been used up, a game over means starting all over again.

The task of the student group was to implement an agent in the game and, finally, to give a presentation about the project. To accomplish this overall goal, some intermediate goals had to be achieved (a minimal code sketch of a possible starting point follows the list below):

1) Acquire an understanding of artificial intelligence and neural networks

2) Gain advanced knowledge of the Python programming language

3) The AI should master different levels independently
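The students’ own code is not reproduced in the post, but a minimal sketch of a possible starting point might look like the following, using the Gym Retro environment that wraps the Sega Mega Drive Sonic games. The game and state names, and the use of gym-retro at all, are assumptions for illustration (the Sonic ROM has to be obtained and imported separately), and the loop below simply presses random buttons; a reinforcement learning agent would replace the random action choice.

```python
# Minimal sketch: stepping through a Sonic level with Gym Retro (no learning yet).
# Assumptions for illustration: gym-retro is installed and the Sonic ROM has been
# imported with `python -m retro.import <path-to-rom>`.
import retro

def run_random_episode(game: str = "SonicTheHedgehog-Genesis",
                       state: str = "GreenHillZone.Act1",
                       max_steps: int = 5000) -> float:
    env = retro.make(game=game, state=state)
    env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.action_space.sample()      # random button presses;
        _, reward, done, _ = env.step(action)   # an RL agent would choose the action here
        total_reward += reward                  # reward reflects progress through the level
        if done:                                # level finished or all lives lost
            break
    env.close()
    return total_reward

if __name__ == "__main__":
    print("Episode reward:", run_random_episode())
```

From a starting point like this, the learning part of the project consists of replacing the random action with a policy (for example a neural network) that is trained on the reward signal.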

How is the project structured?

Trainees of the vocational school “information technology assistants” (German: “Informationstechnische*r Assistent*in”) took part in the AI project. The AI project took place in the second year of training within the framework of the learning field “Planning, implementing and evaluating projects” (practice). The total time required was 160 hours per school year. The project meetings usually took place on a full day of lessons. The students had the opportunity to work in the computer room or in the corresponding workshops of the school. During this time, a teacher was present to provide support but did not actively participate in the project.

As part of the AI project, the students were given a presentation on project management by the responsible teacher. With this knowledge, team rules were established, field analyses were carried out, a target matrix was created, and a schedule and work packages were drawn up. The individual work packages were assigned performance specifications and outputs that had to be delivered, and responsibilities for the work packages were defined. Furthermore, the students were assigned roles within the project group, for example:

Team speaker: Moderates the group work and makes sure that everyone can get involved, that the topic is worked on consistently, and that the team rules are observed.

Timekeeper: Makes sure that the timetable is respected.

Foreign Minister: Communicates with people outside the team, maintains contact, and involves people.

What do the trainees learn in the project?

The trainees were able to acquire both technical and social skills in the course of this project. On the one hand, they learned project-oriented work in a group: they set themselves goals and divided and organised their work independently. On the other hand, they independently tackled a programming language (Python) that was new to them and learned its basics to the extent that they were able to understand, modify, and create programs. In addition, the trainees dealt with the basics of neural networks and the different terms of machine learning, so that they were able to present the basics to their fellow students and explain the terms. They acquired this knowledge mainly by watching videos; they made less use of textbooks, because these mostly treated the topic in a very mathematical way and the students’ mathematical knowledge was not sufficient for this.

They dealt with the topic of deep reinforcement learning and were able to program an agent, or modify existing programs, to the extent that the agent learned to improve its play. In the end, they got far enough into the programming that they were able to explain to their classmates which parameters they had to adjust so that their agent could improve its game.

Reflection and Recommendations for other teachers

The supervising teacher reports in the interview that basic knowledge in the field of AI is becoming increasingly important for information technology assistants since, in the context of the digitalised working world, processes are increasingly influenced by algorithms and the use of computers. In addition, many of the students attend the technical secondary school (In Germany: Fachoberschule für Technik) after their vocational schooling in order to subsequently complete a corresponding course of study. Since the students have to deal with the topic of artificial intelligence at the latest then, it makes sense to deal with it already in the vocational school. In the project documentation, the students report that it was surprisingly easy to acquire basic knowledge about AI. However, they emphasize that the deeper immersion in the subject matter was an obstacle, as more complex mathematical knowledge would have been necessary. The students report that reading about this AI content sometimes led to lower motivation and productivity. Overall, however, the students report that the choice of project was a good decision and that they have gained an advanced understanding of AI and its practical implementation.

When asked about what needs to happen on the part of the school and the teachers so that such projects can be practiced regularly, the teacher interviewed reported that, on the one hand, appropriate further training for the teachers is necessary. Besides the transfer of knowledge about AI, the joint development of teaching concepts should be more important. In addition, existing teaching materials should be jointly reviewed and classified. Useful material could then be made available to interested colleagues as Open Educational Resources. The exchange with product developers is considered desirable in the area of teacher training. In such a framework, the social, political, and sociological aspects of AI should be discussed more critically.

The teacher recommends that the students have a say in choosing the appropriate topic. Students need motivation and perseverance to work in project groups, so it is an advantage if the project tasks are linked to the students’ interests. In addition, clear evaluation criteria should be established and communicated transparently.

Pathways to Future Jobs

June 1st, 2020 by Graham Attwell

[Image: katielwhite91 (CC0), Pixabay]

Even before the COVID-19 crisis and the consequent looming economic recession, labour market researchers and employment experts were concerned about the prospects for the future of work due to automation and Artificial Intelligence.

The jury is still out concerning the overall effect of automation and AI on employment numbers. Some commentators have warned of drastic cuts in jobs, while more optimistic projections speculate that, although individual occupations may suffer, the end effect may even be an increase in employment as new occupations and tasks emerge.

There is, however, general agreement on two things: first, that there will be disruption to many occupations, in some cases leading to a drastic reduction in the numbers employed; and second, that the tasks involved in different occupations will change.

In such a situation it is necessary to provide pathways for people from jobs at risk from automation and AI into new and, hopefully, secure employment. In the UK, Nesta is running the CareerTech Challenge programme, aimed at using technology to support the English Government’s National Retraining Scheme. In Canada, the Brookfield Institute has produced a research report, ‘Lost and Found: Pathways from Disruption to Employment’, proposing a framework for identifying and realizing opportunities in areas of growing employment, which, they say, “could help guide the design of policies and programs aimed at supporting mid-career transitions.”

The framework is based on using Labour Market Information. But, as the authors point out, “For people experiencing job loss, the exact pathways from shrinking jobs to growing opportunities are not always readily apparent, even with access to labour market information (LMI).”

The methodology is based on the identification of origin occupations and destination occupations. Origin occupations are jobs which are already showing signs of employment decline, regardless of the source of the disruption. Destination occupations are future-oriented jobs into which individuals from an origin occupation can reasonably be expected to transition: they are growing, competitive and relatively resilient to shocks.

Both origin and destination occupations are identified by an analysis of employment data.

They are matched by analysing the underlying skills, abilities, knowledge and work activities they require, based on data from the O*NET program. Basically, the researchers were looking for a high match, of 80 or 90 per cent. They were also looking for destination occupations which would offer an increase in pay – or at least no decrease.
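The report itself does not publish code, but the core matching step can be illustrated with a toy sketch: each occupation is represented as a vector of O*NET-style skill ratings, pairs are scored with cosine similarity, and only matches above a high threshold are kept. The occupations, skill descriptors and numbers below are invented for illustration only and are not taken from the Brookfield analysis.

```python
# Toy sketch of skills-based occupation matching (illustrative data only).
import numpy as np

# 0-1 importance ratings for a handful of O*NET-style descriptors:
# [active listening, critical thinking, equipment maintenance, programming, service orientation]
OCCUPATIONS = {
    "machine operator":       np.array([0.4, 0.5, 0.9, 0.2, 0.3]),
    "maintenance technician": np.array([0.4, 0.6, 0.9, 0.3, 0.3]),
    "customer service agent": np.array([0.9, 0.6, 0.1, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_destinations(origin: str, threshold: float = 0.9):
    """List occupations whose skill profile closely matches the origin occupation."""
    origin_vec = OCCUPATIONS[origin]
    return [(name, round(cosine(origin_vec, vec), 2))
            for name, vec in OCCUPATIONS.items()
            if name != origin and cosine(origin_vec, vec) >= threshold]

print(candidate_destinations("machine operator"))
# A real analysis would then filter destinations on projected growth and on pay.
```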

But even then, some qualitative analysis is needed. For instance, even with a strong skills match, a destination occupation might require certification involving a lengthy or expensive training programme. Thus, it is not enough to rely on the numbers alone. Yet if such pathways can be identified, then it could be possible to provide bespoke training programmes to support people in moving between occupations.

The report emphasises that skills are not the only issue and discusses other factors that affect a worker’s journey, thereby, they say “grounding the model in practical realities. We demonstrate that exploring job pathways must go beyond skills requirements to reflect the realities of how people make career transitions.”

These could include personal confidence or willingness or ability to move for a new job. They also include the willingness of employers to look beyond formal certificates as the basis for taking on new staff.

The report emphasises the importance of local labour market information. That automation and AI are impacting very differently in different cities and regions is also shown in research from both Nesta and the Centre for Cities in the UK. Put quite simply, in some cities there are many jobs likely to be hard hit by automation and AI; in other cities far fewer. Of course, such analysis is going to be complicated by COVID-19. Cities such as Derby in the UK have a high percentage of jobs in the aerospace industry, and these previously seemed relatively secure: this is now not so.

In this respect there is a problem with freely available labour market information. The Brookfield Institute researchers were forced to base their work on the Canadian 2006 and 2016 censuses, which, as they admit, was not ideal. In the UK, data on occupations and employment from the Office for National Statistics is not available at a city level, and it is very difficult to match up qualifications to employment. If similar work is to be undertaken in the UK, there will be a need for more disaggregated local labour market information, some of which may already be being collected through city governments and Local Enterprise Partnerships.

Creatively working with AI

May 14th, 2020 by Graham Attwell

A major theme in the research literature about Artificial Intelligence (AI) and the future of work is the potential of people working alongside or with AIs. However, it is quite hard to visualize what that might mean, outside the sphere of maintenance technicians in automated factories.

This week’s edition of the Raconteur online magazine provides six examples of people working with AI, drawn from very different occupations and contexts:

  • Furniture designer
  • Journalist
  • Filmmaker
  • Musician
  • Fragrance developer

The article says: “Artificial intelligence is shaking up the world of work, automating out routine tasks and freeing workers to concentrate on the more creative elements of their job. But it can often be surprisingly good at mimicking human creativity, and with varying levels of human involvement is now making inroads into new areas of work.”


COVID-19, AI and automation

May 13th, 2020 by Graham Attwell

[Image: jarmoluk (CC0), Pixabay]

It is worth thinking about how the COVID-19 pandemic will affect the future development and implementation of AI and automation. Of course, such speculation is problematic – there are many factors coming into play – not least the length of the crisis and the impact of the resulting economic downturn on economies and on business.

A paper published by the World Economic Forum suggests that “COVID-19 could spur automation and reverse globalization – to some extent”.

Citing as an example the current supply shortages of critical medical equipment, such as Personal Protective Equipment in the UK, the authors say: “The current COVID-19 pandemic has fully exposed the vulnerabilities of global value chains (GVCs) which are characterised by high interdependencies between global lead firms and suppliers located across several continents.”

They go on to say that: “Long before the COVID-19 pandemic, in an effort to mitigate supply chain risks, increase flexibility, and improve product standards, global lead firms have relied on Industry 4.0 technologies, such as robots, 3D printing, and smart factories, and occasionally reshored parts of their production.”

The authors think the current crisis will further spur automation and reshoring in global value chains, reducing reliance on “low-skill, low-cost labour in manufacturing”, with production moving to a more regional basis, closer to final consumer markets. However, they think it unlikely that entire supply chains will be automated in the short term, due to a shortage of skilled workers able to operate the machines and the cost of automated production for products with low value-to-weight ratios.

Furthermore, growing demand for mid-range consumer goods in emerging markets and the availability of cheap labour in those markets could actually slow down the trend towards automation and reshoring.

CareerChat Bot

May 7th, 2020 by Graham Attwell
[Image: mohamed_hassan (CC0), Pixabay]

Pontydysgu is very happy to be part of a consortium, led by DMH Associates, selected as a finalist for the CareerTech Challenge Prize!

The project is called CareerChat, and the ‘pitch’ video above explains the ideas behind the project. CareerChat is a chatbot providing a personalised, guided career journey experience for working adults aged 24 to 65 in low-skilled jobs in three major cities: Bristol, Derby and Newcastle. It offers informed, friendly, flexible and high-quality local, contextual and national labour market information, including specific course and training opportunities and job vacancies, to support adults within ‘at risk’ sectors and occupations.

CareerChat incorporates advanced AI technologies, database applications and Natural Language Processing and can be accessed on computers, mobile phones and devices. It allows users to reflect, explore, find out and identify pathways and access to new training and work opportunities.
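The post does not describe CareerChat’s internals, but a core step in most such chatbots is intent matching: working out what the user is asking for, then looking up the relevant local labour market information. The sketch below is a deliberately naive, hypothetical illustration of that step; the intents, keywords and responses are invented, and a production system would use a proper NLP/NLU pipeline rather than keyword counting.

```python
# Hypothetical sketch of the intent-matching step in a careers chatbot.
# Intents, keywords and responses are invented for illustration only.
INTENTS = {
    "find_courses":  ["course", "training", "learn", "qualification"],
    "job_vacancies": ["vacancy", "vacancies", "job", "hiring"],
    "career_change": ["change career", "new career", "switch", "retrain"],
}

RESPONSES = {
    "find_courses":  "Here are local courses in {city} that match your interests...",
    "job_vacancies": "These employers in {city} are currently advertising...",
    "career_change": "Let's look at occupations your current skills could transfer to...",
}

def classify(message: str):
    """Very naive keyword matcher; a real bot would use an NLP/NLU service."""
    text = message.lower()
    scores = {intent: sum(kw in text for kw in kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def reply(message: str, city: str = "Bristol") -> str:
    intent = classify(message)
    if intent is None:
        return "Could you tell me a bit more about what you are looking for?"
    return RESPONSES[intent].format(city=city)

print(reply("Are there any part-time courses near me?"))
```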

Nesta is delivering the CareerTech Challenge in partnership with the Department for Education as part of its National Retraining Scheme.

  • Nesta research suggests that more than six million people in the UK are currently employed in occupations that are likely to radically change or entirely disappear by 2030 due to automation, population aging, urbanisation and the rise of the green economy.
  • In the nearer-term, the coronavirus crisis has intensified the importance of this problem. Recent warnings suggest that a prolonged lockdown could result in 6.5 million people losing their jobs. [1] Of these workers, nearly 80% do not have a university degree. [2]
  • The solutions being funded through the CareerTech Challenge are designed to support people who will be hit the hardest by an insecure job market over the coming years. This includes those without a degree, and working in sectors such as retail, manufacturing, construction and transport.

You can find out more information about the programme here: https://www.nesta.org.uk/project/careertech-challenge/, and email Graham Attwell directly if you would like to know more about the CareerChat project.

