Archive for the ‘Assessment’ Category

Recommenders or e-Portfolios

September 24th, 2015 by Graham Attwell

I was interested in a comment by Stephen Downes in yesterday’s OLDaily. Stephen says:

(People rating) will replace grades and evaluations. Because, when you have an entire network evaluating people, why on earth would you fall back on something as imprecise as a test? (p.s. smart network-based evaluations are what finally break up ‘old boy networks’ that mutually support each other with positive recommendations).

He was commenting on an article on the CBC News website about a new app being developed in Calgary. Julia Cordray, CEO and co-founder of the people-rating app, said: “You’re going to rate people in the three categories that you can possibly know somebody — professionally, personally or romantically.”

As Stephen commented, there is really nothing new about this app. We experimented with this sort of thing in the MatureIP project, but instead of asking people to rate other people, we asked them to list other people’s skills and competences. Despite my misgivings it worked well in a six-month trial in a careers company in the north of England. What we found, I think, was that official records of what people can do and what their skills are are scanty and often not accessible, and that people are often too shy to promote their own abilities.

But coming back to Stephen’s comments, I tend to think that smart network-based recommendations may only strengthen old boy networks, rather than break them up. In research into Small and Medium Enterprises we found that owner-managers rarely worried too much about qualifications, preferring to hire based on recommendations from existing employees or from their own social (offline, at that time) networks. Yes, of course tests are imprecise. But tests are open(ish) to all and were once seen as a democratising factor in job selection. Indeed some organisations are moving away from degrees as a recruitment benchmark, given their poor performance as a predictor of future success. But it doesn’t seem to me that recommendation services such as those LinkedIn already deploys are going to help greatly, even with smart network-based evaluations. I still see more future in e-Portfolios and Personal Learning Networks, which allow users to show and explain their experience. I am a bit surprised at how quiet the e-Portfolio scene has been of late, but then again the Technology Enhanced Learning community is notorious for dropping ideas to jump on the latest trendy bandwagon.

Talking sense about assessment

September 24th, 2015 by Graham Attwell

In recent years education seems to have become obsessed with metrics and assessment, comparisons and league tables. I do not think the outpourings, enquiries, research, recommendations and so on have done much, if anything, to improve the standard of education, let alone learning, and certainly nothing to improve the experience of learners.

So I found this blurb for the Brian Simon memorial lecture by Alison Peacock refreshing:

Alison will focus on her experience of leading the Wroxham School, a primary school where children and adults are inspired by the principles of ‘Learning without Limits’ to establish a learning community which is creative, inclusive and ambitious for all.  She will talk about ways of enabling assessment to be an intrinsic part of day to day teaching that builds a culture of trust in which children can think and talk about their learning confidently.  Alison will also discuss her role as a member of the government’s Commission on Assessment without Levels,  and her optimism that the removal of levels provides space for a more equitable approach to assessment within classrooms.

Years ago, I was part of a working team for the UK National Council for Vocational Qualifications looking at the whole issue of levels. We concluded that levels were an unworkable construct and that the (then new) National Vocational Qualifications would work much better without them. Needless to say our advice was ignored. So it is nice to see the proposal coming round again.

A flyer on Alison’s lecture can be downloaded here.

Issues in developing and implementing e-Portfolios

February 7th, 2013 by Graham Attwell

Diagramme: @lee74 (some rights reserved)

One of the issues driving the adoption of technology for learning in organisations – particularly in sectors and occupations such as teaching and the medical sector – is the need to show continuing professional development as a requirement for continuing registration.

Many organisations are looking to some form of e-Portfolio to meet this need. Yet there is a tension between the use of e-Portfolios to record and reflect on learning, as a tool for learning itself, and as a means of assessment.

A recently published study, (lif)e-Portfolio: a framework for implementation (PDF download) by Lee D Ballantyne, from Cambridge International Examinations (CIE) and University of Cambridge ESOL Examinations (ESOL), examines some of these issues.

Ballantyne says:

There has been much recent discussion (e.g. Barrett, 2009; JISC, 2012d) concerning the dichotomy of e-portfolios which have the primary purpose of learning versus those which have the primary purpose of assessment. E-portfolio systems developed specifically for assessment purposes often forgo key elements of the learner-centred e-portfolio: social tools, longevity, and personalisation. By contrast, e-portfolios primarily for learning often lack the award-specific structure and reporting tools required for assessment (see Appendix II). A suitable e-portfolio solution must take into consideration the backwash of assessment and that “from the students’ point of view assessment always defines the actual curriculum” (Ramsden, 1992, p 187), and when the purpose of an e-portfolio changes from a learning tool to summative assessment it becomes “something that is done to them rather than something they WANT to maintain as a lifelong learning tool” (Barrett, 2004a). There is a clear link between an assessment purpose and lack of engagement (Tosh et al., 2005) and yet CIE and ESOL both have stakeholder groups (teachers and trainee teachers) who straddle both learner (professional development) and candidate (teaching awards). The main challenge is to convey the value of the whole e-portfolio to all stakeholders; to find the right balance between assessment-driven (institution-centric) requirements and learner-driven (user-centric) requirements; and to achieve a level of standardisation yet allow for personalisation and creativity (Barrett, 2009). This unprecedented link between teaching, learning and high stakes assessment is fundamentally disruptive: pedagogically, organisationally and technologically (Baume cited in Taylor & Gill, 2006, p 4; Cambridge, 2012; Eynon cited in Shada et al., 2011, p 75), and planning for successful implementation is critical (JISC, 2012e; Joyes et al., 2010; Meyer & Latham, 2008; Shada et al., 2011).

Innovating Pedagogy

August 19th, 2012 by Graham Attwell

The UK Open University have launched an interesting new series, Innovating Pedagogy. The series of reports is intended to explore new forms of teaching, learning and assessment for an interactive world, to guide teachers and policy makers in productive innovation.

Mike Sharples explains:

We wanted to distinguish our perspective from that of the EDUCAUSE Horizon reports, which start from a consideration of how technologies may influence education. I would argue that ours aren’t ‘technology-driven opportunities’, but are rather an exploring of new and emerging forms of teaching, learning and assessment in an age of technology. All innovations in education nowadays are framed in relation to technology, but that doesn’t mean they are ‘technology driven’. So, for example, personal inquiry learning is mediated and enhanced by technology, but not driven by it.

We had a long discussion over ‘pedagogies’. The problem is that there isn’t a word in English that means ‘the processes of teaching, learning and assessment’. I would argue that in current usage ‘pedagogy’ has broadened from a formal learning experience conducted by a teacher, as we have become more aware of the opportunities for peer learning, non-formal apprenticeship etc. The origin of the word isn’t ‘teacher’ but ‘slave who took children to and from school’. We were careful to indicate in the Introduction our usage of the word: “By pedagogy we mean the theory and practice of teaching, learning, and assessment.” So, within that usage are practices that might contribute towards effective learning, such as creating and sharing annotations of textbooks.

The ten trends explored in the first report are:

Although the list may seem a little idiosyncratic, the authors emphasise that the themes are often interlinked in practice. I wonder, though, if there is something of a contradiction between Assessment for Learning and Learning Analytics?

I am also interested in the definition of rhizomatic learning: “supporting rhizomatic learning requires the creation of a context within which the curriculum and knowledge are constructed by members of a learning community and which can be reshaped in a dynamic manner in response to environmental conditions. The learning experience may build on social, conversational processes, as well as personal knowledge creation, linked into unbounded personal learning networks that merge formal and informal media.”

Recognising learning with badges

June 19th, 2012 by Graham Attwell

Moving into uncharted waters: are open badges the future for skills accreditation?

I am ever more interested in the idea of badges in general and the Mozilla Badges project in particular.

Having said this, I think some of the pilot work has been on the wrong track – providing accreditation for vocational competence in fields with pre-existing qualifications, rather than looking at areas lacking existing forms of recognition.

Badges should be about recognising learning. And it is probably more important in motivating learners that they are able to recognise their own learning. So I see badges as an extension to the assessment for learning movement. In this respect the sample badge on the Mozilla Open Badges project site is unhelpful. I know it is up to the provider to determine the forms of assessment and that Mozilla does not determine who can become a provider. But the example will inevitably influence how potential providers view badges. Assessment needs to be an active process, contributing both to the learner’s understanding and to facilitating the process of recognition. Simple check boxes as in the example do neither.

I like the Mozilla Backpack, and obviously a great deal of effort is being put into developing a robust architecture. But just as important as the electronic badges is something learners can display. Jenny Hughes has suggested we should provide learners with a badge holder (at least for younger learners) and that they should be allowed to select one badge to wear to school each day.

The badges could look very similar to the popular football cards being distributed by German supermarkets. If you look at the back of the card, there is even space for several metadata fields.

In a follow up post I will discuss several practical ideas for piloting the badges.

Open Learning Analytics or Architectures for Open Curricula?

February 12th, 2012 by Graham Attwell

George Siemens’ latest post, based on his talk at TEDxEdmonton, makes for interesting reading.

George says:

Classrooms were a wonderful technological invention. They enabled learning to scale so that education was not only the domain of society’s elites. Classrooms made it (economically) possible to educate all citizens. And it is a model that worked quite well.

(Un)fortunately things change. Technological advancement, coupled with rapid growth of information, global connectedness, and new opportunities for people to self-organize without a mediating organization, reveals the fatal flaw of classrooms: slow-developing knowledge can be captured and rendered as curriculum, then be taught, and then be assessed. Things break down when knowledge growth is explosive. Rapidly developing knowledge and context requires equally adaptive knowledge institutions. Today’s educational institutions serve a context that no longer exists and its (the institution’s) legacy is restricting innovation.

George calls for the development of an open learning analytics architecture based on the idea that: “Knowing how schools and universities are spinning the dials and levers of content and learning – an activity that ripples decades into the future – is an ethical and moral imperative for educators, parents, and students.”

I am not opposed to what he is saying, although I note Frances Bell’s comment about privacy of personal data. But I am unsure that such an architecture really would improve teaching and learning – and especially learning.

As George himself notes, the driving force behind the changes in teaching and learning that we are seeing today is the access afforded by new technology to learning outside the institution. Such access has largely rendered irrelevant the old distinctions between formal, non-formal and informal learning. OK – there is still an issue in that accreditation is largely controlled by institutions, which naturally place much emphasis on learning which takes place within their (controlled and sanctioned) domain. Yet even this is being challenged by developments such as Mozilla’s Open Badges project.

Educational technology has played only a limited role in extending learning. In reality we have provided access to educational technology to those already within the system. But the adoption of social and business software for learning – as recognised in the idea of the Personal Learning Environment – and the similar adaptation of these technologies for teaching and learning through Massive Open Online Courses (MOOCs) – have moved us beyond the practice of merely replicating traditional classroom architectures and processes in technology.

However there remain a series of problematic issues. Perhaps foremost is the failure to develop open curricula – or, better put, to rethink the role of curricula for self-organized learning.

For better or worse, curricula traditionally played a role in scaffolding learning – guiding learners through a series of activities to develop skills and knowledge. These activities were graded, building on previously acquired knowledge in developing a personal knowledge base which could link constituent parts, determining how the parts relate to one another and to an overall structure or purpose.

As Peter Pappas points out in his blog on ‘A Taxonomy of Reflection’, this in turn allows the development of what Bloom calls ‘Higher Order Reflection’ – enabling learners to combine or reorganize elements into a new pattern or structure.

Vygotsky recognised the importance of a ‘More Knowledgeable Other’ in supporting reflection in learning through the Zone of Proximal Development. Such an idea is reflected in the development of Personal Learning Networks, often utilising social software.

Yet the curricula issue remains – and especially the issue of how we combine and reorganise elements of learning into new patterns and structures without the support of formal curricula. This is all the more so since traditional subject boundaries are breaking down. Present technology support for this process is very limited. Traditional hierarchical folder structures have been supplemented by keywords, and with some effort learners may be able to develop their own taxonomies based on metadata. But the process remains difficult.

So – if we are to go down the path of developing new open architectures – my priority would be an open architecture for curricula. Such an architecture would play a dual role: supporting self-organised learning for individuals while at the same time supporting emergent rhizomatic curricula at a social level.


Using technology to develop assessment for learning

January 21st, 2012 by Graham Attwell

Assessment isn’t really my thing. That doesn’t mean I do not see it as important. I am interested in learning. Assessment for learning should help teachers and learners alike in developing their learning. But all too often assessment has little to do with learning. Indeed assessment has emerged as a barrier to the development of effective teaching and learning strategies, especially collaborative learning using web 2.0 and social software tools.

This presentation by Luis Tinoca follows the present trend of adding 2.0 to the end of everything, but is a useful exploration of how we can use technologies to support assessment for learning.

Open online seminar

January 21st, 2012 by Graham Attwell

Jisc are hosting an open, online seminar on ‘Making Assessment Count (MAC)’ on Friday 3rd Feb – 1-2pm. The presenters are Professor Peter Chatterton (Daedalus e-World Ltd) and Professor Gunter Saunders (University of Westminster).

The mailing for the seminar says: “The objective of Making Assessment Count is primarily to help students engage more closely with the assessment process, either at the stage where they are addressing an assignment or at the stage when they receive feedback on a completed assignment. In addition an underlying theme of MAC is to use technology to help connect student reflections on their assessment with their tutors. To facilitate the reflection aspect of MAC a web based tool called e-Reflect is often used. This tool enables the authoring of self-review questionnaires by tutors for students. On completion of an e-Reflect questionnaire a report is generated for the student containing responses that are linked to the options the student selected on the questionnaire.”

You can find out more and sign up for the seminar on the Jisc website.

PISA vs Politics

November 4th, 2011 by Jenny Hughes

After a particularly tedious week and the prospect of a working weekend, Friday afternoon did not promise a lot. However, the last thing in the electronic in-tray today was to have a look at the entries for a competition Pontydysgu is sponsoring as part of the Learning About Politics project.

The competition was aimed at 8-14 year olds and asked them to write a story using any combination of digital media

“The theme for your story should be on a political event that has happened – or is currently happening – in Wales.
We are not just interested in the facts but on your opinions and impressions. For example, how do you feel about the event you are describing? Who do you agree with and why? What have been the consequences of the event you have chosen?”

Suddenly life got a lot better! The black and white world of education that I seem to have lived in for the last few weeks was in brilliant technicolour. The stories were variously funny, poignant, angry, persuasive and insightful. All of them were well researched, referenced, technically at a level that would put many class teachers to shame and above all, they entertained me and taught me a whole lot I didn’t know. Surely the definition of a good learning experience!

(And by the time I had settled down with a glass of wine and a cigarette, the learning environment seemed pretty good as well).

The thing that cheered me up the most was that these kids had opinions – well argued, well expressed and authentic. I was pretty rubbish at history (Was? ‘Am’ actually! More maths and physics, me…) but short of those exam questions which always started “Compare and contrast…” or “What arguments would you use to support … something”, I don’t ever remember being allowed to have a ‘real’ opinion on anything historical, still less being encouraged to express one if I did. Especially not in primary school – I think I was doing post-grad before I earned that privilege.

Which brings me on to my main point! There is a great public panic at the moment about Wales’s performance in the Programme for International Student Assessment (PISA) because they are two beans behind somewhere or other, half a Brownie point below an average or a nanopoint lower than last time. Puhlease!!

I am not being dismissive from a point of total ignorance here – some years ago I worked on the PISA statistics and the methodology for several months; I even remember doing a keynote presentation on PISA at the European Conference on Educational Research. Nor am I suggesting that standards do not matter. What I am saying is that the ‘Ain’t it awful’ media frenzy generated by the Smartie-counting exercise that is PISA – and the politicians’ heavy-handed response – does a huge disservice to this generation of feisty, articulate and confident kids. And to the amazing generation of teachers who scaffold their learning.

Working in Pontydysgu, being a teacher trainer and a very active school governor means that I spend a lot of time in classrooms and my contention is that 99% of teachers are doing a fantastic job under pretty rubbish conditions. (Did I say this in a previous post? Yes? Well I don’t care – it needs to be shouted from the roof tops).

So what am I going to do about it? Firstly, I am tempted to rewrite the newspaper headlines showing that Welsh education is improving and is better than ‘average’. A claim I could easily back up by a different manipulation of the PISA figures. Secondly, I could point out that the PISA survey takes place every four years but that changes at the lower age ranges – such as the introduction of the new 3-7 yr old Foundation Phase in Wales (which is awesome) – will not impact on PISA results for another nine years, so knee-jerk changes to ‘fix’ things seem a bit premature. Thirdly, I could argue that putting so much store on paper-based testing in Reading, Maths and Science as the measure of success of ‘a broad and balanced curriculum’ and ‘pupil-centred, experiential learning’ is a bit of an oxymoron. Fourthly, I could remind our government that Wales led the way on getting rid of SATs and league tables on the very valid grounds that comparisons are unfair because they are not comparing like with like. They funded research which showed standardised testing to be unhelpful and demotivating, and that it did nothing to improve performance. So on a local and national level they don’t work – do they suddenly work on an international one? Or maybe I should become a politician and take on the establishment in the debating chamber – but hey! I’ve just found there’s a whole new generation of politically astute, sussed and sorted 10-year-olds who are going to do that much better than I could. Fifteen years from now, it’s going to be ‘move over, Minister!’ Leighton Andrews – ‘your’ education system has much to be proud of.

P.S. I might put some of the entries on the Pontydysgu website over the next few weeks so that you can see for yourself. Any teacher interested in getting their kids to write and publish political stories too, have a look at the Learning About Politics website and get back to us.

Open Badges, assessment and Open Education

August 25th, 2011 by Graham Attwell

I have spent some time this morning thinking about the Mozilla Open Badges and assessment project, spurred on by the study group set up by Doug Belshaw to think about the potential of the scheme. And the more I think about it, the more I am convinced of its potential as perhaps one of the most significant developments in the move towards Open Education. First though a brief recap for those of you who have not already heard about the project.

The Open Badges framework, say the project developers, is designed to allow any learner to collect badges from multiple sites, tied to a single identity, and then share them out across various sites — from their personal blog or web site to social networking profiles. The infrastructure needs to be open to allow anyone to issue badges, and for each learner to carry the badges with them across the web and other contexts.
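The portable metadata at the heart of this idea can be sketched concretely. The following is a hypothetical illustration of the kind of assertion a badge issuer might publish so that other sites can display and verify an award – the field names, URLs and values are invented for the example, not the official Mozilla Open Badges schema:

```python
import json

# Hypothetical badge assertion: the metadata an issuer might publish
# alongside a badge. Field names and URLs are illustrative only,
# not the actual Open Badges specification.
assertion = {
    "recipient": "learner@example.org",
    "badge": {
        "name": "OpenStreetMapper",
        "description": "Can edit OpenStreetMap using the Potlatch 2 editor",
        "criteria": "https://example.org/badges/osm-mapper/criteria",
        "issuer": "School of Webcraft (P2PU)",
    },
    # A link back to the learner's actual work, so the badge is
    # verifiable evidence rather than a bare claim.
    "evidence": "https://example.org/learners/jsmith/osm-edits",
    "issued_on": "2011-08-25",
}

# Serialised as JSON, this could travel with the learner across sites.
print(json.dumps(assertion, indent=2))
```

The important design point is the “evidence” link: because the assertion points back to the learner’s work, any site displaying the badge can let a viewer inspect what was actually done.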

Now some of the issues. I am still concerned about attempts to establish taxonomies, be they hierarchies of award structures or classifications of different forms of ability / competence / skill (pick your own terminology). Such undertakings have bedevilled attempts to introduce new forms of recognition, and I worry that those coming more from the educational technology world may not realise the pitfalls of taxonomies and levels.

Secondly, there is the issue of credibility. There is a twofold danger here. One is that badges will only be adopted for achievements in areas / subjects / domains presently outside ‘official’ accreditation schemes and will thus be marginalised. There is also a danger that, in the desire to gain recognition, badges will be effectively benchmarked against present accreditation programmes (e.g. university modules / degrees) and thus become subject to all the existing restrictions of such accreditation.

And thirdly, as the project rolls towards a full release, there may be pressure to restrict badge issuers to existing accreditation bodies, and to concentrate on the technological infrastructure rather than rethinking practices in assessment.

Let’s look at some of the characteristics of any assessment system:

  • Reliability

Reliability is a measure of consistency. A robust assessment system should be reliable; that is, it should yield the same results irrespective of who is conducting it or the environmental conditions under which it takes place. Intra-tester reliability simply means that if the same assessor is looking at your work, his or her judgement should be consistent and not influenced by, for example, another assessment they might have undertaken. Inter-tester reliability means that if two different assessors were given exactly the same evidence, their conclusions should be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances which have no bearing on the evidence.

  • Validity

Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being evaluated or tested and how that is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance evaluation of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

  • Replicability

Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to evaluation would disagree, however.

  • Transferability

Although each assessment is looking at a particular set of outcomes, a good assessment system is one that could be adapted for similar outcomes or could be extended easily to new learning.  Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

  • Credibility

People actually have to believe in the assessment! It needs to be authentic, honest, transparent and ethical. If people question the rigour of the assessment process, doubt the results or challenge the validity of the conclusions, the assessment loses credibility and is not worth doing.

  • Practicality

This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time, costs too much, is cumbersome to use or produces inappropriate outputs, then it is not a good assessment!

Pretty obviously there is going to be a trade-off between different factors. It is possible to design extremely sophisticated assessments which have a high degree of validity. However, such assessments may be extremely time consuming and thus not practical. The introduction of multiple-choice tests through e-learning platforms is cheap and easy. However they often lack face validity, especially for vocational skills and work-based learning.

Let’s try to make this discussion more concrete by focusing on one of the badge pilot assessments at the School of Webcraft.

OpenStreetMapper Badge Challenge

Description: The OpenStreetMapper badge recognizes the ability of the user to edit OpenStreetMap wherever satellite imagery is available in Potlatch 2.

Assessment Type: PEER – any peer can review the work and vote. The badge will be issued with 3 YES votes.

Assessment Details: OpenStreetMap is essentially a Wikipedia for maps. OpenStreetMap benefits from real-time collaboration from thousands of global volunteers, and it is easy to join. Satellite images are available in most parts of the world.

P2PU has a basic overview of what OpenStreetMap is, and how to make edits in Potlatch 2 (Flash required). This isn’t the default editor, so please read “An OpenStreetMap How-To”:

Your core tasks are:

  1. Register with OpenStreetMap and create a username. On your user page, accessible at this link, change your editor to Potlatch 2.
  2. Search for and find a place near you. Find an area where a restaurant, school, or gas station is unmapped, or could use more information. Click ‘Edit’ at the top of the map. You can click one of the icons, drag it onto the map, and release to make it stick.
  3. To create a new road, park, or other 2D shape, simply click to add points. Click other points on the map where there are intersections. Use the Escape key to finish editing.
  4. To verify your work, go to edit your point of interest, click Advanced at the bottom of the editor to add custom tags to this point, and add the tag ‘p2pu’. Make its value be your P2PU username so we can connect the account posting on this page to the one posting on OpenStreetMap.
  5. Submit a link to your OpenStreetMap edit history. Fill in the blank in the following link with your OpenStreetMap username.

You can also apply for the Humanitarian Mapper badge.

Assessment Rubric:

  1. Created OpenStreetMap username
  2. Performed point-of-interest edit
  3. Edited a road, park, or other way
  4. Added the tag p2pu and the value [username] to the point-of-interest edit
  5. Submitted link to OpenStreetMap edit history or user page to show what edits were made

NOTE for those assessing the submitted work. Please compare the work to the rubric above and vote YES if the submitted work meets the requirements (and leave a comment to justify your vote) or NO if the submitted work does not meet the rubric requirements (and leave a comment of constructive feedback on how to improve the work)
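The peer-review rule above – issue the badge once three peers vote YES, with every reviewer asked to justify the vote in a comment – is simple enough to sketch. Function names and the comment-required rule are illustrative assumptions, not the actual P2PU implementation:

```python
def badge_awarded(votes, threshold=3):
    """Peer-assessment rule sketched from the challenge above: the badge
    is issued once `threshold` reviewers vote YES. Each vote is a
    (decision, comment) pair; a YES only counts if the reviewer has
    left a justifying comment, as the NOTE requests."""
    justified_yes = [c for decision, c in votes if decision == "YES" and c.strip()]
    return len(justified_yes) >= threshold

# Invented example: four peer reviews of one submission.
votes = [
    ("YES", "All five rubric items met; edit history checks out."),
    ("NO", "Tag p2pu missing from the point-of-interest edit."),
    ("YES", "Road edit and tags verified on the map."),
    ("YES", "Username links the P2PU and OSM accounts correctly."),
]
print(badge_awarded(votes))  # three justified YES votes, so the badge is issued
```

A design choice worth noting: discarding unjustified YES votes keeps the feedback requirement from being optional in practice, which matters if (as argued below) the assessment is to be formative as well as summative.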

CC-BY-SA JavaScript Basic Badge used as template.

Pretty clearly this assessment scores well on validity and also looks to be reliable. The template could easily be transferred, as indeed it has been in the pilot. It is also very practical. However, much of this is due to the nature of the subject being assessed – it is much easier to use computers for assessing practical tasks which involve the use of computers than for tasks which do not!

This leaves the issue of credibility. I have to admit I know nothing about the School of Webcraft, neither do I know who the assessors for this pilot were. But it would seem that instead of relying on external bodies in the form of examination boards and assessment agencies to provide credibility (deserved or otherwise), if the assessment process is integrated within communities of practice – and indeed assessment tasks such as the one given above could become a shared artefact of that community – then the badge could gain credibility. And this seems a much better way of building credibility than trying to negotiate complicated arrangements whereby n number of badges at n level would be recognised as equivalent to a degree or other ‘traditional’ qualification.

But let's return to some of the general issues around assessment.

So far most of the discussions about the Badges project seem to be focused on summative assessment. But there is considerable research evidence that formative assessment is critical for learning. Formative assessment can be seen as

“all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.”

Black and Wiliam (1998)

And that is where the Badges project could come of age. One of the major problems with Personal Learning Environments is the difficulty learners have in scaffolding their own learning. The development of formative assessment to provide (online) feedback to learners could help them develop their personal learning plans and facilitate or mediate community involvement in that learning. Furthermore, a series of task-based assessments could guide learners through what Vygotsky called the Zone of Proximal Development (and, incidentally, in Vygotsky's terms the assessors would act as More Knowledgeable Others).

In these terms the badges project has the potential not only to support learning taking place outside the classroom but to build a significant infrastructure or ecology to support learning that takes place anywhere, regardless of enrollment on traditional (face to face or distance) educational programmes.

In a second article in the next few days I will provide an example of how this could work.
