Archive for the ‘Assessment’ Category

Recognising learning with badges

June 19th, 2012 by Graham Attwell

Moving into uncharted waters: are open badges the future for skills accreditation?

I am ever more interested in the idea of badges in general and the Mozilla Badges project in particular.

Having said this, I think some of the pilot work has been on the wrong track – providing accreditation for vocational competence in fields with pre-existing qualifications, rather than looking at areas lacking existing forms of recognition.

Badges should be about recognising learning. And it is probably more important in motivating learners that they are able to recognise their own learning. So I see badges as an extension to the assessment for learning movement. In this respect the sample badge on the Mozilla Open Badges project site is unhelpful. I know it is up to the provider to determine the forms of assessment and that Mozilla does not determine who can become a provider. But the example inevitably will influence how potential providers view badges. Assessment needs to be an active process, contributing both to the learner's understanding and to facilitating the process of recognition. Simple check boxes as in the example above do neither.

I like the Mozilla Backpack, and obviously a great deal of effort is being put into developing a robust architecture. But just as important as the electronic badges is something learners can display. Jenny Hughes has suggested we should provide learners with a badge holder (at least for younger learners) and that they should be allowed to select one badge to wear to school each day.

The badges could look very similar to the popular football cards being distributed by German supermarkets. If you look at the back of the card (below) there is even space for several metadata fields.

In a follow up post I will discuss several practical ideas for piloting the badges.

Open Learning Analytics or Architectures for Open Curricula?

February 12th, 2012 by Graham Attwell

George Siemens' latest post, based on his talk at TEDxEdmonton, makes for interesting reading.

George says:

Classrooms were a wonderful technological invention. They enabled learning to scale so that education was not only the domain of society’s elites. Classrooms made it (economically) possible to educate all citizens. And it is a model that worked quite well.

(Un)fortunately things change. Technological advancement, coupled with rapid growth of information, global connectedness, and new opportunities for people to self-organized without a mediating organization, reveals the fatal flaw of classrooms: slow-developing knowledge can be captured and rendered as curriculum, then be taught, and then be assessed. Things breakdown when knowledge growth is explosive. Rapidly developing knowledge and context requires equally adaptive knowledge institutions. Today’s educational institutions serve a context that no longer exists and its (the institution’s) legacy is restricting innovation.

George calls for the development of an open learning analytics architecture based on the idea that: “Knowing how schools and universities are spinning the dials and levers of content and learning – an activity that ripples decades into the future – is an ethical and moral imperative for educators, parents, and students.”

I am not opposed to what he is saying, although I note Frances Bell’s comment about privacy of personal data. But I am unsure that such an architecture really would improve teaching and learning – and especially learning.

As George himself notes, the driving force behind the changes in teaching and learning that we are seeing today is the access afforded by new technology to learning outside the institution. Such access has largely rendered irrelevant the old distinctions between formal, non-formal and informal learning. OK – there is still an issue in that accreditation is largely controlled by institutions, which naturally place much emphasis on learning which takes place within their (controlled and sanctioned) domain. Yet even this is being challenged by developments such as Mozilla’s Open Badges project.

Educational technology has played only a limited role in extending learning. In reality we have provided access to educational technology to those already within the system. But the adoption of social and business software for learning – as recognised in the idea of the Personal Learning Environment – and the similar adaptation of these technologies for teaching and learning through Massive Open Online Courses (MOOCs) – have moved us beyond the practice of merely replicating traditional classroom architectures and processes in technology.

However there remain a series of problematic issues. Perhaps foremost is the failure to develop open curricula – or, better put, to rethink the role of curricula for self-organized learning.

For better or worse, curricula traditionally played a role in scaffolding learning – guiding learners through a series of activities to develop skills and knowledge. These activities were graded, building on previously acquired knowledge in developing a personal knowledge base which could link constituent parts, determining how the parts relate to one another and to an overall structure or purpose.

As Peter Pappas points out in his blog on ‘A Taxonomy of Reflection’, this in turn allows the development of what Bloom calls ‘Higher Order Reflection’ – enabling learners to combine or reorganize elements into a new pattern or structure.

Vygotsky recognised the importance of a ‘More Knowledgeable Other’ in supporting reflection in learning through the Zone of Proximal Development. Such an idea is reflected in the development of Personal Learning Networks, often utilising social software.

Yet the curriculum issue remains – and especially the issue of how we combine and reorganise elements of learning into new patterns and structures without the support of formal curricula. This is all the more so since traditional subject boundaries are breaking down. Present technology support for this process is very limited. Traditional hierarchical folder structures have been supplemented by keywords, and with some effort learners may be able to develop their own taxonomies based on metadata. But the process remains difficult.
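To make the point concrete, here is a minimal sketch (all names and resources hypothetical) of how learner-defined tags, rather than fixed folders, might let a personal taxonomy emerge from the metadata itself:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical resources a learner has collected, each carrying free-form tags
# (learner-defined metadata) rather than sitting in a single folder.
resources = [
    {"title": "Intro to statistics video", "tags": {"statistics", "maths"}},
    {"title": "Survey design checklist", "tags": {"statistics", "research-methods"}},
    {"title": "Blog post on rhizomatic learning", "tags": {"curriculum", "research-methods"}},
]

# Index resources by tag: one resource can sit under several headings at once.
by_tag = defaultdict(list)
for resource in resources:
    for tag in resource["tags"]:
        by_tag[tag].append(resource["title"])

# Count which tags occur together: recurring pairs hint at an emergent structure
# the learner could promote into their own taxonomy.
co_occurring = defaultdict(int)
for resource in resources:
    for pair in combinations(sorted(resource["tags"]), 2):
        co_occurring[pair] += 1

print(dict(by_tag))
print(dict(co_occurring))
```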

So – if we are to go down the path of developing new open architectures – my priority would be an open architecture of curricula. Such an architecture would play a dual role, supporting self-organised learning for individuals while at the same time supporting emergent, rhizomatic curricula at a social level.

 

Using technology to develop assessment for learning

January 21st, 2012 by Graham Attwell

Assessment isn’t really my thing. That doesn’t mean I do not see it as important. I am interested in learning. Assessment for learning should help teachers and learners alike in developing their learning. But all too often assessment has little to do with learning. Indeed assessment has emerged as a barrier to the development of effective teaching and learning strategies, especially collaborative learning using web 2.0 and social software tools.

This presentation by Luis Tinoca follows the present trend of adding 2.0 to the end of everything, but it is a useful exploration of how we can use technologies to support assessment for learning.

Open online seminar

January 21st, 2012 by Graham Attwell

Jisc are hosting an open, online seminar on ‘Making Assessment Count (MAC)’ on Friday 3rd Feb – 1-2pm. The presenters are Professor Peter Chatterton (Daedalus e-World Ltd) and Professor Gunter Saunders (University of Westminster).

The mailing for the seminar says: “The objective of Making Assessment Count is primarily to help students engage more closely with the assessment process, either at the stage where they are addressing an assignment or at the stage when they receive feedback on a completed assignment. In addition an underlying theme of MAC is to use technology to help connect student reflections on their assessment with their tutors. To facilitate the reflection aspect of MAC a web based tool called e-Reflect is often used. This tool enables the authoring of self-review questionnaires by tutors for students. On completion of an e-Reflect questionnaire a report is generated for the student containing responses that are linked to the options the student selected on the questionnaire.”
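I have not seen e-Reflect's internals, but the workflow described – tutor-authored self-review questions, each answer option mapped to a piece of feedback, and a report generated from the options the student selects – might be sketched roughly like this (all names and questions are invented, not taken from the tool):

```python
# Invented example of the described workflow: tutor-authored self-review
# questions, each option mapped to a feedback line, and a report built from
# the options the student selects. Not e-Reflect's actual implementation.
questionnaire = [
    {
        "question": "How well did you understand the feedback on your assignment?",
        "options": {
            "fully": "Good - note one thing you would still do differently next time.",
            "partly": "Re-read the feedback and list the comments you are unsure about.",
            "not at all": "Arrange a short meeting with your tutor to go through the feedback.",
        },
    },
    {
        "question": "Did you act on the feedback from your previous assignment?",
        "options": {
            "yes": "Briefly record what you changed and whether it helped.",
            "no": "Pick one previous comment to address in your next piece of work.",
        },
    },
]

def build_report(questionnaire, answers):
    """Return the feedback lines linked to the options the student selected."""
    return [
        f"{item['question']} -> {item['options'][choice]}"
        for item, choice in zip(questionnaire, answers)
    ]

print("\n".join(build_report(questionnaire, ["partly", "yes"])))
```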

You can find out more and sign up for the seminar at http://jiscmac.eventbrite.co.uk/

PISA vs Politics

November 4th, 2011 by Jenny Hughes

After a particularly tedious week and the prospect of a working weekend, Friday afternoon did not promise a lot. However, the last thing in the electronic in-tray today was to have a look at the entries for a competition Pontydysgu is sponsoring as part of the Learning About Politics project.

The competition was aimed at 8-14 year olds and asked them to write a story using any combination of digital media:

“The theme for your story should be on a political event that has happened – or is currently happening – in Wales.
We are not just interested in the facts but on your opinions and impressions. For example, how do you feel about the event you are describing? Who do you agree with and why? What have been the consequences of the event you have chosen?”

Suddenly life got a lot better! The black and white world of education that I seem to have lived in for the last few weeks was in brilliant technicolour. The stories were variously funny, poignant, angry, persuasive and insightful. All of them were well researched, referenced, technically at a level that would put many class teachers to shame and above all, they entertained me and taught me a whole lot I didn’t know. Surely the definition of a good learning experience!

(And by the time I had settled down with a glass of wine and a cigarette, the learning environment seemed pretty good as well).

The thing that cheered me up the most was that these kids had opinions – well argued, well expressed and authentic. I was pretty rubbish at history (Was? ‘Am’ actually! More maths and physics, me…) but short of those exam questions which always started “Compare and contrast….” or “What arguments would you use to support …something ” I don’t ever remember being allowed to have a ‘real’ opinion on anything historical, still less encouraged to express them if I did. Especially not in primary school – I think I was doing post-grad before I earned that privilege.

Which brings me on to my main point! There is a great public panic at the moment about Wales’s performance in the Programme for International Student Assessment (PISA) because they are two beans behind somewhere or other, half a Brownie point below an average or a nanopoint lower than last time. Puhlease!!

I am not being dismissive from a point of total ignorance here – some years ago I worked on the PISA statistics and the methodology for several months; I even remember doing a keynote presentation on PISA at the European Conference on Educational Research. Nor am I suggesting that standards do not matter. What I am saying is that the ‘Ain’t it awful’ media frenzy generated by the Smartie counting exercise that is PISA – and the politicians’ heavy-handed response – does a huge disservice to this generation of feisty, articulate and confident kids. And to the amazing generation of teachers that scaffold their learning.

Working in Pontydysgu, being a teacher trainer and a very active school governor means that I spend a lot of time in classrooms and my contention is that 99% of teachers are doing a fantastic job under pretty rubbish conditions. (Did I say this in a previous post? Yes? Well I don’t care – it needs to be shouted from the roof tops).

So what am I going to do about it? Firstly, I am tempted to rewrite the newspaper headlines showing that Welsh education is improving and is better than ‘average’ – a claim I could easily back up with a different manipulation of the PISA figures. Secondly, I could point out that the PISA survey takes place every four years, but that changes at the lower age ranges – such as the introduction of the new 3-7 yr old Foundation Phase in Wales (which is awesome) – will not impact on PISA results for another nine years, so knee-jerk changes to ‘fix’ things seem a bit premature. Thirdly, I could argue that putting so much store on paper-based testing in Reading, Maths and Science as the measure of success of ‘a broad and balanced curriculum’ and ‘pupil-centred, experiential learning’ is a bit of an oxymoron. Fourthly, I could remind our government that Wales led the way on getting rid of SATs and league tables on the very valid grounds that comparisons are unfair because they are not comparing like with like. They funded research which showed standardised testing to be unhelpful and demotivating, and to do nothing to improve performance. So on a local and national level they don’t work – do they suddenly work on an international one? Or maybe I should become a politician and take on the establishment in the debating chamber – but hey! I’ve just found there’s a whole new generation of politically astute, sussed and sorted 10-year-olds who are going to do that much better than I could. Fifteen years from now, it’s going to be move over, Minister! Leighton Andrews – ‘your’ education system has much to be proud of.

P.S. I might put some of the entries on the Pontydysgu website over the next few weeks so that you can see for yourself. Any teacher interested in getting their kids to write and publish political stories too should have a look at the Learning About Politics website and get back to us.

Open Badges, assessment and Open Education

August 25th, 2011 by Graham Attwell

I have spent some time this morning thinking about the Mozilla Open Badges and assessment project, spurred on by the study group set up by Doug Belshaw to think about the potential of the scheme. And the more I think about it, the more I am convinced of its potential as perhaps one of the most significant developments in the move towards Open Education. First, though, a brief recap for those of you who have not already heard about the project.

The Open Badges framework, say the project developers, is designed to allow any learner to collect badges from multiple sites, tied to a single identity, and then share them out across various sites — from their personal blog or web site to social networking profiles. The infrastructure needs to be open to allow anyone to issue badges, and for each learner to carry the badges with them across the web and other contexts.
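To give a feel for what carrying a badge across the web involves, here is an illustrative sketch of the kind of metadata a badge might bundle; the field names follow the general shape of early Open Badges assertions but are my own simplification, not the Mozilla specification, and all addresses are hypothetical:

```python
import json

# Illustrative only: the rough shape of the metadata a badge could carry so it
# can be issued by one site, tied to a learner's identity, and displayed on
# another. Field names are a simplification, not the Mozilla specification,
# and all URLs and addresses are hypothetical.
badge_assertion = {
    "recipient": "learner@example.org",
    "badge": {
        "name": "OpenStreetMapper",
        "description": "Can edit OpenStreetMap using Potlatch 2",
        "issuer": "https://badges.example.org",
        "criteria": "https://badges.example.org/openstreetmapper/criteria",
    },
    "evidence": "https://badges.example.org/submissions/1234",
    "issued_on": "2011-08-25",
}

# Serialised as JSON, an assertion like this could be pushed into a
# backpack-style store and embedded wherever the learner wants to show it.
print(json.dumps(badge_assertion, indent=2))
```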

Now some of the issues. I am still concerned about attempts to establish taxonomies, be they hierarchies of award structures or classifications of different forms of ability / competence / skill (pick your own terminology). Such undertakings have bedevilled attempts to introduce new forms of recognition, and I worry that those coming more from the educational technology world may not realise the pitfalls of taxonomies and levels.

Secondly, there is the issue of credibility. There is a twofold danger here. One is that the badges will only be adopted for achievements in areas / subjects / domains presently outside ‘official’ accreditation schemes and thus will be marginalised. There is also a danger that, in the desire to gain recognition, badges will be effectively benchmarked against present accreditation programmes (e.g. university modules / degrees) and thus become subject to all the existing restrictions of such accreditation.

And thirdly, as the project rolls towards a full release, there may be pressure to restrict badge issuers to existing accreditation bodies and to concentrate on the technological infrastructure, rather than rethinking practices in assessment.

Let’s look at some of the characteristics of any assessment system:

  • Reliability

Reliability is a measure of consistency. A robust assessment system should be reliable, that is, it should yield the same results irrespective of who is conducting it or the environmental conditions under which it is taking place. Intra-tester reliability simply means that if the same assessor is looking at your work his or her judgement should be consistent and not influenced by, for example, another assessment they might have undertaken! Inter-tester reliability means that if two different assessors were given exactly the same evidence and so on, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances which should have no bearing on the evidence. (A minimal sketch of checking inter-tester agreement follows this list.)

  • Validity

Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being evaluated or tested and how that is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance evaluation of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

  • Replicability

Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to evaluation would disagree, however.

  • Transferability

Although each assessment is looking at a particular set of outcomes, a good assessment system is one that could be adapted for similar outcomes or could be extended easily to new learning.  Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

  • Credibility

People actually have to believe in the assessment! It needs to be authentic, honest, transparent and ethical. If people question the rigour of the assessment process, doubt the results or challenge the validity of the conclusions, the assessment loses credibility and is not worth doing.

  • Practicality

This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time or costs too much or is cumbersome to use or the products are inappropriate then it is not a good evaluation!
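As promised above, a minimal and purely illustrative sketch of checking inter-tester reliability: two assessors judge the same pieces of evidence and we compute their raw agreement (the data is invented, and a real study would use a proper statistic such as Cohen's kappa):

```python
# Two assessors judge the same five pieces of evidence (invented data).
assessor_a = ["pass", "pass", "fail", "pass", "fail"]
assessor_b = ["pass", "fail", "fail", "pass", "fail"]

# Raw inter-tester agreement: the proportion of identical judgements.
agreements = sum(a == b for a, b in zip(assessor_a, assessor_b))
agreement_rate = agreements / len(assessor_a)

print(f"Inter-tester agreement: {agreement_rate:.0%}")  # 80% for this invented data
```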

Pretty obviously there is going to be a trade-off between different factors. It is possible to design extremely sophisticated assessments which have a high degree of validity. However, such assessments may be extremely time-consuming and thus not practical. Multiple-choice tests delivered through e-learning platforms are cheap and easy to produce, but they often lack face validity, especially for vocational skills and work based learning.

Let’s try to make this discussion more concrete by focusing on one of the Learning Badges pilot assessments at the School of Webcraft.

OpenStreetMapper Badge Challenge

Description: The OpenStreetMapper badge recognizes the ability of the user to edit OpenStreetMap wherever satellite imagery is available in Potlatch 2.

Assessment Type: PEER – any peer can review the work and vote. The badge will be issued with 3 YES votes.

Assessment Details:

OpenStreetMap.org is essentially a Wikipedia site for maps. OpenStreetMap benefits from real-time collaboration from thousands of global volunteers, and it is easy to join. Satellite images are available in most parts of the world.

P2PU has a basic overview of what OpenStreetMap is, and how to make edits in Potlatch 2 (Flash required). This isn’t the default editor, so please read “An OpenStreetMap How-To”:

Your core tasks are:

  1. Register with OpenStreetMap and create a username. On your user page, accessible at this link, change your editor to Potlatch 2.
  2. On OpenStreetMap.org, search and find a place near you. Find an area where a restaurant, school, or gas station is unmapped, or could use more information. Click ‘Edit’ on the top of the map. You can click one of the icons, drag it onto the map, and release to make it stick.
  3. To create a new road, park, or other 2D shape, simply click to add points. Click other points on the map where there are intersections. Use the Escape key to finish editing.
  4. To verify your work, go to edit your point of interest, click Advanced at the bottom of the editor to add custom tags to this point, and add the tag ‘p2pu’. Make its value be your P2PU username so we can connect the account posting on this page to the one posting on OpenStreetMap.
  5. Submit a link to your OpenStreetMap edit history. Fill in the blank in the following link with your OpenStreetMap username: http://www.openstreetmap.org/user/____/edits

You can also apply for the Humanitarian Mapper badge: http://badges.p2pu.org/questions/132/humanitarian-mapper-badge-challenge

Assessment Rubric:

  1. Created OpenStreetMap username
  2. Performed point-of-interest edit
  3. Edited a road, park, or other way
  4. Added the tag p2pu and the value [username] to the point-of-interest edit
  5. Submitted link to OpenStreetMap edit history or user page to show what edits were made

NOTE for those assessing the submitted work: please compare the work to the rubric above and vote YES if the submitted work meets the requirements (and leave a comment to justify your vote) or NO if the submitted work does not meet the rubric requirements (and leave a comment of constructive feedback on how to improve the work).

CC-BY-SA JavaScript Basic Badge used as template.
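The peer-review rule in this challenge – vote against the rubric, issue the badge on three YES votes – is simple enough to express directly. The sketch below is my own illustration of that rule, not P2PU's actual implementation:

```python
# My own illustration of the rule described above: peers vote YES or NO against
# the rubric, with a comment, and the badge is issued once three YES votes are in.
RUBRIC = [
    "Created OpenStreetMap username",
    "Performed point-of-interest edit",
    "Edited a road, park, or other way",
    "Added the tag p2pu with the P2PU username as its value",
    "Submitted link to OpenStreetMap edit history or user page",
]

def badge_awarded(votes, threshold=3):
    """votes is a list of (reviewer, 'YES' or 'NO', comment) tuples."""
    yes_votes = [vote for vote in votes if vote[1] == "YES"]
    return len(yes_votes) >= threshold

votes = [
    ("reviewer_1", "YES", "All five rubric items are evidenced in the edit history."),
    ("reviewer_2", "YES", "p2pu tag present and the way edit looks good."),
    ("reviewer_3", "YES", "Meets the rubric."),
]
print(badge_awarded(votes))  # True once three YES votes have been recorded
```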

Pretty clearly this assessment scores well on validity and also looks to be reliable. The template could easily be transferred, as indeed it has been in the pilot. It is also very practical. However, much of this is due to the nature of the subject being assessed – it is much easier to use computers for assessing practical tasks which involve the use of computers than it is for tasks which do not!

This leaves the issue of credibility. I have to admit I know nothing about the School of Webcraft, nor do I know who the assessors for this pilot were. But it would seem that instead of relying on external bodies in the form of examination boards and assessment agencies to provide credibility (deserved or otherwise), if the assessment process is integrated within communities of practice – and indeed assessment tasks such as the one given above could become a shared artefact of that community – then the Badge could gain credibility. And this seems a much better way of building credibility than trying to negotiate complicated arrangements whereby n number of badges at n level would be recognised as a degree or other ‘traditional’ qualification equivalent.

But let’s return to some of the general issues around assessment again.

So far most of the discussions about the Badges project seem to be focused on summative assessment. But there is considerable research evidence that formative assessment is critical for learning. Formative assessment can be seen as

“all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.”

Black and Wiliam (1998)

And that is where the Badges project could come of age. One of the major problems with Personal Learning Environments is the difficulty learners have in scaffolding their own learning. The development of formative assessment to provide (online) feedback to learners could help them develop their personal learning plans and facilitate or mediate community involvement in that learning. Furthermore, a series of task-based assessments could guide learners through what Vygotsky called the Zone of Proximal Development (and incidentally, in Vygotsky’s terms, assessors would act as More Knowledgeable Others).

In these terms the badges project has the potential not only to support learning taking place outside the classroom but to build a significant infrastructure or ecology to support learning that takes place anywhere, regardless of enrollment on traditional (face to face or distance) educational programmes.

In a second article in the next few days I will provide an example of how this could work.

What role does technology have in shaping a new future in education?

January 3rd, 2011 by Graham Attwell

The first blog of the new year looks at what I see as something of a contradiction for those of us wanting to change and hopefully improve education. Let’s look at two trends from 2010.

In terms of the use of technology for teaching and learning we saw limited technical innovation. OK, the UK saw an increasing trend towards providing Virtual Learning Environments (mainly Moodle) in primary schools. Applications like Google Docs and Dropbox allowed enhanced facilities for collaborative work and file sharing. However, neither of these was designed specifically for educational use. Indeed the main technical trend may have been on the one hand the increased use of social software and cloud computing apps for learning, and on the other hand a movement away from free social software towards various premium business models. Of course mobile devices are fast evolving and are making an increasing impact on teaching and learning.

But probably the main innovation was in terms of pedagogy and in wider approaches to ideas around learning, and here the major development is around open learning. Of course we do not have a precise or agreed definition of what open education or open learning means. But the movement around Open Educational Resources appears to be becoming a part of mainstream development in the provision of resources for teaching and learning, despite significant barriers still to be overcome. And there is increasing open and free teaching provision, be it through online ‘buddy’ systems, say for language learning, various free courses available through online VLEs, or the proliferation of programmes offered as Massive Open Online Courses (MOOCs) using a variety of both educational and social software. Whilst we are still struggling to develop new financial models for such programmes, perhaps the major barrier is recognition. This issue can be viewed at three different levels.

  1. The first level is a more societal issue of how we recognise learning (or attainment). At the moment this tends to be through the possession of accreditation or certification from accredited institutions. Recognition takes the form of entry into a profession or job, promotion to a higher level or increased pay.
  2. The second level is that of accreditation. Who should be able to provide such accreditation and, perhaps more importantly, what should it be for? (This raises the question of curriculum.)
  3. The third is the issue of assessment. Although traditional forms of individual assessment can be seen as holding back more innovative and group-based forms of teaching and learning, there are signs of movement in this direction – see, for example, the Jisc report Effective Assessment in a Digital Age, featured as his post of the year by Stephen Downes.

These issues can be overcome and I think there are significant moves towards recognising broader forms of learning in different contexts. In this respect, the development of Personal Learning Environments and Personal Learning Networks is an important step forward in allowing access to both technology and sources of learning for those not enrolled in an institution.

However, such ‘progress’ is not without contradiction. One of the main gains of social democratic and workers’ movements over the last century has been to win free access to education and training for all based on need rather than class or income. OK, there are provisos. Such gains were for those in rich industrialised countries – in many areas of the world children still have no access to secondary education, let alone university. Even in those rich countries, there are still big differences in terms of opportunities based on class. And it should not be forgotten that whilst workers’ movements have fought for free and universal access to education, it has been the needs of industry and the economic system which have tended to prevail in extending access, and particularly in moulding the forms of provision – witness the widely different forms of the education systems in northern Europe.

Now those gains are under attack. With pressures on economies due to the collapse of the world banking system, governments are trying to roll back the provision of free education. In countries like the UK, the government is moving to privatise education – both through developing a market-driven system and through transferring the cost of education from the state to the individual or family.

Students have led an impressive (and largely unexpected) fightback in the UK and the outcome of this struggle is by no means clear. Inevitably they have begun to reflect on the relation between their learning and the activities they are undertaking in fighting the increases in fees and cutbacks in finances, thus raising the issue of the wider societal purposes and forms of education.

And that also poses issues for those of us who have viewed the adoption of technology for learning as an opportunity for innovation and change in pedagogy and for extending learning (through Open Education) to those outside schools and universities. How can we defend traditional access to institutional learning, whilst at the same time attacking it for its intrinsic limitations?

At their best, both the movements around Open Education and the student movement against cuts have begun to pose wider issues of pedagogy and the purpose and form of education, as well as the issue of how we recognise learning. One of the most encouraging developments in the student movement in the UK has been the appropriation of both online and physical spaces to discuss these wider issues (interestingly in opposition to the police, who have in contrast attempted to close access to spaces and movement through the so-called kettling tactic).

I wonder now if it is possible to bring together the two different movements to develop new visions of education, together with a manifesto – or rather manifestos – for achieving such visions.

Digital story telling stops plagiarism!

June 21st, 2010 by Graham Attwell

There’s an interesting aside in an article in today’s Guardian newspaper on the so-called problems of plagiarism. Why do I say so-called? Whilst I would agree that practices of buying and selling essays are a problem, these practices have always gone on. When, many years ago in pre-internet days, I was a student at Swansea University, it was always possible to buy an essay in a bar. And I would also argue that a side benefit of cut and stick technologies is that standards of referencing in universities today are much higher than they were in my time as a student. Indeed at that time, you were expected to buy your tutors’ textbooks and to paraphrase (plagiarise) their work. Plagiarism is as much a social construct as it is a technological issue.

But coming back to today’s article, reporting on a three day international conference on plagiarism at Northumbria University, the Guardian reports that “The conference will also hear that the problem of plagiarism at university could be reduced if students used “digital storytelling” – creating packages of images and voiceovers – rather than essays to explain their learning from an imagined personal perspective.

Phil Davies, senior lecturer at Glamorgan University’s computing school, said he had been using the technique for two years and had not seen any evidence of cheating. “Students find it really hard but it’s very rewarding, because they’re not copying and writing an essay, they have to think about it and bring their research into a personal presentation.”

Another approach is to focus on authentic assessment – or rather assessment of authentic learning tasks. In this case students are encouraged to use the internet for research but have to reflect on and re-purpose materials for reporting on their own individual research.

In both cases this goes beyond dealing with plagiarism – it is good practice in teaching and learning. And I wonder if that might be a better starting point for the efforts of researchers, developers and teachers.

Rethinking e-Portfolios

March 14th, 2010 by Graham Attwell

The second in my ‘Rethinking’ series of blog posts. This one – ‘Rethinking e-Portfolios’ – is the notes for a forthcoming book chapter, which I will post on the Wales Wide Web when completed.

Several years ago, e-portfolios were the vogue in e-learning research and development circles. Yet today little is heard of them. Why? This is not an unimportant question. One of the failures of the e-learning community is our tendency to move from one fad to the next, without ever properly examining what worked, what did not, and the reasons for it.

First of all it is important to note that there was never a single understanding of, or approach to, the development and purpose of an e-Portfolio. This can largely be ascribed to different didactic and pedagogic approaches to e-Portfolio development and use. Some time ago I wrote that “it is possible to distinguish between three broad approaches: the use of e-Portfolios as an assessment tool, the use of e-Portfolios as a tool for professional or career development planning (CDP), and a wider understanding of e-Portfolios as a tool for active learning.”

In a paper presented at the e-Portfolio conference in Cambridge in 2005 (Attwell, 2005), I attempted to distinguish between the different processes in e-Portfolio development and then examined the issue of ownership for each of these processes.

[Diagram: processes in e-Portfolio development and their ownership (Attwell, 2005)]

The diagram reveals not only ownership issues, but possibly contradictory purposes for an e-Portfolio. Is an e-Portfolio intended as a space for learners to record all their learning – that which takes place in the home or in the workplace as well as in a course environment – or is it a place for responding to prescribed outcomes for a course or learning programme? How much should an e-Portfolio be considered a tool for assessment and how much a tool for reflection on learning? Can one environment encompass all of these functions?

These are essentially pedagogic issues. But, as always, they are reflected in e-learning technologies and applications. I worked for a while on a project aiming to repurpose the OSPI e-portfolio (later merged into Sakai) for use in adult education in the UK. It was almost impossible. The pedagogic use of the e-Portfolio – essentially to report against course outcomes – was hard-coded into the software.

Let’s look at another, and contrasting, e-Portfolio application, ELGG. Although now used as a social networking platform, in its original incarnation ELGG started out as a social e-portfolio, originating in research undertaken by Dave Tosh on an e-portfolio project. ELGG essentially allowed students to blog within a social network with fine-grained and easy to use access controls. All well and good: students were not restricted to course outcomes in their learning focus. But when it came to reporting on learning as part of any assessment process, ELGG could do little. There was an attempt to develop a ‘reporting’ plug-in tool but it offered little more than the ability to favourite selected posts and accumulate them in one view.

Mahara is another popular open source ePortfolio tool. I have not actively played with Mahara for two years. Although still built around a blogging platform, Mahara incorporated a series of reporting tools to allow students to present achievements. But it too was predicated on a (university) course and subject structure.

Early thinking around e-Portfolios failed to take into account the importance of feedback – or rather saw feedback as coming predominantly from teachers. The advent of social networking applications showed the power of the internet for what are now being called Personal Learning Networks – in other words, for developing personal networks to share learning and share feedback. An application which merely allowed learners to develop their own records of learning, even if they could generate presentations, was clearly not enough.

But even if e-portfolios could be developed with social networking functionality, the tendency for institutionally based learning to regard the class group as the natural network limited their use in practice. Furthermore the tendency, at least in the school sector, to limit network access in the mistaken name of e-safety once more limited the wider development of ‘social e-Portfolios’.

But perhaps the biggest problem has been around the issue of reflection. Champions have lauded e-portfolios as natural tools to facilitate reflection on learning. Helen Barrett (2004) says an “electronic portfolio is a reflective tool that demonstrates growth over time.” Yet are e-Portfolios effective in promoting reflection? And is it possible to introduce a reflective tool in an education system that values the passing of exams through individual assessment over all else? Merely providing spaces for learners to record their learning, albeit in a discursive style, does not automatically guarantee reflection. It may be that reflection involves discourse, and tools for recording outcomes offer little in this regard.

I have been working for the last three years on developing a reflective e-Portfolio for a careers service based in the UK. The idea is to provide students with an opportunity to research different career options and reflect on their preferences, desired choices and outcomes.

We looked very hard at existing open source e-portfolios as the basis for the project, but could not find any that met our needs. We eventually decided to develop an e-Portfolio based on WordPress – which we named Freefolio.

At a technical level Freefolio was part hack and part the development of a plug-in. Technical developments included:

  • The ability to aggregate summaries of entries on a group basis
  • The ability to add custom profiles and to view the profiles of peers
  • Enhanced group management
  • The ability to add blog entries based on predefined XML templates (see the sketch after this list)
  • More fine grained access controls
  • An enhanced workspace view
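As an illustration of the XML-template item on that list, here is a minimal sketch of how a predefined template might be turned into a structured blog entry. The tag names are invented for the example; the real Freefolio work was a WordPress (PHP) plug-in, so this is only the shape of the idea:

```python
import xml.etree.ElementTree as ET

# Invented template: a predefined set of prompts that becomes a structured
# blog entry once the learner has answered them. Only the shape of the idea -
# the real Freefolio plug-in was built on WordPress in PHP.
TEMPLATE_XML = """<entry-template title="Career reflection">
  <prompt name="career">Which career did you research this week?</prompt>
  <prompt name="liked">What did you like about it?</prompt>
  <prompt name="next-steps">What will you find out next?</prompt>
</entry-template>"""

def render_entry(template_xml, answers):
    """Turn the template prompts plus the learner's answers into post text."""
    root = ET.fromstring(template_xml)
    lines = [root.get("title")]
    for prompt in root.findall("prompt"):
        lines.append(f"{prompt.text}\n{answers.get(prompt.get('name'), '')}")
    return "\n\n".join(lines)

print(render_entry(TEMPLATE_XML, {"career": "Radiographer", "liked": "Working with people"}))
```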

Much of this has been overtaken by subsequent releases of WordPress multi-user and more recently BuddyPress. But at the time Freefolio was good. However, it did not work in practice. Why? There were two reasons, I think. Firstly, the e-Portfolio was only being used for careers lessons in school, and that forms too small a part of the curriculum to build a critical mass of familiarity among users. And secondly, it was just too complex for many users. The split between the front end and the back end of WordPress confused users. The pedagogic purpose and the functional use were too far apart. Why press something called ‘New Post’ to write about your career choices?

And, despite our attempts to allow users to select different templates, we had constant feedback that the appearance of the e-Portfolio was not easy enough to customise.

In phase two of the project we developed a completely different approach. Rather than produce an overarching e-portfolio, we have developed a series of careers ‘games’ to be accessed through the careers company web site. Each of the six or so games, or mini-applications, we have developed so far encourages users to reflect on different aspects of their careers choices. Users are encouraged to rate different careers and to return later to review their choices. The site is yet to be rolled out but initial evaluations are promising.

I think there are lessons to be learnt from this. Small applications that encourage users to think are far better than comprehensive e-portfolio applications which try to do everything.

Interestingly, this view seems to concur with that of CETIS. Simon Grant points out: “The concept of the personal learning environment could helpfully be more related to the e-portfolio (e-p), as both can help informal learning of skills, competence, etc., whether these abilities are formally defined or not.”

I would agree: I have previously seen both as related on a continuum, with differing foci but similar underpinning ideas. However I have always tended to view Personal Learning Environments as a pedagogic approach, rather than an application. Despite this, there have been attempts to ‘build a PLE’. In that respect (and in relation to rethinking e-Portfolios) Scott Wilson’s views are interesting. Simon Grant says: “As Scott Wilson pointed out, it may be that the PLE concept overreached itself. Even to conceive of “a” system that supports personal learning in general is hazardous, as it invites people to design a “big” system in their own mind. Inevitably, such a “big” system is impractical, and the work on PLEs that was done between, say, 2000 and 2005 has now been taken forward in different ways — Scott’s work on widgets is a good example of enabling tools with a more limited scope, but which can be joined together as needed.”

Simon Grant goes on to say that the “thin portfolio” concept (borrowing from the prior “personal information aggregation and distribution service” concept) “represents the idea that you don’t need that portfolio information in one server; but that it is very helpful to have one place where one can access all ‘your’ information, and set permissions for others to view it. This concept is only beginning to be implemented.”

This is similar to the Mash Up Personal Learning Environment being promoted in a number of European projects. Indeed a forthcoming paper by Fridolin Wild reports on research looking at the value of lightweight widgets for promoting reflection that can be embedded in existing e-learning programmes. This is an interesting idea in suggesting that tools for developing an e-Portfolio (or, for that matter, a PLE) can be embedded in learning activities. This approach does not need to be restricted to formal school or university based learning courses. Widgets could easily be embedded in work based software (and work flow software), and our initial investigations of Work Oriented Personal Learning Environments (WOMBLES) have shown the potential of mobile devices for capturing informal and work based learning.
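A ‘thin portfolio’ along the lines Simon Grant describes might do little more than hold pointers to work kept on other services, plus simple permissions on who may view each item. The sketch below is speculative – it illustrates the concept rather than any existing implementation, and all services and URLs are invented:

```python
# Speculative sketch of a "thin portfolio": it stores only pointers to work held
# on other services, plus simple view permissions, rather than the artefacts
# themselves. All services and URLs here are invented.
portfolio = [
    {"title": "Project presentation",
     "url": "https://slideshare.example/decks/123",
     "visible_to": {"tutor", "public"}},
    {"title": "Work placement reflections",
     "url": "https://blog.example/posts/42",
     "visible_to": {"tutor"}},
]

def visible_items(portfolio, viewer_role):
    """Return the items a given viewer role is permitted to see."""
    return [item for item in portfolio if viewer_role in item["visible_to"]]

for item in visible_items(portfolio, "public"):
    print(item["title"], "->", item["url"])
```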

Of course, one of the big developments in software since the early e-Portfolio days has been the rise of web 2.0, social software and, more recently, cloud computing. There seems little point in us spending time and effort developing applications for students to share PowerPoint presentations when we already have the admirable Slideshare application. And for bookmarks, little can compete with Diigo. Most of these applications allow embedding, so all work can be displayed in one place. Of course there is an issue as to the longevity of data on such sites (but then, we have the same issue with institutional e-Portfolios and I would always recommend that students retain a local copy of their work). Of course, not all students are confident in the use of such tools: a series of recent studies have blown apart the idea of the Digital Native (see, for example, Hargittai, E. (2010). Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the “Net Generation”. Sociological Inquiry, 80(1): 92-113). And some commercial services may be more suitable than others for developing an e-Portfolio: Facebook has, in my view, limitations! But, somewhat ironically, cloud computing may be moving us nearer to Helen Barrett’s idea of an e-Portfolio. John Morrison recently gave a presentation (downloadable here) based on his study of ‘what aspects of identity as learners and understandings of ways to learn are shown by students who have been through a program using course-based networked learning?’ In discussing technology he looked at university-provided as opposed to personally acquired tools, standalone as opposed to networked use, and explored as opposed to ongoing use.

He found that students:

  • Did not rush to use new technology
  • Used face-to-face contact rather than technology, particularly in early brainstorming phases of a project
  • Tried out software and rejected that which was not meeting a need
  • Used a piece of software until another emerged which was better
  • Restrained the amount of software they used regularly to relatively few programs
  • Ignored certain technologies, which do not appear to have been tried out at all

Students used a piece of software until another emerged which was better, which John equates with change; they restrained the amount of software they used regularly to relatively few programs, which he equates with conservatism.

Whilst students were previously heavy users of Facebook, they were now abandoning it. And whilst there was little previous use of Google Docs, his latest survey suggested that this cloud application was now being heavily used. This is important in that one of the stranger aspects of previous e-Portfolio development has been the requirement for most students to upload attached files, produced in an offline word processor, to the e-Portfolio and present them as file attachments. But if students (no doubt partly driven by cost savings) are using online software for their written work, this may make it much easier to develop online e-portfolios.

John concluded that “this cohort lived through substantial technological change. They simplified and rationalized their learning tools. They rejected what was not functional, university technology and some self-acquired tools. They operate from an Acquisition model of learning.” He added that “Students can pick up and understand new ways to learn from networks. BUT… they generally don’t. They pick up what is intended.” (It is also well worth reading the discussion board around John’s presentation – although you will need to be logged in to the Elesig Ning site.)

So – the e-Portfolio may have a new life. But what particularly interests me is the interplay between pedagogic ideas and approaches on the one hand, and software opportunities and developments on the other, in providing that new potential life. And of course, we still have to solve the issue of control and ownership. And as John says, students pick up what is intended. If we continue to adhere to an acquisition model of learning, it will be hard to persuade students to develop reflective e-Portfolios. We should continue to rethink e-Portfolios through a widget-based approach. But we also have to continue to rethink our models of education and learning.

Using computers in exams

November 4th, 2009 by Graham Attwell

Late yesterday afternoon I had a phone call from BBC Radio Wales asking if I would come on the morning news programme to talk about the use of computers in exams. According to the researcher / producer (?) this was a debate opened up by a reform in Denmark. A quick Google search came up with the following article from the Politiken newspaper.

“Danish ‘A’ level students are likely to be able to use the Internet in their written exams if a test run later this year proves successful.

The Ministry of Education says that pupils already use the Internet for tests.

“It’s a good way to get hold of historical facts or an article that can be useful, for example, in a written social sciences exam,” Ministry Education Consultant Søren Vagner tells MetroXpress.

Digital hand-in

In order to prevent students from cheating by downloading translation programmes or communicating using chats, the idea is that papers should be handed in digitally and that there should be random checks on sites that students visit during an exam”

So early in the morning (at least for me) I got in and skyped into the BBC Cardiff newsroom. I was on the programme to defend the use of computers; Chris Woodhead, the ex-Chief Inspector of Schools, was the opponent. And we had five minutes of knock-around fun. The BBC preceded the item with three or four vox pops with ‘A’ Level school students from Monmouth in East Wales, who rather predictably said what a bad idea it was as it would penalise those who had worked hard to remember all the facts.

I said I thought on the whole it was a good idea because it would allow students to use the technologies available in the real world to show their creativity and ability to develop ideas and knowledge; Chris said it was a bad thing because they would waste time surfing and it would prevent them showing their creativity and grasp of knowledge and ideas. And that was it.

In reality, I think the discussion is a much deeper one over the nature and purpose of assessment. The ‘A’ level exam in the UK is essentially used as a filter mechanism, to select students for university. As such there is little authenticity. Students are inevitably taught for the exam. I saw some research a while ago suggesting that ‘A’ levels are a poor predictor of later success at university, but cannot find a reference to it at the moment. The problem is that the examinations do not really test what the students have learned, but their ability to apply what they have learnt to a particular series of formalised tests. Neither do the exams serve to help the students in their learning – other than, I suppose, motivating them to learn a lot of facts in the run-up to the exam. I fear that little of what we call revision for exams actually involves reflection on learning. And if the use of computers were to herald a move away from learning facts, towards reflecting on meanings, then it could only be a good thing. But at the end of the day, I can’t get excited – and certainly couldn’t so early in the morning. The big issue for me is how to use technology to support learning. And that is another thing.
