Archive for the ‘Assessment’ Category

Open Badges, assessment and Open Education

August 25th, 2011 by Graham Attwell

I have spent some time this morning thinking about the Mozilla Open Badges and assessment project, spurred on by the study group set up by Doug Belshaw to think about the potential of the scheme. And the more I think about it, the more I am convinced of its potential as perhaps one of the most significant developments in the move towards Open Education. First, though, a brief recap for those of you who have not already heard about the project.

The Open Badges framework, say the project developers, is designed to allow any learner to collect badges from multiple sites, tied to a single identity, and then share them out across various sites — from their personal blog or web site to social networking profiles. The infrastructure needs to be open to allow anyone to issue badges, and for each learner to carry the badges with them across the web and other contexts.
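To make that portability concrete, here is a minimal sketch of what such a badge record might look like as plain JSON. The field names are illustrative only – not the actual Mozilla Open Badges schema – and the values are hypothetical.

```python
# A hedged sketch of a portable badge record. Field names are illustrative,
# NOT the official Mozilla Open Badges schema; values are hypothetical.
import json

assertion = {
    "recipient": "learner@example.org",          # the learner's identity
    "badge": {
        "name": "OpenStreetMapper",
        "criteria": "http://badges.p2pu.org/...",  # where the rubric lives
        "issuer": "P2PU School of Webcraft",
    },
    "evidence": "http://www.openstreetmap.org/user/____/edits",
    "issued_on": "2011-08-25",
}

# Because the record is plain, machine-readable JSON tied to the learner's
# identity, any site the learner controls can fetch and display it.
print(json.dumps(assertion, indent=2))
```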

Now some of the issues. I am still concerned about attempts to establish taxonomies, be they hierarchies of award structures or taxonomies of different forms of ability / competence / skill (pick your own terminology). Such undertakings have bedevilled attempts to introduce new forms of recognition, and I worry that those coming more from the educational technology world may not realise the pitfalls of taxonomies and levels.

Secondly, there is the issue of credibility. There is a twofold danger here. One is that the badges will only be adopted for achievements in areas / subjects / domains presently outside ‘official’ accreditation schemes and thus will be marginalised. There is also a danger that, in the desire to gain recognition, badges will be effectively benchmarked against present accreditation programmes (e.g. university modules / degrees) and thus become subject to all the existing restrictions of such accreditation.

And thirdly, as the project rolls towards a full release, there may be pressure to restrict badge issuers to existing accreditation bodies, and to concentrate on the technological infrastructure rather than on rethinking practices in assessment.

Let’s look at some of the characteristics of any assessment system:

  • Reliability

Reliability is a measure of consistency. A robust assessment system should be reliable, that is, it should yield the same results irrespective of who is conducting it or the environmental conditions under which it is taking place. Intra-tester reliability simply means that if the same assessor is looking at your work, his or her judgement should be consistent and not influenced by, for example, another assessment they might have undertaken! Inter-tester reliability means that if two different assessors were given exactly the same evidence and so on, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances, which should have no bearing on the evidence.
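Inter-tester reliability can even be quantified. A minimal sketch, with invented YES/NO judgements, computing Cohen’s kappa – a standard chance-corrected agreement statistic – for two assessors rating the same ten pieces of work:

```python
# Minimal sketch of quantifying inter-tester reliability with Cohen's kappa,
# a chance-corrected agreement statistic. The YES/NO judgements are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters happen to pick the same
    # label, estimated from each rater's own label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two assessors' judgements on the same ten pieces of work:
a = ["Y", "Y", "N", "Y", "N", "Y", "Y", "N", "Y", "Y"]
b = ["Y", "Y", "N", "Y", "Y", "Y", "Y", "N", "N", "Y"]
print(round(cohens_kappa(a, b), 2))  # 0.52 here; 1.0 = perfect consistency
```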

  • Validity

Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being evaluated or tested and how that is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance evaluation of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.
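Predictive validity is, at bottom, a correlation claim: scores under test conditions should track performance in the real setting. A toy sketch with invented numbers:

```python
# Toy illustration of predictive validity as a correlation: do scores under
# test conditions track later real-world performance? All numbers invented.
def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

simulator_scores = [62, 71, 80, 85, 90]   # performance in the simulator
on_the_job       = [60, 68, 82, 84, 93]   # later performance ratings
print(round(pearson_r(simulator_scores, on_the_job), 2))  # ~0.99: high
```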

  • Replicability

Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to evaluation would disagree, however.

  • Transferability

Although each assessment is looking at a particular set of outcomes, a good assessment system is one that could be adapted for similar outcomes or could be extended easily to new learning.  Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

  • Credibility

People actually have to believe in the assessment! It needs to be authentic, honest, transparent and ethical. If people question the rigour of the assessment process, doubt the results or challenge the validity of the conclusions, the assessment loses credibility and is not worth doing.

  • Practicality

This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time, or costs too much, or is cumbersome to use, or the products are inappropriate, then it is not a good assessment!

Pretty obviously there is going to be a trade-off between different factors. It is possible to design extremely sophisticated assessments which have a high degree of validity. However, such assessments may be extremely time consuming and thus not practical. Multiple-choice tests delivered through e-learning platforms are cheap and easy to produce; however, they often lack face validity, especially for vocational skills and work-based learning.

Let’s try to make this discussion more concrete by focusing on one of the Learning Badges pilot assessments at the School of Webcraft.

OpenStreetMapper Badge Challenge

Description: The OpenStreetMapper badge recognizes the ability of the user to edit OpenStreetMap wherever satellite imagery is available in Potlatch 2.

Assessment Type: PEER – any peer can review the work and vote. The badge will be issued with 3 YES votes.

Assessment Details:

OpenStreetMap.org is essentially a Wikipedia site for maps. OpenStreetMap benefits from real-time collaboration from thousands of global volunteers, and it is easy to join. Satellite images are available in most parts of the world.

P2PU has a basic overview of what OpenStreetMap is, and how to make edits in Potlatch 2 (Flash required). This isn’t the default editor, so please read “An OpenStreetMap How-To”.

Your core tasks are:

  1. Register with OpenStreetMap and create a username. On your user page, accessible at this link, change your editor to Potlatch 2.
  2. On OpenStreetMap.org, search and find a place near you. Find an area where a restaurant, school, or gas station is unmapped, or could use more information. Click ‘Edit’ on the top of the map. You can click one of the icons, drag it onto the map, and release to make it stick.
  3. To create a new road, park, or other 2D shape, simply click to add points. Click other points on the map where there are intersections. Use the Escape key to finish editing.
  4. To verify your work, go to edit your point of interest, click Advanced at the bottom of the editor to add custom tags to this point, and add the tag ‘p2pu’. Make its value be your P2PU username so we can connect the account posting on this page to the one posting on OpenStreetMap.
  5. Submit a link to your OpenStreetMap edit history. Fill in the blank in the following link with your OpenStreetMap username: http://www.openstreetmap.org/user/____/edits (a sketch of how an assessor might automate a first check on this follows this list).
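As flagged in step 5, a peer assessor could make a first automated check of a submission against the edit history. The sketch below assumes the public OSM 0.6 API’s changeset query by display name; treat the endpoint and response fields as assumptions to verify rather than a tested client.

```python
# Sketch of a first automated check on a submission: does the submitted
# username have any changesets at all? Assumes the OSM 0.6 changesets API;
# verify the endpoint and XML fields before relying on this.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def recent_changesets(osm_username):
    url = ("https://api.openstreetmap.org/api/0.6/changesets?display_name="
           + urllib.parse.quote(osm_username))
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return tree.findall("changeset")   # one element per edit session

changesets = recent_changesets("some_learner")   # hypothetical username
print(len(changesets), "changesets found")
```

A fuller check would fetch the edited node itself and confirm the p2pu tag matches the submitter’s P2PU username, as the rubric requires.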

You can also apply for the Humanitarian Mapper badge: http://badges.p2pu.org/questions/132/humanitarian-mapper-badge-challenge

Assessment Rubric:

  1. Created OpenStreetMap username
  2. Performed point-of-interest edit
  3. Edited a road, park, or other way
  4. Added the tag p2pu and the value [username] to the point-of-interest edit
  5. Submitted link to OpenStreetMap edit history or user page to show what edits were made

NOTE for those assessing the submitted work. Please compare the work to the rubric above and vote YES if the submitted work meets the requirements (and leave a comment to justify your vote) or NO if the submitted work does not meet the rubric requirements (and leave a comment of constructive feedback on how to improve the work).

CC-BY-SA JavaScript Basic Badge used as template.
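The decision rule in this pilot – issue the badge once three peers vote YES, each justifying the vote with a comment – is simple enough to express in a few lines. A toy sketch, with hypothetical data:

```python
# Toy sketch of the peer-review decision rule: three YES votes issue the
# badge, and every vote must carry a comment (justification or feedback).
def badge_decision(votes, threshold=3):
    """votes: list of (is_yes, comment) pairs from peer reviewers."""
    if not all(comment.strip() for _, comment in votes):
        raise ValueError("every vote needs a comment")
    yes_count = sum(1 for is_yes, _ in votes if is_yes)
    return "issue badge" if yes_count >= threshold else "pending"

votes = [
    (True, "Meets all five rubric items."),
    (True, "POI tagged p2pu with the matching username."),
    (True, "Edit history link shows the road edit."),
]
print(badge_decision(votes))   # -> issue badge
```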

Pretty clearly this assessment scores well on validity and also looks to be reliable. The template could easily be transferred, as indeed it has been in the pilot. It is also very practical. However, much of this is due to the nature of the subject being assessed – it is much easier to use computers for assessing practical tasks which involve the use of computers than for tasks which do not!

This leaves the issue of credibility. I have to admit I know nothing about the School of Webcraft, nor do I know who the assessors for this pilot were. But it would seem that instead of relying on external bodies in the form of examination boards and assessment agencies to provide credibility (deserved or otherwise), if the assessment process is integrated within communities of practice – and indeed assessment tasks such as the one given above could become a shared artefact of that community – then the Badge could gain credibility. And this seems a much better way of building credibility than trying to negotiate complicated arrangements whereby n badges at n level would be recognised as equivalent to a degree or other ‘traditional’ qualification.

But let’s return to some of the general issues around assessment.

So far most of the discussions about the Badges project seem to be focused on summative assessment. But there is considerable research evidence that formative assessment is critical for learning. Formative assessment can be seen as

“all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.”

Black and Wiliam (1998)

And that is where the Badges project could come of age. One of the major problems with Personal Learning Environments is the difficulty learners have in scaffolding their own learning. The development of formative assessment to provide (online) feedback to learners could help them develop their personal learning plans and facilitate or mediate community involvement in that learning. Furthermore, a series of task-based assessments could guide learners through what Vygotsky called the Zone of Proximal Development (and, incidentally, in Vygotsky’s terms assessors would act as More Knowledgeable Others).

In these terms the badges project has the potential not only to support learning taking place outside the classroom but to build a significant infrastructure or ecology to support learning that takes place anywhere, regardless of enrollment on traditional (face to face or distance) educational programmes.

In a second article in the next few days I will provide an example of how this could work.

What role does technology have in shaping a new future in education?

January 3rd, 2011 by Graham Attwell

The first blog post of the new year looks at what I see as something of a contradiction for those of us wanting to change and, hopefully, improve education. Let’s look at two trends from 2010.

In terms of the use of technology for teaching and learning we saw limited technical innovation. OK, the UK saw an increasing trend towards providing Virtual Learning Environments (mainly Moodle) in primary schools. Applications like Google Docs and Dropbox allowed enhanced facilities for collaborative work and file sharing. However, neither of these was designed specifically for educational use. Indeed the main technical trend may have been, on the one hand, the increased use of social software and cloud computing apps for learning and, on the other, a movement away from free social software towards various premium business models. Of course mobile devices are fast evolving and are making an increasing impact on teaching and learning.

But probably the main innovation was in terms of pedagogy and in wider approaches to ideas around learning. And here the major development is around open learning. Of course we do not have a precise or agreed definition of what open education or open learning means. But the movement around Open Educational Resources appears to be becoming part of mainstream development in the provision of resources for teaching and learning, despite significant barriers still to be overcome. And there is increasing open and free teaching provision, be it through online ‘buddy’ systems, say for language learning, various free courses available through online VLEs, or the proliferation of programmes offered as Massive Open Online Courses (MOOCs) using a variety of both educational and social software. Whilst we are still struggling to develop new financial models for such programmes, perhaps the major barrier is recognition. This issue can be viewed at three different levels.

  1. The first level is a more societal issue of how we recognise learning (or attainment). At the moment this tends to be through the possession of accreditation or certification from accredited institutions. Recognition takes the form of entry into a profession or job, promotion to a higher level or increased pay.
  2. The second level is that of accreditation. Who should be able to provide such accreditation and, perhaps more importantly, what should it be for? (This raises the question of curriculum.)
  3. The third is the issue of assessment. Although traditional forms of individual assessment can be seen as holding back more innovative and group-based forms of teaching and learning, there are signs of movement in this direction – see, for example, the Jisc report Effective Assessment in a Digital Age, featured as his post of the year by Stephen Downes.

These issues can be overcome, and I think there are significant moves towards recognising broader forms of learning in different contexts. In this respect, the development of Personal Learning Environments and Personal Learning Networks is an important step forward in allowing access to both technology and sources of learning for those not enrolled in an institution.

However, such ‘progress’ is not without contradiction. One of the main gains of social democratic and workers’ movements over the last century has been to win free access to education and training for all, based on need rather than class or income. OK, there are provisos. Such gains were for those in rich industrialised countries – in many areas of the world children still have no access to secondary education, let alone university. Even in those rich countries, there are still big differences in opportunities based on class. And it should not be forgotten that whilst workers’ movements have fought for free and universal access to education, it has been the needs of industry and the economic system which have tended to prevail in extending access, and particularly in moulding the forms of provision (witness the widely different forms of the education systems in northern Europe).

Now those gains are under attack. With pressures on economies due to the collapse of the world banking system, governments are trying to roll back the provision of free education. In countries like the UK, the government is moving to privatise education – both through developing a market-driven system and through transferring the cost of education from the state to the individual or family.

Students have led an impressive (and largely unexpected) fightback in the UK and the outcome of this struggle is by no means clear. Inevitably they have begun to reflect on the relation between their learning and the activities they are undertaking in fighting the increases in fees and cutbacks in finances, thus raising the issue of the wider societal purposes and forms of education.

And that also poses issues for those of us who have viewed the adoption of technology for learning as an opportunity for innovation and change in pedagogy and for extending learning (through Open Education) to those outside schools and universities. How can we defend traditional access to institutional learning, whilst at the same time attacking it for its intrinsic limitations?

At their best, both the movement around Open Education and the student movement against cuts have begun to pose wider issues of pedagogy and the purpose and form of education, as well as the issue of how we recognise learning. One of the most encouraging developments in the student movement in the UK has been the appropriation of both online and physical spaces to discuss these wider issues (interestingly, in opposition to the police, who have in contrast attempted to close access to spaces and movement through the so-called kettling tactic).

I wonder now if it is possible to bring together the two different movements to develop new visions of education, together with a manifesto – or rather manifestos – for achieving such visions.

Digital story telling stops plagiarism!

June 21st, 2010 by Graham Attwell

There’s an interesting aside in an article in today’s Guardian newspaper on the so-called problems of plagiarism. Why do I say so-called? Whilst I would agree that practices of buying and selling essays are a problem, these practices have always gone on. When, many years ago in pre-internet days, I was a student at Swansea University, it was always possible to buy an essay in a bar. And I would also argue that a side benefit of cut-and-paste technologies is that standards of referencing in universities today are much higher than in my time as a student. Indeed at that time you were expected to buy your tutors’ textbooks and to paraphrase (plagiarise) their work. Plagiarism is as much a social construct as it is a technological issue.

But coming back to today’s article, reporting on a three-day international conference on plagiarism at Northumbria University, the Guardian reports that “The conference will also hear that the problem of plagiarism at university could be reduced if students used ‘digital storytelling’ – creating packages of images and voiceovers – rather than essays to explain their learning from an imagined personal perspective.”

Phil Davies, senior lecturer at Glamorgan University’s computing school, said he had been using the technique for two years and had not seen any evidence of cheating. “Students find it really hard but it’s very rewarding, because they’re not copying and writing an essay, they have to think about it and bring their research into a personal presentation.”

Another approach is to focus on authentic assessment – or rather assessment of authentic learning tasks. In this case students are encouraged to use the internet for research but have to reflect on and re-purpose materials for reporting on their own individual research.

In both cases this goes beyond dealing with plagiarism – it is good practice in teaching and learning. And I wonder if that might be a better starting point for the efforts of researchers, developers and teachers.

Rethinking e-Portfolios

March 14th, 2010 by Graham Attwell

The second in my ‘Rethinking’ series of blog posts. This one – ‘Rethinking e-Portfolios’ – is the notes for a forthcoming book chapter, which I will post on the Wales Wide Web when completed.

Several years ago, e-portfolios were the vogue in e-learning research and development circles. Yet today little is heard of them. Why? This is not an unimportant question. One of the failures of the e-learning community is our tendency to move from one fad to the next, without ever properly examining what worked, what did not, and the reasons for it.

First of all it is important to note that there was never a single understanding of, or approach to, the development and purpose of an e-Portfolio. This can largely be ascribed to different didactic and pedagogic approaches to e-Portfolio development and use. Some time ago I wrote that “it is possible to distinguish between three broad approaches: the use of e-Portfolios as an assessment tool, the use of e-Portfolios as a tool for professional or career development planning (CDP), and a wider understanding of e-Portfolios as a tool for active learning.”

In a paper presented at the e-Portfolio conference in Cambridge in 2005 (Attwell, 2005), I attempted to distinguish between the different processes in e-Portfolio development and then examined the issue of ownership for each of these processes.

[Diagram: the different processes in e-Portfolio development and the ownership of each]

The diagram reveals not only ownership issues, but possibly contradictory purposes for an e-Portfolio. Is an e-Portfolio intended as a space for learners to record all their learning – that which takes place in the home or in the workplace as well as in a course environment – or is it a place for responding to prescribed outcomes for a course or learning programme? How much should an e-Portfolio be considered a tool for assessment and how much for reflection on learning? Can one environment encompass all of these functions?

These are essentially pedagogic issues. But, as always, they are reflected in e-learning technologies and applications. I worked for a while on a project aiming to repurpose the OSPI e-portfolio (later merged into Sakai) for use in adult education in the UK. It was almost impossible. The pedagogic use of the e-Portfolio – essentially to report against course outcomes – was hard coded into the software.

Let’s look at another, and contrasting, e-Portfolio application, ELGG. Although now used as a social networking platform, in its original incarnation ELGG started out as a social e-portfolio, originating in research undertaken by Dave Tosh on an e-portfolio project. ELGG essentially provided for students to blog within a social network with fine-grained and easy-to-use access controls. All well and good: students were not restricted to course outcomes in their learning focus. But when it came to reporting on learning as part of any assessment process, ELGG could do little. There was an attempt to develop a ‘reporting’ plug-in tool but that offered little more than the ability to favourite selected posts and accumulate them in one view.

Mahara is another popular open source ePortfolio tool, though I have not actively played with Mahara for two years. Although still built around a blogging platform, Mahara incorporated a series of reporting tools to allow students to present achievements. But it too was predicated on a (university) course and subject structure.

Early thinking around e-Portfolios failed to take into account the importance of feedback – or rather saw feedback as coming predominantly from teachers. The advent of social networking applications showed the power of the internet for what are now being called Personal Learning Networks – in other words, for developing personal networks to share learning and feedback. An application which merely allowed learners to develop their own records of learning, even if they could generate presentations, was clearly not enough.

But even if e-portfolios could be developed with social networking functionality, the tendency for institutionally based learning to regard the class group as the natural network limited their use in practice. Furthermore, the tendency, at least in the school sector, to restrict network access in the mistaken name of e-safety once more limited the wider development of ‘social e-Portfolios’.

But perhaps the biggest problem has been around the issue of reflection. Champions have lauded e-portfolios as natural tools to facilitate reflection on learning. Helen Barrett (2004) says an “electronic portfolio is a reflective tool that demonstrates growth over time.” Yet are e-Portfolios effective in promoting reflection? And is it possible to introduce a reflective tool in an education system that values the passing of exams through individual assessment over all else? Merely providing spaces for learners to record their learning, albeit in a discursive style, does not automatically guarantee reflection. It may be that reflection involves discourse, and tools for recording outcomes offer little in this regard.

I have been working for the last three years on developing a reflective e-Portfolio for a careers service based in the UK. The idea is to provide students with an opportunity to research different career options and reflect on their preferences, desired choices and outcomes.

We looked very hard at existing open source e-portfolios as the basis for the project, but could not find any that met our needs. We eventually decided to develop an e-Portfolio based on WordPress – which we named Freefolio.

At a technical level Freefolio was part hack and part the development of a plug-in. Technical developments included (the first of these is sketched in code after the list):

  • The ability to aggregate summaries of entries on a group basis
  • The ability to add custom profiles and to view the profiles of peers
  • Enhanced group management
  • The ability to add blog entries based on predefined XML templates
  • More fine grained access controls
  • An enhanced workspace view
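Freefolio itself was a WordPress plug-in, so the real code was PHP. As a language-neutral illustration of the first item in the list, here is a sketch of group-level aggregation over members’ blog feeds using Python’s feedparser library; the feed URLs are hypothetical.

```python
# Sketch of aggregating entry summaries on a group basis: pull each group
# member's blog feed and merge the latest entries into one overview.
# Requires the third-party feedparser library; URLs are hypothetical.
import feedparser

GROUP_FEEDS = [
    "http://example.org/member-a/feed",
    "http://example.org/member-b/feed",
]

def group_summary(feed_urls, per_member=3):
    entries = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries[:per_member]:
            entries.append((entry.get("published", ""),
                            entry.get("title", "untitled")))
    return sorted(entries, reverse=True)   # newest first across the group

for published, title in group_summary(GROUP_FEEDS):
    print(published, "-", title)
```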

Much of this has been overtaken by subsequent releases of WordPress Multi-User and more recently BuddyPress. But at the time Freefolio was good. However, it did not work in practice. Why? There were two reasons, I think. Firstly, the e-Portfolio was only being used for careers lessons in school, and that forms too small a part of the curriculum to build a critical mass of familiarity among users. And secondly, it was just too complex for many users. The split between the front end and the back end of WordPress confused users. The pedagogic purpose and the functional use were too far apart. Why press on something called ‘new post’ to write about your career choices?

And, despite our attempts to allow users to select different templates, we had constant feedback that there was not enough ease of customisation in the appearance of the e-Portfolio.

In phase two of the project we developed a completely different approach. Rather than produce an overarching e-portfolio, we have developed a series of careers ‘games’ to be accessed through the careers company web site. Each of the six or so games, or mini applications, we have developed so far encourages users to reflect on different aspects of their careers choices. Users are encouraged to rate different careers and to return later to review their choices. The site is yet to be rolled out but initial evaluations are promising.

I think there are lessons to be learnt from this. Small applications that encourage users to think are far better than comprehensive e-portfolio applications which try to do everything.

Interestingly, this view seems to concur with that of CETIS. Simon Grant points out: “The concept of the personal learning environment could helpfully be more related to the e-portfolio (e-p), as both can help informal learning of skills, competence, etc., whether these abilities are formally defined or not.”

I would agree: I have previously seen both as related on a continuum, with differing foci but similar underpinning ideas. However, I have always tended to view Personal Learning Environments as a pedagogic approach, rather than an application. Despite this, there have been attempts to ‘build a PLE’. In that respect (and in relation to rethinking e-Portfolios) Scott Wilson’s views are interesting. Simon Grant says: “As Scott Wilson pointed out, it may be that the PLE concept overreached itself. Even to conceive of “a” system that supports personal learning in general is hazardous, as it invites people to design a “big” system in their own mind. Inevitably, such a “big” system is impractical, and the work on PLEs that was done between, say, 2000 and 2005 has now been taken forward in different ways — Scott’s work on widgets is a good example of enabling tools with a more limited scope, but which can be joined together as needed.”

Simon Grant goes on to say that the “thin portfolio” concept (borrowing from the prior “personal information aggregation and distribution service” concept) “represents the idea that you don’t need that portfolio information in one server; but that it is very helpful to have one place where one can access all ‘your’ information, and set permissions for others to view it. This concept is only beginning to be implemented.”

This is similar to the Mash Up Personal Learning Environment being promoted in a number of European projects. Indeed, a forthcoming paper by Fridolin Wild reports on research looking at the value of lightweight widgets for promoting reflection that can be embedded in existing e-learning programmes. This is an interesting idea in suggesting that tools for developing an e-Portfolio (or, for that matter, a PLE) can be embedded in learning activities. This approach does not need to be restricted to formal school or university based learning courses. Widgets could easily be embedded in work-based software (and workflow software), and our initial investigations of Work Oriented Personal Learning Environments (WOMBLES) have shown the potential of mobile devices for capturing informal and work-based learning.

Of course, one of the big developments in software since the early e-Portfolio days has been the rise of Web 2.0, social software and more recently cloud computing. There seems little point in spending time and effort developing applications for students to share PowerPoint presentations when we already have the admirable SlideShare application. And for bookmarks, little can compete with Diigo. Most of these applications allow embedding, so all work can be displayed in one place. Of course there is an issue as to the longevity of data on such sites (but then we have the same issue with institutional e-Portfolios, and I would always recommend that students retain a local copy of their work). Of course, not all students are confident in the use of such tools: a series of recent studies have blown apart the idea of the Digital Native (see for example Hargittai, E. (2010). Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the “Net Generation”. Sociological Inquiry. 80(1):92-113). And some commercial services may be more suitable than others for developing an e-Portfolio: Facebook has, in my view, limitations! But, somewhat ironically, cloud computing may be moving us nearer to Helen Barrett’s idea of an e-Portfolio. John Morrison recently gave a presentation (downloadable here) based on his study of ‘what aspects of identity as learners and understandings of ways to learn are shown by students who have been through a program using course-based networked learning?’ In discussing technology he looked at university as opposed to personally acquired technology, standalone as opposed to networked, and explored as opposed to ongoing use.

He found that students:

  • Did not rush to use new technology
  • Used face-to-face contact rather than technology, particularly in the early brainstorming phases of a project
  • Tried out software and rejected that which did not meet a need
  • Used a piece of software until another emerged which was better
  • Restrained the amount of software they used regularly to relatively few programs
  • Ignored certain technologies altogether, which do not appear to have been tried out at all

John equates the fourth of these – moving on when something better emerges – with change, and the fifth – settling on relatively few programs – with conservatism.

Whilst students were previously heavy users of Facebook, they were now abandoning it. And whilst there was little previous use of Google Docs, his latest survey suggested that this cloud application was now being heavily used. This is important in that one of the stranger aspects of previous e-Portfolio development has been the requirement for most students to upload files, produced in an offline word processor, to the e-Portfolio and present them as attachments. But if students (no doubt partly driven by cost savings) are using online software for their written work, this may make it much easier to develop online e-portfolios.

John concluded that “this cohort lived through substantial technological change. They simplified and rationalized their learning tools. They rejected what was not functional, university technology and some self-acquired tools. They operate from an Acquisition model of learning.” He concluded that “Students can pick up and understand new ways to learn from networks. BUT… they generally don’t. They pick up what is intended.” (It is also well worth reading the discussion board around John’s presentation – although you will need to be logged in to the Elesig Ning site.)

So – the e-Portfolio may have a new life. But what particularly interests me is the interplay between pedagogic ideas and applications, and software opportunities and developments, in providing that new potential life. And of course we still have to solve the issue of control and ownership. And as John says, students pick up what is intended. If we continue to adhere to an acquisition model of learning, it will be hard to persuade students to develop reflective e-Portfolios. We should continue to rethink e-Portfolios through a widget-based approach. But we have also to continue to rethink our models of education and learning.

Using computers in exams

November 4th, 2009 by Graham Attwell

Late yesterday afternoon I had a phone call from BBC Radio Wales asking if I would come on the morning news programme to talk about the use of computers in exams. According to the researcher / producer (?) this was a debate opened up by a reform in Denmark. A quick Google search came up with the following article from the Politiken newspaper.

“Danish ‘A’ level students are likely to be able to use the Internet in their written exams if a test run later this year proves successful.

The Ministry of Education says that pupils already use the Internet for tests.

“It’s a good way to get hold of historical facts or an article that can be useful, for example, in a written social sciences exam,” Ministry Education Consultant Søren Vagner tells MetroXpress.

Digital hand-in

In order to prevent students from cheating by downloading translation programmes or communicating using chats, the idea is that papers should be handed in digitally and that there should be random checks on sites that students visit during an exam”

So early in the morning (at least for me) I got up and skyped into the BBC Cardiff newsroom. I was on the programme to defend the use of computers; Chris Woodhead, the ex-Chief Inspector of Schools, was the opponent. And we had five minutes of knockabout fun. The BBC preceded the item with three or four vox pops with ‘A’ Level school students from Monmouth in East Wales, who rather predictably said what a bad idea it was, as it would penalise those who had worked hard to remember all the facts.

I said I thought on the whole it was a good idea because it would allow students to use the technologies available in the real world to show their creativity and ability to develop ideas and knowledge; Chris said it was a bad thing because they would waste time surfing and it would prevent them showing their creativity and grasp of knowledge and ideas. And that was it.

In reality, I think the discussion is a much deeper one about the nature and purpose of assessment. The ‘A’ level exam in the UK is essentially used as a filter mechanism, to select students for university. As such there is little authenticity. Students are inevitably taught for the exam. I saw some research a while ago suggesting that ‘A’ levels are a poor predictor of later success at university, but cannot find the reference at the moment. The problem is that the examinations do not really test what students have learned, but their ability to apply what they have learnt to a particular series of formalised tests. Neither do the exams serve to help students in their learning – other than, I suppose, motivating them to learn a lot of facts in the run-up to the exam. I fear that little of what we call revision for exams actually involves reflection on learning. And if the use of computers were to herald a move away from learning facts towards reflecting on meanings, then it could only be a good thing. But at the end of the day I can’t get excited – and certainly couldn’t so early in the morning. The big issue for me is how to use technology to support learning. And that is another thing.

Self Evaluation or Assessment – it isn’t hard

October 26th, 2009 by Graham Attwell

[Image: a completed self-evaluation sheet from a Bremen primary school]

I have written many times about Assessment for Learning and the self assessment or evaluation of learning. Assessment for Learning is the idea of formative assessment to support the learning process, rather than most of our present assessment systems, which are designed to support comparisons or act as a screening mechanism for entry into higher education, education and training, or employment.

And self evaluation – it is what it says: the idea that learners are able to evaluate or assess their own learning, often with a surprising degree of insight and accuracy. Of course, when they do this they own the assessment – it ceases to be something that is done to them and becomes part of their own reflective learning process.

But, say teachers, this is hard to do. Learners will not know how to do it. They will over-rate their own abilities.

So practical examples are always welcome, and I was lucky enough to see today the self evaluation of one of my friend’s children in a school in Bremen (reproduced above).

The process went something like this. Last week the students – aged 8 – were asked to fill in their own assessments in the left-hand column. Then the sheets were passed to their two form teachers, who also filled in the assessment in the right-hand column. And then today there were individual meetings between teachers and students to discuss the results. (It is interesting to note that, as in previous exercises of this sort that I have seen, teachers tended to rate students slightly higher than the students rated themselves.)
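The comparison step is worth pausing on, because it is where the learning happens. A toy sketch of that comparison, with invented criteria and 1–4 ratings:

```python
# Toy comparison of self- and teacher-ratings on a 1-4 scale (criteria and
# scores invented for illustration). Positive gaps mean the teacher rated
# the student higher -- the pattern noted above.
self_ratings    = {"reading": 3, "maths": 2, "teamwork": 4, "listening": 3}
teacher_ratings = {"reading": 3, "maths": 3, "teamwork": 4, "listening": 4}

gaps = {c: teacher_ratings[c] - self_ratings[c] for c in self_ratings}
print(gaps)                            # per-criterion gap, teacher minus student
print(sum(gaps.values()) / len(gaps))  # mean gap: 0.5 here
```

The per-criterion gaps, not the scores themselves, are what give the individual meetings something concrete to discuss.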
Seems pretty cool to me (even if it slightly over-emphasises behaviour and conformity) and much, much more useful than the UK Standard Assessment Tests (SATs).

Learning pathways and the European Qualification Framework: can the two go together

February 16th, 2009 by Graham Attwell

Last week I took part in a fairly impassioned Flash Meeting debate about the use of the European Qualification Framework for the training of teachers and trainers. Opinions varied greatly, between those who saw the EQF as a useful tool for promoting new qualifications for teachers and trainers and those who saw it as a barrier in this field.

Firstly, for non-European readers, it may be useful to recapitulate what the EQF is all about. The European Qualifications Framework (EQF), says the European Commission, “acts as a translation device to make national qualifications more readable across Europe, promoting workers’ and learners’ mobility between countries and facilitating their lifelong learning.” The primary users of the EQF are seen as being bodies in charge of national and/or sectoral qualification systems and frameworks. The idea is that once they have related their respective systems to the EQF, the EQF will help individuals, employers and education and training providers compare individual qualifications from different countries and education and training systems.

To achieve this the European Commission has designed a framework of eight levels. Each of the eight levels is defined by a set of descriptors indicating the learning outcomes, expressed as knowledge, skills and competences, relevant to qualifications at that level.
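The shape of the framework can be made concrete with a sketch: each level carries descriptors along axes such as knowledge, skills and competence. Only two cells are filled in below, using descriptors quoted later in this post; the real grid covers all eight levels.

```python
# The EQF reduced to a data structure: eight levels, each with descriptors
# on several axes. Two cells are filled in, quoting descriptors discussed
# in the text; the remaining cells of the real framework are omitted.
EQF_LEVELS = {
    4: {"competence": ("supervise the routine work of others, taking some "
                       "responsibility for the evaluation and improvement "
                       "of work or study activities")},
    6: {"knowledge": ("advanced knowledge of a field of work or study, "
                      "involving a critical understanding of theories "
                      "and principles")},
}

for level in sorted(EQF_LEVELS):
    for axis, descriptor in EQF_LEVELS[level].items():
        print(f"Level {level} / {axis}: {descriptor}")
```

The critique that follows is, in effect, that these axes are incommensurable: nothing in the structure justifies ordering a work role above or below an item of academic knowledge.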

The problem is that it doesn’t work. For the moment I will ignore the more epistemological issues related to the definition of knowledge, skills and competences. The biggest problem for me relates to levels. The Framework has mixed a series of different indicators to derive the levels. Some of the indicators are based on academic attainment. For instance, the descriptor for knowledge at Level 8 states (can) “demonstrate substantial authority, innovation, autonomy, scholarly and professional integrity and sustained commitment to the development of new ideas or processes at the forefront of work or study contexts including research.” Some are based on levels of responsibility and autonomy in work roles. Level 4 talks of the ability to “supervise the routine work of others, taking some responsibility for the evaluation and improvement of work or study activities.” Others are based on the complexity of the work being undertaken. Level 6 skills, for example, are “advanced skills, demonstrating mastery and innovation, required to solve complex and unpredictable problems in a specialised field of work or study.” Yet others are based on quite abstract ideas of knowledge. Level 6 knowledge comprises “advanced knowledge of a field of work or study, involving a critical understanding of theories and principles.”

One of the biggest problems is that the framework attempts to bring together applied knowledge and skills within a work process, work roles as expressed by responsibility, and knowledge as expressed through academic achievement. And of course it is impossible to equate these, still less to derive a hierarchical table of progression and value. It is easy to pick holes: why, for instance, is “exercise management and supervision in contexts of work or study activities where there is unpredictable change” level 5, whilst having a “critical awareness of knowledge issues in a field and at the interface between different fields” is level 7?

There would appear to be a series of unspoken and implicit value judgements, related both to the value of academic versus vocational and applied learning and to different roles within the workplace. There also seems to be an attempt to deal with work organisation, with teamwork being written in as a high level function. Of course it may be, but then again in particular contexts teamwork may not be so important. In some jobs the ability to work autonomously may be important, in others not so. And how can we translate between such abilities, competences or whatever they are called, and qualifications?

I do not think it is possible to design such a framework, nor do I think that levels are a useful concept, especially given the hierarchical structures of this and other similar frameworks. Why should one particular competence or skill be valued over another? Even more important is the idea of hierarchical progression. Lest this be thought merely an academic question, the UK government has already withdrawn funding support for those wishing to progress from one qualification to another at the same EQF-related level.

One of the aims of the Framework is to promote lifelong learning. But an individual may not wish to advance their learning in the social forms envisaged by the Framework. There is an assumption, for instance, that a teacher or trainer will progress towards being a manager, as represented in the higher levels within the EQF. But many teachers and trainers that I have talked to actually want to improve their practice as teachers or trainers. Or they may wish to move ‘sideways’ – to learn more about working with particular groups, or to undertake work as a counsellor, for instance. The implicit progression routes inherent in the Framework do not necessarily represent the way we work and learn.

Far better for me is the idea of Learning Pathways. Learning Pathways can represent a progression in our learning based on the context of our life and our work, and based on individual interest and motivation. Our Learning Pathway may at times go upwards, downwards or sideways in the table of skills, competences and knowledge as represented in the Framework. Such an idea of Learning Pathways is consistent with the idea of Personal Learning Environments and of more self-directed learning. Of course it is useful to have a framework to assist in selecting progression routes and for counselling, guidance and support. But let’s abolish the taxonomy of levels and start representing learning opportunities as what they are, rather than through a somewhat oddly derived taxonomy which tries to make things fit neatly that do not, and which embodies implicit social values.

MOOCs, Connectivism, Humpty Dumpty and more – with Dave Cormier

November 9th, 2008 by Graham Attwell

Last week’s Emerging Mondays seminar was on the topic of MOOCs and Open Course Models. The speaker was Dave Cormier from the University of Prince Edward Island.

Dave spoke about his experiences, so far, of the CCK MOOC on Connectivism and Connective Knowledge, the technological platforms being used to support participants, the tensions that exist within the course design and the peer support models that are being embraced. Dave’s introduction led to a wide-ranging discussion including the nature and future of courses and communities, issues of scale, how to support learners, open accreditation and the future of open education – and … Humpty Dumpty and Alice in Wonderland!

If you missed the session – or would like to hear it again – we are providing you with three different versions. You can watch a replay of the event in Elluminate. This provides you with access to the sidebar chat discussion as well as to the audio.

Or – if you are short of time you can listen to an MP3 podcast of Dave’s introduction.

Or you can listen to the full session inline or on your MP3 player.

This is the link to the Elluminate version.

More about learning 2.0

October 31st, 2008 by Graham Attwell

Another post on the IPTS seminar on Learning 2.0 in Seville. This workshop was interesting because it brought together researchers and practitioners from all over Europe. And, somewhat to my surprise, there was a fair degree of consensus. We agreed social software provided many opportunities for creating, rather than passively consuming, learning. We agreed that learning opportunities were being developed outside the classroom. We even agreed that the locus of control was switching from institutions to learners, and that this might well be a good thing. We agreed we were moving towards individual learning pathways and that learners needed to be supported in finding their pathways.

We agreed that the context of learning was important. Mobile learning would become increasingly important with the development of context sensitive devices. (Also see Serge Ravet’s post on User Generated Content or User Generated Contexts).

But there were also limits to the consensus. Whilst there appeared to be agreement on new roles for teachers, no-one was sure what that role was.

Much of the discussion centred on the scaffolding of learning. How much support did learners need, and how much of that support would come from teachers?

Neither were participants agreed on the future role of institutions. More critically, was Learning 2.0 something which happened outside the school, and had only a limited impact on institutional practice, or did it pose a fundamental challenge for the future of schooling?

There was even greater disagreement over curriculum. Should there be a curriculum of basic skills and knowledge that everyone should learn? Did learners need a basic grounding in their subject before they could develop their own learning pathways? Who should define such a curriculum? What was the role of ‘experts’ and who were they anyway?

And perhaps the greatest disagreement was over assessment and accreditation. Many of us felt that we needed to move towards community-based formative assessment. Employers, we said, would be more interested in what people were able to do than in formal certificates. Others, pointing to occupations such as doctors and plumbers, felt there should be some form of standards against which people should be assessed and accredited.

A final comment on the form of the project. Although the work is about Learning 2.0, the present form of the work is decidedly Research 1.0. This research is important enough that it needs to be opened out to the community. It seems a wiki is being developed, and when it is up I will blog about it here. In the meantime, here are some of the photos of the flip charts used for brainstorming around different issues at the workshop. I will pass on any comments on this post to the project organisers.

Open Accreditation – a model

October 14th, 2008 by Graham Attwell

Can we develop an Open Accreditation system? What would we be looking for? In this post Jenny Hughes looks at criteria for a robust and effective accreditation system.

An accreditation system depends on the robustness of the assessment system on which it is based.

Imagine you were in a shop that sold accreditation / assessment systems ‘off-the-peg’ – what criteria would you use if you went in to buy one?

Reliability
Reliability is a measure of consistency. A robust assessment system should be reliable; that is, it should be based on an assessment process that yields the same results irrespective of who is conducting it or the environmental conditions under which it is taking place. Intra-tester reliability simply means that if the same assessor is assessing performance, his or her judgement should be consistent and not influenced by, for example, another learner they might have just assessed, or whether they feel unwell or are just in a bad mood! Inter-tester reliability means that if two different assessors were given exactly the same questions, data collection tools, output data and so on, their conclusions should also be the same. Extra-tester reliability means that the assessor’s conclusions should not be influenced by extraneous circumstances, which should have no bearing on the assessment object.

Validity
Validity is a measure of ‘appropriateness’ or ‘fitness for purpose’. There are three sorts of validity. Face validity implies a match between what is being assessed or tested and how that is being done. For example, if you are assessing how well someone can bake a cake or drive a car then you would probably want them to actually do it rather than write an essay about it! Content validity means that what you are testing is actually relevant, meaningful and appropriate and there is a match between what the learner is setting out to do and what is being assessed. If an assessment system has predictive validity it means that the results are still likely to hold true even under conditions that are different from the test conditions. For example, performance assessment of airline pilots who are trained to cope with emergency situations on a simulator must be very high on predictive validity.

Replicability
Ideally an assessment should be carried out and documented in a way which is transparent and which allows the assessment to be replicated by others to achieve the same outcomes. Some ‘subjectivist’ approaches to assessment would disagree, however.

Transferability
Although each assessment should be designed around a particular piece of learning, a good assessment system is one which could be adapted for similar situations or could be extended easily to new activities. That is, if your situation evolves and changes over a period of time in response to need, it would be useful if you didn’t have to rethink your entire assessment system. Transferability is about the shelf-life of the assessment and also about maximising its usefulness.

Credibility
People actually have to believe in your assessment! It needs to be authentic, honest, transparent and ethical. If you have even one group of stakeholders questioning the rigour of the assessment process, doubting the results or challenging the validity of the conclusions, the assessment loses credibility and is not worth doing.

Practicality
This means simply that however sophisticated and technically sound the assessment is, if it takes too much of people’s time or costs too much or is cumbersome to use or the products are inappropriate then it is not a good assessment system!

Comparability
Although an assessment system should be customised to meet the needs of particular learning events, a good assessment system should also take into account the wider assessment ‘environment’ in which the learning is located. For example, if you are working in an environment where assessment is normally carried out by particular people (e.g. teachers, lecturers) in a particular institution (e.g. school or university) where criterion-referenced assessment is the norm, then if you undertake a radically different type of assessment you may find that your audience will be less receptive and your results less acceptable. Similarly, if the learning that is being assessed is part of a wider system and everyone else is using a different system, then this could mean that your input is ignored simply because it is too difficult to integrate.

Also, if you are trying to compare performance from one year to the next or compare learning outcomes with other people, then this needs to be taken into account.
