Archive for the ‘Assessment’ Category

Teaching, Learning and Assessment

February 3rd, 2021 by Graham Attwell

Tumisu (CC0), Pixabay

I went to a great inaugural professorial lecture today by Bob Harrison (@BobHarrisonEdu) on Teaching, Learning and Assessment in a digital world. Organised by John Traxler at the University of Warwick, the session was informative, provocative, passionate and fun. Although delivered through Zoom, it seems Bob had insisted on prerecording his presentation to allow him to participate in the chat alongside the lecture. It’s well worth watching, and as soon as I get the address of the recording I will post it here.

I have a growing interest in e-assessment, which I see as one of the crucial areas in which education needs to change, and we have an outstanding funding bid for an international project in this area. Anyway, Bob passed on a whole series of references during the presentation and in the chat. One of these was to the e-Assessment Association.

According to its website, the e-Assessment Association (eAA) is a not-for-profit membership body with three major goals:

  1. To provide professional support and facilitate debate/discussion for everyone involved in this field of expertise;
  2. To create and communicate the positive contributions that technology makes to all forms of assessment; and
  3. To develop statements of good practice for suppliers and consumers of e-Assessment technologies.

The eAA builds awareness of the benefits that technology brings to assessment, particularly around delivering better learning and assessment, rather than just greater efficiency.

The eAA also works to ensure that it has a strong voice and influence in the key policy debates involving the assessment of learning, training and competency.

Membership is free!

AI and Algorithms: the UK examination debacle

August 20th, 2020 by Graham Attwell

This article was originally published on the Taccle AI web site.

There’s a lot to think about in the ongoing debacle over exam results in the UK. A quick update for those who have not been following the story. Examinations for young people, including the academic GCSE and A level exams and the more vocationally oriented BTECs, were cancelled this year due to the Covid-19 pandemic. Instead teachers were asked first to provide an estimated grade for each student in each subject, and second to rank-order the students in their school.

These results were sent to a central government agency, the Office of Qualifications and Examinations Regulation, known as Ofqual. But instead of awarding qualifications based on the teachers’ predicted grades, Ofqual decided, seemingly in consultation with, or more probably under pressure from, the government, to use an algorithm to calculate grades. This was based largely on the previous results achieved by each school in each subject, with adjustments made for small class cohorts and according to the rankings.

The results from the A levels were released last week. They showed massive irregularities at an individual level, with some students downgraded from a predicted A* (the highest grade) to a C or D. Analysis also showed that students from expensive private schools tended to do better than expected, whilst students from public sector schools in working class areas did proportionately worse than predicted. In other words, the algorithm was biased.

As soon as the A level results were announced there were protests from teachers, schools and students. Yet the government stuck to its position, saying there would be no changes. The Prime Minister, Boris Johnson, said: “Let’s be in no doubt about it, the exam results we’ve got today are robust, they’re good, they’re dependable for employers”. However, concern quickly grew about the potential for numerous appeals, and indeed at the time it would take teachers to prepare such appeals. Meanwhile the Scottish government (which is autonomous in education policy) announced that it would revert to the teachers’ predicted grades. In England, while the government stood firm, demonstrations by school students broke out in most cities. By the weekend it was clear that something had to change, and on Monday the UK government, responsible for exams in England, announced that it too would respect teacher predicted grades.

The political fallout goes on. The government is trying to shift the blame to Ofqual, despite clear evidence that it knew what was happening. Meanwhile some of the universities, which rely on the grades to decide who to offer places to, are massively oversubscribed as a result of the upgrades.

So, what does this mean for the use of AI in education? One answer may be that there needs to be careful thinking about how data is collected and used. As one newspaper columnist put it at the weekend: “Shit in, shit out”. Essentially the data used was the exam results of students at a collective school level in previous years. This has little or no relevance to how an individual student might perform this year. In fact, the algorithm was designed not to award an appropriate grade reflecting a student’s learning and work, but to prevent what is known as grade inflation: increasing numbers of students getting higher grades each year, which the government sees as a major problem.

But this in turn has sparked off a major debate, with suspicions that the government does in fact support a bias in results, aiming to empower the elite to attend university with the rest heading for second-class vocational education and training provision. It has also been pointed out that the Prime Minister’s senior adviser, Dominic Cummings, has in the past written articles appearing to suggest that upper class students are inherently more intelligent than those from the working class.

The algorithm, although blunt in its impact, merely replicated processes that have been followed for many years (and certainly preceded big data). Many years ago, I worked as a project officer for the Welsh Joint Education Committee (WJEC), the examination board for Wales. At that time there were quite a number of recognised examination boards, although the number has since been reduced by mergers. I was good friends with a senior manager in the exam board, and he told me that every year, about a week before the results were announced, each exam board shared its results, including the number of students to be awarded each grade. The results were then adjusted to fit the figures that the boards had agreed to award in that year.

And this gets to the heart of the problems with the UK assessment system. Of course, one issue is the ridiculous importance placed on formal examinations. But it also reflects the approach to assessment. Basically, there are three assessment systems. Criteria-based assessment means that any students achieving a set criterion are awarded accordingly. Ipsative assessment assesses achievement against the individual’s own previous performance. But in the case of UK national exams the system followed is norm referenced, which means that a norm is set for passing and for grading. This is fundamentally unfair: if the cohort for one year is high achieving, the norm will be raised to ensure that the numbers achieving any particular grade meet the desired target. The algorithm applied by Ofqual was essentially designed to ensure results complied with the norm, regardless of individual attainment. It has always been done this way; the difference this year was the blatant crudeness of the system, as the sketch below tries to make concrete.
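To see just how blunt norm referencing is, here is a minimal sketch in Python of the general idea: the teacher-supplied rank order is simply read off against a fixed historical grade distribution. The distribution, names and boundaries are invented for illustration, and this is emphatically not Ofqual’s actual model, which Tom Haines’s post linked below analyses in detail.

```python
# A minimal sketch of norm-referenced grading, NOT Ofqual's actual model:
# the teacher-supplied rank order is read off against a fixed historical
# grade distribution. Names and shares are invented.

HISTORICAL_DISTRIBUTION = {"A": 0.15, "B": 0.30, "C": 0.35, "D": 0.20}

def norm_referenced_grades(ranked_students: list[str]) -> dict[str, str]:
    """Assign grades to students ranked best-first so that the cohort's
    grades match the historical distribution, whatever the individuals
    actually achieved."""
    n = len(ranked_students)
    grades: dict[str, str] = {}
    cumulative = 0.0
    cursor = 0
    for grade, share in HISTORICAL_DISTRIBUTION.items():
        cumulative += share
        upper = round(cumulative * n)  # rank boundary for this grade
        for student in ranked_students[cursor:upper]:
            grades[student] = grade
        cursor = upper
    return grades

# Teachers' predicted grades play no part; only the rank order matters.
print(norm_referenced_grades(["Ann", "Ben", "Cai", "Dee", "Eli", "Fay"]))
# {'Ann': 'A', 'Ben': 'B', 'Cai': 'B', 'Dee': 'C', 'Eli': 'C', 'Fay': 'D'}
```

Note that in this scheme an outstanding student in a historically weak school cannot get a grade the school’s past results do not “allow”, which is exactly the bias students protested against.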

So, there is a silver lining, despite the stress and distress caused for thousands of students. At last there is a focus on how the examination system works, or rather does not. And there is a focus on the class-based bias which has always been there. However, it would be a shame if the experience prevents people from looking at the potential of AI, not for rigging examination results, but for supporting the introduction of formative assessment for students to support their learning.

If you are interested in understanding more about how the algorithm worked, there is an excellent analysis by Tom Haines in his blog post ‘A levels: the Model is not the Student’.

 

AI cloud computing to support formative assessment in vocational education and training

June 30th, 2020 by Graham Attwell

geralt (CC0), Pixabay

I have written before about the great work being done around AI by Bolton College in the UK, and particularly their Ada chatbot.

One of my main interests in the use of AI in vocational education and training is its potential for freeing up teachers to provide more personalised learning support, both for students who are struggling and for advanced students. At the moment too many teachers are forced by workloads to teach to the middle.

My second big hope is around assessment. Vocational students need, I think, regular feedback, and that can come from formative assessment. However, at present teachers do not have the time to prepare and provide feedback on regular formative assessments. With AI this becomes possible.

Bolton College previously received VocTech seed funding to prove the concept of using Artificial Intelligence (AI) to analyse short and long form answers and to demonstrate that real-time feedback can be offered to vocational learners as they respond to online open-ended formative assessment tasks.

Their FirstPass tool provided an initial introduction to AI cloud computing technologies that can support vocational students and their teachers with open-ended formative assessment tasks.

Now, according to Ufi, which provides VocTech funding, a new project “will provide further development of FirstPass to ensure that it is effective and robust in use and can demonstrably improve the teaching, learning and assessment experience of vocational learners. It will provide teachers with a richer medium for assessing students due to its ability to pose open-ended questions that can be automatically analysed and assessed by a computer, giving students real-time feedback and the opportunity to qualify and clarify their responses.”
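How FirstPass analyses answers internally has not been published, so the following is only a hypothetical Python sketch of the general idea behind automated formative feedback: compare a free-text answer against a set of marking points and return instant, specific feedback. The marking points and feedback strings are invented, and crude keyword matching stands in for the NLP a real system would use.

```python
# A hypothetical sketch of automated feedback on an open-ended answer.
# FirstPass's internals are not public; crude keyword matching stands
# in for real NLP, and marking points and feedback are invented.

MARKING_POINTS = {
    "hazard": "Good - you identified the hazard.",
    "risk": "Good - you considered the level of risk.",
    "report": "Good - you noted that incidents must be reported.",
}

def give_feedback(answer: str) -> list[str]:
    """Return instant formative feedback: which marking points the
    answer covers, and which the student should still address."""
    text = answer.lower()
    feedback = []
    for point, praise in MARKING_POINTS.items():
        if point in text:
            feedback.append(praise)
        else:
            feedback.append(f"Have another look: say something about '{point}'.")
    return feedback

for line in give_feedback("First I would identify the hazard and report it."):
    print(line)
```

Even this toy version shows why the approach matters for workload: the feedback arrives while the student is still thinking about the question, with no marking queue in between.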

Case study: the Ada chatbot, a personalised, AI-driven assistant for each student

March 31st, 2020 by Graham Attwell

As part of the AI and vocational education and training project funded through the EU Erasmus+ programme, we are producing a series of case studies of the use of AI in VET in five European countries. Here is my first case study: the Ada chatbot developed at Bolton College.

About Bolton College

Bolton College is one of the leading vocational education and training providers in the North West of England, specialising in delivering training – locally, regionally and nationally – to school leavers, adults and employers. The college employs over 550 staff members who teach over 14,500 full and part time students across a range of centres around Bolton. The college’s Learning Technology Team has a proven reputation for the use of learning analytics, machine learning and adaptive learning to support students as they progress with their studies.

The Ada Chatbot

The Learning Technology Team has developed a digital assistant called Ada which went live in April 2017. Ada, which uses the IBM Watson AI engine, can respond to a wide range of student inquiries across multiple domains. The college’s Learning Technology Lead, Aftab Hussain, says “It transforms the way students get information and insights that support them with their studies.” He explains: “It can be hard to find information on the campus. We have an information overload. We have lots of data but it is hard to manage. We don’t have the tools to manage it – this includes teachers, managers and students.” Ada was first developed to overcome the complexity of accessing information and data.

Student questions

Ada is able to respond to student questions including:

  1. General inquiries from students about the college (for example: semester dates, library opening hours, exam office locations, campus activities, the deadline for applying for university and more);
  2. Specific questions from students about their studies (for example: What lessons do I have today/this afternoon/tomorrow? Who are my teachers? What’s my attendance like? When is my next exam? When and where is my work placement? What qualifications do I have? What courses am I enrolled in? etc.);
  3. Subject-specific inquiries from students. Bolton College is teaching Ada to respond to questions relating to GCSE Maths, GCSE English and the employability curriculum.

Personalised and contextualised learning

Aftab Hussain explains: “We are connecting all campus data sets. Ada can reply to questions contextually. She recognises who you are and is personalised according to who you are and where you are in the student life cycle. The home page uses Natural Language Processing and the Watson AI engine. It can reply to 25,000 questions around issues such as mental health or library opening times. It also includes subject-specific enquiries, including around English, Mathematics, business and employability. All teachers have been invited to submit the top 20 queries they receive. Machine learning can recognise the questions. The technical process is easy.” However, he acknowledges that inputting data into the system can be time consuming, and they are looking at ways of automatically reading course documentation and presentations.
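Ada’s own pipeline runs on IBM Watson, so the following is just a toy Python sketch of the principle described above: match the student’s question to an intent, then answer from campus data personalised to that student. The intents, keyword sets and answers are all invented, and crude word overlap stands in for Watson’s NLP.

```python
# A toy sketch of the intent-matching principle behind a chatbot like
# Ada. Ada itself uses IBM Watson; word overlap stands in for its NLP,
# and the intents, keywords and answers are all invented.

INTENTS = {
    "library_hours": {"library", "open", "hours"},
    "next_lesson": {"lesson", "next", "today", "timetable"},
    "attendance": {"attendance"},
}

# In the real system these answers would be personalised from joined
# campus data sets (timetable, attendance records, and so on).
ANSWERS = {
    "library_hours": "The library is open 08:30-20:00 today.",
    "next_lesson": "Your next lesson is GCSE Maths at 14:00 in room B12.",
    "attendance": "Your attendance this term is 94%.",
}

def classify(question: str) -> str:
    """Pick the intent whose keyword set best overlaps the question."""
    words = set(question.lower().replace("?", "").split())
    return max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))

print(ANSWERS[classify("When is the library open?")])
```

The hard work in a real deployment is not the matching but the data plumbing Hussain describes: connecting the campus data sets so the answer can be personalised to the student asking.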

All the technical development has been undertaken in house. As well as being accessible through the web, Ada has both iOS and Android apps and can also be queried through smart speakers.

The system also links to the college Moodle installation and can provide access to assignments, college information services and curriculum materials. It is increasingly being used in online tutorials, providing both questions for participants and access to learning materials, for instance videos for health and social care.

It is personalised for individuals and contextualised according to what they are doing or want to find out. Aftab says: “We are looking at the transactional distance – the system provides immediate feedback, reducing the transactional distance.”

Digital assessment

Work is also being undertaken to develop the use of the bot for assessment. This is initially being used for the evaluation of work experience, where students need to provide short examples of how they are meeting objectives – for example in collaboration or problem solving. Answers can be uploaded, evaluated by the AI and feedback returned instantly.

Nudging

Since March 2019, the Ada service has provided nudges to students, with timely and contextualised information, advice and guidance (IAG) to support their studies. The service nudges students about forthcoming exams, their work placement feedback and more. In one example, a student receives feedback regarding his work placement from his career coach and employer.

The College is currently implementing ProMonitor, a service which will offer teachers and tutors a scalable solution for managing and supporting the progress made by their students. Once ProMonitor is in place, Ada will be in a position to nudge students about forthcoming assignments and the grades awarded for those assignments. She will also offer students advice and guidance about staying on track with their studies. Likewise, Ada will nudge teachers and student support teams about student progress, allowing for timely support to be put in place for students across the College.
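The nudging logic itself can be very simple. Here is a minimal Python sketch, with invented data and field names (the real records would come from ProMonitor and the college’s other systems), of the kind of rule described above: scan upcoming assignments and prompt any student who has not yet submitted.

```python
# A minimal sketch of the nudging rule described above, with invented
# data and field names; real records would come from ProMonitor and
# the college's other systems.

from datetime import date, timedelta

ASSIGNMENTS = [
    {"student": "Sam", "title": "Unit 3 essay", "due": date(2020, 4, 7), "submitted": False},
    {"student": "Priya", "title": "Unit 3 essay", "due": date(2020, 4, 7), "submitted": True},
]

def nudges(today: date, window_days: int = 3) -> list[str]:
    """Return nudge messages for unsubmitted work due within the window."""
    horizon = today + timedelta(days=window_days)
    return [
        f"Hi {a['student']}, your '{a['title']}' is due on {a['due']:%d %B}."
        for a in ASSIGNMENTS
        if not a["submitted"] and today <= a["due"] <= horizon
    ]

for message in nudges(date(2020, 4, 5)):
    print(message)  # only Sam is nudged; Priya has already submitted
```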

A personal lifelong learning companion

For Aftab Hussain the persona of the digital agent is important. He thinks that in the future the chatbot will morph into a personal cognitive assistant that supports students throughout their entire educational life, from nursery school to university and beyond.

“The personal assistant will learn from each student throughout their life and adapt according to what they like, while guiding them through studies. It could remind when homework is due, book appointments with tutors, and point towards services and events that might support studies, for example.”


Changing the role of Assessment

February 11th, 2020 by Graham Attwell

Formative assessment should play a key role in all education, and particularly in vocational education and training. It can give vital feedback to learners and guidance on the next steps of their learning journey. It can also help teachers to know what is effective and what is not, and where the gaps are, and help in planning learning interventions.

Yet all too often it does not. Assessment is too often seen at best as something to overcome and at worst as a stress-inducing nightmare. With new regulations in England requiring students in further education to pass tests in English and Mathematics, students are condemned to endlessly retaking the same exams regardless of their achievement in vocational subjects.

For all these reasons a new report published by Jisc today is very welcome.

Jisc say:

Existing and emerging technologies are starting to play a role in changing assessment and could help address these issues, both today and looking further ahead into the future, to make assessment smarter, faster, fairer and more effective.

The report sets five targets for the next five years to progress assessment towards being more authentic, accessible, appropriately automated, continuous and secure.

  • Authentic: assessments designed to prepare students for what they do next, using technology they will use in their careers

  • Accessible: assessments designed with an accessibility-first principle

  • Appropriately automated: a balance found between automated and human marking to deliver maximum benefit to students

  • Continuous: assessment data used to explore opportunities for continuous assessment to improve the learning experience

  • Secure: authoring detection and biometric authentication adopted for identification and remote proctoring

The report: ‘The future of assessment: five principles, five targets for 2025’ can be downloaded from the Jisc website.

 

Recognising competence and learning

November 16th, 2015 by Graham Attwell

As promised, some further thoughts on the DISCUSS conference, held earlier this week in Munich.

One of the themes for discussion was the recognition of (prior) learning. The theme had emerged after looking at the main work of European projects, particularly in the field of lifelong learning. The idea and attraction of recognising learning from different contexts, and particularly from informal learning, is hardly new. In the 1990s, in the UK, the National Council for Vocational Qualifications (as it was then called) devoted resources to developing systems for the Accreditation of Prior Learning. One of the ideas behind National Vocational Qualifications was the decoupling of teaching and learning from learning outcomes, expressed in terms of competences and performance criteria. Therefore, it was thought, anyone should be able to have their competences recognised (through certification) regardless of whether or not they had followed a particular formal training programme. Despite the considerable investment, it was at best a limited success. Developing observably robust processes for accrediting such learning was problematic, as was the time and cost of implementing such processes.

It is interesting to consider why there is once more an upsurge of interest in the recognition of prior learning. My feeling is that in the UK the initiative was driven by the weak links between vocational education and training and the labour market. In countries like Germany, with a strong apprenticeship training system, there was seen to be no need for such a procedure. Furthermore, learning was linked to the work process, and competence seen as the internalised ability to perform in an occupation, rather than as an externalised series of criteria for qualification. However, the recent waves of migration, initially from Eastern Europe and now of refugees, have resulted in large numbers of people who may be well qualified (in all senses of the word) but with no easily recognisable qualification for employment.

I am unconvinced that attempts to formally assess prior competence as a basis for fast-tracking the award of qualifications will work. I think we probably need to look much deeper at both ideas around effective practice and at what exactly we mean by recognition, and will write more about this in future posts. But digging around in my computer today I came up with a paper I wrote together with Jenny Hughes around some of these issues. I am not sure the title helped attract a wide readership: The role and importance of informal competences in the process of acquisition and transfer of work skills. Validation of competencies – a review of reference models in the light of youth research: United Kingdom. Below is an extract.

“NVQs and the accreditation of informal learning

As Bjørnåvold (2000) says, the system of NVQs is, in principle, open to any learning path and learning form and places a particular emphasis on experience-based learning at work. At least in theory, it does not matter how or where you have learned; what matters is what you have learned. The system is open to learning taking place outside formal education and training institutions, or to what Bjørnåvold terms non-formal learning. This learning has to be identified and judged, so it is no coincidence that questions of assessment and recognition have become crucial in the debate on the current status of the NVQ system and its future prospects.

While the NVQ system as such dates back to 1989, the actual introduction of “new” assessment methodologies can be dated to 1991. This was the year the National Council for Vocational Qualifications (NCVQ) and its Scottish equivalent, Scotvec, required that “accreditation of prior learning” should be available for all qualifications accredited by these bodies (NVQs and general national qualifications, GNVQs). The introduction of a specialised assessment approach to supplement the ordinary assessment and testing procedures used when following traditional and formal pathways was motivated by the following factors:

1. to give formal recognition to the knowledge and skills which people already possess, as a route to new employment;
2. to increase the number of people with formal qualifications;
3. to reduce training time by avoiding repetition of what candidates already know.

The actual procedure applied can be divided into the following steps. The first step consists of providing general information about the APL process, normally by advisers who are not subject specialists, often supported by printed material or videos. The second and most crucial step includes the gathering and preparation of a portfolio. No fixed format for the portfolio has been established, but all evidence must be related to the requirements of the target qualification. The portfolio should include statements of job tasks and responsibilities from past or present employers as well as examples (proofs) of relevant “products”. Results of tests or specifically-undertaken projects should also be included. Thirdly, the actual assessment of the candidate takes place. As it is stated: “The assessment process is substantially the same as that which is used for any candidate for an NVQ. The APL differs from the normal assessment process in that the candidate is providing evidence largely of past activity rather than of skills acquired during the current training course.”

The result of the assessment can lead to full recognition, although only a minority of candidates have sufficient prior experience to achieve this. In most cases, the portfolio assessment leads to exemption from parts of a programme or course. The attention towards specialised APL methodologies has diminished somewhat in the UK during recent years. It is argued that there is a danger of isolating APL, and that it should rather be integrated into normal assessments as one of several sources of evidence: “The view that APL is different and separate has resulted in evidence of prior learning and achievement being used less widely than anticipated. Assessors have taken steps to avoid this source of evidence or at least become over-anxious about its inclusion in the overall evidence a candidate may have to offer.”

We can thus observe a situation where responsible bodies have tried to strike a balance between evidence of prior and current learning as well as between informal and formal learning. This has not been a straightforward task, as several findings suggest that APL is perceived as a “short cut”, less rigorously applied than traditional assessment approaches. The actual use of this kind of evidence, either through explicit APL procedures or in other, more integrated ways, is difficult to overview. Awarding bodies are not required to list alternative learning routes, including APL, on the certificate of a candidate. This makes it almost impossible to identify where prior or informal learning has been used as evidence.

As mentioned in the discussions of the Mediterranean and Nordic experiences, the question of assessment methodologies cannot be separated from the question of qualification standards. Whatever evidence is gathered, some sort of reference point must be established. This has become the most challenging part of the NVQ exercise in general and the assessment exercise in particular. We will approach this question indirectly by addressing some of the underlying assumptions of the NVQ system and its translation into practical measures. Currently the system relies heavily on the following basic assumptions. Legitimacy is to be assured through the assumed match between the national vocational standards and competences gained at work; the involvement of industry in defining and setting up standards has been a crucial part of this struggle for acceptance. Validity is supposed to be assured through the linking and location of both training and assessment to the workplace; the intention is to strengthen the authenticity of both processes, avoiding simulated training and assessment situations where validity is threatened. Reliability is assured through detailed specifications of each single qualification (and module); together with extensive training of the assessors, this is supposed to secure the consistency of assessments and eventually lead to an acceptable level of reliability.

A number of observers have argued that these assumptions are difficult to defend. When it comes to legitimacy, it is true that employers are represented in the above-mentioned leading bodies and standards councils, but several weaknesses of both a practical and fundamental character have appeared. Firstly, there are limits to what a relatively small group of employer representatives can contribute, often on the basis of scarce resources and limited time. Secondly, the more powerful and more technically knowledgeable organisations usually represent large companies with good training records and wield the greatest influence. Smaller, less influential organisations obtain less relevant results. Thirdly, disagreements in committees, irrespective of who is represented, are more easily resolved by inclusion than exclusion, inflating the scope of the qualifications. Generally speaking, there is a conflict of interest built into the national standards between the commitment to describe competences valid on a universal level and the commitment to create as specific and precise standards as possible. As to the questions of validity and reliability, our discussion touches upon drawing up the boundaries of the domain to be assessed and tested. High quality assessments depend on the existence of clear competence domains; validity and reliability depend on clear-cut definitions, domain-boundaries, domain-content and ways whereby this content can be expressed.

As in the Finnish case, the UK approach immediately faced a problem in this area. While early efforts concentrated on narrow task-analysis, a gradual shift towards broader function-analysis has taken place. This shift reflects the need to create national standards describing transferable competences. Observers have noted that the introduction of functions was paralleled by detailed descriptions of every element in each function, prescribing performance criteria and the range of conditions for successful performance. The length and complexity of NVQs, currently a much criticised factor, stems from this “dynamic”. As Wolf says, we seem to have entered a “never ending spiral of specifications”. Researchers at the University of Sussex have concluded, on the challenges facing NVQ-based assessments, that pursuing perfect reliability leads to meaningless assessment, while pursuing perfect validity leads towards assessments which cover everything relevant but take too much time, leaving too little time for learning. This statement reflects the challenges faced by all countries introducing output- or performance-based systems relying heavily on assessments.

“Measurement of competences” is first and foremost a question of establishing reference points and less a question of instruments and tools. This is clearly illustrated by the NVQ system, where questions of standards clearly stand out as more important than the specific tools developed during the past decade. And as stated, specific approaches like “accreditation of prior learning” (APL) and “accreditation of prior experiential learning” (APEL) have become less visible as the NVQ system has settled. This is an understandable and fully reasonable development, since all assessment approaches in the NVQ system in principle have to face the challenge of experientially-based learning, i.e., learning outside the formal school context. The experiences from APL and APEL are thus being integrated into the NVQ system, albeit to an extent that is difficult to judge. In a way, this is an example of the maturing of the system. The UK system, being one of the first to try to construct a performance-based system linking various formal and non-formal learning paths, illustrates the dilemmas of assessing and recognising non-formal learning better than most other systems, because there has been time to observe and study systematically the problems and possibilities. The future challenge facing the UK system can be summarised as follows: who should take part in the definition of standards, how should competence domains be described, and how should boundaries be set? When these questions are answered, high quality assessments can materialise.”

Recommenders or e-Portfolios

September 24th, 2015 by Graham Attwell

I was interested by a comment by Stephen Downes in yesterday’s OLDaily. Stephen says:

(People rating) will replace grades and evaluations. Because, when you have an entire network evaluating people, why on earth would you fall back on something as imprecise as a test? (p.s. smart network-based evaluations are what finally break up ‘old boy networks’ that mutually support each other with positive recommendations).

He was commenting on an article on the CBC News website about a new app being developed in Calgary. The CEO and co-founder of the Peeple app, Julia Cordray, said: “You’re going to rate people in the three categories that you can possibly know somebody — professionally, personally or romantically.”

As Stephen commented, there is really nothing new about this app. And we have experimented with this sort of thing in the MatureIP project. But instead of asking people to rate other people, we asked them to list other people’s skills and competences. Despite my misgivings it worked well in a six month trial in a careers company in the north of England. What we found, I think, was that official records of what people can do and what their skills are are scanty and often not accessible, and that people are often too shy to promote their own abilities.

But coming back to Stephen’s comments, I tend to think that smart network-based recommendations may only strengthen old boy networks, rather than break them up. In research into small and medium enterprises we found that owner-managers rarely worried too much about qualifications, preferring to hire based on recommendations from existing employees or from their own social (offline at that time) networks. Yes, of course tests are imprecise. But tests are open(ish) to all and were once seen as a democratising factor in job selection. Indeed some organisations are moving away from degrees as a recruitment benchmark, given their poor performance as a predictor of future success. But it doesn’t seem to me that recommendation services such as those LinkedIn already deploys are going to help greatly, even with smart network-based evaluations. I still see more future in e-Portfolios and Personal Learning Networks, which allow users to show and explain their experience. I am a bit surprised about how quiet the ePortfolio scene has been of late, but then again the Technology Enhanced Learning community is notorious for dropping ideas to jump on the latest trendy bandwagon.

Talking sense about assessment

September 24th, 2015 by Graham Attwell

In recent years education seems to have become obsessed with metrics and assessment, comparisons and league tables. I do not think the outpourings, enquiries, research, recommendations and so on have done much, if anything, to improve the standard of education, let alone learning, and certainly nothing to improve the experience of learners.

So I found this blurb for the Brian Simon memorial lecture by Alison Peacock refreshing:

Alison will focus on her experience of leading the Wroxham School, a primary school where children and adults are inspired by the principles of ‘Learning without Limits’ to establish a learning community which is creative, inclusive and ambitious for all.  She will talk about ways of enabling assessment to be an intrinsic part of day to day teaching that builds a culture of trust in which children can think and talk about their learning confidently.  Alison will also discuss her role as a member of the government’s Commission on Assessment without Levels,  and her optimism that the removal of levels provides space for a more equitable approach to assessment within classrooms.

Years ago, I was part of a working team for the UK National Council for Vocational Qualifications looking at the whole issue of levels. We concluded that levels were an unworkable construct and that the (then new) National Vocational Qualifications would work much better without them. Needless to say our advice was ignored. So it is nice to see the proposal coming round again.

A flyer on Alison’s lecture can be downloaded here.

Issues in developing and implementing e-Portfolios

February 7th, 2013 by Graham Attwell

Diagramme: @lee74 (some rights reserved) http://www.flickr.com/photos/lee8/7164889790/

One of the issues driving the adoption of technology for learning in organisations – particularly in sectors and occupations such as teaching and the medical sector – is the need to show continuing professional development as a requirement for continuing registration.

Many organisations are looking to some form of e-Portfolio to meet this need. Yet there is a tension between the use of e-Portfolios to record and reflect on learning, as a tool for learning itself, and as a means to assessment.

A recently published study, (lif)e-Portfolio: a framework for implementation (PDF download) by Lee D Ballantyne, from Cambridge International Examinations (CIE) and University of Cambridge ESOL Examinations (ESOL), examines some of these issues.

Ballantyne says:

There has been much recent discussion (e.g. Barrett, 2009; JISC, 2012d) concerning the dichotomy of e-portfolios which have the primary purpose of learning versus those which have the primary purpose of assessment. E-portfolio systems developed specifically for assessment purposes often forgo key elements of the learner-centred e-portfolio: social tools, longevity, and personalisation. By contrast, e-portfolios primarily for learning often lack the award-specific structure and reporting tools required for assessment (see Appendix II). A suitable e-portfolio solution must take into consideration the backwash of assessment and that “from the students’ point of view assessment always defines the actual curriculum” (Ramsden, 1992, p 187), and when the purpose of an e-portfolio changes from a learning tool to summative assessment it becomes “something that is done to them rather than something they WANT to maintain as a lifelong learning tool” (Barrett, 2004a). There is a clear link between an assessment purpose and lack of engagement (Tosh et al., 2005), and yet CIE and ESOL both have stakeholder groups (teachers and trainee teachers) who straddle both learner (professional development) and candidate (teaching awards). The main challenge is to convey the value of the whole e-portfolio to all stakeholders; to find the right balance between assessment-driven (institution-centric) requirements and learner-driven (user-centric) requirements; and to achieve a level of standardisation yet allow for personalisation and creativity (Barrett, 2009). This unprecedented link between teaching, learning and high stakes assessment is fundamentally disruptive: pedagogically, organisationally and technologically (Baume cited Taylor & Gill, 2006, p 4; Cambridge, 2012; Eynon cited Shada et al., 2011, p 75), and planning for successful implementation is critical (JISC, 2012e; Joyes et al., 2010; Meyer & Latham, 2008; Shada et al., 2011).

Innovating Pedagogy

August 19th, 2012 by Graham Attwell

The UK Open University have launched an interesting new series, Innovating Pedagogy. The series of reports is intended to explore new forms of teaching, learning and assessment for an interactive world, to guide teachers and policy makers in productive innovation.

Mike Sharples explains:

We wanted to distinguish our perspective from that of the EDUCAUSE Horizon reports, which start from a consideration of how technologies may influence education. I would argue that ours aren’t ‘technology-driven opportunities’, but are rather an exploring of new and emerging forms of teaching, learning and assessment in an age of technology. All innovations in education nowadays are framed in relation to technology, but that doesn’t mean they are ‘technology driven’. So, for example, personal inquiry learning is mediated and enhanced by technology, but not driven by it.

We had a long discussion over ‘pedagogies’. The problem is that there isn’t a word in English that means ‘the processes of teaching, learning and assessment’. I would argue that in current usage ‘pedagogy’ has broadened from a formal learning experience conducted by a teacher, as we have become more aware of the opportunities for peer learning, non-formal apprenticeship etc. See e.g. http://www.memidex.com/pedagogy+instr. The origin of the word isn’t ‘teacher’ but “slave who took children to and from school”. We were careful to indicate in the Introduction our usage of the word: “By pedagogy we mean the theory and practice of teaching, learning, and assessment.” So, within that usage are practices that might contribute towards effective learning, such as creating and sharing annotations of textbooks.

The ten trends explored in the first report are: new pedagogy for e-books; publisher-led short courses; assessment for learning; badges to accredit learning; MOOCs; rebirth of academic publishing; seamless learning; learning analytics; personal inquiry learning; and rhizomatic learning.

Although the list may seem a little idiosyncratic, the authors emphasise that the themes are often interlinked in practice. I wonder, though, if there is something of a contradiction between Assessment for Learning and Learning Analytics?

I am also interested in the definition of rhizomatic learning: “supporting rhizomatic learning requires the creation of a context within which the curriculum and knowledge are constructed by members of a learning community and which can be reshaped in a dynamic manner in response to environmental conditions. The learning experience may build on social, conversational processes, as well as personal knowledge creation, linked into unbounded personal learning networks that merge formal and informal media.”
