Archive for the ‘Assessment’ Category

Case study. The Ada chatbot: personalised, AI-driven assistant for each student.

March 31st, 2020 by Graham Attwell

As part of the AI and Vocational Education and Training project, funded through the EU Erasmus+ programme, we are producing a series of case studies of the use of AI in VET in five European countries. Here is my first case study: the Ada chatbot developed at Bolton College.

About Bolton College

Bolton College is one of the leading vocational education and training providers in the North West of England, specialising in delivering training – locally, regionally and nationally – to school leavers, adults and employers. The college employs over 550 staff members who teach over 14,500 full- and part-time students across a range of centres around Bolton. The college’s Learning Technology Team has a proven reputation for the use of learning analytics, machine learning and adaptive learning to support students as they progress with their studies.

The Ada Chatbot

The Learning Technology Team has developed a digital assistant called Ada which went live in April 2017. Ada, which uses the IBM Watson AI engine, can respond to a wide range of student inquiries across multiple domains. The college’s Learning Technology Lead, Aftab Hussain, says “It transforms the way students get information and insights that support them with their studies.” He explains: “It can be hard to find information on the campus. We have an information overload. We have lots of data but it is hard to manage. We don’t have the tools to manage it – this includes teachers, managers and students.” Ada was first developed to overcome the complexity of accessing information and data.

Student questions

Ada is able to respond to student questions including:

  1. General inquiries from students about the college (for example: semester dates, library opening hours, exam office locations, campus activities, deadline for applying for university and more);
  2. Specific questions from students about their studies (for example: What lessons do I have today/this afternoon/tomorrow? Who are my teachers? What’s my attendance like? When is my next exam? When and where is my work placement? What qualifications do I have? What courses am I enrolled in? etc.)
  3. Subject specific inquiries from students. Bolton College is teaching Ada to respond to questions relating to GCSE Maths, GCSE English and the employability curriculum.

Personalised and contextualised learning

Aftab Hussain explains: “We are connecting all campus data sets. Ada can reply to questions contextually. She recognises who you are and is personalised according to who you are and where you are in the student life cycle. The home page uses Natural Language Processing and the Watson AI engine. It can reply to 25,000 questions around issues such as mental health or library opening times etc. It also includes subject specific enquiries, including around English, Mathematics and business and employability. All teachers have been invited to submit the top 20 queries they receive. Machine learning can recognise the questions. The technical process is easy.” However, he acknowledges that inputting data into the system can be time consuming, and they are looking at ways of automatically reading course documentation and presentations.
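
Bolton College’s own implementation runs on the IBM Watson engine, whose configuration is not public, but the general pattern Aftab describes – teacher-submitted example questions grouped into intents, with machine learning matching new queries against them – is easy to sketch. The Python below is a toy illustration: all intents and questions are invented, and a crude string-similarity matcher stands in for Watson’s NLP.

```python
# Toy sketch only: Ada itself uses IBM Watson, not this code. Teacher-submitted
# example questions are grouped into named intents; an incoming query is
# matched to the closest intent by simple string similarity.
from difflib import SequenceMatcher

INTENTS = {
    "library_hours": ["When is the library open?", "What are the library opening hours?"],
    "term_dates": ["When does the semester start?", "What are the term dates?"],
    "mental_health": ["Where can I get help with stress?", "Who can I talk to about anxiety?"],
}

def classify(query: str) -> str:
    """Return the intent whose example questions best match the query."""
    def best(examples):
        return max(SequenceMatcher(None, query.lower(), e.lower()).ratio() for e in examples)
    return max(INTENTS, key=lambda intent: best(INTENTS[intent]))

print(classify("what time does the library open?"))  # -> library_hours
```

A production system would of course use a trained classifier rather than string similarity, but the workflow of collecting teachers’ top twenty queries maps directly onto building such an intent set.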

All the technical development has been undertaken in-house. As well as being accessible through the web, Ada has both iOS and Android apps and can also be queried through smart speakers.

The system also links to the college’s Moodle installation and can provide access to assignments, college information services and curriculum materials. It is increasingly being used in online tutorials, providing both questions for participants and access to learning materials, for instance videos for health and social care.

It is personalised for individuals and contextualised according to what they are doing or want to find out. Aftab says: “We are looking at the transactional distance – the system provides immediate feedback, reducing the transactional distance.”
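
The contextual side is worth unpacking: answering “What lessons do I have today?” is less about language processing than about joining the question to the right record for the signed-in student. Here is a minimal sketch, with a hypothetical timetable store standing in for the college’s connected campus data sets.

```python
# Minimal sketch of contextualised answering. TIMETABLE is a hypothetical
# stand-in for the college's joined-up data sets; the real system resolves
# the student's identity from their login.
from datetime import date
from typing import Optional

TIMETABLE = {
    ("S1234", date(2020, 3, 31)): ["09:00 GCSE Maths, Room B12",
                                   "13:30 Employability, Room A3"],
}

def todays_lessons(student_id: str, today: Optional[date] = None) -> str:
    """Answer 'What lessons do I have today?' for a known, signed-in student."""
    today = today or date.today()
    lessons = TIMETABLE.get((student_id, today))
    if not lessons:
        return "You have no lessons scheduled today."
    return "Today you have: " + "; ".join(lessons)

print(todays_lessons("S1234", date(2020, 3, 31)))
```

The same question from a different student, or on a different day, returns a different answer; that is all “contextual” need mean here.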

Digital assessment

Work is also being undertaken to develop the use of the bot for assessment. This is initially being used for the evaluation of work experience, where students need to provide short examples of how they are meeting objectives – for example in collaboration or problem solving. Answers can be uploaded, evaluated by the AI, and feedback returned instantly.
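
Bolton College has not published how the evaluation model works, so the sketch below should be read as one plausible shape for it rather than the actual method: a submission is checked against rubric criteria for an objective, and formative feedback comes back immediately.

```python
# Illustrative only: the college's actual marking model is not public. A
# submission is checked against (invented) rubric keywords for an objective,
# and formative feedback is returned instantly.
RUBRIC = {
    "collaboration": ["team", "together", "colleague", "shared"],
    "problem solving": ["problem", "solution", "resolved", "worked out"],
}

def instant_feedback(objective: str, submission: str) -> str:
    """Return immediate formative feedback on a short written example."""
    text = submission.lower()
    hits = [kw for kw in RUBRIC[objective] if kw in text]
    if len(hits) >= 2:
        return f"Good evidence of {objective} (you mentioned: {', '.join(hits)})."
    return (f"This needs a more concrete example of {objective}: "
            "say who was involved and what you actually did.")

print(instant_feedback("collaboration", "I worked with my team and we shared out the tasks."))
```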

Nudging

Since March 2019, the Ada service has provided nudges to students with timely and contextualised information, advice and guidance (IAG) to support their studies. The service nudges students about forthcoming exams, their work placement feedback and more. In the following example, a student receives feedback regarding his work placement from his career coach and employer.

The College is currently implementing ProMonitor, a service which will offer teachers and tutors a scalable solution for managing and supporting the progress made by their students. Once ProMonitor is in place, Ada will be in a position to nudge students about forthcoming assignments and the grades awarded for those assignments. She will also offer students advice and guidance about staying on track with their studies. Likewise, Ada will nudge teachers and student support teams to inform them about student progress, allowing for timely support to be put in place for students across the College.
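
Mechanically, a nudge service of this kind amounts to a scheduled job scanning forthcoming events and messaging the students concerned. A hedged sketch, with invented data and field names:

```python
# Sketch of the nudging pattern: a scheduled job scans upcoming events and
# produces timely, contextualised messages. Data, names and the delivery
# channel are all invented for illustration.
from datetime import date

EXAMS = [
    {"student_id": "S1234", "subject": "GCSE English", "date": date(2020, 4, 7)},
    {"student_id": "S5678", "subject": "GCSE Maths", "date": date(2020, 5, 12)},
]

def nudges_due(today: date, horizon_days: int = 7):
    """Yield (student, message) for each exam falling within the horizon."""
    for exam in EXAMS:
        days_left = (exam["date"] - today).days
        if 0 <= days_left <= horizon_days:
            yield exam["student_id"], (
                f"Reminder: your {exam['subject']} exam is in {days_left} days.")

for student, message in nudges_due(date(2020, 3, 31)):
    print(student, message)  # only S1234's exam falls within the week
```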

A personal lifelong learning companion

For Aftab Hussain the persona of the digital agent is important. He thinks that in the future the chatbot will morph into a personal cognitive assistant that supports students throughout their entire educational life, from nursery school to university and beyond.

“The personal assistant will learn from each student throughout their life and adapt according to what they like, while guiding them through their studies. It could remind them when homework is due, book appointments with tutors, and point towards services and events that might support their studies, for example.”

Changing the role of Assessment

February 11th, 2020 by Graham Attwell

Formative assessment should play a key role in all education, and particularly in vocational education and training. It can give vital feedback to learners and guidance on the next steps of their learning journey. It can also help teachers know what is effective and what is not, and where the gaps are, and assist in planning learning interventions.

Yet all too often it does not. Assessment is seen at best as something to be overcome and at worst as a stress-inducing nightmare. With new regulations in England requiring students in further education to pass tests in English and Mathematics, students are condemned to endlessly retaking the same exams regardless of their achievement in vocational subjects.

For all these reasons a new report published by Jisc today is very welcome.

Jisc say:

Existing and emerging technologies are starting to play a role in changing assessment and could help address these issues, both today and looking further ahead into the future, to make assessment smarter, faster, fairer and more effective.

The report sets five targets for the next five years to progress assessment towards being more authentic, accessible, appropriately automated, continuous and secure.

  • Authentic: assessments designed to prepare students for what they do next, using technology they will use in their careers

  • Accessible: assessments designed with an accessibility-first principle

  • Appropriately automated: a balance found between automated and human marking to deliver maximum benefit to students

  • Continuous: assessment data used to explore opportunities for continuous assessment to improve the learning experience

  • Secure: authoring detection and biometric authentication adopted for identification and remote proctoring

The report, ‘The future of assessment: five principles, five targets for 2025’, can be downloaded from the Jisc website.

Recognising competence and learning

November 16th, 2015 by Graham Attwell

As promised, some further thoughts on the DISCUSS conference, held earlier this week in Munich.

One of the themes for discussion was the recognition of (prior) learning. The theme had emerged after looking at the main work of European projects, particularly in the field of lifelong learning. The idea and attraction of recognising learning from different contexts, and particularly from informal learning, is hardly new. In the 1990s, in the UK, the National Council for Vocational Qualifications (as it was then called) devoted resources to developing systems for the Accreditation of Prior Learning. One of the ideas behind National Vocational Qualifications was the decoupling of teaching and learning from learning outcomes, expressed in terms of competences and performance criteria. Therefore, it was thought, anyone should be able to have their competences recognised (through certification) regardless of whether or not they had followed a particular formal training programme. Despite the considerable investment, it was at best a limited success. Developing observably robust processes for accrediting such learning was problematic, as were the time and cost of implementing such processes.

It is interesting to consider why there is once more an upsurge of interest in the recognition of prior learning. My feeling is that in the UK, the initiative was driven by the weak links between vocational education and training and the labour market. In countries like Germany, with a strong apprenticeship training system, there was seen to be no need for such a procedure. Furthermore, learning was linked to the work process, and competence seen as the internalised ability to perform in an occupation, rather than as an externalised series of criteria for qualification. However, the recent waves of migration, initially from Eastern Europe and now of refugees, have resulted in large numbers of people who may be well qualified (in all senses of the word) but have no easily recognisable qualification for employment.

I am unconvinced that attempts to formally assess prior competence as a basis for fast-tracking the awarding of qualifications will work. I think we probably need to look much deeper both at ideas around effective practice and at what exactly we mean by recognition, and I will write more about this in future posts. But digging around on my computer today I came across a paper I wrote together with Jenny Hughes around some of these issues. I am not sure the title helped attract a wide readership: The role and importance of informal competences in the process of acquisition and transfer of work skills. Validation of competencies – a review of reference models in the light of youth research: United Kingdom. Below is an extract.

“NVQs and the accreditation of informal learning

As Bjørnåvold (2000) says, the system of NVQs is, in principle, open to any learning path and learning form and places a particular emphasis on experience-based learning at work. At least in theory, it does not matter how or where you have learned; what matters is what you have learned. The system is open to learning taking place outside formal education and training institutions, or to what Bjørnåvold terms non-formal learning. This learning has to be identified and judged, so it is no coincidence that questions of assessment and recognition have become crucial in the debate on the current status of the NVQ system and its future prospects.

While the NVQ system as such dates back to 1989, the actual introduction of “new” assessment methodologies can be dated to 1991. This was the year the National Council for Vocational Qualifications (NCVQ) and its Scottish equivalent, Scotvec, required that “accreditation of prior learning” should be available for all qualifications accredited by these bodies (NVQs and general national qualifications, GNVQs). The introduction of a specialised assessment approach, to supplement the ordinary assessment and testing procedures used when following traditional and formal pathways, was motivated by the following factors:

1. to give formal recognition to the knowledge and skills which people already possess, as a route to new employment;
2. to increase the number of people with formal qualifications;
3. to reduce training time by avoiding repetition of what candidates already know.

The actual procedure applied can be divided into the following steps. The first step consists of providing general information about the APL process, normally by advisers who are not subject specialists, often supported by printed material or videos. The second and most crucial step includes the gathering and preparation of a portfolio. No fixed format for the portfolio has been established, but all evidence must be related to the requirements of the target qualification. The portfolio should include statements of job tasks and responsibilities from past or present employers as well as examples (proofs) of relevant “products”. Results of tests or specifically-undertaken projects should also be included. Thirdly, the actual assessment of the candidate takes place. As it is stated: “The assessment process is substantially the same as that which is used for any candidate for an NVQ. The APL differs from the normal assessment process in that the candidate is providing evidence largely of past activity rather than of skills acquired during the current training course.” The result of the assessment can lead to full recognition, although only a minority of candidates have sufficient prior experience to achieve this. In most cases, the portfolio assessment leads to exemption from parts of a programme or course.

The attention towards specialised APL methodologies has diminished somewhat in the UK during recent years. It is argued that there is a danger of isolating APL, and that it should rather be integrated into normal assessments as one of several sources of evidence: “The view that APL is different and separate has resulted in evidence of prior learning and achievement being used less widely than anticipated. Assessors have taken steps to avoid this source of evidence or at least become over-anxious about its inclusion in the overall evidence a candidate may have to offer.” We can thus observe a situation where responsible bodies have tried to strike a balance between evidence of prior and current learning as well as between informal and formal learning. This has not been a straightforward task, as several findings suggest that APL is perceived as a “short cut”, less rigorously applied than traditional assessment approaches. The actual use of this kind of evidence, either through explicit APL procedures or in other, more integrated ways, is difficult to overview. Awarding bodies are not required to list alternative learning routes, including APL, on the certificate of a candidate. This makes it almost impossible to identify where prior or informal learning has been used as evidence.

As mentioned in the discussions of the Mediterranean and Nordic experiences, the question of assessment methodologies cannot be separated from the question of qualification standards. Whatever evidence is gathered, some sort of reference point must be established. This has become the most challenging part of the NVQ exercise in general and the assessment exercise in particular. We will approach this question indirectly by addressing some of the underlying assumptions of the NVQ system and its translation into practical measures. Currently the system relies heavily on the following basic assumptions. Legitimacy is to be assured through the assumed match between the national vocational standards and competences gained at work; the involvement of industry in defining and setting up standards has been a crucial part of this struggle for acceptance. Validity is supposed to be assured through the linking and location of both training and assessment to the workplace. The intention is to strengthen the authenticity of both processes, avoiding simulated training and assessment situations where validity is threatened. Reliability is assured through detailed specifications of each single qualification (and module). Together with extensive training of the assessors, this is supposed to secure the consistency of assessments and eventually lead to an acceptable level of reliability.

A number of observers have argued that these assumptions are difficult to defend. When it comes to legitimacy, it is true that employers are represented in the above-mentioned leading bodies and standards councils, but several weaknesses of both a practical and fundamental character have appeared. Firstly, there are limits to what a relatively small group of employer representatives can contribute, often on the basis of scarce resources and limited time. Secondly, the more powerful and more technically knowledgeable organisations usually represent large companies with good training records and wield the greatest influence. Smaller, less influential organisations obtain less relevant results. Thirdly, disagreements in committees, irrespective of who is represented, are more easily resolved by inclusion than exclusion, inflating the scope of the qualifications. Generally speaking, there is a conflict of interest built into the national standards between the commitment to describe competences valid on a universal level and the commitment to create as specific and precise standards as possible. As to the questions of validity and reliability, our discussion touches upon drawing up the boundaries of the domain to be assessed and tested. High quality assessments depend on the existence of clear competence domains; validity and reliability depend on clear-cut definitions, domain-boundaries, domain-content and ways whereby this content can be expressed.

As in the Finnish case, the UK approach immediately faced a problem in this area. While early efforts concentrated on narrow task-analysis, a gradual shift towards broader function-analysis had taken place. This shift reflects the need to create national standards describing transferable competences. Observers have noted that the introduction of functions was paralleled by detailed descriptions of every element in each function, prescribing performance criteria and the range of conditions for successful performance. The length and complexity of NVQs, currently a much criticised factor, stems from this “dynamic”. As Wolf says, we seem to have entered a “never ending spiral of specifications”. Researchers at the University of Sussex have concluded on the challenges facing NVQ-based assessments: pursuing perfect reliability leads to meaningless assessment. Pursuing perfect validity leads towards assessments which cover everything relevant, but take too much time, and leave too little time for learning. This statement reflects the challenges faced by all countries introducing output- or performance-based systems relying heavily on assessments.

“Measurement of competences” is first and foremost a question of establishing reference points and less a question of instruments and tools. This is clearly illustrated by the NVQ system, where questions of standards stand out as more important than the specific tools developed during the past decade. And as stated, specific approaches like “accreditation of prior learning” (APL) and “accreditation of prior experiential learning” (APEL) have become less visible as the NVQ system has settled. This is an understandable and fully reasonable development, since all assessment approaches in the NVQ system in principle have to face the challenge of experientially-based learning, i.e., learning outside the formal school context. The experiences from APL and APEL are thus being integrated into the NVQ system, albeit to an extent that is difficult to judge. In a way, this is an example of the maturing of the system. The UK system, being one of the first to try to construct a performance-based system linking various formal and non-formal learning paths, illustrates the dilemmas of assessing and recognising non-formal learning better than most other systems, because there has been time to observe and study systematically the problems and possibilities. The future challenge facing the UK system can be summarised as follows: who should take part in the definition of standards, how should competence domains be described and how should boundaries be set? When these questions are answered, high quality assessments can materialise.”

Recommenders or e-Portfolios

September 24th, 2015 by Graham Attwell

I was interested by a comment by Stephen Downes in yesterday’s OLDaily. Stephen says:

(People rating) will replace grades and evaluations. Because, when you have an entire network evaluating people, why on earth would you fall back on something as imprecise as a test? (p.s. smart network-based evaluations are what finally break up ‘old boy networks’ that mutually support each other with positive recommendations).

He was commenting on an article on the CBC News website about a new people-rating app being developed in Calgary. Julia Cordray, CEO and co-founder of the Peeple app, said: “You’re going to rate people in the three categories that you can possibly know somebody — professionally, personally or romantically.”

As Stephen commented, there is really nothing new about this app. We have experimented with this sort of thing in the MatureIP project, but instead of asking people to rate other people we asked them to list other people’s skills and competences. Despite my misgivings, it worked well in a six-month trial at a careers company in the north of England. What we found, I think, was that official records of what people can do and what their skills are are scanty and often not accessible, and that people are often too shy to promote their own abilities.
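
For the curious, the aggregation behind that idea is conceptually very simple, something like the sketch below. The data and names are invented; the trial itself ran on the project’s own platform.

```python
# Sketch of the idea we trialled: colleagues attribute skills to one another,
# and the system simply counts how often each skill is attributed. All data
# here is invented for illustration.
from collections import Counter
from itertools import chain

# Each inner list is one colleague's view of a person's skills.
ATTRIBUTIONS = {
    "alice": [["careers guidance", "interviewing"],
              ["interviewing", "CV review"]],
}

def skill_profile(person: str) -> Counter:
    """Count how many colleagues attribute each skill to a person."""
    return Counter(chain.from_iterable(ATTRIBUTIONS[person]))

print(skill_profile("alice").most_common())
# [('interviewing', 2), ('careers guidance', 1), ('CV review', 1)]
```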

But coming back to Stephen’s comments, I tend to think that smart network-based recommendations may only strengthen old boy networks, rather than break them up. In research into small and medium enterprises we found that owner-managers rarely worried too much about qualifications, preferring to hire based on recommendations from existing employees or from their own social (offline, at that time) networks. Yes, of course tests are imprecise. But tests are open(ish) to all and were once seen as a democratising factor in job selection. Indeed, some organisations are moving away from degrees as a recruitment benchmark, given their poor performance as a predictor of future success. But it doesn’t seem to me that recommendation services such as those LinkedIn already deploys are going to help greatly, even with smart network-based evaluations. I still see more future in e-Portfolios and Personal Learning Networks, which allow users to show and explain their experience. I am a bit surprised at how quiet the ePortfolio scene has been of late, but then again the Technology Enhanced Learning community is notorious for dropping ideas to jump on the latest trendy bandwagon.

Talking sense about assessment

September 24th, 2015 by Graham Attwell

In recent years education seems to have become obsessed with metrics and assessment, comparisons and league tables. I do not think the outpourings, enquiries, research, recommendations and so on have done much, if anything, to improve the standard of education, let alone learning, and they have certainly done nothing to improve the experience of learners.

So I found this blurb for the Brian Simon memorial lecture by Alison Peacock refreshing:

Alison will focus on her experience of leading the Wroxham School, a primary school where children and adults are inspired by the principles of ‘Learning without Limits’ to establish a learning community which is creative, inclusive and ambitious for all.  She will talk about ways of enabling assessment to be an intrinsic part of day to day teaching that builds a culture of trust in which children can think and talk about their learning confidently.  Alison will also discuss her role as a member of the government’s Commission on Assessment without Levels,  and her optimism that the removal of levels provides space for a more equitable approach to assessment within classrooms.

Years ago, I was part of a working team for the UK National Council for Vocational Qualifications looking at the whole issue of levels. We concluded that levels were an unworkable construct and that the (then new) National Vocational Qualifications would work much better without them. Needless to say, our advice was ignored. So it is nice to see the proposal coming round again.

A flyer on Alison’s lecture can be downloaded here.

Issues in developing and implementing e-Portfolios

February 7th, 2013 by Graham Attwell

Diagramme: @lee74 (some rights reserved) http://www.flickr.com/photos/lee8/7164889790/

One of the issues driving the adoption of technology for learning in organisations – particularly in sectors and occupations such as teaching and the medical sector – is the need to show continuing professional development as a requirement for continuing registration.

Many organisations are looking to some form of e-Portfolio to meet this need. Yet there is a tension between the use of e-portfolios to record and reflect on learning, as a tool for learning itself, and as a means of assessment.

A recently published study, (lif)e-Portfolio: a framework for implementation (PDF download) by Lee D Ballantyne, from Cambridge International Examinations (CIE) and University of Cambridge ESOL Examinations (ESOL), examines some of these issues.

Ballantyne says:

There has been much recent discussion (e.g. Barrett, 2009; JISC, 2012d) concerning the dichotomy of e-portfolios which have the primary purpose of learning versus those which have the primary purpose of assessment. E-portfolio systems developed specifically for assessment purposes often forgo key elements of the learner-centred e-portfolio: social tools, longevity, and personalisation. By contrast, e-portfolios primarily for learning often lack the award-specific structure and reporting tools required for assessment (see Appendix II). A suitable e-portfolio solution must take into consideration the backwash of assessment and that “from the students’ point of view assessment always defines the actual curriculum” (Ramsden, 1992, p 187), and when the purpose of an e-portfolio changes from a learning tool to summative assessment it becomes “something that is done to them rather than something they WANT to maintain as a lifelong learning tool” (Barrett, 2004a). There is a clear link between an assessment purpose and lack of engagement (Tosh et al., 2005), and yet CIE and ESOL both have stakeholder groups (teachers and trainee teachers) who straddle both learner (professional development) and candidate (teaching awards). The main challenge is to convey the value of the whole e-portfolio to all stakeholders; to find the right balance between assessment-driven (institution-centric) requirements and learner-driven (user-centric) requirements; and to achieve a level of standardisation yet allow for personalisation and creativity (Barrett, 2009). This unprecedented link between teaching, learning and high stakes assessment is fundamentally disruptive: pedagogically, organisationally and technologically (Baume cited Taylor & Gill, 2006, p 4; Cambridge, 2012; Eynon cited Shada et al., 2011, p 75), and planning for successful implementation is critical (JISC, 2012e; Joyes et al., 2010; Meyer & Latham, 2008; Shada et al., 2011).

Innovating Pedagogy

August 19th, 2012 by Graham Attwell

The UK Open University have launched an interesting new series, Innovating Pedagogy. The series of reports is intended to explore new forms of teaching, learning and assessment for an interactive world, to guide teachers and policy makers in productive innovation.

Mike Sharples explains:

We wanted to distinguish our perspective from that of the EDUCAUSE Horizon reports, which start from a consideration of how technologies may influence education. I would argue that ours aren’t ‘technology-driven opportunities’, but are rather an exploring of new and emerging forms of teaching, learning and assessment in an age of technology. All innovations in education nowadays are framed in relation to technology, but that doesn’t mean they are ‘technology driven’. So, for example, personal inquiry learning is mediated and enhanced by technology, but not driven by it.

We had a long discussion over ‘pedagogies’. The problem is that there isn’t a word in English that means ‘the processes of teaching, learning and assessment’. I would argue that in current usage ‘pedagogy’ has broadened from a formal learning experience conducted by a teacher, as we have become more aware of the opportunities for peer learning, non-formal apprenticeship etc. See e.g. http://www.memidex.com/pedagogy+instr. The origin of the word isn’t ‘teacher’ but “slave who took children to and from school”. We were careful to indicate in the Introduction our usage of the word: “By pedagogy we mean the theory and practice of teaching, learning, and assessment.” So, within that usage are practices that might contribute towards effective learning, such as creating and sharing annotations of textbooks.

The first report explores ten trends, among them assessment for learning, learning analytics and rhizomatic learning.

Although the list may seem a little idiosyncratic, the authors emphasise that the themes are often interlinked in practice. I wonder, though, if there is something of a contradiction between Assessment for Learning and Learning Analytics?

I am also interested in the definition of rhizomatic learning: “supporting rhizomatic learning requires the creation of a context within which the curriculum and knowledge are constructed by members of a learning community and which can be reshaped in a dynamic manner in response to environmental conditions. The learning experience may build on social, conversational processes, as well as personal knowledge creation, linked into unbounded personal learning networks that merge formal and informal media.”

Recognising learning with badges

June 19th, 2012 by Graham Attwell

Moving into uncharted waters: are open badges the future for skills accreditation?

I am ever more interested in the idea of badges in general and the Mozilla Badges project in particular.

Having said this, I think some of the pilot work has been on the wrong track – providing accreditation for vocational competence in fields with pre-existing qualifications, rather than looking at areas lacking existing forms of recognition.

Badges should be about recognising learning. And it is probably more important in motivating learners that they are able to recognise their own learning. So I see badges as an extension to the assessment for learning movement. In this respect the sample badge on the Mozilla Open Badges project site is unhelpful. I know it is up to the provider to determine the forms of assessment, and that Mozilla does not determine who can become a provider. But the example will inevitably influence how potential providers view badges. Assessment needs to be an active process, contributing both to the learner’s understanding and facilitating the process of recognition. Simple check boxes, as in that example, do neither.

I like the Mozilla Backpack, and obviously a great deal of effort is being put into developing a robust architecture. But just as important as the electronic badges is something learners can display. Jenny Hughes has suggested we should provide learners with a badge holder (at least for younger learners) and that they should be allowed to select one badge to wear to school each day.

The badges could look very similar to the popular football cards being distributed by German supermarkets. If you look at the back of such a card, there is even space for several metadata fields.
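
Technically, each badge is just structured metadata. Under the Open Badges specification an issued badge is a small JSON “assertion”; the sketch below uses 1.x-style field names as I understand the spec (check the spec itself before relying on them), with placeholder URLs throughout.

```python
# Simplified badge assertion in the spirit of the Open Badges spec (1.x-style
# field names; treat as a sketch, not a reference implementation). The URLs
# are placeholders.
import hashlib
import json
import time

def make_assertion(recipient_email: str, badge_class_url: str, salt: str = "deadsea") -> str:
    # Recipient identity is salted and hashed so the assertion can be public.
    hashed = "sha256$" + hashlib.sha256((recipient_email + salt).encode()).hexdigest()
    assertion = {
        "uid": "example-001",
        "recipient": {"type": "email", "hashed": True, "salt": salt, "identity": hashed},
        "badge": badge_class_url,  # points at the BadgeClass metadata
        "verify": {"type": "hosted", "url": "https://example.org/assertions/example-001"},
        "issuedOn": int(time.time()),
    }
    return json.dumps(assertion, indent=2)

print(make_assertion("learner@example.org", "https://example.org/badges/collaboration"))
```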

In a follow up post I will discuss several practical ideas for piloting the badges.

Open Learning Analytics or Architectures for Open Curricula?

February 12th, 2012 by Graham Attwell

George Siemens’ latest post, based on his talk at TEDxEdmonton, makes for interesting reading.

George says:

Classrooms were a wonderful technological invention. They enabled learning to scale so that education was not only the domain of society’s elites. Classrooms made it (economically) possible to educate all citizens. And it is a model that worked quite well.

(Un)fortunately things change. Technological advancement, coupled with rapid growth of information, global connectedness, and new opportunities for people to self-organize without a mediating organization, reveals the fatal flaw of classrooms: slow-developing knowledge can be captured and rendered as curriculum, then be taught, and then be assessed. Things break down when knowledge growth is explosive. Rapidly developing knowledge and context requires equally adaptive knowledge institutions. Today’s educational institutions serve a context that no longer exists and its (the institution’s) legacy is restricting innovation.

George calls for the development of an open learning analytics architecture based on the idea that: “Knowing how schools and universities are spinning the dials and levers of content and learning – an activity that ripples decades into the future – is an ethical and moral imperative for educators, parents, and students.”

I am not opposed to what he is saying, although I note Frances Bell’s comment about privacy of personal data. But I am unsure that such an architecture really would improve teaching and learning – and especially learning.

As George himself notes, the driving force behind the changes in teaching and learning that we are seeing today is the access afforded by new technology to learning outside the institution. Such access has largely rendered irrelevant the old distinctions between formal, non-formal and informal learning. OK – there is still an issue in that accreditation is largely controlled by institutions, which naturally place much emphasis on learning which takes place within their (controlled and sanctioned) domain. Yet even this is being challenged by developments such as Mozilla’s Open Badges project.

Educational technology has played only a limited role in extending learning. In reality we have provided access to educational technology to those already within the system. But the adoption of social and business software for learning, as recognised in the idea of the Personal Learning Environment, and the similar adaptation of these technologies for teaching and learning through Massive Open Online Courses (MOOCs), have moved us beyond the practice of merely replicating traditional classroom architectures and processes in technology.

However, there remains a series of problematic issues. Perhaps foremost is the failure to develop open curricula – or, better put, to rethink the role of curricula for self-organised learning.

For better or worse, curricula traditionally played a role in scaffolding learning – guiding learners through a series of activities to develop skills and knowledge. These activities were graded, building on previously acquired knowledge in developing a personal knowledge base which could link constituent parts, determining how the parts relate to one another and to an overall structure or purpose.

As Peter Pappas points out in his blog on ‘A Taxonomy of Reflection’, this in turn allows the development of what Bloom calls ‘Higher Order Reflection’ – enabling learners to combine or reorganize elements into a new pattern or structure.

Vygotsky recognised the importance of a ‘More Knowledgeable Other’ in supporting reflection in learning through the Zone of Proximal Development. Such an idea is reflected in the development of Personal Learning Networks, often utilising social software.

Yet the curriculum issue remains – and especially the issue of how we combine and reorganise elements of learning into new patterns and structures without the support of formal curricula. This is all the more so since traditional subject boundaries are breaking down. Present technological support for this process is very limited. Traditional hierarchical folder structures have been supplemented by keywords, and with some effort learners may be able to develop their own taxonomies based on metadata. But the process remains difficult.
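
To make that concrete: about the most a learner can do today is tag resources and query across the tags, which is a long way short of a curriculum. A toy sketch of that kind of learner-built taxonomy, with invented data:

```python
# Toy sketch of a learner-built taxonomy: resources carry free-form tags, and
# a simple query assembles related elements across subject boundaries. All
# data is invented for illustration.
RESOURCES = [
    {"title": "Intro to statistics video", "tags": {"maths", "data", "video"}},
    {"title": "Survey design checklist", "tags": {"research", "data"}},
    {"title": "Spreadsheet exercises", "tags": {"maths", "practice"}},
]

def related(tags):
    """Return titles sharing at least one tag, best matches first."""
    scored = [(len(tags & r["tags"]), r["title"]) for r in RESOURCES]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(related({"data", "maths"}))
# ['Intro to statistics video', 'Survey design checklist', 'Spreadsheet exercises']
```

This finds adjacent material, but it cannot sequence it or relate parts to a larger structure, which is exactly the scaffolding role curricula used to play.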

So – if we are to go down the path of developing new open architectures – my priority would be an open architecture for curricula. Such an architecture would play a dual role: supporting self-organised learning for individuals while at the same time supporting emergent, rhizomatic curricula at a social level.

Using technology to develop assessment for learning

January 21st, 2012 by Graham Attwell

Assessment isn’t really my thing. That doesn’t mean I do not see it as important. I am interested in learning, and assessment for learning should help teachers and learners alike in developing their learning. But all too often assessment has little to do with learning. Indeed, assessment has emerged as a barrier to the development of effective teaching and learning strategies, especially collaborative learning using Web 2.0 and social software tools.

This presentation by Luis Tinoca follows the present trend of adding 2.0 to the end of everything, but it is a useful exploration of how we can use technologies to support assessment for learning.
