Vol. Six – Issue One – May 2014 – Special 10th Anniversary Issue

Published: May 2014

Editorial: Avoiding the road to hell: the importance of BeJLT and pedagogic research at Brookes

This year we are celebrating two anniversaries – Brookes’ 150 years as an institution and BeJLT’s more modest 10 – and we are marking both events in this special edition of BeJLT by publishing invited updates of six of its eight most cited papers. Not surprisingly, these papers address issues that are all still very important: assessment, plagiarism, research-based learning, and technology-supported learning. There is also a paper giving a more general, international perspective on the importance of pedagogic research at Brookes and its impact outside the institution, plus a ‘thought-piece’ about the value of learning and teaching conferences.

For me, the overarching message, and what I think these papers demonstrate, is the importance of rigorous research into our pedagogic practice, so that vital strategic decisions and proposed changes in practice are evidence-based. It is also that, once a piece of research is completed, we should never assume the matter is over and resolved, but rather continue to monitor and keep our practice constantly under review.

As someone in their last year before retirement, who has just celebrated twenty-five years at Brookes and almost thirty in educational development, I have also been looking back reflectively on changes in the sector, many of which I have been a part of, and I find a somewhat depressing theme emerges – what I am summarising as the road to hell. According to the proverb, the road to hell is paved with good intentions, and it seems to me that there is a growing body of pedagogic research evidence showing that many of the pedagogic problems and issues we are facing now can be traced back to very well-intentioned and logically advocated innovations which have suffered from the law of unintended consequences. And often, it seems to me, the logic of those changes was based on thinking that there are easy answers to what are in fact highly complex problems. Don’t get me wrong; I am not advocating inertia as preferable to action in the absence of evidence. Sometimes inaction may be the worst possible choice, and maybe we have to take an educated step into the dark. But it is even more important in such situations that we monitor and evaluate (research) the effects of any changes that we make.

Continuous assessment is one of the innovations I have in mind. Compared with a system of final exams, which is very high-stakes and consequently stressful, and may also encourage students to leave serious study until too late, how much better to have regular assessment, structuring learning and giving students regular feedback on how well they are doing? These are powerful and well-meaning arguments.

In a similar vein came the case for modularity – the chunking-up of knowledge into bite-size units of learning. Rather than one centrally imposed linear curriculum, with little if any choice as to what is studied, or how or when, modularity could give students flexibility and choice. In addition, there may be economies of scale, with students from a variety of programmes taking the same module and thereby making it viable. And with the introduction of CATS (the Credit Accumulation and Transfer Scheme), there may be the additional flexibility of being able to dip in and out of learning, as has long happened in the US, along with greater mobility through the ease of transfer from one institution to another. What could possibly be wrong with that?

But what has been the reality of these changes? The combination of continuous assessment and credit accumulation through modular/unitised systems has arguably contributed significantly to students becoming largely, if not totally, focussed on the marks received for any given piece of work, rather than on the feedback about the learning that those marks are meant to represent (Newstead, 2002). Instrumentalist students in modular systems are likely to adopt a “tick box” mentality, moving on to the next unit or module without seeing any connection between past and future learning (Rust, 2000).

Continuous assessment also encourages students to “play safe” in the work that they do. If it is going to count, do you really want to risk losing marks by doing something a bit different, or in an area you are less good at? From a programme point of view, continuous assessment has arguably made it much harder to ensure continuity and development within the curriculum, or to allow for “slow learning” (Claxton, 1998), as well as making it harder to assess complex outcomes or integrated learning (McDowell, 2012).

Modularity/unitisation has certainly increased summative assessment, to the point that it is now a cliché in the sector to say that students are over-assessed, while, because of reductions in resourcing, the shortness of modules, and pressures on staff time, there is arguably less formative assessment, with feedback focussing on justifying the mark given (Price et al, 2008). None of these outcomes would have been wanted by the original advocates of the changes, and, ironically, very few modular systems in the UK ever really succeeded in providing meaningful course choices. And the transferability and flexibility potentially offered by CATS has hardly been utilised – although we may now, at least according to some enthusiasts, finally be entering an era of MOOCs and ‘edupunks’ in which it will be.

It is also important to acknowledge that neither idea is without merit; indeed, it is unlikely either would have been advocated if it were. Continuous formative assessment providing regular feedback is evidently good practice – it is excessive continuous summative assessment that causes these problems with learning. Modularity too undoubtedly has positive aspects; its problems stem from a lack of programme-level design. Both innovations had positive elements but were introduced piecemeal, without consideration of their wider effects and the ‘bigger picture’.

Learning outcomes, assessment criteria, and greater clarity and transparency generally are another set of innovations we have seen in the last 25 years – and again, in response to certainly very good arguments. Staff will deliver better courses if they are clear about what they are doing, and students have a right to know what a course is about, what it is trying to achieve, and the outcomes expected of them. It also becomes easier for the course to be assessed, and for assessment feedback to be focussed on the outcomes and with reference to the criteria. And it all becomes easier to hold to account for quality assurance.

So what on earth is wrong with all that?! Well, when it comes to assessment, I know with embarrassment (given the work I did in the 1990s trying to develop and promote these very ideas) that this desire for explicitness has led to the creation of assessment grids that try to define each possible grade for each criterion – sometimes, at worst, with set numbers of marks allocated to each criterion. While such grids may be useful as a checklist for a marker, and in focussing feedback to a student, their more rigid application may ignore the holistic nature of assessment (Sadler, 2005) and the tacit components of the assessment decision. Such grids will not, on their own, improve students’ understanding of what is required of them or improve the quality of their work; they can distract from a focus on the learning outcomes and, when coupled with marks, may totally distort the assessment outcome and bear little if any relationship to the learning outcomes supposedly being assessed (Price & Rust, 1999; O’Donovan et al, 2000; Rust, 2011).

There is also evidence that students knowing too much about what is expected of them in terms of assessment may lead them to do only that, safe in the knowledge of what they need to do to meet the minimum requirements to pass the module, thereby reducing student effort and subsequent learning (Gibbs & Dunbar-Goddet, 2007).

Again, arguably none of these issues are inherently the fault of the original ‘good idea’. The positive arguments for learning outcomes identified above certainly hold. The problem is the way learning outcomes have been (mis)used: their over-zealous use in QA procedures and in attempts to control processes and behaviours – and also, again, shortcomings in course planning and design.

A further associated change in assessment practices has been an increase in the variety of assessment methods used. This happened for a number of reasons: the recognition that traditional assessment methods such as exams and essays cannot assess everything we should be assessing, and a desire for greater validity and authenticity. There was also an argument that every assessment practice disadvantages somebody, so variety would at least even out that disadvantage. So what’s wrong with variety? There is probably nothing wrong with variety in itself, but there can be too much of it. If students are to make use of the feedback they receive, and to learn from their mistakes, they need an opportunity, relatively soon, to put that feedback into practice and have another go at a similar piece of work – maybe more than once – to refine their understanding and skills (Gibbs & Dunbar-Goddet, 2007).

In summation, I think the common thread in what I am saying is that many innovations of the last thirty years or so have been made, in the absence of hard evidence, on the basis of well-reasoned arguments attempting to address identifiable problems. They have had negative, unintended consequences, not because the reasoning was necessarily flawed, but often because they have tried to offer simple solutions to complex problems and have not been sufficiently monitored and judiciously ‘tinkered with’ after their introduction. Nor has their effect on the ‘bigger picture’ been effectively monitored and evaluated from the beginning; they may well also have been confounded by higher-level factors, and in some cases they have been used in initially unforeseen ways.

The most recent example of what I am trying to say is the current campaign, vigorously supported by the NUS, for anonymous marking. Again, there are good reasons. Studies have undoubtedly shown that markers who know whose work it is are capable of bias, and that this may have especially negative consequences for certain groups of students. Clearly this is an issue that needs to be addressed. But the campaign for anonymous marking ignores the growing body of research that emphasises the importance of a relationship between marker and student if feedback is to be seriously engaged with and attended to (Handley et al, 2008; Nicol, 2010). So instead of a knee-jerk move to universal anonymity, more subtle and nuanced solutions need to be developed, such as anonymous marking with personalised feedback: the work is first marked anonymously and that unchangeable mark is entered into the system, which then, and only then, reveals the student’s identity so that the marker can write personalised feedback.

And I guess this last example illustrates what I see as the conclusion to these reflections. If we are to avoid unintended consequences, and to stop paving the road to hell, we must acknowledge that it is not sufficient simply to have good reasons to want to change something – we must insist on pedagogic research evidence on the potential consequences that supports the proposed change. And if none exists, we need pilot studies, and we need to publish and disseminate the results of those studies, our failures as well as our successes. We also need to make sure that we are tackling the right problem.

It is in addressing these issues that publications like BeJLT come in, along with the importance of institutions encouraging a pedagogic research culture. Long may Brookes continue to do so, and long may BeJLT continue to publish the outcomes. Happy anniversaries to both!

Chris Rust

References

Claxton, G. (1998) Hare brain, tortoise mind, London: Fourth Estate

Gibbs, G. & Dunbar-Goddet, H. (2007) The effects of programme assessment environments on student learning, York: HEA, available at: http://www.heacademy.ac.uk/projects/detail/projectfinder/projects/pf2656lr

Handley, K., Price, M. and Millar, J. (2008) Engaging Students with Assessment Feedback: Final Report for FDTL5 Project 144/03, Oxford: Oxford Brookes University

McDowell, L. (2012) Programme Focused Assessment: a short guide, Bradford: PASS Project, available at: http://www.pass.brad.ac.uk/short-guide.pdf

Newstead, S. (2002) Examining the examiners: why are we so bad at assessing students? Psychology Learning and Teaching, 2 (2) pp. 70-75

Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education, Assessment & Evaluation in Higher Education 35 (5) pp. 501-517

O’Donovan, B., Price, M. & Rust, C. (2000) The student experience of criterion-referenced assessment (through the introduction of a common criteria assessment grid), Innovations in Education and Teaching International, 38 (1) pp. 74-85.

Price, M., O’Donovan, B., Rust, C. & Carroll, J. (2008) Assessment Standards: A Manifesto for Change, Brookes eJournal of Learning and Teaching, 2 (3), available at: http://bejlt.brookes.ac.uk/article/assessment_standards_a_manifesto_for_change

Price, M. and Rust, C. (1999) The experience of introducing a common criteria assessment grid across an academic department, Quality in Higher Education, 5 (2) pp. 133-144.

Rust, C. (2000) An opinion piece: A possible student-centred assessment solution to some of the current problems of modular degree programmes, Active Learning in Higher Education, 1 (2) pp. 126-131

Rust, C. (2011) The Unscholarly Use of Numbers in Our Assessment Practices: What Will Make Us Change? International Journal for the Scholarship of Teaching and Learning, 5 (1), available at: http://digitalcommons.georgiasouthern.edu/int_jtl/76/

Sadler, D. R. (2005) Interpretations of criteria-based assessment and grading in higher education, Assessment and Evaluation in Higher Education, 30 (2) pp. 175-194

From this issue:

Quality enhancement of Undergraduate Research – further strategies to increase student engagement in research dissemination

This article updates and extends the range of strategies available to increase student engagement with undergraduate research dissemination. It evaluates the ten strategies suggested in 2008 and then extends the range of approaches in an attempt to share the benefits of research communication with a wider range of student researchers. The paper acknowledges the rapid progress that has been made over the last eight years in disseminating undergraduate research and the possibilities for staff to frame these as a structured progression for students.


Reflections on ‘The student experience of e-learning in higher education: a review of the literature’

Summary: the original paper, ‘The student experience of e-learning in higher education: a review of the literature’, was published in the Brookes eJournal of Learning and Teaching, Vol. 1, Issue 3, in 2005. The paper arose at a time …


Where is the new blended learning? Whispering corners of the forum

With thanks to Mary Deane for reading and suggesting many improvements. Introduction: the future is now. Where is the new blended learning? Looking back to 2005, when BeJLT published “Blended Learning Landscapes” by Richard Francis and John Raftery, we see …


Online formative MCQs to supplement traditional teaching: improving retention, progression and performance – the longer view

Following on from the publication of ‘One Lecturer’s Experience of Blending E-learning with Traditional Teaching’, further analysis of the long-term impact of the online quizzes on student performance was undertaken. Engagement with formative online MCQs was explored generally, and the …


Revisiting the management of student plagiarism in the light of ideas outlined in 2005

In 2005, I published an account of the work done at Oxford Brookes University to manage cases involving student plagiarism. The article describes efforts by many people at Oxford Brookes, including my colleague Jon Appleton, with whom I had written …


ASKe Manifesto seven years on: so what did change?

Assessment continues to be a major challenge to institutions around the world: a challenge in terms of student satisfaction, a challenge in terms of resourcing (there are few economies of scale in assessment (Gibbs, 2006)), and a challenge in terms of transparency, reliability and validity, to name but a few. One of the major problems is that there are very varied levels of understanding of assessment and its processes among stakeholders. This is unsurprising given the complex nature of assessment, but it causes difficulties and unintended consequences, especially when lack of understanding is found among all stakeholder groups: students, staff, management, quality assurance experts, and government. If we are going to meet the challenges we face, the assessment literacy (see Price et al, 2012) of all these groups needs to improve.

As detailed in the original BeJLT paper (Price et al, 2008), ASKe brought together a group of international assessment experts to envisage a new future for assessment; the result was the ASKe Assessment Manifesto. Through the Manifesto we hoped to stimulate debate across the sector and, by implication, influence ways of thinking about and practising assessment. So what happened? This paper will reflect on the impact of the Manifesto both locally and in terms of its reach beyond the institution, as well as considering its continued relevance.


An international perspective: Not just a top cited article

It was no surprise to hear that Assessment Standards: A Manifesto for Change, by Margaret Price, Berry O’Donovan, Chris Rust and Jude Carroll (2008), was one of the most-cited papers in BeJLT’s first decade. This commentary seeks to …


Thought-piece: What is the value of an institutional conference on learning and teaching?

The aim of the Brookes Learning and Teaching Conference is to bring together staff and students from across the organisation in order to celebrate and share practices and ideas about enhancing the learning experience. The conference this year ran for …


Book Review: Neuro-Linguistic Programming: A Critical Appreciation for Managers and Developers

Neuro-Linguistic Programming: A Critical Appreciation for Managers and Developers by Paul Tosey and Jane Mathison

