This year, we are celebrating two anniversaries – Brookes’ 150 years as an institution and BeJLT’s more modest 10 years – and we are marking both events in this special edition of BeJLT by publishing invited updates of six of its eight most cited papers. Not surprisingly, these papers address issues that are all still very important – assessment, plagiarism, research-based learning, and technology-supported learning. In addition, there is also a paper giving a more general, international perspective on the importance, and impact outside the institution, of pedagogic research at Brookes, plus a ‘thought-piece’ about the value of learning and teaching conferences.
For me, the overarching message, and what I think these papers demonstrate, is the importance of rigorous research into our pedagogic practice, so that vital strategic decisions and proposed changes in practice are evidence-based. They also show that, once a piece of research is completed, we should never assume the matter is over and resolved; rather, we should keep our practice under continual review.
As someone in their last year before retirement, who has just celebrated twenty-five years at Brookes, and almost thirty in educational development, I have also been looking back reflectively on changes in the sector, many of which I have been a part of, and I find a somewhat depressing theme emerges – what I am summarising as the road to hell. According to the proverb, the road to hell is paved with good intentions, and a growing body of pedagogic research evidence shows that many of the pedagogic problems and issues we are facing now can be traced back to very well-intentioned and logically advocated innovations which have suffered from the law of unintended consequences. And often, it seems to me, the logic of those changes was based on thinking that there are easy answers to what are in fact highly complex problems. Don’t get me wrong; I am not advocating inertia as preferable to action in the absence of evidence. Sometimes inaction may be the worst possible choice, and maybe we have to take an educated step into the dark. But even more important in such situations is that we monitor and evaluate (research) the effects of any changes that we make.
Continuous assessment is one of the innovations I have in mind. Compared with a system of final exams, which is very high-stakes and consequently stressful, and may also encourage students to leave serious study until too late, how much better to have regular assessment, structuring learning and giving students regular feedback on how well they are doing? These are powerful and well-meaning arguments.
In a similar vein came the case for modularity – the chunking-up of knowledge into bite-size units of learning. Rather than one, centrally imposed linear curriculum with little if any choice as to what is studied, or how or when, modularity could give students flexibility and choice. In addition, there may be economies of scale, with students from a variety of programmes taking the same module and so making it viable. And with the introduction of the concept of CATS (Credit Accumulation and Transfer) came the additional flexibility of being able to dip in and out of learning, as has long happened in the US, along with greater mobility through the ease of transfer from one institution to another. What could possibly be wrong with that?
But what has been the reality of these changes? The combination of continuous assessment and credit accumulation through modular/unitised systems has arguably significantly contributed to students becoming largely, if not totally, focussed on the marks received for any given piece of work, rather than the feedback on the learning achieved that they are meant to represent (Newstead, 2002). Instrumentalist students in modular systems are likely to adopt a “tick box” mentality, moving on to the next unit or module without seeing any connection between past and future learning (Rust, 2000).
Continuous assessment also encourages students to “play safe” in the work that they do. If it is going to count, do you really want to risk losing marks by doing something a bit different or in an area you are less good at? From a course programme point of view, it arguably has made it much harder to ensure continuity and development within the curriculum, or to allow for “slow learning” (Claxton, 1998), as well as making it harder to assess complex outcomes or to assess integrated learning (McDowell, 2012).
Modularity/unitisation has certainly increased summative assessment to the point that it is now a cliché in the sector to say that students are over-assessed while, because of reductions in resourcing, the shortness of modules, and pressures on staff time, there is arguably less formative assessment, with feedback focussing on justifying the mark given (Price et al, 2008). None of these outcomes would have been wanted by the original advocates of the changes and, ironically, very few modular systems in the UK ever really succeeded in providing meaningful course choices. And the transferability and flexibility potentially offered by CATS has hardly been utilised – although we may now, at least according to some enthusiasts, finally be entering an era of MOOCs and ‘edupunks’ when it will be.
It is also important to acknowledge that neither idea is without merit – it is unlikely either would have been advocated if it were. Continuous formative assessment providing regular feedback is evidently good practice; it is excessive continuous summative assessment that causes these problems with learning. Modularity too undoubtedly has positive aspects; its problems stem from a lack of programme-level design. Both innovations had positive elements but were introduced piecemeal, without consideration of their wider effects and the ‘bigger picture’.
Learning outcomes, assessment criteria and greater clarity and transparency generally are another set of innovations we have seen in the last 25 years – and again, in response to certainly very good arguments. Staff will deliver better courses if they are clear about what they are doing, and students have a right to know what the course is about, what it is trying to achieve, and the outcomes expected of them. It will also be easier for the course to be assessed, and for assessment feedback to be focussed around the outcomes and with reference to the criteria. And it will all be easier to hold to account for quality assurance. So what on earth is wrong with all that?! Well, when it comes to assessment, I know with embarrassment (given the work I did in the 90s trying to develop and promote these very ideas) that this desire for explicitness has led to the creation of assessment grids that try to define each possible grade for each criterion – sometimes, at worst, with set numbers of marks allocated to each criterion. While such grids may be useful as a checklist for a marker, and in focussing feedback to a student, their more rigid application may ignore the holistic nature of assessment (Sadler, 2005) and the tacit components within the assessment decision. The use of such grids will not, on their own, improve students’ understanding of what is required of them or improve the quality of their work; they can distract from focussing on the learning outcomes, and, when coupled with marks, may totally distort the assessment outcome and bear little if any relationship to the learning outcomes supposedly being assessed (Price & Rust, 1999; O’Donovan et al, 2000; Rust, 2011).
There is also evidence that students knowing too much about what is expected of them in terms of assessment may lead to them doing only that, safe in the knowledge of the minimum they need to do to pass the module, thereby reducing the amount of student effort and subsequent learning (Gibbs & Dunbar-Godet, 2007).
Again, arguably none of these issues is inherently the fault of the original ‘good idea’. The positive arguments for learning outcomes identified above certainly do exist. The problem is the way they have been (mis)used: their overzealous use in QA procedures and in trying to control processes and behaviours – and also, again, in course planning and design.
A further associated change in assessment practices has been an increase in the variety of assessment methods used. This happened for a number of reasons: the recognition that traditional assessment methods such as exams and essays can’t assess everything we should be assessing, and a desire for greater validity and authenticity. There was also an argument that every assessment practice disadvantaged somebody, so variety would at least even out that disadvantage. So what’s wrong with variety? Probably nothing with variety itself – but there can be with too much of it. If students are to make use of the feedback they receive, and to learn from their mistakes, they need an opportunity, relatively soon, to put that feedback into practice and have another go at a similar piece of work – maybe more than once – to refine their understanding and skills (ibid.).
In sum, I think the common thread in what I’m saying is that many innovations of the last thirty years or so have been made, in the absence of hard evidence, on the basis of well-reasoned arguments attempting to address identifiable problems. They have had negative, unintended consequences, not because the reasoning was necessarily flawed, but often because they have tried to offer simple solutions to complex problems, and then not been sufficiently monitored and judiciously ‘tinkered with’ after their introduction. Nor has their effect on the ‘bigger picture’ been effectively monitored and evaluated from the beginning; they may well also have been confounded by higher-level factors, and in some cases they have been used in initially unforeseen ways.
The most recent example of what I am trying to say is the current campaign, vigorously supported by the NUS, for anonymous marking. Again, there are good reasons. Some studies have undoubtedly shown that markers who know whose work it is are capable of bias, and that this may have particularly negative consequences for certain groups of students. Clearly this is an issue that needs to be addressed. But the campaign for anonymous marking is ignoring the growing research data that emphasises the importance of a relationship between marker and student if feedback is to be seriously engaged with and attended to (Handley et al, 2008; Nicol, 2010). So instead of a knee-jerk move to universal anonymity, more subtle and nuanced solutions need to be developed, such as anonymous marking with personalised feedback: i.e. the work is originally marked anonymously and that unchangeable mark entered into the system, which then, and only then, reveals the student’s identity so that the marker can write personalised feedback.
And I guess this last example illustrates what I see as the conclusion to these reflections. If we are to avoid unintended consequences and stop paving the road to hell, we must acknowledge that it is not sufficient just to have good reasons to want to change something – we must insist on pedagogic research evidence, regarding the potential consequences, that supports the change. And if none exists, we need pilot studies, and to publish and disseminate the results of those studies – our failures as well as our successes. We also need to make sure that we are tackling the right problem.
It is in addressing these issues that publications like BeJLT come in, together with institutions that encourage a pedagogic research culture. Long may Brookes continue to do so, and long may BeJLT continue to publish the outcomes. Happy anniversaries to both!
Claxton, G. (1998) Hare brain, tortoise mind, London: Fourth Estate
Gibbs, G. & Dunbar-Godet, H. (2007) The effects of programme assessment environments on student learning, York: HEA, available at: http://www.heacademy.ac.uk/projects/detail/projectfinder/projects/pf2656lr
Handley, K., Price, M. & Millar, J. (2008) Engaging Students with Assessment Feedback: Final Report for FDTL5 Project 144/03, Oxford: Oxford Brookes University
McDowell, L. (2012) Programme Focused Assessment: a short guide, Bradford: PASS Project, available at: http://www.pass.brad.ac.uk/short-guide.pdf
Newstead, S. (2002) Examining the examiners: why are we so bad at assessing students? Psychology Learning and Teaching, 2 (2) pp. 70-75
Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education, Assessment & Evaluation in Higher Education 35 (5) pp. 501-517
O’Donovan, B., Price, M. & Rust, C. (2000) The student experience of criterion-referenced assessment (through the introduction of a common criteria assessment grid), Innovation in Education and Teaching International, 38 (1) pp. 74-85.
Price, M., O’Donovan, B., Rust, C. & Carroll, J. (2008) Assessment Standards: A Manifesto for Change, Brookes eJournal of Learning and Teaching, 2 (3) available at: http://bejlt.brookes.ac.uk/article/assessment_standards_a_manifesto_for_change
Price, M. & Rust, C. (1999) The experience of introducing a common criteria assessment grid across an academic department, Quality in Higher Education, 5 (2) pp. 133-144
Rust, C. (2000) An opinion piece: A possible student-centred assessment solution to some of the current problems of modular degree programmes, Active Learning in Higher Education, 1 (2) pp. 126-131
Rust, C. (2011) The Unscholarly Use of Numbers in Our Assessment Practices: What Will Make Us Change? International Journal for the Scholarship of Teaching and Learning, 5 (1) available at http://digitalcommons.georgiasouthern.edu/int_jtl/76/
Sadler, D. R. (2005) Interpretations of criteria-based assessment and grading in higher education, Assessment and Evaluation in Higher Education, 30, pp. 175-194