Revisiting the management of student plagiarism in the light of ideas outlined in 2005


Jude Carroll

In 2005, I published an account of the work done at Oxford Brookes University to manage cases involving student plagiarism. The article describes efforts by many people at Oxford Brookes, including my colleague Jon Appleton, with whom I had written a Good Practice Guide for managing student plagiarism in 2001. In the 2001 Guide, Jon and I described policy and procedures which we believed would be effective in dealing with student plagiarism, and by 2005, I was writing about how we were putting the ideas from 2001 into action at Oxford Brookes. The 2005 article was called ‘Moving to Mainstream’ and that more or less describes what went on to happen: Oxford Brookes was managing plagiarism as just one of the many things that universities did and continue to do. We were also doing so in ways that were fairer, more consistent and more defensible. The effort was collaborative, drawing upon the work of many colleagues, some of whom are cited and acknowledged below. Now, a decade later, I have selected two of the ideas in the BeJLT paper that went on to be influential within the UK and across the world. These two ideas proved especially significant in my own work to develop more equitable and sustainable ways to manage student plagiarism.

The first of the ideas that stood out when I re-read the 2005 article was the importance of institutions taking a holistic, joined-up approach to the problem of student plagiarism. I was arguing back then, and continue to argue now, that management of plagiarism is primarily about supporting students as learners. Yes, some students do cheat, and yes, they cheat using plagiarism, but managing the issue should not focus on trying to catch and punish cheaters. Back in 2005, arguing for a learning-centered approach was not common, but it has since become the way that most universities in the UK, Australia and New Zealand manage the issue, and it has proved possible to use a learning-centered approach in managing cheating, too. I describe below how what we were doing at Oxford Brookes in 2005 turned out to be influential in promoting this type of approach.

The second idea I selected for follow-up was the introduction by Oxford Brookes of a referral system for managing cases once they had been spotted during assessment. We called the people who received and managed cases Academic Conduct Officers (ACOs), and by 2005 the ACO system had been in full operation for several years, though the BeJLT article was the first time it had been described in action, in print. By 2005 and in the years that followed, we could see improvements that were making the ACO system work better, and we were realising additional, unanticipated benefits from what was already in place. Now, in 2014, I can revisit the events and pressures which prompted the ACO system in the first place, and I can share some of the subsequent lessons. This may be of interest to anyone considering introducing or reviewing the use of such a system.

Setting the scene: 2004

The years between about 2000 and 2004 were a time of significant worry about student plagiarism in the UK as well as in other countries around the world. There were several factors which fuelled rising concerns, some of which were triggered by the way universities were changing. For example, there was more coursework for grades/accreditation plus we had growing class sizes, increased teacher workload, and more student diversity. We had widening participation within the UK plus international students arriving in ever-larger numbers, meaning students had widely different educational backgrounds, varied skills in academic writing and many were still developing their English language capability. Teachers assumed students understood requirements to ‘do their own work’ and ‘write in their own words’ but in fact, none of these specifications were well understood – not by teachers, who had never had to be explicit about them, nor by students, many of whom had come from learning contexts where use, acknowledgement and even ideas about originality were managed quite differently. Sometimes, students were told about specific acknowledgment conventions in their discipline, sometimes not. By and large, students were left to puzzle plagiarism out: what constituted ‘other people’s work’? What was common knowledge? Who owned what and how could that ownership be acknowledged? How original did words have to be in order to qualify as ‘my own’? and so on. Most learned the answers to such puzzles by trial and error, and for some, being accused of cheating by using plagiarism was part of their learning process.

And then, there was the Internet. Resources were being developed at eye-watering speed (especially in English) and search engine improvements meant that finding and harvesting these riches became easier week by week; search engines also facilitated student networking and sharing of resources. It was clear, as I said in my 2005 BeJLT article, that a growing number of students used copy-paste strategies to generate texts – exactly how many at Brookes was unclear. They also recycled coursework from other students and accessed essay banks to generate assessments. All in all, the 2005 teaching and learning context around student plagiarism was characterised by confusion, concern and more than a bit of panic. People were genuinely worried about whether universities could assure the reliability and integrity of the awards they issued. There were also worries about how to stop students from using plagiarism to cheat.

Into this mix of worry and opportunity, I dropped the 2005 BeJLT paper on ways we at Brookes were addressing the issues. At that time, Oxford Brookes was similar to most UK tertiary institutions in recognising we needed to become more effective in how we dealt with student plagiarism, but the university was unusual in that the Brookes senior management allowed us (and specifically, me) to talk and write about what we were doing. I describe in the 2005 article how reluctant institutions were to admit to home-grown plagiarism. Back then, few wanted to expose themselves to the prowling media, hungry for a story about cheating students. Speaking about the issue was seen as a sign that you did not have it under control and, further, as evidence that you had admitted students who lacked integrity. However, things were beginning to change. In 2004, the UK-based Plagiarism Advisory Service organised its first conference, supported at that time by the Joint Information Systems Committee (JISC). The Plagiarism Advisory Service (PAS) was best known for being the conduit into the UK of the text-matching software, Turnitin, but the PAS was also advocating wider use of the Good Practice Guide (2001)[1] which Jon Appleton and I had written. The PAS conference proceedings, the Good Practice Guide, and a growing number of commentators publishing in pedagogic journals (many listed later in this article) were calling for a more joined-up and systematic approach to dealing with plagiarism. Nevertheless, through 2005 and 2006, I watched the ‘keep quiet’ pressure continue to grow, but then the mood changed. What was previously seen as negative – for example, having a rising number of cases and being quick in managing them – began to be seen as positive. A rising number of recorded cases was taken to mean you were treating the issue seriously and doing something about it.
Looking back, a key contributor to that change of mood and message grew out of some of the ideas and activities which I was exploring in my 2005 article, starting with arguments for different policies and procedures for dealing with student plagiarism.

Adjusting policy and procedures to 21st century realities

Before 2000, policies and procedures for managing plagiarism were based on the assumption that institutions could expect to handle a small number of cases, all of which would be instances of serious cheating. Policy makers assumed that responsibility for following academic regulations lay with the student and that a student’s own sense of integrity and moral probity was sufficient to ensure he or she followed the rules. However, by 2005, we realised that student plagiarism was actually a relatively high-volume and normal part of academic life and that students, teachers and institutions shared responsibility for ensuring it did not happen. We had realised we could expect relatively large numbers of students to copy and/or to use others’ work without sufficient acknowledgment because some were opportunists, most were confused and many were operating in an unfamiliar teaching and learning environment. Of course, expecting something is not the same as accepting it, meaning we were determined to treat all instances of plagiarism as important. Even if students had not cheated (that is, even if they had not sought learning benefits unfairly), their plagiarism mattered because they were not acting in accordance with behaviours that encouraged learning and they were not creating their work in a scholarly way. In effect, I was arguing that all plagiarism mattered first and foremost because it was a learning issue.

Looking back, I am surprised to see how inexplicit I was about the link between plagiarism and learning; I subsequently teased out the links much more explicitly (see, for example, Carroll 2009). The link between plagiarism and [not] learning explains the rules against copying much more cogently, I think, than references to copyright issues for students who are not actually publishing their use of sources. It also explains the rules against copying more inclusively than references to honesty or integrity. Too much focus on integrity implies that students lack it – when in fact, most just lack skills and knowledge. This is an especially excluding discourse for international students, who are often assumed to lack the valued characteristics of ‘academic integrity’. Instead of assuming that other ways of managing the use of sources and the use of others’ words are cheating, it seems to me better to treat using sources as a situation-specific requirement, dependent on tacit and implicit beliefs that are specific to some universities (including Oxford Brookes). We should not be telling students to avoid copying because it shows lack of integrity; rather, we should be telling them not to copy because, at its most basic, copying allows students to bypass the hard work of making sense of what others have written, solved, created or designed. At a more complex level, unless students change, use or apply others’ ideas, they cannot show they have understood them. In a constructivist learning paradigm such as was present at Oxford Brookes (and is present in most Western universities, at least part of the time, if perhaps not as universally as many claim), unless students show understanding, they cannot claim academic credit for their learning. This is one sense in which plagiarism is a learning issue, and one I was not fully thinking through in 2005.
A second link between plagiarism and learning was outlined in the 2005 paper, where I did discuss students having to learn the skills we expected them to show as tertiary scholars. On that point, I subsequently explored just what these scholarly skills might be and drilled down into the complexities of using sources, sometimes in collaboration with Oxford Brookes colleagues. For example, we investigated how to use Turnitin formatively to help students learn to acknowledge sources (Davis and Carroll 2009). This work confirmed how much broader the skills were than just applying the formatting rules of a specific referencing system.

Documenting the shift from a cheating focus to a learning focus

Returning to the shift from a cheating-based approach to a learning-based approach, the 2005 BeJLT article turned out to have furthered the move to the latter in several ways. Firstly, we were using procedures that allowed us to collect data on the type and frequency of cases at Oxford Brookes (or at least, the type and frequency of those we were identifying, and we were identifying many more each year). Our data showed a strong predominance of cases where misunderstanding was the likely cause, although about 10–20 percent of cases were cheating, and a very small number were significant fraud. Secondly, we could see that the alternative, i.e. a focus on just catching and punishing cheaters, was ineffective. This conviction strengthened as I began to travel to many different institutions advocating a learning-based approach. I did not know back then in 2005, and do not know now in 2014, of any institution or indeed any national system where a cheating-based approach to managing plagiarism is effective. I have travelled to or worked in dozens of universities in many countries over the years where people assume plagiarism and misconduct are synonyms, and in such places it was, and is, usual to encounter sour relationships with students, damage to students’ learning, and teacher reluctance to use procedures. For example, in one university I discovered that all cases of plagiarism had to be referred to a panel comprising (amongst others) a retired judge, the Vice Chancellor of the university and two lawyers (Carroll and Zetterling 2009). The consequence was that in a university the same size as Oxford Brookes with, in many ways, a similar student demographic, they were dealing with reported cases in numbers that barely reached into double figures; in one year, there were none at all.
By comparison, at Brookes, where we were able to treat cases on a sliding scale of severity and where we could handle them quickly within the student’s department and with minimal negative consequences for teachers, we were dealing with numbers in the high hundreds. Our high numbers were unlikely to be because our students plagiarised more but rather because teachers at Oxford Brookes had started to see the policy and procedures as proportionate, appropriate and fair. We had also introduced specific measures to make referral less painful, some of which are described in the next section.

Overall, by 2005 and in subsequent years, we had changed how we managed student plagiarism and we were beginning to see the positive consequences. Similar findings were reported from other places making similar changes, many of them reported at the 2006 PAS conference and subsequently in the growing literature on policies and procedures for managing plagiarism. See, for example, Park (2004) in the UK and Walker (1998) in New Zealand. In time, commentators and policy makers began to describe the connections and synergy involved in linking learning with skills development, redesigning assessment, using a wide range of detection strategies, and introducing procedures to expedite case management. This interconnected approach came to be known as ‘the holistic approach’ to reviewing and rethinking policy and procedures, and the following years saw many people advocating it. See, for example, Devlin 2006, Macdonald and Carroll 2006, Yeo and Chen 2007, Sutherland-Smith 2008, Bretag et al. 2011, Morris 2011, Martin and Haeringen 2011.

Back in 2005, I had not yet identified the most motivational elements of the package. Experience later showed that one key change that seemed to unlock the whole holistic enterprise was to create a policy that could convince teachers to get involved. This required procedures that removed the disincentives for teachers, and especially the demands on their time, if they identified a case as plagiarism and then took action. My 2005 paper described how Oxford Brookes encouraged teacher engagement by setting up a system for referral of cases, using Academic Conduct Officers. This is the second idea I have singled out in re-reading the paper as being of long-term significance, and I now describe why we introduced the referral system back then and how it gained in significance in the years following my first ‘outing’ of an ACO system in practice.

Procedures that discourage ‘blind eye’ turning

As early as 2001, we thought it highly likely that assessors were ignoring plagiarism. If so, they were acting in line with how most assessors apparently behaved, as shown by the responses to a large US survey in which 41% of respondents said they regularly ignored or disregarded plagiarism when they identified it (McCabe 2005). At the beginning, Jon Appleton and I hypothesised that a referral system would be more likely to gain teachers’ trust and enhance their willingness to engage. By 2005, I could describe how the system worked. Later that year, we published evidence that using ACOs, plus encouraging everyone to make criteria-based decisions, had made case management faster, fairer and more defensible (Carroll and Appleton 2005). Referral also seemed to be addressing the ‘blind eye’ issue, as the number of referrals rose steadily, doubling year on year from the introduction of ACOs and continuing to do so for several more years, whereupon numbers levelled out for reasons I do not fully understand. Now, in 2014, when I advocate such an approach (and I regularly do, although usually outside the UK, because it is often there that people are coming new to these ideas), I am able to list lessons learned and to point to important additional information which has emerged post-2005. It is to these later lessons and resources that I now turn, discussing the ACO system’s aims and organisation in some detail, then pointing to new insights that postdate the 2005 publication.

The ACO system in a nutshell

ACOs were (and are, though I use the past tense to describe actions back in 2005) senior members of their department or school, numbering 17 in all when we began. They were nominated by their school or department and needed to be close to and aware of the requirements and concerns of the discipline. They needed to be trusted by colleagues, and to be resourced to take on the role alongside their other academic tasks. In time, it was decided that an ACO workload plan would allocate 2.5 hours multiplied by the number of cases identified in the previous year across the unit of responsibility (school, department, faculty etc). The aim was to ensure case loads stayed within manageable limits and that there was sufficient capacity to manage peaks of demand around assessment periods.

As to the procedures, the ACO system relied on an individual teacher/assessor deciding whether a piece of student work should be marked or referred. For assessors, the bottom line was deciding whether or not the work was ‘the student’s own work’. If the work was likely to be unoriginal in places where originality was required[2] and/or used others’ work without acknowledgement, then it had to be referred. The assessor also included a brief note on the grounds for referral, at which point the assessor’s role was completed – no more quiet marking down, quiet words, or quiet ignoring.
On receiving a referral, the ACO was responsible for

  • investigating, including calling the student in for interview
  • confirming or dismissing a case
  • awarding an outcome or penalty
  • recording the decision
  • ensuring that the student’s work then returns to the normal assessment process so the penalty can be carried out.

For a fuller description of the ACO system, see Carroll, 2007.

Lessons learned from implementing an ACO system

Post-2005, one key issue needing resolution turned out to be ensuring clarity about the strength of an academic judgment. It took time to convince teachers that an academic judgment (and we make them all the time as teachers in HE) cannot itself be challenged. On the other hand, whether or not you followed procedures can be challenged, and where procedures are not followed, cases can be overturned. Another source of confusion was that such cases are treated under civil law and therefore require a ‘balance of probabilities’ level of proof. Teachers needed to refer if the balance tipped towards the work being more likely not the student’s own than that it was. The evidence needed to support a ‘balance of probabilities’ decision was usually significantly less than teachers feared. You did not need to find the original source, or identify all instances, or name the paid author, and so on. Perhaps referral was down to a changing ‘voice’ across the piece; perhaps the submission was too complex or too ‘perfect’ an answer; perhaps the assessor spotted where the student had copied an unattributed section; perhaps the work had a strangely familiar tone – all would warrant referral and most required only minimal time to establish. Intent and cheating were not for the assessor to decide.

Training in making these distinctions is an important support for consistency and, looking back to 2005, we could have done more. We ought to have been more specific, offered more examples and, above all, fostered occasions where departments could discuss the cases and examples and reach consensus. Ironically, what I was advocating for helping students to understand plagiarism – activity-based interaction – was also useful for their teachers seeking to manage it. For examples of activities involving students, see Carroll 2007. Looking back, there was guidance available on using criteria consistently and reliably, much of it developed by my educational development colleagues for assessment in general (see, for example, Rust et al 2005, Price et al 2008). These colleagues were showing the importance of interaction in developing a shared understanding of what criteria mean.

However, despite inadequate training and patchy efforts at reaching consensus on when to refer cases to ACOs and when not to, we saw numbers steadily rising and took this as evidence of engagement with the new procedures. It became clear that teachers were using the procedures, and some small-scale evaluation supported confidence that teachers and students were starting to see the system as reasonable, transparent and likely to be fair. We subsequently published our findings on this matter (Carroll and Seymour, 2006).

Making consistent, criteria-based decisions

My 2005 article was a first attempt at describing how we dealt with the challenge of making decisions flexible and responsive to circumstances yet capable of being explained and defended as fair and proportionate. The solution we settled on was limiting the number of criteria we asked ACOs to consider, then suggesting that they think about the criteria in a ranked order of importance. The first decision an ACO had to make was whether or not to proceed with a case and here, as with the training of assessors, experience post-2005 showed that we were not specific enough in guiding the decision. We did eventually create a regular ACO Forum where this issue was often on the agenda (see below for more benefits of the Forum). By 2005, ACOs had to record the number of cases they managed and these were compiled into an annual report, but we had not yet made record keeping sufficiently systematic across the board. For example, we had not yet started recording the number of cases dismissed at the ACO level; post-2005, we began doing so. This allowed for review if annual reports showed any particular anomalies. However, once the ‘proceed or dismiss’ decision was made, procedures became more specific and here, criteria-based decision making began to be important.

From the point where there was a decision to proceed, an ACO took responsibility for investigating the case (sometimes with others, sometimes alone) and for communicating with the student. The complexities of investigation and communication are beyond the scope of this outline, but how they occurred and how the student was managed was not up to the individual ACO – or at least, not once we had created a few more resources. We needed to write a Handbook and supply guidance notes specifying how interviews were to be conducted. Above all, we needed standard letter templates and common forms for keeping records. The work of creating templates and forms was done by a particularly organised and systematic ACO Forum Chair and without her, things would have operated far less effectively. We learned by experience the vital importance of creating standard documentation, setting explicit time limits and providing step-by-step guides.

To go back to how the ACO system worked, all cases required an interview involving the student, the ACO and, occasionally, others. Interviews usually lasted about an hour and students could bring a support person, who had a limited role as described in the procedures. The ACO began the interview by investigating the grounds for determining whether a breach of academic regulations had occurred and, if so, the basis for deciding the degree of its severity. At some point (usually 20 minutes or so into the interview), the ACO confirmed or dismissed the breach and decided (but did not discuss) the level of the breach and what penalty was appropriate. At this point, the interview was suspended and the student had a short time to reflect on whether to continue or to adjourn to a later date. If the interview continued, the ACO then informed the student of the level of severity of the case (we used three levels), awarded a penalty, and recorded the decision using standardised templates. The ACO used the remaining time to provide additional guidance or to set requirements for additional guidance or tutorial support, to answer and resolve any immediate questions, and to ensure the student knew where and how he or she could appeal. Finally, the ACO used set procedures to inform others such as the referring teacher or an examination board, then returned the case to the normal assessment process. Consequences were then implemented.

As the reader can see and as we came to realise, this step-by-step process was complex and offered many opportunities for individualistic and therefore inconsistent decision-making. To guide and shape decisions about how to treat a case, we settled on four criteria, to be used in combination, plus one exacerbating factor. Criteria became especially important in determining the level of severity of the breach of regulations being managed. By using criteria, different people in different and complicated situations can come to more consistent and fairer conclusions. We asked ACOs to set the level of severity of the breach of academic regulations about using and acknowledging others’ work by looking at these criteria:

  • the amount of work that is not the student’s own.
  • the significance of unacknowledged work from a source within the student’s piece of work.
    For example, is the inauthentic or unoriginal work in an appendix or in the body of the work? In the ‘discussion’ or in another, less important section? And so on.
  • the stage in the student’s academic ‘journey’.
    Particular care is needed in the early stage/months of a student’s enrolment. Care is also needed with students in a ‘top up’ programme or who enter postgraduate study from a very different place of pedagogic practices and conventions.
  • a judgment as to whether or not the student has had sufficient opportunity to understand and apply the rules for use of sources.
    Evidence of ‘sufficient opportunity’ might include attendance on a study skills course, recorded attendance at a departmental briefing, documented feedback on previous work, evidence of correct use in the majority of the piece of work, and so on. A judgment about whether or not the student has had sufficient opportunity to understand and develop appropriate skills usually stands in as a proxy for a decision as to whether or not the breach was intentional. Both intent and level of understanding are academic judgments but, in practice, it seems to be easier to judge the latter than the former.
  • whether there is evidence of cheating, meaning any deliberate attempts to deceive or to claim benefits unfairly.
    Evidence of cheating is treated as an exacerbating factor for misconduct.

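The considerations above amount to an informal decision procedure. Purely as an illustration of how such criteria-based decisions can be made systematic, the sketch below combines criterion values into one of three severity levels and maps each level to a penalty. The weightings, level boundaries and penalty wordings here are invented for this example; Brookes’ actual tariff was a separate, negotiated document.

```python
# Illustrative sketch only: the scoring, level boundaries and penalties
# are hypothetical, not the Oxford Brookes tariff.

def severity_level(amount, significance, early_stage, had_opportunity, cheating):
    """Combine the criteria discussed above into a severity level 1-3."""
    score = 0
    score += {"small": 0, "moderate": 1, "large": 2}[amount]
    score += {"peripheral": 0, "central": 1}[significance]
    if not early_stage:      # more is expected later in the student's 'journey'
        score += 1
    if had_opportunity:      # student had sufficient opportunity to learn the rules
        score += 1
    if cheating:             # deliberate deception exacerbates the case
        score += 2
    # Map the combined score onto three levels (boundaries invented)
    if score <= 1:
        return 1
    if score <= 3:
        return 2
    return 3

# Hypothetical tariff mapping each level to a penalty
TARIFF = {
    1: "resubmit with corrected acknowledgement",
    2: "mark only the student's own work",
    3: "fail the unit of study",
}
```

The point of such a sketch is not that the decision should be automated, but that naming the criteria and their combinations makes it possible for different ACOs, in different situations, to reach comparable conclusions.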
A second development post-2005 was undertaken by Jon Appleton in conjunction with the ACO Forum and sought to match available penalties with the level of severity. We referred to the eventual document as a tariff and, even after agreement, there was ongoing discussion about it. Penalties in particular needed revision as practice demonstrated which were feasible.

Once agreed, the tariff offered sufficient flexibility to deal with highly varied cases yet constrained ACOs by giving them limited options as to levels of seriousness and suitable penalties. In cases where an ACO judged a breach as so serious that it was likely to warrant a penalty greater than failing the student in a unit of study (the most serious penalty available to an ACO), the case had to be referred to the University Disciplinary Committee. Referral was the usual outcome in instances of fraud (including commissioned work), for serial misconduct, and for serious misconduct in high-value, high-status work such as a doctoral thesis or published research. ACOs also referred students who used tactics designed to make detection unlikely, such as the use of back translation[3].

We discovered the power of practice and experience. After perhaps a dozen or so cases, most of the common circumstances and options had usually appeared, and we found that ACOs learned to deal with them using the criteria and applying the tariff with relative ease. We began to be reassured that we were seeing defensible, transparent decision-making rather than habits and hunches. Later in 2005, we published the criteria, the tariff and the impact of using them in decision-making (Carroll and Appleton 2005).

Keeping the issues alive; keeping practice within guidelines

The 2005 article was written with some optimism and, looking back, I sense an air of (incorrectly) believing we had sorted the issues. One thing that was clear around the time of publication is that we were only starting to realise how challenging it was to keep things moving in the direction we wanted. We needed to go on training and inducting, checking and promoting interventions, bringing new people on board and keeping the acquired knowledge and insights available for future ACOs and future students. New ACOs in particular needed help with criteria-based decision making. They needed training in using procedures and a rationale for why this was important. To that end, we strengthened induction into the role (including peer observation), and gave new ACOs access to a closed discussion board where they could share questions and exchange views on especially demanding cases. Regularly scheduled ACO meetings, at least one per term, and professional conversations with university experts helped create a shared approach. It also helped that the annual reports became more detailed and more likely to trigger evaluation and comment from the University committee on quality and standards, to which annual reports began to be submitted. In 2006, we analysed and wrote about the issue of on-going consistency, showing that where ACOs operated in isolation, consistency suffered (Carroll and Seymour, 2006).

The benefits from having numbers

The previous section referred to one consequence of having annual reports in which numbers were collected systematically across all schools and faculties, then combined and analysed annually. This turns out to be the final ‘lesson learned’ after the 2005 paper. Numbers, it turns out, were powerful motivators for change. When we had numbers, we could start asking questions of ourselves and our own practices as a University. Why were cases so prevalent in this school? Or at this level of study? Or amongst students with these characteristics? Why was the number of cases of collusion, for example, rising or falling year-on-year? Why so much misconduct this year? Why so few/many appeals? And so on. I think, looking back, that it was the ability to underpin new developments and preferred changes with some data, however imperfect or incomplete in the early years, that made as-yet untouched parts of the university begin to come into line with central initiatives. Numbers were persuasive and yet not too dangerous or risky for individual departments or schools. They prompted questions, not accusations.

With numbers, monitoring and review, I could see how we had really come full circle and locked management of plagiarism into general quality assurance principles. In my 2005 article, I wrote about a survey by the BBC in 2003 ‘where five of the 31 universities who responded stated they had not referred a single student for plagiarism in the previous year’. I then went on to say in the article that this ‘null result’ might be a result of different understandings of the word ‘referred’, but less than ten years later, we could not only say clearly what we meant by ‘refer’ but also challenge parts of the university that had reported the same. We now had open eyes, engaged teachers, relatively satisfied students and defensible ways of managing cases when they appeared. More or less.

I know there is more work to do, though I now do this work outside of Oxford Brookes and usually outside of the UK, where the ideas I have been writing about here are still regarded with considerable scepticism. I now go to these contexts with lessons learned, such as the following:

  • A policy which foregrounds learning can be managed effectively and can deal with cheating, too.
  • A complex issue like student plagiarism needs an equally complex and joined-up solution.
  • Policies are important but detailed procedures for managing cases are more important still.
  • If you want policies to be effective, you need teacher engagement and you won’t get teacher engagement unless you have procedures that are safe, trustworthy and perceived as fair.
  • To handle a large number of cases, you need a well-managed referral system with devolved decision making on the consequences for students who breach academic regulations.
  • Criteria-based decision making is more likely to result in consistent decisions – and more likely still if people meet regularly and discuss what the criteria mean in practice.
  • Keeping records is important for defensibility; having good data can drive the process forward.

And finally, the lesson is very clear that managing student plagiarism is a collaborative all-university affair. This follow-up is only my version and I suspect that others, telling the same story, would have stressed different aspects or highlighted different developments. I regret not having the chance to write this account collaboratively so I could ‘walk the talk’ of my last lesson: that no one person, department or occupational role can manage student plagiarism. It requires a shared, consensual and long-term approach if any university is to ‘move [managing plagiarism] to mainstream’.


[1] This service has since been reorganised and renamed. You can find all conference proceedings, including the 2006 publication referred to in the text at
[2] Here, originality is not referring to the work being unique, never before seen or novel. ‘Original’ refers to the work being derived from the student’s own efforts and from the student’s individual work. The submission ‘originated’ with the student and, where others’ efforts are used, these are acknowledged transparently. Much subsequent work went into defining and explaining these meanings, culminating in what I think is the most insightful definition of plagiarism, offered by Fishman (2009).
[3] ‘Back-translation’ refers to the practice of selecting material for misuse, running it through software such as Google Translate into another language and then, in turn, running the new text back into English. The resulting text, it is claimed, has been sufficiently altered to go ‘under the radar’ of text-matching software.



Bretag, T., Mahmud, S., Wallace, M., Walker, R., Green, M., East, J., James, C., McGowan, U., Partridge, L. (2011) Core elements of exemplary academic integrity policy in Australian higher education, International Journal for Educational Integrity, 7 (2), pp. 3-12.

Carroll, J. (2009) Plagiarism as a threat to learning: an educational response, in G. Joughin (ed) Assessment, Learning and Judgement in Higher Education, Dordrecht, Netherlands: Springer, pp. 115-131.

Carroll, J. (2007) A Handbook for Deterring Plagiarism in Higher Education, Oxford Centre for Staff and Learning Development, Oxford Brookes University, Oxford, UK.

Carroll, J. and Appleton, J. (2001) A Good Practice Guide for managing student plagiarism (, accessed 20 March 2014).

Carroll, J. and Appleton, J. (2005) Towards consistent penalty decisions for breaches of academic regulations in one UK university, The International Journal for Educational Integrity, 1 (1)

Carroll, J. and Seymour, D. (2006) The effect of a penalty tariff on consistent decision-making in cases of student plagiarism, in Duggan, F. (ed) 2nd International Plagiarism Conference 2006 Proceedings, JISC Plagiarism Advisory Service, Newcastle: Northumbria Learning Press.

Carroll, J. and Zetterling, C-M. (2009) Guiding Students away from Plagiarism / Hjälp studenterna att undvika plagiering, Learning Lab, Royal Technical University, Stockholm, Sweden.

Davis, M. and Carroll. J. (2009) Formative feedback within plagiarism education: Is there a role for text-matching software? International Journal for Educational Integrity, 5(2).

Devlin, M. (2006) Policy, preparation, and prevention: Proactive minimization of student plagiarism, Journal of Higher Education Policy and Management, 28, pp. 45-58.

Fishman, T. (2009) “We know it when we see it” is not good enough: toward a standard definition of plagiarism that transcends theft, fraud, and copyright, 4th Asia Pacific Conference on Educational Integrity (4APCEI), 28–30 September, University of Wollongong, NSW, Australia.

McCabe, D. (2005) Cheating among college and university students: a North American perspective, International Journal for Educational Integrity, 1 (1)

Macdonald, R. and Carroll, J. (2006) Plagiarism – a complex issue requiring a holistic institutional approach, Assessment and Evaluation in Higher Education, 31 (2) pp. 233–45.

Martin, J. and Haeringen, K. (2011) Can a policy change practice? An evidence-based approach to developing policy, The International Journal for Educational Integrity, 7 (2), pp. 13-22.

Morris, E. (2011) Policy Works: Recommendations for reviewing policy to manage unacceptable academic practice in higher education, York: Higher Education Academy, available at

Park, C. (2004) Rebels Without a Clause: towards an institutional framework for dealing with plagiarism by students, Journal of Further and Higher Education, 28 (3) pp. 291-306.

Price, M., O’Donovan, B., Rust, C. and Carroll, J (2008) Assessment standards: a manifesto for change, Brookes eJournal of Learning and Teaching, 2 (3),

Rust, C., O’Donovan, B. and Price, M. (2005) A social constructivist assessment process model: how the research literature shows us this could be best practice, Assessment and Evaluation in Higher Education, 30 (3), pp. 233-41.

Sutherland-Smith, W. (2010) Retribution, deterrence and reform: the dilemmas of plagiarism, Journal of Higher Education Policy and Management, 32 (1) pp. 5-16.

Walker, J. (1998) Student Plagiarism in Universities: What are we Doing About it?, Higher Education Research & Development, 17 (1), pp. 89-106.

Yeo, S. and Chien, R. (2007) Evaluation of a process and proforma for making consistent decisions about the seriousness of plagiarism incidents, Quality in Higher Education, 13 (2) pp. 187-204.

Jude Carroll

Jude Carroll worked for several decades at Oxford Brookes University, where she took on a wide range of responsibilities linked to the professional development of teachers, developing specialisms in managing student plagiarism and in effective teaching of international students. In 2009, Jude was awarded a National Teaching Fellowship. Between 2005 and 2008, Jude was Deputy Director of the ASKe centre for excellence in assessment standards. She is the author of A Handbook for Deterring Plagiarism in Higher Education (OCSLD, 2nd edition 2007), was co-editor of Teaching International Students: improving learning for all (Routledge 2005) and is the author of Tools for Teaching in an educationally mobile world (Routledge 2015). Jude currently works as an independent consultant with many universities around the world.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Double Blind Review
This paper has been subject to a double blind peer review by at least two reviewers.
