Assessment standards lie at the heart of the assessment process, underpinning assessment strategy, design and marking. For students, assessment standards provide guidance for their learning and allow them to monitor their progress; ultimately, the standards will be used to judge their performance. Yet there has been little serious consideration of the nature of assessment standards or of their effective communication between stakeholders (primarily staff and students, but also employers, professional bodies and government). Meanwhile, amid media accusations of ‘dumbing down’, wider concerns have been expressed about assessment and assessment standards, not least in the Burgess Report, which identified a range of problems including marking reliability and the communication of assessment standards.
In response to these concerns the Assessment Standards Knowledge Exchange (ASKe), a Centre for Excellence in Teaching and Learning, brought together an international forum of experts on assessment in higher education (the Weston Manor Group) to consider issues around assessment standards and make proposals about changes to policy and practice. Members of the group (Appendix 1) were invited to participate based on their research and publication on assessment issues. The Group is clear that learners who engage in assessment are entitled to fair and transparent assessment practices and standards. However, it is the Group’s view that assessment standards reside in the practices of academic and professional communities and in the nature and level of the tasks in which learners engage, rather than being fully describable in statements of learning outcomes or marking criteria. In addition, the Group believes that currently there is an over-emphasis on the measurement of learning, often at the expense of assessment for learning. These fundamental evidence-based beliefs underpin a Manifesto for Change which the Group is putting forward with a view to stimulating debate across the sector and, more importantly, as a step towards bringing about necessary changes in policy and practice. The rationale and evidence for the tenets of the manifesto are presented in this paper.
The debate on standards needs to focus on how high standards of learning can be achieved through assessment. This requires a greater emphasis on assessment for learning rather than assessment of learning.
Knight (2002) reviewed and discussed the assessment process in UK higher education and concluded that summative assessment processes are ‘in disarray’ (p. 275). Confronting such disarray requires a holistic review and reappraisal of curricula and, in particular, of the role of assessment in HE learning. When assessment is considered in the literature, or amongst our colleagues, the focus is often on marking. This is arguably because assessment is seen primarily as an instrument of measurement rather than as a means to promote and support learning. However, the importance of assessment in student learning is well established:
‘Assessment is at the heart of the student experience.’ (Brown and Knight, 1994)
‘From our students’ point of view, assessment always defines the actual curriculum.’ (Ramsden, 1992)
‘Assessment defines what students regard as important, how they spend their time and how they come to see themselves as students and then as graduates…If you want to change student learning then change the methods of assessment.’ (Brown et al., 1997)
And yet there is considerable evidence that current assessment practices often do not set standards that are appropriate for the high-level complex learning that one would expect to be central to higher education:
‘The types of assessment we currently use do not promote conceptual understanding and do not encourage a deep approach to learning…Our means of assessing them seems to do little to encourage them to adopt anything other than a strategic or mechanical approach to their studies.’ (Newstead, 2002, p. 3)
‘Conventional assessment procedures are unable to do justice to the most important outcomes of any educational process worth the name.’ (Raven, 1991, p. 1)
‘Even when lecturers say that they want students to be creative and thoughtful, students often recognise that what is really necessary, or at least what is sufficient, is to memorise.’ (Gibbs, 1992, p. 10)
An intense focus on marking has led to a reduced emphasis on the other aspects of assessment that underpin its success as a vehicle for learning. Consequently, we need to emphasise the role of assessment in developing learning.
When it comes to the assessment of learning, we need to move beyond systems focused on marks and grades towards the valid assessment of the achievement of intended programme outcomes.
While acknowledging that assessment retains a role in measuring performance, it must also be acknowledged that our current systems, which focus on marks and grades, are not working. Rust (2007) identifies a number of embedded beliefs and practices that evidence the fallibility of marking systems:
- Belief that it is possible to distinguish the quality of work to a precision of one percentage point (Elander and Hardman, 2002)
- Belief that double-marking will ensure fairness and reliability
- Belief that consistency can be achieved through conformity and simple numerical rules (e.g., level 1 essay 3,000 words, level 3 essay 5,000; or no more than two pieces of assessment per module)
- The combination of scores, which obscures the different types of learning outcome represented by the separate scores
- The distortion of marks by the type of assessment (e.g., coursework compared to examination) and the actual subject discipline/s studied (Yorke et al., 2000; Bridges et al., 2002)
- The distortion of resulting degree classifications by the application of idiosyncratic institutional rules (e.g., Armstrong et al., 1998).
Arguably, reliability in marking has been pursued so vigorously within HE assessment practices that it has had a detrimental effect on the role of assessment standards in promoting and developing learning:
‘This quest for reliability tends to skew assessment towards the assessment of simple and unambiguous achievements, and considerations of cost add to the skew away from judgements of complex learning.’ (Knight, 2002, p. 278)
‘…summative judgement itself is the problem.’ (Burgess, 2007, p. 8)
‘Many research findings indicate a declining use of deep and contextual approaches to study as students progress through their degree programmes.’ (Watkins and Hattie, 1985; Kember et al., 1997; Zhang and Watkins, 2001)
‘…students become more interested in the mark and less interested in the subject over the course of their studies.’ (Newstead, 2002, p. 2)
Consequently, to support the achievement of high-level learning outcomes we need to recognise the difficulties inherent in our marking systems and address the imbalance between validity and reliability through an increased emphasis on assessment validity. Validity and student achievement can be enhanced by assessment strategies that are programme focused (Havnes, 2007).
The limits to which standards can be articulated explicitly must be recognised, since ever more detailed specification and the striving for reliability all too frequently diminish the learning experience and threaten its validity. There are important benefits of higher education which are amenable neither to the precise specification of standards nor to objective assessment.
The role and effectiveness of assessment standards in supporting assessment for learning is underpinned by a shared understanding of the standards. As Sadler points out, an indispensable condition for improvement in student learning is that ‘the student comes to hold a concept of quality roughly similar to that held by the teacher’ (Sadler, 1989, p. 121). However, the dominant model of sharing standards in higher education relies on explicit criteria and level descriptors, which have been shown to be inadequate because many aspects of quality cannot be communicated through explicit criteria (Rust et al., 2003). Admittedly, there are some standards that are amenable to explicit articulation. Sadler labels these as regulative and logical criteria, where ‘standards can be defined in terms of well-defined outcomes’ (1987, p. 70). However, he also identifies prescriptive and constitutive criteria, which are less dualistic and refer to achievement in ‘matters of degree’, and ‘It would be difficult or impossible to guess the educational level at which they are applicable…’ (Sadler, 1987, p. 70). Such types of criteria are often interdependent and can only be assessed using holistic/professional judgement (Sadler, 2008). It is, however, these types of complex and pluralistic outcomes that are arguably central to higher education. Such complex criteria and standards are socially constructed and based on the sharing of tacit and explicit knowledge over time (O’Donovan et al., 2004; Rust et al., 2005). Whilst these important benefits of higher education are not amenable to precise articulation, this should not mean that we abandon their pursuit. Instead we must rely on other ways of communicating and evidencing the standards involved in their achievement.
Assessment standards are socially constructed so there must be a greater emphasis on assessment and feedback processes that actively engage both staff and students in dialogue about standards. It is when learners share an understanding of academic and professional standards in an atmosphere of mutual trust that learning works best.
‘Coming to know’ assessment standards requires the development of tacit knowledge, which involves the sharing of experience and ‘making sense of the world’ and as such is a social and collaborative activity (Vygotsky, 1978). For both staff and students, an understanding of the assessment standards applied to high-level complex learning can only be developed through active engagement with members of the disciplinary community that ‘holds’ those standards. We know that some processes, such as feedback, exist to support the development of this understanding, but passive receipt of feedback has little effect on future performance (Fritz et al., 2000). Active engagement is required to foster in-depth understanding, and research shows that dialogue, trust and participatory relationships are key elements of engaging students with assessment feedback (ESwAF FDTL, 2007). Staff may more readily have the opportunity to engage actively with standards through marking processes involving discussions with colleagues, but students’ opportunities to engage purposefully with the community in which the standards are held are less obvious. However, there is clear evidence that, where students have the opportunity and take it up, it is active involvement and participation that fosters engagement, understanding and achievement:
‘…participation, as a way of learning, enables the student to both absorb, and be absorbed in the culture of practice.’ (Elwood and Klenowski, 2002, p. 246)
‘The only common factor in a study of departments deemed excellent in both research and learning and teaching is high levels of student involvement.’ (Gibbs, 2007)
‘The most significant factor in student academic success is student involvement fostered by student–staff interactions and student–student interactions.’ (Astin, 1997)
Explicit articulation on its own is not enough to share assessment standards or to engage students with the community in which those standards are established, and without this engagement students are less likely to achieve academic success.
Active engagement with assessment standards needs to be an integral and seamless part of course design and the learning process, allowing students to develop their own internalised conceptions of standards and to monitor and regulate their own learning.
Developing an understanding of complex standards through a process of socialisation requires time. Consequently, important aspects of complex, high-level learning outcomes can only be achieved when students are allowed time to come to know the standards in use by the community. Tacit understandings of standards are developed over time through practice, observation and imitation (O’Donovan et al., 2004). Slowly learnt academic literacies need rehearsal and practice throughout a programme (Knight and Yorke, 2004), requiring students to engage as interactive partners in a learning community and relinquish the passive role of ‘the instructed’ within processes controlled by academic experts (Gibbs et al., 2004). To support the slow learning that leads to high-level complex learning there needs to be integrated and coherent progression based on programme outcomes. Recent research indicates that a focus on programme outcomes and standards, rather than on the incremental outcomes of a fragmented programme, gives students a greater sense of the programme as a whole, so that they more readily internalise disciplinary standards. This in turn supports academic success, as students are more likely to engage in complex and valid learning (Havnes, 2007).
Assessment is largely dependent upon professional judgement and confidence in such judgement requires the establishment of appropriate forums for the development and sharing of standards within and between disciplinary and professional communities.
Assessment standards underpin the assessment process and yet are often difficult to articulate explicitly, requiring active participation over time to develop deep understanding. However, their significant role in both guiding learning and measuring achievement requires safeguards and mechanisms for consistency. Such safeguards could be delivered through the greater professionalisation of university teachers, but there has been slow progress in this regard; in particular, limited attention has been paid to professional assessment practice. In addition, changes in higher education (e.g., massification, reduced units of resource, expectations of increased staff productivity) threaten the ‘health’ of disciplinary communities and their ability to share and exemplify professional judgement. It could be argued that in the UK the external examiner system protects and promotes the consistency of assessment standards, but Newstead and Dennis (1994) show that reliance on external examiners to mediate standards is misplaced. Consequently, processes must be established that will provide confidence in those assessment standards that depend upon professional judgement. Assuring standards based on professional judgement entails dialogue and sharing practice across disciplinary communities.
The Weston Manor Group believes that the current dominant logic of the explicit approach to assessment standards is inadequate and must be challenged. The current single-minded quest for reliability is impeding student learning when our main concern should be enabling students to engage in and achieve high-level, complex learning. Active involvement by staff and students in the learning community is essential to reach common understandings of assessment standards. To achieve this there will need to be a review and evaluation of the allocation of time and resources within learning, teaching and assessment in higher education and the establishment of dialogic processes to assure standards. Such changes may be difficult but, the Weston Manor Group believes, they are both necessary and, with the right resourcing, achievable.
Armstrong, M., Clarkson, P. and Noble, M. (1998), Modularity and credit frameworks: the NUCCAT survey and 1998 conference report, Newcastle-upon-Tyne, Northern Universities Consortium for Credit Accumulation and Transfer.
Astin, A. (1997), What matters in college? Four critical years revisited. San Francisco: Jossey Bass.
Bridges, P., Cooper, A., Evanson, P., Haines, C., Jenkins, D., Scurry, D., Woolf, H. and Yorke, M. (2002), ‘Coursework marks high, examination marks low: discuss’, Assessment and Evaluation in Higher Education, vol. 27, no. 1, pp. 35-48.
Brown, G., Bull, J. and Pendlebury, M. (1997), Assessing student learning in higher education. London: Routledge.
Brown, S. and Knight, P. (1994), Assessing Learners in Higher Education. London: Kogan Page.
Burgess, R. (2007), Beyond the Honours Degree Classification: Burgess Group Final Report. Universities UK.
Elander, J. and Hardman, D. (2002), ‘An application of judgement analysis to examination marking in psychology’, British Journal of Psychology, 93, pp. 303-328.
Elwood, J. and Klenowski, V. (2002), ‘Creating communities of shared practice: the challenges of assessment use in learning and teaching’, Assessment and Evaluation in Higher Education, 27, pp. 243-256.
ESwAF FDTL (2007), ‘Final Report’. Available online at: https://mw.brookes.ac.uk/display/eswaf/Home.
Fritz, C.O., Morris, P.E., Bjork, R.A., Gelman, R. and Wickens, T.D. (2000), ‘When further learning fails: stability and change following repeated presentation of text’, British Journal of Psychology, 91, pp. 493-511.
Gibbs, G. (1992), Improving the quality of student learning. Bristol: TES.
Gibbs, G. (2007), ‘Departmental leadership of teaching’, presented at the Oxford Learning Institute research seminar series, University of Oxford, Hilary term.
Gibbs, P., Angelides, P. and Michaelides, P. (2004), ‘Preliminary thoughts on a praxis of higher education teaching’, Teaching in Higher Education, 9, pp. 183-194.
Havnes, A. (2007), ‘What can feedback practices tell us about variation in grading across fields?’ Presented at the ASKe Seminar Series, Oxford Brookes University, 19th September.
Kember, D., et al. (1997), ‘Case studies in improving teaching and learning from the action learning project’, Action learning project, Hong Kong.
Knight, P. T. (2002), ‘Summative assessment in higher education: practices in disarray’, Studies in Higher Education, vol. 27, no. 3, pp. 275–286.
Knight, P. and Yorke, M. (2004), Learning, Curriculum and Employability in Higher Education. London: Routledge.
Newstead, S. (2002), ‘Examining the examiners: why are we so bad at assessing students?’, Psychology Learning and Teaching, vol. 2, no. 2, pp. 70-75.
Newstead, S. E. and Dennis, I. (1994), ‘Examiners examined: the reality of exam marking in psychology’, The Psychologist, 7, pp. 216-19.
Nonaka, I. (1991), ‘The knowledge-creating company’, The Harvard Business Review, November-December, pp. 96-104.
O’Donovan, B., Price, M. and Rust, C. (2004), ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education, 9, pp. 325-335.
Ramsden, P. (1992), Learning to teach in higher education. London: Routledge.
Raven, J. (1991), The tragic illusion: educational testing. New York: Trillium Press.
Rust, C., Price, M. and O’Donovan, B. (2003), ‘Improving students’ learning by developing their understanding of assessment criteria and processes’, Assessment and Evaluation in Higher Education. 28, pp. 147-164.
Rust, C. (2007), ‘Towards a scholarship of assessment’, Assessment and Evaluation in Higher Education, vol. 32, no. 2, pp. 229-237.
Rust, C., O’Donovan, B. and Price, M. (2005), ‘A social constructivist assessment process model: how the research literature shows us this could be best practice’, Assessment and Evaluation in Higher Education, vol. 30, no. 3, pp. 231-240.
Sadler, D. R. (1987), ‘Specifying and Promulgating Achievement Standards’, Oxford Review of Education, 13, pp. 191–209.
Sadler, D. R. (1989), ‘Formative assessment and the design of instructional systems’, Instructional Science, 18, pp. 119-144.
Sadler, D. R. (2008), ‘Indeterminacy in the use of preset criteria for assessment and grading’, Assessment and Evaluation in Higher Education, April.
Vygotsky, L. S. (1978), Mind in society: the development of higher psychological processes. Cambridge, MA: Harvard University Press.
Watkins, D. and Hattie, J. (1985), ‘A longitudinal study of the approaches to learning of Australian tertiary students’, Human Learning, 4, pp. 127-41.
Yorke, M., Bridges, P and Woolf, H. (2000), ‘Mark distributions and marking practices in UK higher education; some challenging issues’, Active Learning in Higher Education, vol. 1, no. 1, pp. 7-27.
Zhang, L. F. and Watkins, D. (2001), ‘Cognitive development and student approaches to learning: an investigation of Perry’s theory with Chinese and US university students’, Higher Education, 41, pp. 236-261.
Appendix 1: Weston Manor Group
- Prof Tom Angelo
- Prof Trudy Banta
- Dr Simon Barrie
- Prof Sally Brown
- Miss Cordelia Bryan
- Dr Colin Bryson
- Ms Jude Carroll
- Prof Sue Clegg
- Dr Linda Drew
- Prof Graham Gibbs
- Dr Anton Havnes
- Dr Mary Lea
- Dr Janet Macdonald
- Prof Ranald Macdonald
- Mrs Debra Macfarlane
- Dr Susan Martin
- Prof Marcia Mentkowski
- Dr Stephen Merry
- Dr David Nicol
- Prof Andy Northedge
- Prof Lin Norton
- Ms Berry O’Donovan
- Dr Thomas Olsson
- Dr Susan Orr
- Mr Paul Orsmond
- Prof Margaret Price
- Prof Phil Race
- Mr Clive Robertson
- Mr Mark Russell
- Dr Chris Rust
- Prof Gilly Salmon
- Prof Kay Sambell
- Prof Brenda Smith
- Prof Stephen Swithenby
- Prof Mantz Yorke