In the UK, there is growing pressure both within and across higher education institutions to make assessment standards and processes more transparent to students and other stakeholders. What follows is a brief account of our continuing quest to develop student understanding of assessment standards and processes. The evolution of our research and practice, undertaken at The Business School, Oxford Brookes University, is then related to a suggested framework of four generic approaches to developing student understanding of assessment standards and processes, culminating in a community of practice approach.
Berry O’Donovan, Chris Rust, and Margaret Price, with Jude Carroll
Arguably, in the past, academic communities were more stable, homogeneous and close-knit (Ecclestone, 2001). Academic programmes were less fragmented with components running over longer time periods during which understanding about all manner of things, including assessment standards, could be transferred between staff and students. Students gradually ‘came to know’ academic standards serendipitously through trial and error via feedback and informal discussions with tutors about the nature and values of their subjects of study. Assessment judgments relied on the tacit professional expertise of teachers who functioned as an elite ‘guild’ whose conclusions were inexplicable and inaccessible to the layman (O’Donovan et al., 2004).
Perhaps the past was never as this idealised memory describes, but, certainly in the UK, opportunities for informal sharing of assessment requirements and standards have been eroded since the 1980s. Academic communities themselves, and the often modular courses they offer, are more fragmented, with less contact time between students and tutors. One response has been to formalise and codify expectations and standards with increasing dependence on explicit systems and procedures such as learning outcomes and disciplinary benchmark statements (Winter, 1994). Those who invested energy in such measures a decade ago did so believing, or at least hoping, that explicit articulation would be sufficient explanation for students and that they would use this information to improve their learning and performance.
We certainly believed this when, in 1996/7, we created a criterion-referenced assessment grid for use across the School by students and staff (Price and Rust, 1999). The grid sets out grade descriptors against each of the dozens of criteria used to assess undergraduate work. We created it both to encourage the use of appropriate criteria and to provide students and markers with information about the standard to be applied for each criterion. Perhaps naively, we hoped this would lead to markers applying standards consistently and to students being guided in their learning by transparent statements of what was expected.
We were, however, disappointed. These first attempts to clarify and make standards transparent, while rationally sound, proved not to work in practice (Price and Rust, 1999). Students either did not refer to the information or could not satisfactorily interpret it. As a consequence, we initially sought to be more precise in the language we used, but, as Sadler argues, verbal descriptions of standards are context dependent and always somewhat vague or fuzzy. Such verbal descriptions are often a matter of degree, indicative of relative rather than absolute positions (Sadler, 1987). What the words ‘highly evaluative’ or ‘reasonably coherent’ mean will depend on the assessor’s expectations and knowledge of the context. It quickly became clear to us that each grade descriptor would have to be reworked anew for each assessment. We concluded that this ‘ever more specific’ route was never going to lead to a balance between precision and utility, a point later confirmed by Yorke (2002). As Snowden (2002) reasons, there is a cost (in terms of time and resources) to codifying knowledge, arising from the need to create a shared context, and this cost increases the more diverse an audience’s experience and language. So whilst this focus on explicit articulation could currently be considered the dominant logic of UK higher education, it is arguably of only limited use in today’s context of more fragmented programmes and overworked academics serving an increasingly diverse student population.
We still wanted to find a way of making assessment standards transparent to students, so we tried another tack: an alternative to ever greater precision in articulation is to engage students (or other stakeholders) actively in using and applying standards (e.g., through guided practice in assessing exemplar assignments), thus enabling students to make sense of standards within their own personal and cognitive constructs (Vygotsky, 1978). This more student-centred approach links with Lea and Street’s (1998) ‘academic socialisation’ approach to teaching students academic literacies, and recognises that meaningful knowledge of assessment standards is derived from tacit as well as explicit knowledge (O’Donovan et al., 2004). Tacit knowledge is considered to be experience based and communicated only through the sharing of experience, via active socialisation processes involving observation, imitation and practice (Nonaka, 1991). Sadler suggests that academic standards are highly tacit and reside ‘essentially in unarticulated form inside the heads of assessors, and are normally transferred, expert to novice, by joint participation in evaluative activity’ (1987, p.199). Consequently, we gave our students the opportunity to participate in evaluative activity by running optional workshops in which they discussed the meaning of marking criteria, actively marked exemplar assignments and then compared their assessments to those of tutors. Undertaken with large classes of 300+ students, this action research showed that we could make a difference in only a very modest amount of contact time. This was more encouraging.
Our findings, measured over three consecutive years, show students who undertake an optional 90-minute assessment marking workshop demonstrate a significant improvement in performance compared to those who do not, even though baseline comparison of the performance of participants and non-participants, undertaken prior to the intervention, shows no significant difference in performance (Rust et al., 2003). One year later, participants still demonstrate improved performance although, not surprisingly, with a minor reduction in the effect.
More recently, and with less obvious success (Price et al., in press), we have experimented with different ways of encouraging students to engage with assessment feedback, in the belief that this too should help to improve their future performance.
However, if it works, such a social constructivist approach should not be limited to pre-assessment processes, nor offered only to students; the entire assessment process, at every stage, could benefit from it. In a recent paper we argue for a ‘social constructivist process model’ of assessment (Rust et al., 2005) in which students and staff actively engage with every stage of the assessment process in order that they better understand its requirements, and the criteria and standards being applied.
We have not stopped thinking about better ways of sharing standards. Lave and Wenger (1991) propose that deep learning requires an environment that includes social relationships and co-participation, calling this ‘a community of practice’. Northedge also emphasises the importance of knowledge communities, accentuating the socio-cultural nature of learning, modelling learning as ‘acquiring the capacity to participate in the discourses of an unfamiliar knowledge community, and teaching as supporting that participation’ (2003, p.17).
Recently we have seen considerable interest in the concept of communities of practice within higher education, both as a context for deep learning and as a means of orientating students to the requirements of an academic community, but, disappointingly, there has been little interest in how, or even whether, such communities can be triggered and fostered. Indeed, Wenger (1998) suggests that higher education fails to cultivate thriving learning communities because learning within HE is largely viewed as an individual process, separated from the rest of an individual’s activities and resulting from an explicit ‘teaching’ process that has a beginning and an end. Findings from research undertaken by Parker (2002) also show a disturbing absence of student engagement with academic communities, revealing ‘a worrying number of students in young Hippocrates’ position – they had applied for university with high hopes and a very vague idea of what the course was for, and in their first and second year were still waiting for someone to show why they were there’ (p. 376). Enabling students to become active members of a learning community clearly requires more than the mere delivery of disciplinary knowledge, or even a student-centred approach (Northedge, 2003). Gibbs et al. (2004) highlight the need for students to engage as interactive partners in a learning community and to relinquish the passive role of ‘the instructed’ within processes controlled by academic experts.
This fourth approach moves learning practices on from a social constructivist approach, in which active social learning processes are devised to help students learn the literacies of a module or course, and, with a subtle shift of focus, centres on how students and teachers contribute to and become part of the ‘academic literacy’ practices of their disciplinary community (Lea and Street, 1998). Consequently, for students fully to ‘come to know’ assessment standards, they arguably must participate as partners in the assessment process, both formal and informal, ‘where participation, as a way of learning, enables the student to both absorb, and be absorbed in the culture of practice’ (Elwood and Klenowski, 2002, p. 246). Anyone absorbed into such a culture will acquire the tacit and explicit knowledge held by the community; more importantly, students who start off as peripheral participants can practise and imitate, and then move to fuller, active engagement.
Interrelationship of approaches
The diagram below sets out for consideration these four approaches to sharing standards defined by whether the inputs and activities involved are formal or informal and the extent of student involvement in the processes.
Figure 1: Approaches to developing student understanding of assessment standards
It is important to note that the different approaches to students ‘coming to know’ assessment standards dominant within each quadrant are not mutually exclusive. Movement around our proposed matrix (as depicted by the curved arrow) does not mean that when moving into a new quadrant, all preceding approaches and insights are redundant and superseded. The matrix, and in particular the second, third and fourth quadrants, can be viewed as a ‘nested hierarchy’ in that each quadrant encapsulates the understandings of the preceding approach. This mirrors Lea and Street’s hierarchical model of approaches to academic literacy (1998) and reflects our ‘continuing journey’ through the intricacies of assessment research and practice.
We are continuing our ‘journey’, focusing on spreading social constructivist initiatives in assessment practice and academic conventions, and on exploring the relatively uncharted territory of cultivating a community of assessment practice, both through socialisation activities and through the creation of bespoke ‘social learning space’.
Our research and practice has recently received a real boost through the award of a ‘Centre for Excellence in Teaching and Learning’ (CETL) supported by £4.5 million funding from the Higher Education Funding Council for England. (More information on CETLs can be found at: http://www.hefce.ac.uk/learning/TInits/cetl/).
Our Centre for Excellence, ASKe (Assessment Standards Knowledge Exchange), is based at the Business School at Oxford Brookes University and supported by the Oxford Centre for Staff and Learning Development. This exciting opportunity enables us to work with more students, more colleagues (both within and outside Brookes) and a wider spectrum of community stakeholders in the UK and internationally. Our goal, however, remains what it has been for over a decade: to improve student learning through sharing and applying an understanding of assessment standards. Now we also want to do this within a broad community of assessment practice, and to give more specific attention to the problems of student misconduct and plagiarism (Carroll, 2004).
At an operational level the work of ASKe is divided into three strands. Strand 1 focuses on disseminating proven practices in sharing assessment standards and in managing academic misconduct and plagiarism, and on encouraging their wider replication. As well as involving others in replicating proven practices, we want to engage others in developing further evidence-based practice supported by ASKe funding (Strand 2); this provides an avenue for many more colleagues to be involved. Finally, we are experimenting with ways of cultivating a broad community of practice (Strand 3). This last strand is, we admit, a bit of a long shot, and we would welcome the opportunity to hear from others working in a similar field or, indeed, from anyone interested in contributing to the development of the Centre.
The ASKe website can be found at http://www.business.brookes.ac.uk/aske.html or contact Rebecca Bryant, the ASKe Centre Manager on
This article was originally published in HERDSA News (Higher Education Development Society of Australasia) in 2005 and is reproduced with the kind permission of Roger Landbeck, editor.
Carroll, J. (2004), ‘Reliability, validity and fairness: enhancing the student experience in Scottish Higher education’, Quality Enhancement Themes, http://www.enhancementthemes.ac.uk/uploads/documents/Carrollpost-workshoppaper-revised.pdf.
Ecclestone, K. (2001), ‘I know a 2:1 when I see it: understanding criteria for degree classification in franchised university programmes’, Journal of Further and Higher Education, 25, 301-313.
Elwood, J. & Klenowski, V. (2002), ‘Creating Communities of Shared Practice: the challenges of assessment use in learning and teaching’, Assessment and Evaluation in Higher Education, 27, 243-256.
Gibbs, P., Angelides, P. and Michaelides, P. (2004), ‘Preliminary thoughts on a praxis of higher education teaching’ Teaching in Higher Education, 9, 183-194.
Lave, J. and Wenger, E. (1991), Situated Learning: Legitimate peripheral participation, Cambridge: Cambridge University Press.
Lea, M. and Street, B. (1998), ‘Student Writing in Higher Education: an academic literacies approach’, Studies in Higher Education, 23, 157-172.
Nonaka, I. (1991), ‘The Knowledge-Creating Company’, Harvard Business Review, Nov-Dec, 96-104.
Northedge, A. (2003), ‘Rethinking teaching in the context of diversity’. Teaching in Higher Education, 8, 17-32.
O’Donovan, B., Price, M. and Rust, C. (2004), ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education, 9, 325-335.
Parker, J. (2002), ‘A New Disciplinarity: communities of knowledge, learning and practice’, Teaching in Higher Education, 7, 373-386.
Price, M. and Rust, C. (1999), ‘The experience of introducing a common criteria assessment grid across an academic department’, Quality in Higher Education, 5, 133-144.
Price, M., O’Donovan, B. and Rust, C. (accepted for publication) ‘Putting a social-constructivist assessment process model into practice: building the feedback loop into the assessment process through peer-feedback’, Innovations in Education and Teaching International.
Rust, C., Price, M. and O’Donovan, B. (2003), ‘Improving students’ learning by developing their understanding of assessment criteria and processes’, Assessment and Evaluation in Higher Education, 28, 147-164.
Rust, C., O’Donovan, B. and Price, M. (2005), ‘A social constructivist assessment process model: how the research literature shows us this could be best practice’, Assessment and Evaluation in Higher Education, 30(3), 233-241.
Sadler, D. R. (1987), ‘Specifying and promulgating achievement standards’, Oxford Review of Education, 13, 191-209.
Snowden, D. (2002), ‘Complex acts of knowing: paradox and descriptive self-awareness’, Journal of Knowledge Management, 6, 100-111.
Vygotsky, L. (1978), Mind in society: the development of higher psychological processes. Cambridge, Mass.: Harvard University Press.
Wenger, E. (1998), Communities of practice: learning, meaning, and identity, Cambridge: Cambridge University Press.
Winter, R. (1994), ‘The problem of educational levels part 2: a new framework for credit accumulation in higher education’, Journal of Further and Higher Education, 18, 92-107.
Yorke, M. (2002), ‘Subject benchmarking and assessment of student learning’, Quality Assurance in Education, 10, 155-171.