Institutional research (IR) forms an important part of the evidence base for decision-making in universities. Yet we suggest that it is often isolated research that neither draws upon existing educational research nor opens itself up for use by the broader educational research community. In part, this is a result of its particular local context and the limited connections between internal institutional research, policy and practice and the wider field of educational research. In particular, this often means that IR is neither informed by, nor informing, wider higher education (HE) research, policy and practice. We suggest that there is a need to create more robust links between institutional research and mainstream research, policy and practice.
One of us is an institutional researcher (Brown) who also conducts more general higher education research; the other (Jones) is a higher education researcher. We argue that institutional research is often specifically practice-based for a number of reasons, including the perceived sensitivity of the data, a lack of focus on publication, and the need for quick turnaround and clear, simple answers. Thus, there is sometimes a lack of opportunity for institutional research to be informed by the wider theoretical and empirical perspectives available in published research. In addition, the wider field of educational research does not always have the benefit of access to valuable institutional research. Our challenge, therefore, was to identify the problems that arise in grounding institutional research in literature, theory and empirical evidence.
The institutional research landscape
The issue of institutional research is a complex one and this is intensified because it sits within a body of research that is relatively new. Higher education research is often undervalued because it is an emerging and eclectic field with no history to draw upon (Locke, 2009; Scott, 2000; Shattock, 2003; Teichler, 2000). As Teichler suggests:
…if research on higher education were respected in principle as a valuable source of information for the field, its researchers could embark with greater ease on a dialogue with practitioners and with undoubted benefit to both. (Teichler, cited in Locke, 2009)
Locke (2009) suggests that policy-makers often perceive research in HE as providing ‘today’s answers to yesterday’s questions’ (p.120). Similarly, HE policy-makers consider educational research to consist of ‘decision making on personal experiences and “arm chair” analyses’ (p.120). In addition, it can be argued that this type of research lacks stability and quality (Locke, 2009; Scott, 2000; Shattock, 2003; Teichler, 2000). This is partly a result of the very contextual or situated nature of education itself and partly a result of its eclectic methods (also, arguably, a strength). Although the finger can be pointed at policy-makers for not making good use of educational research, the blame also lies with researchers themselves. HE educational researchers sometimes distance themselves from the policy community for fear of ‘compromising their academic autonomy’ (Locke, 2009, p.120). As a result, connections between research and policy in HE are minimal; indeed, Locke describes HE policy as a ‘research-free zone’ (p.119) lacking an evidence base, and argues that stronger associations need to be established across the research, policy and practice nexus. Along similar lines, Terenzini (2012) believes that those working in the field of IR should be fully aware of the ‘external worlds…and the forces shaping what is…happening on our campuses’ (p.145). One step towards this would be to ensure that the wider literature in HE research is taken into consideration by institutional researchers; much can be learned from other work in the field (Terenzini, 2012).
The ways in which institutional research is organised and carried out vary across the higher education sector. Some higher education institutions have well-developed central IR teams, while others prefer a more devolved model of IR (Longden and Yorke, 2009). Central departments are often responsible for producing, analysing and reporting data relating to the student experience, including retention, progression and completion. In addition, they benchmark their institution against others in terms of performance and league tables, and report to internal and external stakeholders. Departments such as these tend to provide highly valued quantitative data; however, they are often unable to provide in-depth analysis of more complex issues. There are a number of reasons for this. Conducting extensive literature reviews and dealing with complex qualitative data can be very time-consuming and can produce answers that are nuanced and often open up yet more questions. A further issue is that, in some cases, institutional researchers do not have a background in educational research and so may not be familiar with the literature or key issues.
In addition, analysis of large student surveys can be exceptionally time-consuming. As Zaitseva and Stewart (2014) note, ‘processing and coding thousands of brief, disjointed and anonymous comments…is an onerous task’ (p.3). IR is often ‘short order’, with clear answers required in a short timeframe. What is not immediately obvious is how much these data drive decisions within organisations (Chester, 2014), which opens up a whole series of questions as to the ways in which IR is used. Moreover, the audience for institutional research is often managers and senior executives who have limited time and require straightforward answers that can be used to inform policy decisions.
Within our own university, institutional research is conducted using a hybrid model. In addition to a specific Strategy and Planning department, which undertakes data analysis and reporting for internal and external purposes, there is a small institutional research team that undertakes in-depth research tasks, approved by a central policy committee, to answer specific topical questions. Institutional research within our own institution provides evidence at a local level, which is a valuable mechanism for enhancing policy and practice within the university. For example, our projects have included: exploring the experiences of students who work while studying; investigating why some students exit with an ordinary degree; examining how the university’s flexible learning policy is being put into practice; and researching the taught postgraduate student experience. The research outputs included recommendations that have enhanced policy and practice in the areas of learning and teaching and the student experience within the institution. This supports Swing’s (2009) view that IR can raise awareness within institutions, test myths and hypotheses identified in other literature, lead institutional change, build trust in the evidence base, enhance knowledge, encourage transformative practice and improve on prior performance.
IR can support universities in generating evidence bases, which in turn will support them in meeting a number of their key internal strategic and external objectives. Within our own institution, IR supports particular drivers, including mission statements, identified Key Performance Indicators (KPIs), Scottish Funding Council (SFC) outcome agreements, internal strategy documents and student experience initiatives. External drivers include supporting the Quality Assurance Agency (QAA) Quality Enhancement Themes and the UK Quality Code for HE. However, the pathways between research findings and their enactment in practice are sometimes tenuous. Terenzini (2012) notes ‘it is recognizing that research is one compromise after another, and finding the balance between rigorous, thoughtful research and its practical and prudent application remains at the heart of both IR’s business and its challenging over the next decade’ (p.147).
Problematising institutional research
Within our own institution it is becoming gradually more prevalent to link IR to the wider literature on HE research; however, this has not always been the case. IR often requires brief, untheorised findings to meet specific demands, and in general it is not always well grounded in the literature or theoretically framed (Locke, 2009). While Longden and Yorke (2009) query the worth of IR, asking ‘the university is awash with data, but is it awash with institutional intelligence?’, Terenzini (2012) suggests that this worth could be enhanced by the use of ‘multiple forms and sources of intelligence’ (p.138). This additional depth of knowledge would ensure that IR is not just about ‘somebody with an opinion’ (p.142). In particular, he notes that institutional researchers must have ‘some knowledge of at least some of the research literature….and the more the better’ (p.147). A further issue with IR is that, because it is often not published, it is not subject to the independent scrutiny that peer-reviewed published research receives, and so cannot always claim to be rigorous.
A lack of wider knowledge does not prevent IR from assisting in the formation of local policy (although such policy could be significantly enhanced by it); it does, however, prevent IR from informing broader policy-making in HE. Although empirically based and producing relevant findings, the outcomes of institutional research are often undervalued and overlooked. Arguably, this is due to the localised nature of the research and its intended audience, which excludes the research and its findings from influencing the broader educational research conversation. As such, ‘a failure to know thyself is hurting UK universities…’ (Gill, 2008, p.1).
Acknowledging and contributing to the wider literature has the potential to strengthen IR. Watson (2008), however, asserts that there is a dark side to institutional research, identifying six ‘traps’ that are worthy of further consideration:
- over simplification;
- the lure of change;
- benchmarking for comfort or for challenge;
- following the crowd;
- reputation over quality; and
- only good news.
A serious look at IR (what informs it, its scope, and the uses to which it is put) would be one step towards avoiding some of these traps.
Joining the dots?
We suggest that there is a disconnect in the policy, practice and research loop, in that policy and practice in higher education are not always informed by research, and research does not always cater to the needs of policy-makers and practitioners (Figure 1). Thus we argue that there is a need to join the dots between higher education research, institutional research, policy and practice. Policy in higher education is not always well informed by good quality research (either mainstream or institutional), and decisions are sometimes made in the absence of empirical and theoretical evidence (Jones, 2014). Moreover, we suggest that institutional research does not always draw upon mainstream research to provide a framework for its findings. In the same way, mainstream research cannot always access the valuable findings produced in institutional research.
Figure 1: The disconnect between Policy, Practice and Research
Exploring the ways in which IR is grounded in the literature, the uses to which it is put, its accessibility to the ‘outside world’ and the reasons behind any inaccessibility, together with examining the ways in which policy decisions are informed by evidence and the forms this evidence takes, would be big steps towards joining some of the dots.
References

Chester, J. (2014). Five Pillars of Institutional Research. Higher Education Institutional Research Network (HEIR), Opinion Piece Series (3), March 2014.
Gill, J. (2008). Plenty of data, but little insight. Times Higher Education, 26 June.
Jones, A. (2014). Leading University Teaching: Exploring the uses of higher education research, SRHE Research Report. http://www.srhe.ac.uk/research/srhe_funded_projects.asp
Locke, W. (2009). Reconnecting the Research-Policy-Practice Nexus in Higher Education: ‘Evidence-Based Policy’ in Practice in National and International Contexts. Higher Education Policy, 22, pp. 119-140.
Longden, B. and Yorke, M. (2009). Institutional Research: What Problems Are We Trying to Solve? Perspectives: Policy and Practice in Higher Education, 13(3), pp. 66-70.
Scott, P. (2000). Higher Education Research in the Light of Dialogue between Policy-Makers and Practitioners. In Teichler, U. and Sadlak, J. (eds). Higher Education Research: Its Relationship to Policy and Practice, Oxford: Pergamon and IAU Press, pp.123-147.
Shattock, M. (2003). Research Administration and University Management: What can research contribute to policy? In Begg, R. (ed). The dialogue between Higher Education Research and Practice, Dordrecht: Kluwer, pp.55-66.
Swing, R. (2009). Institutional Researchers as Change Agents. New Directions for Institutional Research, (143).
Teichler. U. (2000). The Relationships between Higher Education Research and Higher Education Policy and Practice: The Researchers’ Perspective. In Teichler, U. and Sadlak, J. (eds). Higher Education Research: Its Relationship to Policy and Practice, Oxford: Pergamon and IAU Press, pp.3-34.
Terenzini, P.T. (2012). On the Nature of Institutional Research Revisited: Plus ça Change…? Research in Higher Education, 54, pp. 137-148.
Watson, D. (2008). The Dark Side of Institutional Research. Keynote Address – Institutional Research Conference, Beyond the Hinterlands. Southampton Solent University, June 2008. Retrieved 17 November 2014 from: http://eprints.ioe.ac.uk/7055/1/Watson2009TheDark71.pdf
Zaitseva, E. and Stewart, M. (2014). Triangulation in Institutional Qualitative Data Analysis: Clarity from Viewing Through Multiple Lenses? Higher Education Institutional Research Network (HEIR), Opinion Piece Series (2), March 2014.