A study of student engagement in first-year undergraduate science modules through weekly self-report

Authors

Peter Grebenik and Andrew Rosenthal

Abstract

An anonymous volunteer group of first-year undergraduate students recorded their private study time through a self-reporting computerised spreadsheet. Average inputs were considerably lower than expected from the notional 10 hours of effort per CAT point. While preparation for tests showed increased study time with greater test point value, writing reports on practical classes took the same time input regardless of assessment worth. At the end of the study the participants were identified as able and motivated individuals. The potential of using such a spreadsheet to identify and intervene with failing students is discussed.

Biographies

Peter Grebenik is an inorganic chemist and long-serving member of the Brookes’ academic community. His interests and expertise in recent years have centred around using Excel as a multi-purpose data management tool.

Andrew Rosenthal is a Food Scientist with research interests in the functional properties of food components and how they contribute to food texture. His pedagogic interests originate from a passion for e-learning and the teaching of transferable skills and research methods to science undergraduates.

Contact: Department of Sport and Health Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Gipsy Lane, Oxford, OX3 0BP, ph: 01865 483258.

Introduction

There is considerable disagreement within the literature as to what the term 'student engagement' comprises. The question of who or what students are engaging with varies between publications, yet the term 'student engagement' is used generically. Furthermore, as measurable variables are often not underlying causes, many researchers have had to postulate as to how experimental factors influence 'student engagement' (Trowler, 2010). To clarify this confusion of meaning, Trowler (2010) defines student engagement as being:

“…concerned with the interaction between the time, effort and other relevant resources invested by both students and their institutions intended to optimise the student experience and enhance the learning outcomes and development of students and the performance, and reputation of the institution” (Trowler, 2010, p.3)

While the student experience and learning outcomes are central in this definition, motivations are not exclusively student centred, with measures of retention and completion rates creating politically sensitive material with which to undertake interdepartmental and institutional comparisons (Krause, 2005).

For students to engage in their learning, they need to become involved beyond just attending timetabled classes and activities (behavioural engagement). Ideally their involvement would go beyond an interest or enjoyment in the subject (emotional engagement) to a point at which they relish the challenge of knowing about it (cognitive engagement) (Bloom, 1956). Trowler (2010) points out that while non-engagement suggests a withdrawal, there can be both positive and negative attitudes towards engagement. For example, a negative cognitive engagement might be undertaking an assignment but redefining the issues in a way that suits the student. In many ways a cognitive level of engagement seems very much an ideal, but one which is hard to recognise in many undergraduate science or technology students. Indeed, working with engineering students, Kolari and co-workers identified wide variation in the time which students dedicate to their studies. This variation is to some extent cultural, with students in Asian countries working around 50 hours per week while many in European countries undertake only half the expected number of hours (Kolari, Savander-Ranne & Viskari, 2008).

The majority of published work on student engagement originates in Australasia and North America. In the USA, large-scale engagement studies are regularly undertaken at high school level, typically involving 90,000 participants in over 100 institutions across more than 25 states (see for example Yazzie-Mintz, 2007). This exercise has generally not been repeated at university or college level in the USA, despite the recognition that "student engagement is considered an important predictor of student achievement" within higher education (Handelsman et al., 2005, p.184).

A number of time-on-task studies have been undertaken in higher education to evaluate student activities. Innis and Shaw (1997) used diaries over the course of a week to determine the time spent by students on various aspects of their studies. By far the largest single component of student study time (35.5%) was taken up by assessed work; the next largest component (a mere 12.8%) was time spent in classes.

Despite this apparent importance of assessed tasks for student engagement, most studies in higher education take a holistic approach, trying to identify the linkages between student engagement and all aspects of student learning, with no particular focus on assessed tasks (Carini, Kuh & Klein, 2006).

The current study used self-reported study time, considered alongside the grades obtained, for each item of assessed work in a variety of modules studied across the School of Life Sciences during the first semester of the academic year 2008-9. The initial key objective was to identify which types of assessment task most engaged students. Secondary objectives were to examine student perception of their own engagement, effort and learning.

The intention was to be able to rationalise student activity and learning in relation to student effort and engagement. From the student experience point of view we had hoped to identify where economy of effort pays. From a student learning perspective we hoped to identify assessed activities which most engaged students and led to understanding and learning.

Methodology

Ethics approval was obtained from the university ethics committee. Particular care was taken to maintain anonymity during the data collection, with an assurance that the identity of individual subjects would not be released until after module examination boards, thus preventing any potential for prejudicial treatment of students as a result of the data collected.

University lecturers who ran first-year Life Science modules during the first semester of 2008-9 were approached and asked for details of each of their assessment tasks, including the weighting and deadline for submission. A spreadsheet was created which identified students, using their student number as a primary reference. This reference was used to query the student management database, which identified which modules each of the anonymous subjects was taking. Rows were generated on the spreadsheet for each subject for each assessment task they were undertaking. Each module was also given a row for general reading and work on non-assessed tasks. The columns of the spreadsheet were set up to represent the weeks of the semester.

Participating students were recruited through an email and announcements in lectures. For ethical reasons students were told that participation was entirely voluntary, and that it offered a financial reward at the end of the study. Students were given a detailed explanation of the study, including the ethics and anonymity arrangements, and were told that if they participated by completing the time logs, it would be assumed that they were consenting to the conditions laid out in the information sheets provided.

Participants were given access to a personalised spreadsheet which enabled them to log the number of minutes they had spent on each assessment task during each week of the semester. Effectively, each student saw a personalised spreadsheet listing the modules they were taking and the assessed tasks of those modules. The columns were numbered as the weeks of the semester, and the matrix of cells within the sheet allowed the students to self-report how many minutes they spent on each task during each week. Obviously, as deadlines passed, work was no longer undertaken for those assessed tasks; nevertheless, the opportunity existed for students to report study time for tasks in the distant as well as the imminent future.
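
As an illustration only, here is a minimal sketch of this logging matrix in pandas; the module codes and task names are hypothetical placeholders, and the study itself used an Excel workbook rather than the code below:

```python
# Sketch of the per-student logging matrix described above:
# one row per assessed task (plus a general-study row per module),
# one column per week of the 12-week semester.
import pandas as pd

weeks = [f"Week {w}" for w in range(1, 13)]

tasks = [
    ("MODL1001", "Practical report 1"),
    ("MODL1001", "MCQ test"),
    ("MODL1001", "General reading / non-assessed work"),
    ("MODL1002", "Essay"),
    ("MODL1002", "General reading / non-assessed work"),
]
index = pd.MultiIndex.from_tuples(tasks, names=["Module", "Task"])

# The matrix of cells the student fills in: minutes per task per week.
log = pd.DataFrame(0, index=index, columns=weeks)

# Example entry: 90 minutes on the practical report during week 3.
log.loc[("MODL1001", "Practical report 1"), "Week 3"] = 90
print(log)
```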

Every time participants had access to the spreadsheet they were given the option to withdraw from the study. Whenever an entry was made to any student spreadsheet, changes were logged on a master spreadsheet held in a secure and secret location. 
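
A sketch of how such change logging might work is given below; the CSV layout and field names are our illustration, not the authors' implementation:

```python
# Sketch of the audit trail described above: every change to a student
# sheet is appended as a timestamped entry to a master change log.
import csv
from datetime import datetime, timezone

def log_change(master_path: str, student_no: str, task: str,
               week: int, minutes: int) -> None:
    """Append one timestamped entry to the master change log (CSV)."""
    with open(master_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), student_no, task, week, minutes]
        )
```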

Results and discussion

Whether it is perceived as a carrot or a stick, the literature suggests that students are motivated by assessment. If this is true, then it might follow that the point value of each assessment has a bearing on the effort that is put into each task. Figure 1 plots the assessment value of each coursework task against the average reported time in minutes spent on that task. There is a good deal of scatter within the data, which suggests that this premise is untrue.

 


Figure 1: Reported time spent on tasks of different mark value

Despite around 200 students being eligible for the study, only 13 participated. It is possible that the incentive used was not great enough, but certainly recruiting to a study of this type is difficult, particularly when upholding the ethical requirements of free participation and the ability to withdraw.

Data which are collected through pedagogic experiments, as opposed to through activities embedded in a course, are subject to ethical approval, participant consent and the right to withdraw. A course requirement would need none of these and would consequently produce far higher levels of participation and compliance. However, course requirements are not anonymous, and it is less likely that participants would be honest in their reporting.

It is difficult to compare the total numbers of minutes put in by the participating students because some of them reported for less than the total number of weeks of the semester. Furthermore, some were taking more modules than others. To allow for these differences we have standardised the reported study time to 12 weeks and four modules studied. This standardisation was achieved by multiplying the number of minutes reported by the ratio of 12 weeks to the number of weeks reported, and by the ratio of four modules to the number of modules taken. These standardised results are presented in Table 1.
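
In symbols, with a worked example from the first participant in Table 1 (549 minutes reported over 2 weeks on 4 modules):

```latex
t_{\mathrm{std}} \;=\; t_{\mathrm{reported}}
  \times \frac{12}{\text{weeks reported}}
  \times \frac{4}{\text{modules taken}},
\qquad
549 \times \frac{12}{2} \times \frac{4}{4} = 3294 \text{ minutes}
```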

Participant   Reported   Weeks      Minutes standardised   Modules   Minutes standardised   Hours per
              minutes    reported   to 12 weeks            taken*    to 4 modules           semester
 1                549        2                  3294          4                  3294           54.9
 2              12170       12                 12170          4                 12170          202.8
 3               7269        7                 12461          4                 12461          207.7
 4               1276        3                  5104          4                  5104           85.1
 5               3671        8                  5507          4                  5507           91.8
 6                996        2                  5978          4                  5978           99.6
 7               5399       12                  5399          4                  5399           90.0
 8               9968       10                 11962          5                  9570          159.5
 9               3391       11                  3699          3                  4932           82.2
10                569        1                  6832          3                  9109          151.8
11               1525        4                  4575          4                  4575           76.3
12               2460        2                 14762          4                 14762          246.0
13               3121        9                  4162          3.5                4756           79.3

*Most students take one or two single modules running over two semesters; such modules are counted as half a module in each semester.

Table 1: Total reported time for the semester with various adjustments to standardise for comparison.

Modules at Oxford Brookes University are nominally 15 CAT points, and there is an expectation that students put 10 hours of effort into every CAT point studied. In the School of Life Sciences, modules are typically taught with three hours of lectures and, on average, one hour of practical/workshop each week. Thus, over a 12-week semester, we would hope to see in the region of 100 hours of effort outside timetabled activities for each module taken (15 CAT points × 10 hours = 150 hours per module, less roughly 48 hours of timetabled contact).

The mean and standard deviation of the data in Table 1 equate to 125.1 ± 61.4 hours of reported private study time for the four modules taken over the semester. While the adjustments made to standardise the data are fairly crude and take no account of which period of the semester the incomplete data cover, it is clear that the time inputs fall far short of the expected 400 hours of additional private study for the four modules that the participating students are taking during the semester. It is of course possible that the participants are under-reporting.
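
This summary statistic can be checked directly from the 'Hours per semester' column of Table 1; recomputing from the rounded table values gives 125.2 rather than 125.1 hours, a difference attributable to rounding:

```python
# Recompute the mean and sample standard deviation of reported private
# study hours from Table 1 (values as rounded in the table).
from statistics import mean, stdev

hours = [54.9, 202.8, 207.7, 85.1, 91.8, 99.6, 90.0,
         159.5, 82.2, 151.8, 76.3, 246.0, 79.3]
print(f"{mean(hours):.1f} +/- {stdev(hours):.1f} hours")  # 125.2 +/- 61.4
```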

Of course, this average of 125 hours covers private study only; in terms of total study, including contact time, it would likely equate to about 280 hours for the semester (four modules, each with three hours of lectures per week and two three-hour practical sessions during the semester). Over an eleven-week teaching period this equates to about 25 hours of total study per week, which is less than the 31.3 hours reported by Kolari et al. (2008), who investigated Finnish engineering students, and falls well short of the input reported for some Asian countries.
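
One plausible reconstruction of this arithmetic, assuming contact hours accrue over the eleven-week teaching period, is:

```latex
\underbrace{4 \times 3 \times 11}_{\text{lectures}}
+ \underbrace{4 \times 2 \times 3}_{\text{practicals}}
= 156 \text{ contact hours},
\qquad
125 + 156 \approx 280 \text{ hours},
\qquad
\frac{280}{11} \approx 25 \text{ hours/week}
```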

Having raised the issue of when in the semester the work is done, we can see from Figure 2 that there is an apparent increase in reported study time for assessments that fall later in the semester. However, what at first sight looks like a promising relationship is dispelled if we ignore the week 12 time inputs, which correspond to examination revision. If week 12 is left out, then the already weak relationship (R² = 0.32) tends towards random scatter.

Each data point in Figure 2 represents a different assignment. While there is some inevitable clustering towards the end of the semester, there is a reasonable spread of deadlines through the first-year modules during the semester. This is reassuring, as it suggests the students will have ample time available to them to adequately complete each assignment to their own satisfaction, thus refuting the idea that the low reported study time is due to clustering of assessment deadlines.


Figure 2: Reported average time per assessment task against week of submission during a 12-week semester

In an attempt to separate the influences of mark weighting and assessment type, we have plotted the data in three dimensions (Figure 3). This seems to show that practical reports receive a comparatively similar time input (181 ± 43 minutes) regardless of the mark allocation they have within the assessment of a module. This raises a question of parity between practical reports: some valued at 5% and others carrying 40% of a module's marks all take roughly the same amount of time for the student to complete. In contrast, the time spent preparing for multiple choice question tests or examinations seems higher, with a trend which increases as the value of the assessment grows.

From this limited set of student data, the time input to written assignments such as essays appears to be independent of the mark allocation, with some being given around six hours of student effort for 20% of the module while others seem to command three to four hours for 30-35% of the marks available. The reported time on task for each of the assessments in this study is similar to expectations identified in similar subjects at other new universities in the UK; however, there are discrepancies with expectations at some of the older universities, which generally expect a greater time on task for similar activities (Fielding, 2008).


Figure 3: Three-dimensional breakdown of time spent on tasks as a function of assignment value

The design of the spreadsheet allowed us to examine when in the semester work was undertaken for any of the assignments, thus giving us an idea of cogitation time. Table 2 shows the average reported minutes for assignments in the weeks leading up to submission. For this analysis we have only included data from students who were submitting time logs up to the due date (i.e. if a due date was in week 11 and a student only made time logs up to week 8, their data were not included).

Assignment due     Week -9   Week -8   Week -7   Week -6   Week -5   Week -4   Week -3   Week -2   Week -1   Due week
All assignments         66        85        89        55        78        75        86        88       121        141
Week 12                 77        83       133       128        90        92        76       106       188        275
Week 11                 46        82        56        36       104        76       106        94        94         60
Week 9                           120        43        36        35        75       130       112        74        101
Week 8                                                45        40        35        20        74       139        113
Week 7                                                          30        60        30        30        45         60
Week 6                                                                               0        60       101        110
Week 5                                                                              38        60       178        240
Week 4                                                                                        80       103         73

Table 2: Work input prior to submission date
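
The inclusion rule and averaging behind Table 2 can be sketched as follows; the long-format layout and column names are hypothetical assumptions, since the study worked from the spreadsheet logs directly:

```python
# Sketch of the Table 2 calculation: average minutes per assignment in the
# weeks before its deadline, keeping only students whose logs extend to the
# due date. Column names are illustrative assumptions.
import pandas as pd

def average_minutes_before_due(log: pd.DataFrame) -> pd.Series:
    """log columns: Student, Task, Week, Minutes, DueWeek, LastWeekLogged."""
    included = log[log["LastWeekLogged"] >= log["DueWeek"]].copy()
    included["Offset"] = included["Week"] - included["DueWeek"]  # 0 = due week
    before_due = included[included["Offset"] <= 0]
    return before_due.groupby("Offset")["Minutes"].mean()
```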

As might be expected, there is generally a rise in time spent on an assessment immediately before submission; however, what is good to see, and rather less expected, is the ongoing work in weeks often substantially before the assignment is due. Of course, the freedom to report time spent on assignments before they are due may conjure a sense of "having to report". Without doubt, some brooding over future tasks goes on; whether this counts as preparation time is less clear, but taking time to frame ideas and think over consequences, even without putting pen to paper, is surely part of learning. In this respect it is likely that actual study time has been under-reported.

Breaking down the data on work input prior to submission date into the different types of assignment reveals no pattern between the type of assignment and the student time input.

Using a spreadsheet to collect student data is not a unique approach to investigating student time resources. Crook and Park (2004) used a spreadsheet to measure assessment loads and timing across courses, as well as the time that students spent on each task within their undergraduate modules. In their study, as in ours, a low number of participants were involved, and this clearly raises concern as to the statistical validity of the findings. We have attempted to make the most of the data by normalising them to four modules over 12 weeks, yet we are cautious about drawing generalisations from the small data set included in this study.

Without doubt the participating students are a self-selected group: after de-anonymising at the end of the study, the average (± standard deviation) module mark for the participants was found to be 66.2 ± 9.6, suggesting that these students are moderately able and motivated. It is essential to keep the nature of the group in mind when considering these data, though it does beg the question of how less able or less motivated students might behave.

As this study was set up as a voluntary experiment, we were mindful of the ethical considerations of involving students in research into their own learning experience. While this type of research rightly requires controls such as anonymity so as not to prejudice staff who may be assessing the contributing students, if such a recording spreadsheet were used as a course requirement it would no longer be an experiment and would not require ethical consideration. In such circumstances the spreadsheet could become a valuable tool to gauge student engagement in the course during the early weeks at university, enabling non-participants to be identified very easily. The need for anonymity has meant that we have not been able to feed back to the participating students, but if, as suggested, the spreadsheet became a mandatory part of the course, we could use it to gather information, give feedback to encourage and motivate, and spot failing students in time to attempt some kind of intervention.
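
By way of illustration, a minimal sketch of how a mandatory log might be screened for early non-participation; the two-week threshold and column names are our assumptions, not part of the study:

```python
# Sketch of an early-warning screen over a mandatory time log: students who
# have reported no study time in recent weeks become candidates for follow-up.
import pandas as pd

def flag_non_participants(log: pd.DataFrame, current_week: int,
                          lookback: int = 2) -> list:
    """Return student numbers with zero minutes logged over the last
    `lookback` weeks. Week columns are assumed named 'Week 1'..'Week 12'."""
    recent = [f"Week {w}"
              for w in range(max(1, current_week - lookback + 1), current_week + 1)]
    totals = log.groupby("Student")[recent].sum().sum(axis=1)
    return totals[totals == 0].index.tolist()
```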

Acknowledgement

We are grateful to the Brookes Student Learning Experience Strategy for funding this project.

 

References

  • Bloom, B.S. (1956) Taxonomy of educational objectives: the classification of educational goals. New York: D McKay & Co Inc.
  • Carini, R.M., Kuh, G.D. & Klein, S.P. (2006) Student engagement and student learning: testing the linkages. Research in Higher Education, 47 (1): 1-32.
  • Crook, A.C. & Park, J.R. (2004) Measuring assessment: a methodology for investigating undergraduate assessment. Biosciences Education, 4 (6).
  • Fielding, A. (2008) Student assessment workloads: a review. Learning and Teaching in Action, 7 (3): 7-15.
  • Handelsman, M.M., Briggs, W.L., Sullivan, N. & Towler, A. (2005) A measure of college student course engagement. Journal of Educational Research, 98 (3): 184-191.
  • Innis, K. & Shaw, M. (1997) How do students spend their time? Quality Assurance in Education, 5 (2): 85-89.
  • Kolari, S., Savander-Ranne, C. & Viskari, E-L. (2008) Learning needs time and effort: a time-use study of engineering students. European Journal of Engineering Education, 33 (5-6): 483-498.
  • Krause, K-L. (2005) Engaged, inert or otherwise occupied? Deconstructing the 21st century undergraduate student. James Cook University Symposium. Sharing Scholarship in Learning and Teaching: Engaging Students. James Cook University, Townsville/Cairns, Queensland, Australia. 21-22 September 2005.
  • Trowler, V. (2010) Student engagement literature review. The Higher Education Academy, York.
  • Yazzie-Mintz, E. (2007) Voices of students on engagement: a report on the 2006 High School Survey of Student Engagement. Center for Evaluation and Education Policy, Indiana University.
Creative Commons License
This paper is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Double Blind Review
This paper has been subject to a double blind peer review by at least two reviewers. For more information about our double blind review process please visit: http://bejlt.brookes.ac.uk/about/double-blind-review/
