Various Ways of Using Online Testing in a ‘Virtual Learning Environment’

Abstract

This paper describes how the ‘quiz’ facility in WebCT, one of the leading names in ‘virtual learning environments’, is used, inter alia, to provide a series of short ‘required tests’. These can be taken up to three times and together contribute a small percentage of marks towards module assessment. The tests motivate students, in a non-threatening way, to use notes and other resources (possibly provided via WebCT), and the system can provide immediate feedback. The paper describes the various forms of quiz question available in WebCT, including multiple choice; other forms, such as matching and multiple-response questions, are much less time-consuming to prepare. The final section relates the use of quizzes to aspects of good practice described by Graham Gibbs (2002) in a lecture on ‘Thinking Radically about Assessment – Reducing Marking and Improving Learning’.

Introduction

What saves you time, can improve student learning and is usually enthusiastically received by students? What is equally at home with the needs of full-time, part-time and distance learning students? Online testing can meet all these needs, and in my experience over the past 15 months there are sound pedagogical reasons for incorporating it amongst the wide range of assessment methods available. At Oxford Brookes University, WebCT has been the chosen Virtual Learning Environment (VLE) since 2001. One of its features is online testing, termed ‘quizzes’. WebCT also offers ‘self-tests’, which are ‘simple ungraded multiple-choice question tests with instant feedback’; however, I have kept to using quizzes of various sorts, even when grades do not count and I want to give instant or very rapid feedback.

I had resolved to get started on using WebCT with modules from September 2002 but had made little progress beyond good intentions until February 21st, 2003 when I attended an internal course on WebCT quizzes. During this course I conceived the idea of converting an end-test, in multiple-choice format, which was already organised for 17th March 2003 to a WebCT quiz. This was just three working weeks away! I clearly had confidence in the VLE infrastructure and the support offered by the Media Workshop at Oxford Brookes and in my own ability to negotiate the web of approvals and procedures involved, as well as to prepare and upload the test.

It all came together, aided in no small part by the Media Workshop and other colleagues (and I must mention my friend Samuel Leung, a skilled WebCT enthusiast and, sadly, now at another university). Shortly after the test the students received their marks and were able to review their answers and the correct answers. Furthermore, the test had succeeded in securing commitment to the module, learning (and attendance) right to the end, even though it was a coursework-only module (the test was designated as a coursework test, even though invigilated). The same day I added the test marks to the other assessment in the module and submitted the final mark sheet, ten days before the deadline. What a feeling of satisfaction!

For some years I had been an advocate of multiple choice tests (MCTs) in certain situations; in conjunction with optical mark readers, these were capable of providing rapid evaluation and feedback. In practice, the appeal of paper-based MCTs was somewhat reduced by sorting out the internal transfers for payment, the chore of delivering the answer sheets to the unit entrusted with operating the optical mark reader, and waiting days for the results to be processed and returned to me. I had also made limited use of Questionmark tests (Questionmark is software designed primarily for testing), although the computing infrastructure had frequently posed problems (for example, an answer file for an entire test getting lost in the system). With all the facilities available within the virtual learning environment, including a sophisticated system for managing courses and good security, WebCT renewed my interest.

Formative and Summative Assessment

The use of WebCT described above has provided summative assessment relating to the knowledge acquisition objectives for the course. Since then I have used WebCT to provide formative assessment on the majority of the modules for which I am responsible. I had been using a system of peer-assessment of periodic ‘required problems’ which students merely have to attempt and submit in order to gain a small percentage of the marks for a module (typically 7%). This is described in a case study published online by the Oxford Centre for Staff Learning and Development (OCSLD, 2004) and is based on a system originally devised by Dave Boud in Australia for teaching large numbers on a science course. The system relies on staff recording submissions and then handing work to other students with a mark scheme and allowing time for peer-assessment before collecting and returning work to the originator. The benefits derive from keeping students engaged with the subject as the module progresses and from receiving very rapid feedback, albeit rough and ready. Rapid feedback is a requirement for effectiveness – see below. The system works well and is well-received by students, but with contact time constantly under pressure the time taken for peer-assessment (in-class) and for recording and handling submissions is a problem. It seemed worth trying WebCT quizzes in place of the written submissions.

Using periodic ‘required’ WebCT quizzes during the module has been very well-received by students. Typically there are four short quizzes during the module and they are set to provide scores immediately after the quiz is taken. Students can take each quiz up to three times. This allows them to look at their notes or source material after the first attempt and aim for a higher score. Most students achieve 100% in the end. In some cases I have used the facility for the computer to ‘set’ the question or a group of questions. For example, one question requires students to calculate the critical path in a critical path network. I provided three similar versions of the same question, and each time a student attempted the question one version from the set would be offered to the student to solve. Clearly, the incentive was for students to learn the method of solution rather than rely on collaboration to find the right answer.
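The critical-path question mentioned above rests on a standard calculation that students must learn to reproduce. As a rough sketch (the activity names, durations and dependencies below are invented for illustration, not taken from any actual quiz question), finding the critical path amounts to a longest-path computation over the activity network:

```python
# Sketch: critical path of a small activity-on-node network.
# Activities, durations and dependencies are illustrative only.
def critical_path(durations, predecessors):
    """Return (project duration, activities on the critical path)."""
    earliest = {}  # earliest finish time of each activity

    def finish(act):
        # Earliest finish = latest predecessor finish + own duration.
        if act not in earliest:
            start = max((finish(p) for p in predecessors.get(act, [])), default=0)
            earliest[act] = start + durations[act]
        return earliest[act]

    total = max(finish(a) for a in durations)
    # Walk back from the last-finishing activity, always taking the
    # predecessor whose finish time binds the start of the current one.
    path = []
    act = max(durations, key=lambda a: earliest[a])
    while act is not None:
        path.append(act)
        preds = predecessors.get(act, [])
        act = max(preds, key=lambda p: earliest[p]) if preds else None
    return total, list(reversed(path))

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"C": ["A", "B"], "D": ["C"]}
length, path = critical_path(durations, predecessors)
print(length, path)  # 9 ['A', 'C', 'D']
```

A student who grasps this forward-pass/trace-back method can solve any of the three versions of the question, which is precisely the incentive described above.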

Another type of question allows key points or theories to be reinforced. This is the type of question where a list of words, statements or other variables has to be matched with another list – for example, a list of writers with a list of theories (in my case, about organisational theory). The process of matching involves becoming familiar with both the names and the theories. WebCT randomises both lists each time a test is taken, so again students need to develop that familiarity. One could argue that the medium could encourage students to learn a ‘potted’ version of the ideas covered, but reference to the course material (e.g. book, hand-out or notes) would lead to efficient reading to identify the theory which has been condensed into a short sentence or two in the question.

The multiple-response type of question allows the limits of a topic to be explored. This type of question allows the author to write a series of statements, any number of which are true or generally true (!). For example, legislation imposes certain duties on designers of buildings (and other construction works) in relation to health and safety. The statements offered in the quiz can explore the limits of these duties. Again, after the first attempt at the question, there is likely to be a very focused (and efficient) reading of the course material which will almost certainly result in learning.

Using WebCT quizzes in this way encourages engagement with the subject without the stress that traditional assessment can involve and which may push students towards a surface approach to learning. Gibbs (1992) discusses the nature of the quality of learning and cites research which demonstrates that while at university students may shift from a deep approach to learning towards a surface approach. One of the factors which can cause this is ‘a threatening and anxiety provoking assessment system’. However, student feedback has suggested that many students would prefer marks for these ‘required quizzes’ to contribute to the module mark, rather than the system of simply giving credit for attempting the quizzes. In recent modules I have tried the approach suggested by student feedback. In practice, because three attempts are allowed, one would imagine that the additional stress is minimal; also the result is little different, because most students repeat the quizzes and achieve close to 100% for the required quiz component of the module.

Student response to the ‘required quizzes’ is good. Here are a number of statements entered into the WebCT course evaluation tool (a form of quiz, but anonymous); note that all the relevant comments received for this course are included and that I have only corrected spelling and some punctuation:

Question:

What were the best aspects of using WebCT for you?

Responses:

  1. The quick response time having completed a quiz rather than having to wait days for it to be manually marked.
  2. The benefit of Web CT was that it forced you to read through the notes which enabled the end quiz a lot easier (sic).
  3. It was quick and easy to do and reinforced the syllabus well.

Question

My comment:
On this module there were three intermediate tests, each of which could be taken up to three times within the seven-day availability period. The mean of these test/quiz scores was worth 10% of the module mark. In the first and final tests, scores were released immediately; in the second test, the scores were not released until the end of the availability period and the last, not the highest, score counted. There was thus a little pressure to refer to course material and make sure that all answers were correct. Incidentally, the system does not prevent students from printing out the version of the test visible on their screens at the time.

What improvements to this WebCT course would you suggest?

Responses

  1. When a quiz question was answered incorrectly once the test was completed it would have been good to receive the correct answer and a short paragraph explaining why it was the correct answer with it.
  2. Perhaps give a small amount of marks for attendance. There were lots of people at the end of course test that I had never seen before!
  3. Felt it worked very well
  4. It depends on what you wanted for the WebCT. If it was a tool for reinforcing what was done in class test one and three were great. If it was to actually test our knowledge the second test was probably the best.

Use of Quizzes in a Distance Learning Course

One of the benefits of using WebCT is that students access the course facilities from wherever they have internet access, and at any time when access is available. On one course I set the deadline as 3 a.m. so that students could ‘return home from a party and still complete the test in time’. One student reported that this suggestion had indeed triggered him to remember to take the quiz, just in time! The ‘required quiz’ approach described above was used for university-based courses, but could easily be adapted to courses which are delivered at a distance and to courses on which students learn at their own pace. The latter are quite likely also to be distance learning courses.

I have made limited use, so far, of WebCT quizzes for a distance learning course (the students are in Singapore). My purpose was to encourage students to become familiar with certain concepts and to move them away from a particular text-book which is issued to students. This book had been selected by a former tutor on the module and missed out some important areas and concepts. I provided additional material in the form of Portable Document Format (PDF) files (read using the free-to-download Adobe Acrobat Reader; PDF files can be created from any printable document using suitable software) and HTML documents with links to other open sites on the world-wide web. The WebCT quiz provided was marked by the system, but the scores did not form part of the assessment for the module. The quiz was purely formative. Feedback on each question was provided in the form of information on why wrong answers were wrong, and further discussion and comment if the correct answer was chosen. Students were able to take this quiz on unlimited occasions.

Preparing and Organising WebCT Quizzes

WebCT quizzes can include a variety of forms of question, including multiple choice questions (MCQs). Two, the multiple-response (MR) question and the matching (MT) question, have been referred to above. Both can be used to cover large numbers of learning outcomes and both are much easier to set than MCQs. With the traditional MCQ, much time can be spent devising alternative distractors or wrong responses, bearing in mind that they must be plausible but not too close in meaning to the correct response. Philosophically I prefer both MT and MR questions, as I prefer to think of students spending most of their time during a quiz reading correct, or at least approximately correct, sentences rather than distractors. Further types of question available are ‘short answer’ questions (typically one word), ‘paragraph’ answer questions and true-false questions (apparently widely used in medical education assessment and surprisingly difficult to answer). I tend to avoid paragraph answer questions – mainly because they are the only type which is not automatically assessed!

Many lecturers have a prejudice against computer-assisted assessment (CAA), and I agree that over-use can be a problem. However, CAA can be set at different levels. I would refer any interested reader to the excellent guidance at the Leicester University ‘CASTLE’ website (2002). Their report ‘Designing & Managing MCQs’ comments that ‘MCQs can be set at different cognitive levels. For example, a question may simply challenge a student’s ability to recall facts, while another may test a student’s ability to apply factual knowledge to given situations; or, at a higher level, a question may test a student’s ability to evaluate given information.’

WebCT is mounted at the server level. Questions can be devised directly in WebCT, but it is much more convenient to use Respondus. This is proprietary software which is mounted on the course leader’s local machine. Tests can be devised in Respondus and settings chosen (e.g. duration, number of attempts, availability, etc.), and if the machine is suitably networked Respondus will also handle the task of uploading the test so that no further work is required before the test is ready for students to take. Preparing questions in Respondus still requires a certain amount of mouse-clicking, but Respondus can import questions, responses, the correct responses and feedback, if any, from text files in a ‘standard format’. I have found the quickest way to prepare quizzes is to start with a Word file (using a self-devised template which numbers questions and responses automatically), save it as a text file (*.txt) and then import it into Respondus. Using Respondus means that quizzes can be stored locally. Respondus also includes the facility to import questions from other quizzes, which can be very convenient.
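To give a feel for the kind of conversion involved, here is a minimal sketch of parsing a plain-text quiz file into structured questions. The format shown (numbered stems, lettered responses, an asterisk marking the key) is a simplified invention for illustration only, and should not be taken as the actual Respondus ‘standard format’; the question wording is likewise made up.

```python
# Sketch: turning a plain-text quiz file into structured questions.
# The file format here is a simplified invention, NOT the real
# Respondus 'standard format'.
def parse_quiz(text):
    """Parse numbered questions with lettered responses; '*' marks the key."""
    questions = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        if line[0].isdigit():
            # e.g. "1. Which party owes the duty?" starts a new question
            questions.append({"stem": line.split(".", 1)[1].strip(),
                              "choices": [], "answer": None})
        else:
            # e.g. "a. distractor" or "*b. correct choice"
            correct = line.startswith("*")
            choice = line.lstrip("*").split(".", 1)[1].strip()
            questions[-1]["choices"].append(choice)
            if correct:
                questions[-1]["answer"] = choice
    return questions

sample = """
1. Which party owes the duty?
a. The client only
*b. The designer
c. Neither
"""
quiz = parse_quiz(sample)
print(quiz[0]["answer"])  # The designer
```

The appeal of this Word-to-text-to-import route is exactly what the paragraph above describes: questions are authored in a familiar editor, and the mouse-clicking is confined to a single import step.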

Running WebCT quizzes in examination conditions requires more thought. Many aspects of security can be handled through WebCT, but sufficiency and location of institutional computers may be an issue. This problem can be halved if students are divided into two groups; the first group takes the test and is ‘held’ until the second group comes in to take their places. Mobile phones, email and use of the internet during quizzes could conceivably be a problem given inadequate supervision. One factor which does need attention is the needs of dyslexic students. My solution has been to produce a second test, identical to the first, but with a longer period for taking it and with a different password (only necessary for invigilated quizzes – and there are other ways of controlling access). One of the Oxford Brookes advisors on dyslexia felt that it was not necessary to provide a separate room for dyslexic students. Such students can experience difficulty in formal examinations with large rooms, very large numbers of other examinees and cramped working space, but the computer rooms at Oxford Brookes, and probably in most other institutions, provide much more acceptable conditions.

Assessment and Pedagogy

A brief note on assessment and pedagogy: an important reason for my interest in CAA in early 2003 was a lecture on ‘Thinking Radically About Assessment – Reducing Marking And Improving Learning’ delivered at Oxford Brookes only three months earlier by Graham Gibbs (2002), the internationally respected researcher and advisor on teaching and learning, then of the Centre for Higher Education Practice at the Open University. An important observation was that ‘assessment costs have often overtaken teaching costs, so that [there is] huge pressure to reduce assessment’. When accompanied by the statement ‘it is a common observation of higher education teachers that if coursework is taken away from a module due to resource constraints, students simply do not do the associated studying’ (Gibbs, 2002, from Gibbs and Simpson, forthcoming) there is cause for concern. Clearly, retaining coursework and using CAA amongst other means of reducing the associated workload (for many approaches, see op. cit.) are important to achieve intended outcomes.

However, many of the points in the following list, which I circulated to my colleagues following the lecture (Gibbs, 2002), are relevant and help justify using CAA in the ways outlined above:-

  1. One of the key aspects of learning is ‘time on task’ – this is one of Chickering and Gamson’s (1987) Seven Principles of good practice in undergraduate education.
  2. Student workload is almost entirely governed by assessment demands.
  3. Coursework marks are more closely correlated to subsequent performance than exam marks.
  4. Allow a drastic reduction in the volume of summative assessment, but encourage an enormously increased volume of formative assessment and feedback (this does not mean un-assessed).
  5. Rough and ready feedback provided very soon after work is done (e.g. by students) is much more effective than accurate feedback provided 3 or 4 weeks after submission. Note that giving prompt feedback is another of Chickering and Gamson’s seven principles (ibid.)
  6. The lessons learned from feedback on work should be applied soon after.
  7. Feedback is rarely read unless it will be relevant for tasks to come.
  8. If there are marks the feedback itself is rarely read. (!)
  9. What students learn by doing coursework is remembered. What they learn in preparation for an exam is far less likely to be remembered.

The idea of fairly frequent use of WebCT quizzes (e.g. as ‘required quizzes’) within a course accords well with points 1, 4 and 5 above, if not more. Many of the points also provide justification for the use of WebCT end-tests within coursework-only modules. I would argue, furthermore, that such end-tests combined with revision sessions and automated feedback blur the distinction between summative and formative assessment and can co-exist with sound pedagogy.

Contact Details

Nick Spencer Chapman

nspencerchapman@brookes.ac.uk

Department of Real Estate and Construction, School of the Built Environment, Oxford Brookes University, Gipsy Lane, Oxford OX3 0BP

References:

Chickering, A.W., and Gamson, Z.F. (1987), ‘Seven Principles of Good Practice in Undergraduate Education’, AAHE Bulletin, 39(7), pp. 3–7.

Gibbs, G. (2002), ‘Thinking Radically about Assessment – Reducing Marking and Improving Learning’, lecture given at Oxford Brookes University.

Gibbs, G. and Simpson, C. (forthcoming),’Does Your Assessment Support Your Students’ Learning?’

Oxford Centre for Staff Learning and Development (2004), ‘A science department used peer-assessment for weekly problem-sheets and improved performance in the end-of-year examination results (peer-assessment, new course requirement)’, No. 7 in Assessment under Pressure – 14 Innovative Case Studies, retrieved from the world wide web at: www.brookes.ac.uk/services/ocsd/2_learntch/aup14pr.html#studylist, Oxford, UK.

Computer Assisted Teaching & Learning (CASTLE) (2002), University of Leicester, Leicester, UK, retrieved from the world wide web at: www.le.ac.uk/castle/index.html.


Nick Spencer Chapman

Nick Spencer Chapman is a civil engineer and a long-standing Senior Lecturer in Construction Management. He teaches various subjects including construction law and contract, project management, sustainable building and infrastructure, and structural mechanics. He has researched and written on problem-based learning and enjoys applying new technology to his teaching, where appropriate.

Creative Commons License
This paper is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Double Blind Review
This paper has been subject to a double blind peer review by at least two reviewers. For more information about our double blind review process please visit: http://bejlt.brookes.ac.uk/about/double-blind-review/

