

Accounting Education: An International Journal
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/raed20

Evaluation of student feedback
Len Hand & Mike Rowe
Published online: 05 Oct 2010.

To cite this article: Len Hand & Mike Rowe (2001) Evaluation of student feedback, Accounting Education: An International Journal, 10:2, 147-160, DOI: 10.1080/09639280110081651

To link to this article: http://dx.doi.org/10.1080/09639280110081651



Evaluation of student feedback
LEN HAND* and MIKE ROWE

Department of Accounting, Nottingham Business School, Nottingham Trent University, UK

Received: September 2000; Revised: January 2001; May 2001; Accepted: June 2001

Abstract

The processes by which feedback is gathered from students and courses evaluated provide challenges and difficulties. How much feedback is needed? Which instruments should be used? When should the feedback be gathered? From whom should the feedback be gathered? What does the feedback tell us? Does the process really improve the learning experience? These are all questions that concern tutors who wish to understand the educational experience of their students. This paper aims to offer some support and encouragement for tutors who are thinking about the evaluation process at course or module level. An action research model is adopted. A module leader (the practitioner) seeks improvement in feedback gathering and evaluation processes. Experiences with a new first year undergraduate module are described and the various ways in which feedback was obtained are evaluated. It is not the intention to evaluate the module in question but to evaluate the forms of student feedback. Feedback that occurs naturally as part of the teaching/learning process is set alongside structured feedback instruments. Implications for tutors involved in feedback and evaluation are considered and suggestions offered.

Keywords: student feedback, student questionnaires, learning-to-learn, study skills, first year accounting

Introduction and context – evaluation and feedback

Tutors have always received feedback about their students' experiences. Being engaged upon the same course ensures some dialogue between students and tutors concerning their experiences. Why then should student feedback and the evaluation of courses of study have received much greater attention in recent times?

Moves towards greater levels of public accountability in all of the public sector provide the context within which Higher Education now operates. A feature of such accountability is the desire for more transparency concerning the value of the educational experience, and for an overt demonstration of the effectiveness of the educational provision. Hence the notion of gaining feedback from students and of evaluating this feedback may be seen as a part of the information-gathering process to demonstrate that education providers are in some way accountable to their stakeholder groups. Student feedback, in this context, is often associated with the regulation of university education and, at extremes, discipline of individuals. However, such approaches place little emphasis on the developmental possibilities that feedback offers. Drawing upon the reflections of participants, whether students or tutors, a developmental approach seeks to improve future learning experiences. It is with such an approach that this paper is concerned.

* Address for correspondence: Len Hand, Department of Accounting, Nottingham Trent University, Nottingham, NG1 4BU, UK. Email [email protected]



Evidence of rising interest in student feedback processes can be seen in work which has emerged in recent times (see, for example, Partington, 1993; Loughborough University, 1998). However, student feedback is still seen as synonymous with feedback questionnaires. Having the students complete 'happy-sheets' or ticking off a series of pre-prepared questions may provide useful information. However, in this paper, it is argued that, while the questionnaire has its place, over-reliance on such instruments is unhelpful if the aim as educators is to understand and illuminate the richness of the educational experience. This is particularly the case if feedback is sought as a developmental tool rather than merely for the purposes of ritual and regulation.

The research study, research questions, and methodology

The research uses an action research methodology. This has been defined as '. . . collaborative enquiry by . . . academics . . . into their own teaching practice, into problems of student learning, and into curriculum problems . . .' (Zuber-Skerritt, 1992). The action research tradition (as described, for example, by McNiff, 1989) seeks out improvements through a process involving evidence gathering and reflection. A practitioner (in this case the module leader) is gathering evidence on his/her practice, and reflecting on that evidence with a view to making improvements at two levels: (1) the student experience of the module; and (2) the quality of feedback gained about the learning experience.

As part of an undergraduate course re-design, a new year-long first year module1 was introduced into the BA (Hons) Accounting & Finance course, called 'Developing Learning in Accounting & Finance' (DLAF). The module set out to operationalize many of the ideas about independent learning which had been discussed within the faculty over a period of time. Rather than offering a study skills programme, the module aimed to develop the higher order skills of reflection and critical thinking which have been associated with deeper learning, and with improved learning outcomes (see, for example, Martin and Ramsden, 1985).

As part of the process of implementation, feedback about the module was gathered from a variety of sources during its first year of operation. This research began with a fairly orthodox approach, that is, to disseminate experiences of: running the module, gaining feedback, evaluating the feedback, and of building such feedback into refinements of the module for its second run. However, as the evaluation progressed, it became clear that a more intriguing set of questions was arising, notably those concerned with the feedback and evaluation process itself. Hence, while for purposes of module change and improvement the stages described above remained important2, the main research interest became: to evaluate the quality of the sources of feedback. It is not therefore the intention to discuss the specific feedback about the course, except in order to support the observations on the variety of feedback instruments. The remainder of the paper will discuss the varied feedback tools, both structured and unstructured, which have been evaluated. The sources of feedback evidence used were:

1 In the UK, at the time of writing, 'module' described a unit or subject of study. On this course students were taking five year-long modules: Financial Reporting, Management Accounting, Law, Quantitative Methods, and DLAF, plus two half-year modules: Business Economics and Organisational Behaviour.
2 Anyone interested in more detailed feedback about the DLAF module should contact the authors.


Structured and planned feedback

• a student feedback questionnaire (agree/disagree statements)
• a student feedback questionnaire (open questions)
• structured group discussions

On-going feedback

• reflective learning diaries
• staff email
• student email

Student feedback questionnaire (agree/disagree statements)

At about midway through the module a questionnaire was administered to the students to gain initial feedback. The questionnaire was anonymously completed during a lecture by the 61 students who attended, out of a cohort of 110. The first part of the questionnaire contained 20 statements about the first year of the course (see Appendix 1), to which students were required to give an agree/disagree response (Likert scale, 1–5) to each statement. While the detailed responses do raise some issues for the course team, the focus of this paper is the degree to which this provides a valuable form of evaluation.

Student feedback questionnaires have well-recognized limitations:

1. Response rates. What are we to make of the missing 49 students? It could be assumed that, as they had missed the lecture, they may be less committed to the course, and that their responses would (had they been captured) have been more negative. But this is speculation. We have no way of knowing. The opposite could be true, that they included the best students on the course who felt that they had no need for the lecture.

2. The quality and reliability of the students' responses. A tick-box instrument, however well devised, may encourage superficial responses to 'get it over with'. Certainly, the high number of students who could not make up their mind (23% of all responses were 'neither agree nor disagree'; see the tally sketched after this list) suggests a lack of deep thinking by some of the students. Furthermore, as pointed out by Newton in the US context of student evaluation of teaching instruments, the ease with which results can be obtained via such instruments may lend a spurious reliability to results 'often to the exclusion of other measures of teaching effectiveness' (Newton, 1988).

3. Time and costs. Time and costs of design, administration and analysis (as with any feedback method) need to be weighed against possible gains in understanding. Clearly a very judgmental question, but unless the question is at least asked there is a risk of creating work and paper for little educational gain. In the US where, it should be noted, such instruments may also have significant consequences for tutors who are subject to their findings, their validity and costs have been questioned (for example, Wallace and Wallace, 1998).
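For readers who want to reproduce this kind of tally, a minimal sketch is given below. The data rows are invented for illustration (the raw responses are not reproduced in this paper); only the cohort size (110), the number of returns (61), and the 1–5 scale come from the study.

```python
# A minimal sketch of tallying Likert-scale returns, assuming invented data.
# In the study: 61 of 110 students responded, and 23% of all responses
# were "neither agree nor disagree" (scale point 3).
from collections import Counter

COHORT_SIZE = 110  # from the study

# Each inner list is one student's 1-5 responses to the 20 statements.
# These two rows are hypothetical; the real data set had 61 rows.
responses = [
    [4, 2, 3, 3, 5, 1, 3, 4, 2, 3, 3, 4, 5, 5, 2, 1, 3, 3, 2, 4],
    [3, 3, 4, 2, 3, 3, 2, 5, 4, 1, 3, 3, 4, 4, 1, 2, 3, 2, 3, 5],
]

response_rate = len(responses) / COHORT_SIZE
counts = Counter(score for student in responses for score in student)
neutral_share = counts[3] / sum(counts.values())

print(f"Response rate: {response_rate:.0%}")
print(f"Share of 'neither agree nor disagree': {neutral_share:.0%}")
```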

Despite such reservations, 5-scale responses may help at least to raise questions and issues, which may be followed up through other means.

But there is a final, major difficulty with the instrument, in that each question leaves others unanswered. For example, a number of students felt unhappy with their choice of course, and some were unconvinced about the value of the DLAF module. Without further investigation, the reasons for such responses were unknown. While present experiences suggest that a closed-question instrument can be useful as a starting point, some follow-up work with the students is necessary if such 'tick-box' questionnaires are to be helpful. Used in isolation, there is a risk of taking superficial and unrepresentative responses and treating these as facts, and not looking beyond for deeper understanding. In action research terms, there is a need for some triangulation before accepting the evidence. As McKenzie et al. (1998) suggest in a different context, there is a need for a holistic approach if we are to make sense of student feedback.

All too often, course evaluation would progress no further than a questionnaire. In this research, fortunately, it was possible to pursue the unanswered questions through open questions and workshop discussion groups.

Student feedback questionnaire (open questions)

The second half of the student feedback questionnaire asked the students to respond to four open questions:

• Describe your experiences with DLAF so far
• Describe your experiences with IT on the course so far
• Describe your overall first year experience so far
• Define learning

Module-specific questions

The first two questions were obviously central to the feedback. This was a new module in a new course design, and embraced learning-to-learn philosophies to which the course team had strong attachments. Furthermore, the unusual nature of the module (compared to conventional first year accounting courses), along with some anecdotal student comments (such as 'why are we doing this?'!!), meant that the answers to these questions were eagerly awaited (if with some trepidation!). However, students' responses raised two deeper issues related to the evaluation of feedback, i.e. (1) contradictory responses, whether between respondents or within individual responses, and (2) contextualization.

The dilemma of 'how to make sense of contradictory feedback' was illustrated by many of the responses received. Therefore the key for feedback work would seem to be (1) to ensure that any responses are representative of the student cohort and (2) to accept that any programme aimed at a large cohort is bound to contain contradictory evidence – that we can't please everyone. The professional judgement of tutors is important in converting the raw feedback into a meaningful evaluation which will ultimately lead to improvements.

When asking questions about the specific module, some responses addressed wider issues affecting other modules and the year as a whole. Some students appeared to have problems understanding the DLAF module in isolation. Instead the module was placed in a wider context, as illustrated by this response:

• I haven't found DLAF very useful. 1 hour a week would be better, possibly leaving space for more Financial Reporting which everyone finds harder/more useful.

Thus, although the question was module-focused, the respondent chose to make a link with another module. By focusing upon individual modules, evaluation instruments may miss the vital links that students make (or fail to make!) between subjects, between years, between work and play. So does a year-wide question provide more valuable feedback?

Feedback at year level

The question which asked the student to Describe your overall first year experience so far allowed for any reflections which the students felt important about the whole year thus far. At this stage a few examples help to demonstrate the richness of responses which may be evoked from such an open question:

• The first year has generally be [sic] a complete learning experience, learning to live on my own, learning to cope with money, learning to live with others as well as learning about my course

• I must admit I have done little work so far, but marks I have had back have been reasonable. I find 12 hours a week for a 'full time' course quite farcical, and think most of the syllabus lacks challenge/volume.

• It's been quite hard going. I find it an intense course which has been completely different way of learning to what I'm used to.

These quotes not only illustrate the inevitable contradictions to be found in response to open questions, but also provide a more rounded perspective on the student experience.

Feedback questionnaires are frequently discussed in terms of one-questionnaire-per-module, and there are clearly some advantages in this approach – specifically that the questions are very focused and that any resulting evaluation may be handled in a discrete manner through the relevant team of tutors. However, there appear to be some major drawbacks. First, students can suffer feedback fatigue, with attendant risks that respondents will become apathetic. Second, the student's experiences during a year often overlap. In this module, for example, IT learning impacted on performance in other modules, and different approaches to module delivery between certain modules were often contrasted by students. If it is believed that the student is being offered a one-year learning experience which is in many senses holistic and carries some degree of integration, then perhaps the focus for feedback should be at year not module level.

An indirect feedback question

Of course, the open responses discussed above rest upon an assumption that the students are 'informed consumers', that they know enough about what they are being asked to evaluate to provide helpful feedback. As Kerridge and Matthews (1998) ask – do students have the ability to make appropriate judgments? This sounds condescending, but is not intended thus; no-one denies that any student can spot a 'bad' teacher or 'bad' lecture/seminar. However, at a deeper level the student (particularly at first year) does not have an understanding about the whole curriculum and cannot easily contextualize her/his studies. This issue raises key questions about the quality and credibility of feedback received from students. It may be that what staff see as challenging and developmental aspects of the course may be perceived by students as difficult and irrelevant. Ultimately it falls to staff teams to determine curriculum content and to balance challenge against degrees of difficulty.


Thus responses to the next question – Define learning – though not directly concerned with student feedback, provided indirect evidence about the students' grasp of the ideas within the module. The responses (as may be expected) showed considerable variation, from relatively mechanistic views such as:

The gathering, taking in and understanding of new information previously not known.

or

The process of absorbing information into your memory so that it can be extracted when you need it. A clear understanding.

towards more reflective definitions such as

Learning is not just about memorising work to pass the modules in the course. It is to gain an understanding and 'deeper' knowledge of the material given to us

and

Finding out about things you didn't know about before and researching to improve your knowledge. Not relying on other people to teach you everything but finding out yourself.

The intriguing questions arising from these responses are not dwelt upon here3, but the question does raise a point pertinent to the feedback issue. Some feedback about learning comes indirectly, and such feedback may be more helpful than overt forms of feedback. Say, for example, that a large number of students had defined learning in ways considered at odds with the expressed DLAF outcomes; whatever their expressed beliefs about the module itself, there would have been evidence for concern about how well the module was working.

There is, of course, a paradox here. A 'developing learning' module had been conceived because students (in general) were displaying evidence of weak levels of deep and independent learning. It may be expected that such students might, therefore, be critical of any module which appeared to be at odds with their own pre-conceived and (in our view) narrow ideas about learning.

Structured group discussions

As has already been noted, questionnaires leave many questions unanswered. In this study it was attempted to plug the gaps by arranging group sessions with the students. The first of these took the form of workshop discussions managed by the DLAF tutors. In the workshops,4 near the end of the module, the tutor led a discussion around responses derived from the feedback questionnaire. This proved to be a valuable exercise which allowed for clarification of issues raised, and informed the students that notice was being taken of their feedback.

A note was sent to tutors before the workshop to guide them in the workshop discussion. The results of these discussions were fed back to the staff team via e-mail and allowed the team to gain a closer understanding of matters left unexplained by the original closed questionnaire.

3 The module leader's report looks more deeply at these issues, and is available from the authors on request.
4 Workshops were held at roughly two-weekly intervals. The students were grouped into six workshops. There were 17 workshop meetings during the year.


These meetings muddied the waters still further as different student views emerged, and some of the discussions developed into broader conversations about aspects of the year that some students had found problematic. Again this illustrates the difficulty of constraining evaluation within any given module, and the students' tendency to reflect more broadly on the year as a whole. Nonetheless, the process did help to clarify feedback that was being received.

In addition, towards the end of the module, a focus group was arranged by a staff member not involved in delivery of the module. The purpose of the focus group was to gather additional feedback from the students on an anonymous basis. It was hoped that students would feel able to express views which might not have emerged elsewhere. However, attendance was negligible and it proved impractical to run the focus group session. Special one-off focus groups, in the authors' experience, are high on cost and low on effectiveness, whereas group discussions that are built into the teaching programme provide fuller, representative, and helpful feedback. They allowed staff to get beyond a superficial understanding of student experiences, and approach a more rounded view.

On-going feedback

An important element of feedback, which is often ignored or forgotten, is the on-going feedback that is provided during the normal teaching and learning within the course. (Partington, 1993, p. 67)

In the authors' experience, discussions about student feedback often overlook the feedback which occurs naturally in any teaching and learning situation. Interactions between tutors and students can provide a rich vein of evidence about the students' perceptions and experiences. Such feedback, however, is not easy to capture in any systematic manner and so becomes anecdotal. On the DLAF module, three sources of on-going feedback evidence were available and were captured: learning diaries, staff email and student email.

Reflective learning diaries

The use of diaries (both by staff and students) has been advocated as an aid for course evaluation (see, for example, Bradley, 1986). Diaries can provide a form of qualitative feedback which describes the feelings and experiences of course participants, and which helps to illuminate the complexity of the educational process (Parlett and Hamilton, 1977). Students on DLAF kept Reflective Learning Diaries that were intended as a record of the students' reflections on their learning experiences. In addition, the diary provided some evidence for an essay which students wrote at the end of the module about their 'changing understanding of accounting'.

The diaries were confidential between the students and their DLAF tutor, who collected the diaries three times during the module – not for assessment but to give the students formative feedback about the way they were using the diaries. An unintended consequence of the diaries was that they provided a valuable source of feedback about both individual students and about the module. For example, it emerged that students were struggling to write in a reflective manner (a skill regarded as central to the module). Students also commented on their motivation levels, and about subjects they were finding difficult/easy/enjoyable/dull. Though the confidential nature of the diaries limited the actions that could be taken, they nevertheless provided productive feedback to staff, and gave insights into the student experience at a very personal level.

Staff e-mail

Feedback from staff, it could be argued, is a further source of module evaluation. Staff members often pick up student feedback as it arises in class meetings and elsewhere. In this module the staff team (six tutors) used e-mail extensively for contact with each other about the module and their experiences. E-mail was also used to gather more reflective thoughts about positive and negative aspects of the shared staff/student experience.

The module leader gathered all of the e-mails, and after eliminating trivial messages (e.g. 'OK, that's fine') the file was left with 89 messages, spread over the period between 28 September and 2 June. The contents of each message were analysed and classified by type of message. This analysis (see Appendix 2) represents a rich source of comment and narrative about the experiences and reflections of staff engaged with the module as they were experiencing it. A picture of the concerns of the staff (at least those deemed significant enough to send them dashing to the key-board!) emerges. The value of this feedback (over other methods) is that it is recorded in real-time. Many feedback methods suffer from memory lapse and/or the 'rose coloured spectacles' syndrome.
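As an illustration of the kind of content analysis involved, the sketch below filters out trivial messages and tallies the rest by type. The category labels, example messages, and the word-count test for 'trivial' are all hypothetical simplifications; in the study the coding was done by the module leader reading each message.

```python
# A rough sketch of the e-mail content analysis, in the spirit of Appendix 2.
# Categories are assumed to be assigned by hand as each message is read.
from collections import Counter

# (category, text) pairs; the examples are invented for illustration.
messages = [
    ("information to staff team", "Week 8 debate titles are now confirmed."),
    ("attendance concerns - lectures", "Only a handful of students at the 9am lecture again."),
    ("positive student experiences", "A really lively discussion in this morning's workshop."),
    ("trivial", "OK, that's fine"),
]

def is_trivial(text: str) -> bool:
    """Crude stand-in for the manual 'eliminate trivial messages' step."""
    return len(text.split()) < 5

kept = [(category, text) for category, text in messages if not is_trivial(text)]
tally = Counter(category for category, _ in kept)

for category, n in tally.most_common():
    print(f"{category}: {n} message(s)")
```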

Feedback from the e-mail messages helped to identify the emerging issues for the delivery of DLAF and to some extent informed the design of the closed-statement questionnaire used mid-way through the module.

Student e-mail

The students were encouraged to use e-mail for contact with each other and with their workshop tutor (workshops were fortnightly, thus e-mail allowed for more frequent contact if required). Tutors also used the e-mail to send messages to their workshop groups and to individual students.

One workshop tutor gathered the student/staff e-mails into one file (in all, some 30 messages, after eliminating several trivial messages). The messages were analysed and their content is shown in Appendix 3. However, they were quite restricted (largely to the tutor chasing up or admonishing students!) and so had limited use for feedback purposes. If a course or module were to use e-mail more actively, this could be a promising source of feedback evidence.

Overview of feedback sources

Having discussed in some detail the feedback evidence gathered, it may be helpful for the reader to have an overview. Tables 1 and 2 summarize the structured and on-going feedback sources that were used.

Implications for teachers

As illustrated by the tables, the wide range of methods and sources used in this evaluation of the DLAF module also allows some reflection upon the problems and nature of student feedback. The key lesson drawn from this work is that there is more to feedback than the administration of single instruments. While this appears to be obvious, the lesson does not always appear to inform approaches to feedback. Furthermore, it is found that constraining feedback to an individual module, while it may serve administrative and regulatory purposes, does not fit the student experience and stifles feedback that may allow for the development of the course as a whole. In a nutshell, the experience suggests the need for a more holistic approach, utilizing a range of sources to better understand the whole student experience across the year. Any single instrument, used in isolation, may at best raise further questions, and at worst give a misleading view of the student experience. A combination of instruments allows some of these further questions to be addressed and presents challenges to simplistic summaries of student experiences.

Having said that, gathering student feedback needs to be realistic, cost effective and manageable. Many factors will influence our choices about: when? how much? what methods? and so on. Hanging over these choices will also be a question about the costs of gathering the feedback, particularly in terms of staff time and effort. Following this research with feedback on the DLAF module, there has been an aim to offer help to course and module teams in managing (and making manageable!) the feedback process.

Table 1. Structured feedback methods used for DLAF module (1998/99), and main issues arising

Method: Student feedback questionnaire – 20 statements requiring agree/disagree (5-point Likert scale)
Approach: Hard copy completed by students in lecture – n = 61 (out of 110); 20 statements in all, covering: use of IT, learning diaries, attendance, use of learning resources, views about DLAF, and choice of course
Main feedback points: Mixed views concerning the use and understanding of IT; resistance to use of learning diaries; lecture attendance seen as less important than seminar attendance; low use of CBL and drop-in centre; weak agreement regarding the value of DLAF; some students unconvinced about their choice of course

Method: Student feedback questionnaire – open questions
Approach: Hard copy completed by students in lecture – n = 61 (out of 110); 4 open questions about (1) learning, (2) first year experiences, (3) DLAF, (4) IT experiences
Main feedback points: Varied views about learning, from mechanistic to reflective; many positive views about the first year generally, but a large minority of students raising concerns about a range of matters; very mixed messages concerning the value of the DLAF module – quite extreme responses; generally reassuring feedback re IT – some students critical of the delivery

Method: Student focus group (independent)
Approach: Approximately 30 students invited to attend one of three meetings – only 1 attended!!
Main feedback points: The main issue here is why almost no-one attended!

Method: Workshop tutor-led discussion (later workshops)
Approach: Used for follow-up evaluation – based on responses to initial questionnaire
Main feedback points: Shed more light on questions raised by initial questionnaire (see above)

The following suggestions are offered:

• Capture on-going feedback through brief notes, tutor's diaries, or e-mail. Informal feedback (particularly if collected early in the programme) can give more focus to later, structured, feedback.

• Aim to gather feedback at year level, rather than for each separate module. It is the year which normally represents the current student experience and many questions/issues may be common. This approach also saves on duplication of effort, and is likely to reduce student feedback fatigue.

• If the curriculum allows for it, channel feedback through one module in the year. In the authors' case, the Developing Learning module became the natural focus for feedback.

• Avoid the 'happy-sheet' approach with vague questions which leave one intrigued but no wiser. Go for feedback on specific aspects which appear problematic, are new and untried, or are just plain interesting to you.

• Avoid re-inventing wheels – there is a great deal of material developed in the student feedback area within the public domain (see, for example, Loughborough University, 1998; Partington, 1993).

• Give the students an opportunity to offer their own agenda in the feedback. This may be through open questions or focus groups.

• Do not feel obliged to take feedback on every module each year. A rolling programme may be more appropriate.

Table 2. On-going feedback methods used for DLAF module (1998/99), and main issues arising

Source: Students' Reflective Learning Diaries
Approach: Students wrote up diaries – staff collected and read them from time to time
Main feedback points: Confidentiality important – thus detailed feedback cannot be reported. However, diaries were generally under-utilized, and were an important (if disguised) source of feedback about which areas of the year students were finding OK/not OK

Source: E-mail correspondence – staff to students
Approach: File of emails between staff and students gathered – 30 messages captured
Main feedback points: Mainly dominated by messages from tutors admonishing students for non-attendance and chasing up work, and students apologizing for non-attendance

Source: E-mail correspondence – staff to staff
Approach: File of emails between the staff team (6 tutors) – 89 messages captured
Main feedback points: Messages show emerging issues: lots of positive 'feeling' about DLAF as a teaching experience; some student concerns reflected about the perceived 'irrelevance' of DLAF; attendance problems; learning diaries being under-utilized; poor take-up of workshops on IT; poor motivation/attitudes amongst a significant minority of students

• Closed-statement, tick-the-box questionnaires have a part to play, but evidence gained from them needs triangulating from other sources. In the words of the song, they tend to provide 'More questions than answers'. And if placed mid-way through the year they may be useful in focusing later feedback.

• Use routine lecture/seminar/workshop sessions for feedback (questionnaire filling, focus groups etc.). Attendance (and hence representation) is likely to be higher and the occasion can be used to signal to the students that feedback is a natural part of the educational process. There may even be opportunities, depending on the type of course, to blend the feedback gathering and reflection process into the curriculum (e.g. discussions about the quality of evidence).

• Ensure that plans for feedback are sustainable in the long-term. Gathering and evaluating meaningful student feedback can be costly. Are there sufficient resources to implement the feedback plans?

• Above all, it is necessary to view feedback as part of a wider evaluation process rather than a one-off exercise. This means planning and timing the feedback activities within the wider design/delivery/review process. A possible timetable could be:
  – first term: capture informal feedback (discussions, email, or notes made by staff), which could then inform
  – a mid-year questionnaire (closed and open questions), leading to
  – seminar-based focus group(s) towards the end of the year, debating and developing issues which have arisen
  – reflections by course team and changes to module/year curriculum
  – 'closing the loop' through providing feedback TO students about how the feedback has helped to develop the course or module.

Concluding remarks

Once the weaknesses inherent in isolated end-of-module feedback are acknowledged, it is necessary to develop a holistic and dynamic model that addresses the needs of staff as well as those of other stakeholders. Evaluation and accountability can be seen as an imposition, demanding time and effort from staff with little benefit to them directly. This may particularly be true where feedback is seen as ritualistic and forms part of a regulatory framework. Such an approach, though superficially attractive because of its simplicity, has no clear follow-on and offers little to educators seeking to develop and improve their courses. Nor does it deal with contradictory perspectives. By concentrating on the use of Likert scales and other forms of measurement, regulatory approaches seek to quantify levels of satisfaction. Averaging out the richness and variety of student experience may make for easy judgements but risks the loss of fuller understanding.

A developmental approach, on the other hand, recognizes the way in which feedback might form part of a continuous cycle of evidence-gathering, reflection, and change. Seeking out contradictions and diverse views, such an approach may expose complex issues and problems, but these are precisely the ones with which educators must grapple if the aim is to improve the student learning experience. At the same time, drawing upon a wide range of sources, the feedback offers the opportunity to reach decisions in the light of a more rounded and nuanced understanding.


References

Bradley, G. (1986) Using a diary to evaluate a course or programme. Journal of Further & Higher Education 10(3), 51–56.

Kerridge, J.R. and Matthews, B.P. (1998) Student rating of courses in HE: further challenges and opportunities. Assessment & Evaluation in Higher Education 23(1), 71–82.

Loughborough University (1998) Student feedback systems – teaching quality systems in business and management studies: the student interface – FDTL Project. Loughborough University Business School, UK.

Martin, E. and Ramsden, P. (1985) Learning skills or skills in learning? In J. Bowden (ed.) Student Learning: Research into Practice, pp. 155–167. Melbourne: Centre for the Study of Higher Education, University of Melbourne.

McKenzie, J., Sheely, S. and Trigwell, K. (1998) Drawing on experience: an holistic approach to student evaluation of courses. Assessment & Evaluation in Higher Education 23(2), 153–163.

McNiff, J. (1989) Action Research – Principles and Practices. Hampshire, UK: Macmillan Education.

Newton, J.D. (1988) Using student evaluation of teaching in administrative control: the validity problem. Journal of Accounting Education 6, 1–14.

Parlett, M. and Hamilton, D. (1977) Evaluation as illumination: a new approach to the study of innovatory programmes. In D. Hamilton, D. Jenkins, C. King, B. McDonald and M. Parlett (eds) Beyond the Numbers Game, pp. 6–22. London: Macmillan.

Partington, P. (ed.) (1993) Student Feedback – Context, Issues and Practice. Sheffield, UK: Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom.

Wallace, J.J. and Wallace, W.A. (1998) Why the costs of student evaluations have long since exceeded their value. Issues in Accounting Education 13(2), 443–447.

Zuber-Skerritt, O. (1992) Action Research in Higher Education – Examples and Reflections. London: Kogan Page.

Appendix 1: Developing Learning in Accounting and Finance module student feedback questionnaire – agree/disagree statements

1 I am confident in my use of the Word software.
2 I have worked through the Word workbook quite thoroughly.
3 I am confident in my ability to construct good spreadsheets.
4 I have worked through the Excel workbook thoroughly.
5 I believe that my use of the Internet is effective in locating material useful to my studies.
6 I am using the e-mail quite routinely with other students in connection with my course.
7 I am using the e-mail quite routinely with members of staff in connection with my course.
8 I use e-mail quite a lot for social purposes.
9 I look at my e-mail twice a week.
10 I write up my learning diary at least once a week.
11 I think that the learning diary should improve my progress by causing me to think more deeply.
12 The learning diary makes no difference to the way that I work on the course.
13 My attendance at all lectures on the first year has been over 90%.
14 My attendance at all seminars and workshops on the first year has been over 90%.
15 I have used the accounting computer software – (EQL) – at least once a week in most weeks so far.
16 I have used the accounting drop-in centre more than once for queries about my understanding of accounting.
17 I find that the ideas from DLAF help me when I am doing work or assessments on other modules.
18 I find working in the DLAF syndicate groups is helpful for my learning.
19 The DLAF module is relevant to my study of accounting and finance.
20 I am pleased that I chose the course I am on.

Appendix 2: Staff e-mails

Message type or issue raised (number of messages) – What the messages conveyed

1. Messages giving information to the rest of the staff team (9) – e.g. clarification of the teaching programme

2. Admin matters on the course, and suggestions for changes next time round (16) – Relatively minor notes, e.g. suggested changes to debate titles, and clarifying who was taking which lectures

3. Reflecting back student concerns or worries over the nature and relevance of the DLAF module, and its place on a first year A&F programme (5) – Not many, but some revealing perceptions of DLAF: e.g. (1) 'there was concern about the amount of "psychological stuff" in the first week' . . . and (33) 'some sad darlings just wanted to cancel everything else and concentrate on the number crunching' . . . and (87) 'some students had difficulty in relating what we were doing (in DLAF) to the rest of the course'

4. Reflecting back positive student experiences or feedback on DLAF (17) – A continuous stream of messages mainly reporting positive experiences in workshops

5. Attendance concerns – workshops (4) – Concern at poor attenders

6. Attendance concerns – lectures (12) – A running theme of the year: attendance at the (9am!) DLAF lecture plummeted as the year progressed

7. Problems/concerns about the CVs workshop, which did not go well (3) – During week 15 a special workshop related to CVs and employers' needs received scant attention from the students

8. The (poor) use of reflective learning diaries (8) – The RLD, though seen as a core learning vehicle by staff, was underdeveloped by the students

9. IT on the first year – delivery, content and problems with engaging the students (12) – DLAF included introductions to IT software . . . there was extremely poor take-up by the students of lectures and workshops

10. Assessment matters (13) – Including clarification of criteria and marking processes

11. Staff concerns over student motivation, attitude, performance or behaviour (15) – Staff views and anecdotes about poor motivation/attitude among a significant minority of students . . . e.g. the student who had done virtually no private study/reading by Feb. 9th, and who expected the first year 'to be a doddle' . . . a staff message threatening 'dire consequences' to students who failed to hand in self-assessment criteria . . . a student who stated in a workshop 'I've got better things to do with my time' . . .

12. Student feedback process and evaluation of the module (7) – Correspondence about the need to use closing workshops for feedback

Appendix 3: Student e-mails

Message type (number of messages) – Comments

Tutor admonishing students for poor attendance (11) – Of the 17 students attached to this tutor, 5 caused concerns at some stage re attendance . . .

Students apologizing for non-attendance (8) – . . . and some even replied and apologized!

Tutor chasing up students for work or information, or to arrange a meeting (5) – General housekeeping, or admonishing for work not given in

Correspondence between tutors and administrator re students with poor attendance (2) – To place on record particularly bad attendance patterns

Tutors letting students know what was going on (e.g. at the next session) or clarifying processes on the course (5) – Information about what was coming up

Guidance to students who may have been struggling (1) – One-off comment concerning struggling with a different subject

Tutor praising group for good attendance (1) – Only one such message!

Correspondence about possible student withdrawals or transfers (2) – Either with the students or the course administrator
