To cite this document: Chenicheri Sid Nair, Jinrui Li and Li Kun Cai (2015), “Academics’ feedback on the quality of appraisal evidence”, Quality Assurance in Education, Vol. 23 No. 3, pp. 279-294. Permanent link: http://dx.doi.org/10.1108/QAE-05-2014-0023
Received 27 May 2014; revised 11 September 2014; accepted 16 October 2014. © Emerald Group Publishing Limited. ISSN 0968-4883.



Academics’ feedback on the quality of appraisal evidence

Chenicheri Sid Nair
Centre for Advancement of Teaching and Learning, The University of Western Australia, Perth, Australia

Jinrui Li
Applied Linguistics, University of Waikato, Hamilton, New Zealand, and

Li Kun Cai
Foreign Language Department, North China University of Science and Technology, Tangshan, China

Abstract
Purpose – This paper aims to explore academics’ perspectives on the quality of appraisal evidence at a Chinese university.
Design/methodology/approach – An online survey with both closed items and open-ended questions was distributed to all academics at the university (n = 1,538). A total of 512 responded to the questionnaire. The closed items were initially analysed using EXCEL and SPSS; the open-ended questions were thematically analysed.
Findings – The academics believed that the quality of student surveys and peer observation of teaching was affected by subjectivity and a lack of understanding of appraisal. Academics also suggested that appraisals should be contextualised and the approach standardised. The study suggests the need for training that informs and engages relevant stakeholders to ensure the rigour of appraisal.
Originality/value – The study raises the issue of quality assurance regarding appraisal data from the perspective of academics. It is based on the collaborative effort of academics in Australia, China and New Zealand, with the support of the management staff at the case study university. The study informs both appraisers and academics of quality assurance issues in appraisal. It also contributes to the literature, in that it initiates dialogue between communities of practice through collective questioning of the quality and mechanisms of appraisal in tertiary education.

Keywords Training, Performance appraisal, Quality assessment, Student survey, Education, Questionnaire, Quality of evidence, Peer observation of teaching, Teacher appraisal

Paper type Research paper

Introduction
Staff appraisals are not only used as indicators for financial accountability, university ranking and quality assurance (Shin, 2011), but are also tied to personnel decisions, promotion and professional development (Alderman et al., 2012; Marsh, 2007; Minelli et al., 2007). This nexus of demands and purposes in tertiary education (Pinheiro, 2013) creates challenges in defining, collecting and interpreting performance evidence for appraisal.

Trigwell (2011) argues that it is a challenge to select reliable indicators for appraisals. Egginton (2010) goes further, arguing that it is not easy to make fair judgements about good performance in appraisals, as they often involve conflicting goals and values.


For example, although good teaching is often related to effective and meaningful teaching that results in learning (Casey et al., 1997), its interpretations vary within and across disciplines (Kreber, 2002) and in different contexts. Minelli et al. (2007) further define four elements that shape the organisational impact of appraisals: the idea of the assessment, the method of collection and analysis, the bodies that look after the evaluation process and the way such data are used in the institution.

Generally speaking, research publications and student appraisal of teaching are key evidential instruments used in university staff appraisal. However, compared with teaching, research achievement seems more important to academics’ career development (Taylor, 2007). In fact, academics often suffer from the tension between teaching and research, in that teaching speciality is often under-valued (Bexley et al., 2013). The same tension is felt among academics in universities in China (Du et al., 2010).

There is also a trend towards collecting evidence by means of qualitative approaches, such as student interviews and peer observation of teaching, to reflect staff achievement more holistically (Bennett and Nair, 2010). However, little is known about whether, and how, the quality of such evidence is ensured. A number of studies on course appraisal via student survey and/or peer review, and on academics’ feedback on the quality of appraisal evidence collected by student survey and class observation by colleagues and experts, have been carried out in many Western countries (Stein et al., 2013). Hence, the purpose of this study is to explore the effects and issues of appraisal in a top provincial-level university in north China, from the perspectives of Chinese academics. The particular focus of this article is academics’ comments on the quality of the three common indicators of performance: student surveys, peer observation of teaching and research.

This study is a collaborative research project carried out by a community of teacher-researchers and teacher development experts from Australia, China and New Zealand, with support from the management staff of the case study university. It contributes to the existing literature, in that it adds to the dialogue between communities of practice through collective questioning of the existing mechanisms of appraisal in tertiary education.

Student appraisal of teaching
One controversial form of evidence used in teaching appraisal is the end-of-course student survey and/or interview. Regarding the validity and reliability of the student survey, Chen and Watkins (2010), through analysis of the scores of 435 teachers over two semesters and survey data from 388 of those teachers in a university in China, found that the students’ appraisals were consistent and valid. This supports earlier research on the reliability and validity of student appraisals (Benton and Cashin, 2012; Marsh, 2007; Stein et al., 2013; Stowell et al., 2012). However, studies by Crumbley et al. (2001), Pounder (2007), Buchanan (2011) and Darwin (2012) challenge assumptions underpinning appraisals in higher education and question not only the ethics but also the validity of higher education institutions’ reliance on student surveys as measures of effective pedagogic practice.

Some of the reported issues surrounding the validity and reliability of students’ appraisal include teachers’ workload, their rapport with students, grading of students’ assignments (Boysen, 2008; Amin, 2002), teachers’ work priorities, the nature of courses and disciplines and students’ interest in the course (Rantanen, 2013).


Comparatively, grading also seems to be a very important factor that affects student appraisal (Balam and Shannon, 2010). Conflicting findings are also reported regarding the utility of student appraisal. For example, Bennett and Nair (2010) found that teachers believed that student surveys could help them improve teaching. However, according to Palermo (2013), no evidence was found of a causal effect between student survey feedback and overall improvement of teaching. Harvey (2011), on the other hand, endorsed student feedback as one of the most powerful tools in the improvement cycle within a higher education setting.

Adding to this discourse of student appraisal is the belief that student surveys were not based on any articulated philosophy of quality teaching that underpins appraisal and that student surveys cannot accommodate programme differences and the multiple dimensions of teaching (Lemos et al., 2011). In other words, student appraisal of teaching was insufficient to inform pedagogy (Darwin, 2012). Moreover, it is argued that students may not be able to make objective judgements about the teaching that takes place in a classroom. For example, Winchester and Winchester (2012) explored the effectiveness of student qualitative appraisal in a UK university by interviewing 7 out of 192 students (5 from the UK, 2 from China). Students were asked to provide formative feedback on teaching via the Moodle website on a weekly basis. The study found that students’ motivation to provide feedback gradually decreased, became routinised and tended to focus on negative aspects of teaching without critically analysing the positive aspects. However, a number of researchers argue that students are reluctant to provide feedback continuously if there is no response to their feedback (Nair, 2011; Powney and Hall, 1998; Leckey and Neill, 2001; Harvey, 2003). Furthermore, Harvey (2003) points out the importance of both the evidence of action taken in response to students’ feedback and the evidence of teaching improvement. To sum up, there are multilevel issues that demotivate students from providing constructive feedback.

To inform teaching with students’ feedback, Winchester and Winchester (2013) suggested that students’ evaluation of teaching should be carried out regularly as on-going feedback on teachers’ practices, with the caveat from Harvey (2011) that this should not be the only source of evidence relied on. Further, Elassy (2013) suggested that students also have a role in the process and should be trained if they are to be involved in the appraisal.

Peer observation of teaching
Class observation of teaching by colleagues and experts is another evidence-gathering approach used in appraisals. The quality of peer observation is influenced by the relationship between an observer and an observee (Bell, 2001). It is also influenced by the observers’ beliefs and intentions concerning teaching, and their awareness of the diversity of teaching in relation to themselves (Courneya et al., 2008). Moreover, the approach to collecting peer observation evidence is important if it is to serve as a quality enhancement tool (Lomas and Nicholls, 2005). Kohut et al. (2007) studied the usefulness of peer observation by means of a survey to which both tenured (143 respondents) and untenured (80 respondents) staff in an American university responded. The peer review process in the university included:

• pre-observation interviews on the context of teaching;
• peer observation;
• using checklists and narrative statements;
• video-tape of the class;


• self-analysis by the observees; and
• post-observation discussion between the observers and the observees to exchange opinions and negotiate the meaning of the outcome.

The effectiveness of peer observations was attributed to the shared expectation, participation and conversation between the observers and observees.

The quality of peer observation is also affected by the purpose of appraisal, that is, whether the appraisal is oriented towards personnel management and/or linked to professional development (Bartlett, 2000). One example is Byrne et al.’s (2010) study on perspectives, engagement, benefits and issues of peer observation. Data were collected by questionnaire and interviews with staff in a department of a UK university. A comparison was made between the appraisal-based and development-based peer observation approaches. The staff (n = 36) believed that the traditional box-ticking observation is just a routine operation for management purposes. In contrast, those who conducted development-oriented peer observation (n = 26) reported positive experiences of improvement to teaching and research. Chamberlain et al. (2011) explored the reasons for staff engagement with peer review that was for developmental rather than evaluative purposes. In this study, data were collected using a questionnaire survey of 84 staff, along with three focus groups (n = 16) across departments of a UK university. The study found that there was ambiguity and a lack of discussion on the purpose, role of stakeholders and utility of peer observation outcomes. There was a need for a connection between the peer observation outcome and follow-up support for professional development. Bell and Cooper (2013) reported a partnership approach to a staged peer observation of a teaching programme, participated in by 12 out of 20 staff in an engineering school of an Australian university. The programme had the following features: voluntary participation in different stages of the programme; discussion of and training in peer observation; participation of the head of school, not only as a leader, but also as a learning partner; facilitation and support provided by an external-to-faculty coordinator; and a partnership between junior and senior staff. The study reported that the peer observation approach could enhance teaching skills and knowledge, if it were carefully designed and supported to address the complexity of the process.

Research and teaching
Research and teaching are two crucial indicators of academics’ performance in tertiary education. The discussion of the teaching and research nexus has been ongoing for decades (Douglas, 2013; Taylor, 2008); yet, there is no clear definition of the quantity and quality of academic work with regard to both teaching and research (Soliman and Soliman, 1997). In addition, staff perspectives on the benefit of research to their teaching vary, depending on the level of students and subjects they are teaching or the weighting of research in appraisal (Taylor, 2007). A common perspective among academics is that time spent on research tends to result in more positive outcomes in promotion and payment than time spent on teaching (Douglas, 2013; Murphy and MacLaren, 2009). Teaching has been relegated to a secondary level due to the priority research has in attracting government funding (Lucas, 2006; Mayson and Schapper, 2012). According to Brew (2010), the integration of research in teaching is affected by both external factors, such as government funding of research, and internal factors, including staff and students’ perspectives and the nature of the courses.


This tension between research and teaching has gained some recognition, as universities place greater emphasis on the research domain for rankings, public funding and institutional prestige (The Guardian, 2012).

Quality of appraisal evidence
Guba and Lincoln (1989) suggested that appraisal should do three things: take qualitative approaches, engage participants and negotiate meaning among all stakeholders. Concurring with this suggestion, Smith (2008) proposed a five-phase model of appraisal. This model suggests collecting evidence from a multitude of sources (including self-reflection, peer appraisal, students’ experience and outcomes of learning), providing opportunities and guidance to interpret the appraisal outcome and enhancing the engagement with and application of the appraisal data for further development.

Although the multi-source approach to appraisal has been gaining increasing attention, little is known about how the quality of these sources is controlled and assessed. In studies on staff appraisal in China, quality issues have been reported especially with regard to the ineffectiveness of collecting and interpreting data (Chen and Yeager, 2011). In addition, Zou et al. (2012) found, through document analysis of self-appraisal reports from 53 universities, that the quality of tertiary education is interpreted institutionally as organisational, rather than educational, quality. Moreover, although the outcomes of appraisals should be used for improvement, Nair and Bennett (2011) report that evaluation and improvement are often intertwined and that informing all concerned of the feedback outcomes is not systematically implemented in many appraisal systems. To ensure the quality of appraisal, Freeman and Dobbins (2013) suggest that there is a need to engage all stakeholders, such as teachers and students, in an ongoing dialogue about appraisal.

In summary, the existing studies have found that while both student surveys and peer observation of class teaching are useful evidence, the quality of these forms of evidence may be affected by issues such as student and peer bias. Therefore, there is a need to investigate the quality of evidence collection from the academics’ perspective.

This study
This study is a case study of staff appraisal at a university in China. The overarching research purpose is to identify issues in the current appraisal systems in China and to promote dialogue regarding these issues within the higher education community. Specifically, this paper aims to explore academics’ opinions on the quality of the appraisal evidence collected.

The questionnaire was designed with the following considerations. First, the items were developed taking into account the relevant research literature on staff appraisal. Second, the authors of this paper have had significant experience in researching elements of staff appraisal and the design of questionnaires (Nair and Bennett, 2011). Once the items in the relevant domains of the questionnaire were developed, the questionnaire was translated into Chinese by a National Accreditation Authority for Translators and Interpreters Ltd (Australia) certified translator. Both the English and the Chinese versions of the questionnaire were then sent to the third author, who was a lecturer at the participating university where the data were collected. The third author provided contextual information and suggested improvements to the questionnaire regarding the approaches to staff appraisal in the Chinese setting. The questionnaire was revised and piloted among former colleagues in China to gauge the face validity of the items. Further improvements were made based on the piloting results.


To ensure that the translation captured the essence of the English version of the questionnaire, a back translation was also carried out.

The university in this study is a top university located in a province in northern China. It has about 1,500 teaching staff and 50,000 students in various disciplines, including engineering, medicine and the humanities. Data in this study were collected through the survey questionnaire, the Perceptions of Teaching Appraisal Questionnaire (PTAQ). The questionnaire was made available online to all staff in the case study university.

A total of 512 academics responded to the survey, representing a response rate of 30 per cent. The raw data were analysed using EXCEL, with the reliability and validity of the questionnaire analysed with SPSS. Open-ended items were thematically analysed following a grounded theory approach (Braun and Clarke, 2006). For the purpose of this article, both descriptive statistics and the open-ended items were used to present the overall findings.
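The descriptive statistics reported later (Tables III and IV) are simple percentage-agreement figures. For readers who want to see how such a summary can be produced, the sketch below shows one possible approach; the file name, column names and the 1-5 coding (1 = strongly agree, 5 = strongly disagree) are illustrative assumptions rather than details of the actual instrument.

```python
import pandas as pd

# Hypothetical export of the closed PTAQ items, coded 1 = strongly agree ... 5 = strongly disagree.
responses = pd.read_csv("ptaq_closed_items.csv")

# "% agreement" as reported in Tables III and IV: strongly agree + agree, i.e. codes 1 and 2.
likert_items = [col for col in responses.columns if col.startswith("Q")]
percent_agreement = (responses[likert_items] <= 2).mean() * 100

print(percent_agreement.round(1).sort_values(ascending=False))
```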

The questionnaire
The PTAQ consists of 49 closed items organised in two parts, plus a third part containing four open-ended questions. Part one of the questionnaire collected bio-demographic information essential for this study. Part two measured the various aspects of the appraisal system. Table I outlines the domains and sub-domains that the questionnaire measured.

The Cronbach alpha reliability of the items in the PTAQ, using the individual participant as the unit of analysis, ranged from 0.87 to 0.92. Using a maximum likelihood factor analysis, six sub-domains and two distinctive domains were identified in the questionnaire.
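The paper reports the alpha values but not the computation itself. As a reference point, Cronbach’s alpha for a block of k items is k/(k-1) x (1 - sum of item variances / variance of the total score). The sketch below is an illustrative implementation of that formula in Python; the item data, column names and the absence of missing responses are assumptions, not details taken from the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items (rows = respondents).

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Demonstration with made-up responses for a hypothetical three-item sub-domain;
# real survey data would be loaded instead of generated.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 6, size=(100, 3)), columns=["q1", "q2", "q3"])
print(round(cronbach_alpha(demo), 2))
```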

The appraisal evidence
As shown in Table II, the majority of the respondents were junior academics (lecturers/assistant professors, 57.5 per cent). Senior academics (associate professors) made up the second largest group of respondents, constituting 30 per cent.

In terms of frequency of appraisals, there was some variation across the university. Just over half of the respondents had an annual appraisal, while the remainder (about 48 per cent) reported that they had been appraised once every semester. The data showed that the appraisal system comprised mainly student surveys and peer observation of teaching. Other components of appraisal included research publications, self-appraisal and mentors’ reports on new teachers’ performance. The appraisers were internal subject experts, peers/fellow teachers, management staff and deans. Sometimes, external experts were also included as appraisers.

Student surveys at this university took three forms: online, paper-based or in the form of student interviews (Figure 1). The primary form of feedback was the online mode. Student surveys are a compulsory component of the appraisal system. The teaching appraisal system in the university was established in 1993. It was re-examined and strengthened in 2004 according to the National Standard of Undergraduate Education Assessment.

Though the methodology varied with respect to the administration of the student surveys or collection of feedback, the majority of staff (77 per cent) agreed that student feedback on teaching was worthwhile. A small percentage (9 per cent) reported that they avoid giving a fail grade to students, because it might influence students’ ratings of their teaching.


Table I. Structure of the PTAQ

Domain: Appraisal systems (3 items)
Description: Measures the approaches to appraisal at the institution
Example item: The components of teaching appraisal included: classroom observation; students’ evaluation of my teaching by questionnaire; my self-evaluation; peer and colleagues’

Domain: Student surveys
Sub-domains: Review of feedback (2 items); Engagement with data (9 items); Detailing actions (6 items); Use of surveys (3 items)
Description: Measures the student survey mechanism and processes
Example item: After the evaluation, I will: provide students with a summary of their feedback; provide students with the actions I am proposing to take or have taken (Response scale: Always 1 2 3 4 5 Never)

Domain: Process of teacher appraisal (8 items)
Description: Measures the structure and process of the teacher appraisal used in the institution
Example item: I received oral feedback from the appraisers (Response scale: strongly agree, agree, neutral, disagree, strongly disagree)

Domain: Impact of teacher appraisal
Sub-domains: Purpose (9 items); Effects (3 items)
Description: Measures the impact of the appraisal on the teacher
Example item: The purpose of teaching appraisal was: finding out whether I could meet the standard of teaching (Response scale: strongly agree, agree, neutral, disagree, strongly disagree)


This goes against the empirical evidence in the early work of Marsh and Roche (1997) in a Western setting, which shows that student grades have an insignificant effect on students’ ratings of their teachers.

Over half (56 per cent) of the staff in this Chinese university clearly expressed a belief that the student survey on teaching was a tool for improvement. This finding is in line with research by Nair and Bennett (2011). In addition, over half (55 per cent) believed that it was used by management to monitor the quality of teaching, while less than half (47 per cent) believed that it was also used to learn about students’ learning experience.

Academics in general reported that they were well-informed about the appraisal system at their university, as well as being given the necessary help in formulating improvement plans for areas that needed improvement. However, only around 57 per cent believed that the approaches used in the appraisal system could objectively reflect their teaching performance. Although this sentiment was expressed, academics (62 per cent) generally thought the appraisers who conducted their appraisal reviews were trustworthy; however, about half of the staff reported that they received oral and/or written feedback from the appraisers. These findings on staff perceptions of the appraisal system are summarised in Table III.

Table IV outlines the staff perceptions of the impact of such teacher appraisals. The results suggest that the system as such is useful and helps in improving their teaching. In addition, there was a clear understanding that the data from the appraisals are utilised as a management tool. An interesting outcome from the survey was that staff perceived that the data from such appraisals had little bearing on the promotion exercise.

The data in Table IV also showed that comments from the student surveys were perceived by staff as a positive influence in making them more effective teachers. Less than 20 per cent perceived that negative comments had a detrimental effect on their teaching, and the majority of teachers indicated that such comments did not influence their grading practice.

Table II. Respondent makeup

Appointment (%)
Professors: 12.7
Associate professors: 29.8
Lecturers: 44.6
Assistant professors/teachers: 12.9

Figure 1. Survey or feedback mode (per cent of respondents): Online or Paper 21.7; Online Only 37.4; Paper Only 17.1; Student Interviews 23.8.


Subjectivity affected the quality of student survey and class visitation
Of the 361 academics who responded to the question on whether there were any factors that affected appraisal, 46 per cent listed various factors. The main factor they outlined related to the subjectivity of appraisers, especially student appraisers, whose judgement was influenced by their attitudes and understanding of appraisal, their achievement in the class and the teacher–student relationship. For example, one academic listed the following factors:

Student appraisal of teaching is not fair enough. Some students, especially those who got a fail grade, take revenge of the teachers [using students’ appraisal of teaching]. Some teachers get a high appraisal score by unethical approaches. Students have higher expectations towards teachers who teach specialised courses than those who teach commonly required courses. Colleagues who had class visitation did not provide real opinions due to a kind of adherence to formality (Respondent No. 4).

Teachers further elaborated on subjectivity or appraisal biases, such as appraisers and appraisees coming from different subject areas, appraisers’ attitudes, emotions, their preferred teaching style, the relationship between appraisers and appraisees and a lack of knowledge of appraisal. For example:

Sometimes the result of appraisal was not transparent. Emphasis was given on appraisal itself rather than providing feedback and solution[s] (Respondent No. 22).

Table III. Perceptions of appraisal system

Measurement (% agreement: strongly agree + agree)
Clear about criteria used in the university: 70.7
Informed of the appraisal process for my teaching: 68.9
Approaches used objectively reflect my teaching performance: 57.2
The appraisers are trustworthy: 61.7
Received oral feedback from appraisers: 60.4
Received written feedback from appraisers: 57.4
Given help in formulating improvement plans: 76.8
Appraisers were trustworthy: 62.0

Table IV. Impact of appraisal on the teacher

Measurement (% agreement: strongly agree + agree)
Met the standard of teaching: 80.9
Informed teacher of strengths and weaknesses: 80.1
Decides on teachers’ promotion: 31.8
For improving teaching: 80.5
Part of professional development: 66.9
Used as management tool: 74.4
Used for reporting to internal and external bodies: 44.1
Student positive comments made me a more confident teacher: 87.9
Student negative comments helped me improve: 75.9
Student negative comments discouraged me from teaching: 18.8
Fear of failing students affects appraisal: 9.2


In addition, the teachers pointed out that the general appraisal standard did not address disciplinary differences and placed too much emphasis on research, and that the appraisal was only a routine process. These factors are further elaborated in the answers to the following questions.

Research was over-weighted in appraisal
Three hundred and seventy-nine participants responded to the question about the importance of research in the institution’s appraisal system. The majority of participants (260) highlighted the importance of research over teaching, describing research as the key reference for employment, promotion and rewards. For example, one academic expressed this idea as follows:

[The research is] so important that all efforts of teaching are neglected. It makes one feel that research projects, research achievements, and research publications can almost take over everything. Other work, no matter how much or how well one has done, is regarded as useless if without research (Respondent No. 462).

Some of the respondents also provided what they thought should be the weighting for research in the appraisal system. This weighting ranged from 40 per cent to 100 per cent.

It seemed that the academics were in a dilemma about meeting the research requirement due to a heavy teaching workload. Some reported that younger academics were promoted faster than the senior ones because of research achievements:

Research is very important. It is the decisive factor in promotion. Some senior academics, who are very good at teaching and have heavy workloads, cannot be promoted to a higher post because they do not have enough time to do research. In contrast, some young academics, who cannot teach well and have less of a workload, have no problem to be promoted because they have enough time to do research. This has led to a phenomenon of respecting research and despising teaching (Respondent No. 387).

In contrast, some young academics expressed difficulty in doing research and being an effective teacher:

Research plays a very important role in appraisal. It is difficult to get a professional title without much research. However, the general research level, for a university like ours, is relatively low. Generally speaking, there are not many research projects, especially for young academics. Some young academics have to take on more classes in order to earn more money, which often influences their teaching effects (Respondent No. 134).

Contextualised but standardised approach to appraisal
Academics (271) provided feedback on what and how teacher appraisals could be improved at the university. Almost 36 per cent (n = 97) emphasised that teacher appraisal should focus on teaching rather than research. They argued that the large ratio of research to teaching in appraisal distracts their attention away from teaching towards research, which endangers the quality of teaching. In addition, a heavy teaching workload left very little time for research. The academics went on to articulate that there was a need to adjust or reduce the ratio of research in appraisal according to the nature of the work or position. This strong emphasis was clearly enunciated as follows:

For those who are working at teaching-focused institutes, the appraisal should refer more to the teaching ability and effects rather than the research achievements. Otherwise it would make young staff put too much effort into research; consequently, the overall teaching effects would drop down (Respondent No. 371).


Academics also expressed concern over other factors that affected the quality of appraisal, including the expertise of the appraisers and the process of data collection:

Clarify the standards of appraisal; appraisers should know the specialised area of teaching; provide detailed feedback; focus on comprehensive aspects; manage the student survey system carefully; do not make appraisal a formality; research evidence should be used to improve teaching rather than for the purpose of research itself (Respondent No. 94).

It seemed that the academics had a strong intention to improve their work. They clearly suggested a number of approaches that could serve this purpose: prompt feedback, training, exchange of opinions between appraisers and appraisees, modelling excellent teaching and guidance provided by mentors.

Discussion
The study found that the current appraisal employed in the Chinese institution used a multi-faceted approach based on student surveys, classroom visitation by administrators, experts and peers, and research. This approach is in line with the work of Smith (2008), who advocates a multi-phase model for appraisals. Further, academics generally agreed on the value of students’ comments and believed that the main purpose of appraisal was to improve teaching. However, relatively fewer academics believed in the trustworthiness of the appraisers, whose judgement may be affected by subjectivity and lack of expertise. The academics expressed the need for the appraisal system to be intertwined with opportunities for professional development.

The subjectivity of students in the appraisal process was a major concern of the teachers. Some teachers pointed out that students might provide biased responses to appraisal questions due to the grades they received from the teachers, their understanding of teaching and appraisal and their rapport with the teachers. These concerns concur with Amin’s (2002) and Boysen’s (2008) studies on factors that affected the reliability of students’ appraisal of teaching. However, a majority of the teachers believed that student feedback surveys were valuable in regard to teaching improvement, indicating that teachers might expect formative feedback from students rather than assessment.

Teachers also challenged the objectivity of other appraisers, such as experts, peers and administrators. Factors that affected objectivity were attitudes towards appraisal, background knowledge of teaching and speciality, and personal relationships. These factors are in alignment with findings by Bell (2001) and Courneya et al. (2008). The teachers believed that the issues outlined above meant that the appraisal could not objectively reflect their actual performance. It seemed that neither students nor peers/colleagues were engaged in a continuous dialogue on appraisal, as is suggested in the literature (Freeman and Dobbins, 2013). This observation suggests that there is a need for an in-depth discussion on appraisal among all stakeholders to clarify the purposes, approaches, process and utility of appraisal and assessment in tertiary education.

Another major issue was the ratio of research to teaching in teacher appraisal. This issue has been relatively less explored in other studies on teacher appraisal (Murphy and MacLaren, 2009), although anecdotal evidence in Australian higher education suggests that such concern is neither an isolated issue nor confined to Chinese universities. The research literature clearly documents this tension, where teaching is considered the second cousin to research (Lucas, 2006; Taylor, 2008; Mayson and Schapper, 2012; Murphy and MacLaren, 2009). According to the Chinese academics, this issue is related to the purpose of appraisal, which in many cases is not well-defined.


Chinese academics in general believed that the fundamental purpose of such appraisals should be teaching, as teaching is the fundamental purpose of higher education institutions in China. Generally, staff were broadly supportive of the idea that teaching and research are part and parcel of academic life, though it was apparent to the Chinese academics that the institution was weak in its management of this relationship.

Moreover, it seemed that more flexibility was expected to address contextual issues. For example, some academics pointed out that the general appraisal criteria were not able to address disciplinary differences. In addition, academics argued that the appraisal approach did not address the differences between new and experienced staff.

A factor that was strongly echoed by a number of teachers was that the appraisal was carried out as a routine process and failed to address practical issues such as staff’s needs for professional development and differing teaching workloads. This finding supports the suggestion made by Nygaard and Belluigi (2011) that a contextualised approach should be developed to address the complexity of appraisal. The finding on tying appraisals in with professional development concurs with those of Chen and Yeager (2011) and Nie and Xu (2006). As Chen and Yeager (2011) reported, academics wished to have prompt dialogical feedback that could help with further improvement, the ability to exchange opinions with experts and the provision of training opportunities.

Conclusion
This case study reveals three major issues that affect the quality of appraisal: appraisers’ expertise, the weight given to research and the pedagogical implications of the appraisal outcome. What the data suggest is that there is an urgent need for an institution-wide discussion on the basic concepts of appraisal, and on how to engage different stakeholders and connect appraisal with learning and professional development. Interestingly, the results of this first study of the appraisal system in a Chinese university reveal that training is needed not only for academics but also for students, to increase their understanding of and competence in providing appraisal evidence. Studies on students’ engagement with feedback surveys suggest that students themselves are deficient in understanding the importance of the feedback they are giving, as well as some of the terminology used in feedback questionnaires (Bennett and Nair, 2011; Weaver, 2006). In addition, the research also highlights the importance of ensuring that systematic training is provided for those who collect appraisal data.

By exploring academics’ perspectives on the quality of appraisal evidence, this study not only contributes to the understanding of appraisal in tertiary education in mainland China, but also identifies common issues surrounding appraisals, specifically the quality of evidence collection. One intended outcome of this study is to make the process and results of appraisal useful. Future studies are needed to explore how a robust system of quality assurance can contribute to more effective appraisal evidence collection and interpretation.

Moreover, this study suggests the need for training appraisers and reinforces the research of Elassy (2013) and Freeman and Dobbins (2013). However, there has been little discussion in the literature regarding how to systematically train staff as well as students in this process. The authors of this study argue that an important role of tertiary education is to help future graduates develop the knowledge, scholarship and new skills necessary for the workforce. One such new skill is giving appropriate feedback to help organisations in the workplace improve their services. Therefore, further research could explore how to integrate appraisal training into the curriculum.


To conclude, this study is indicative not only of the need to engage in a wider discourse of appraisal training in the higher education sector, but also of the need to recognise the important role teaching plays and the appropriate weighting that needs to be considered in the appraisal process.

References

Alderman, L., Towers, S. and Bannah, S. (2012), “Student feedback systems in higher education: a focused literature review and environmental scan”, Quality in Higher Education, Vol. 18 No. 3, pp. 261-280.

Amin, M.E. (2002), “Six factors of course and teaching evaluation in a bilingual university in Central Africa”, Assessment & Evaluation in Higher Education, Vol. 27 No. 3, pp. 281-291.

Balam, E.M. and Shannon, D.M. (2010), “Student ratings of college teaching: a comparison of faculty and their students”, Assessment & Evaluation in Higher Education, Vol. 35 No. 2, pp. 209-221.

Bartlett, S. (2000), “The development of teacher appraisal: a recent history”, British Journal of Educational Studies, Vol. 48, pp. 24-33.

Bell, M. (2001), “Supported reflective practice: a programme of peer observation and feedback for academic teaching development”, International Journal for Academic Development, Vol. 6 No. 1, pp. 29-39.

Bell, M. and Cooper, P. (2013), “Peer observation of teaching in university departments: a framework for implementation”, International Journal for Academic Development, Vol. 18 No. 1, pp. 60-73.

Bennett, L. and Nair, C.S. (2010), “A recipe for effective participation rates for web based surveys”, Assessment and Evaluation Journal, Vol. 35 No. 4, pp. 357-366.

Bennett, L. and Nair, C.S. (2011), “Demonstrating quality - feedback on feedback”, Proceedings of the Australian Universities Quality Forum, Demonstrating Quality, Australian Universities Quality Agency, Melbourne, pp. 26-31, available at: http://auqa.edu.au/qualityenhancement/publications/occasional/publications/

Benton, S.L. and Cashin, W.E. (2012), “Student ratings of teaching: a summary of research and literature”, IDEA Paper No. 50, The IDEA Center, Manhattan, KS, available at: www.theideacenter.org/category/helpful-resources/knowledge-base/idea-papers (accessed 9 September 2014).

Bexley, E., Arkoudis, S. and James, R. (2013), “The motivations, values and future plans of Australian academics”, Higher Education, Vol. 65 No. 3, pp. 385-400.

Boysen, G.A. (2008), “Revenge and student evaluations of teaching”, Teaching of Psychology, Vol. 35, pp. 218-222.

Braun, V. and Clarke, V. (2006), “Using thematic analysis in psychology”, Qualitative Research in Psychology, Vol. 3 No. 2, pp. 77-101.

Brew, A. (2010), “Imperatives and challenges in integrating teaching and research”, Higher Education Research and Development, Vol. 29 No. 2, pp. 139-150.

Buchanan, J. (2011), “Quality teaching: means for its enhancement?”, Australian Universities Review, Vol. 53 No. 1, pp. 66-72.

Byrne, J., Brown, H. and Challen, D. (2010), “Peer development as an alternative to peer observation: a tool to enhance professional development”, International Journal for Academic Development, Vol. 15 No. 3, pp. 215-228.

Casey, R.J., Gentile, P. and Bigger, S. (1997), “Teaching appraisal in higher education: an Australian perspective”, Higher Education, Vol. 34, pp. 459-482.


Chamberlain, J.M., D’Artrey, M. and Rowe, D.A. (2011), “Peer observation of teaching: a decoupled process”, Active Learning in Higher Education, Vol. 12 No. 3, pp. 189-201.

Chen, G.H. and Watkins, D. (2010), “Stability and correlates of student evaluations of teaching at a Chinese university”, Assessment & Evaluation in Higher Education, Vol. 36 No. 6, pp. 675-685.

Chen, Q.Y. and Yeager, J. (2011), “Comparative study of faculty evaluation of teaching practice between Chinese and US institutions of higher education”, Frontiers of Education in China, Vol. 6 No. 2, pp. 200-226.

Courneya, C.A., Pratt, D.D. and Collins, J. (2008), “Through what perspective do we judge the teaching of peers?”, Teaching and Teacher Education, Vol. 24, pp. 69-79.

Crumbley, L., Henry, B.K. and Kratchman, H. (2001), “Students’ perceptions of the evaluation of college teaching”, Quality Assurance in Education, Vol. 9 No. 4, pp. 197-207.

Darwin, S. (2012), “Moving beyond face value: re-envisioning higher education evaluation as a generator of professional knowledge”, Assessment & Evaluation in Higher Education, Vol. 37 No. 6, pp. 733-745.

Douglas, A.S. (2013), “Advice from the professors in a university Social Sciences department on the teaching-research nexus”, Teaching in Higher Education, Vol. 18 No. 4, pp. 377-388.

Du, P., Lai, M.H. and Lo, L.N.K. (2010), “Analysis of job satisfaction of university professors from nine Chinese universities”, Frontiers of Education in China, Vol. 5 No. 3, pp. 430-449.

Egginton, B.E. (2010), “Introduction of formal performance appraisal of academic staff: the management challenges associated with effective implementation”, Educational Management Administration & Leadership, Vol. 38 No. 1, pp. 119-133.

Elassy, N. (2013), “A model of student involvement in the quality assurance system at institutional level”, Quality Assurance in Education, Vol. 21 No. 2, pp. 162-198.

Freeman, R. and Dobbins, K. (2013), “Are we serious about enhancing courses? Using the principles of assessment for learning to enhance course evaluation”, Assessment & Evaluation in Higher Education, Vol. 38 No. 2, pp. 142-151.

Guba, E. and Lincoln, Y. (1989), Fourth Generation Evaluation, Sage, Newbury Park, CA.

Powney, J. and Hall, S. (1998), Closing the Loop: The Impact of Student Feedback on Students’ Subsequent Learning, Scottish Council for Research in Education, Edinburgh.

Harvey, L. (2003), “Student feedback”, Quality in Higher Education, Vol. 9 No. 1, pp. 3-20.

Harvey, L. (2011), “The nexus of feedback and improvement”, in Nair, C.S. and Mertova, P. (Eds), Student Feedback: The Cornerstone to an Effective Quality Assurance System in Higher Education, Woodhead Publishing, Oxford.

Kohut, G.F., Burnap, C. and Yon, M.G. (2007), “Peer observation of teaching: perceptions of the observer and the observed”, College Teaching, Vol. 55 No. 1, pp. 19-25.

Kreber, C. (2002), “Controversy and consensus on the scholarship of teaching”, Studies in Higher Education, Vol. 27 No. 2, pp. 151-167.

Leckey, J. and Neill, N. (2001), “Quantifying quality: the importance of student feedback”, Quality in Higher Education, Vol. 7 No. 1, pp. 19-32.

Lemos, M.S., Queirós, C., Teixeira, P.M. and Menezes, I. (2011), “Development and validation of a theoretically based, multidimensional questionnaire of students’ evaluation of university teaching”, Assessment & Evaluation in Higher Education, Vol. 36 No. 7, pp. 843-864.

Lomas, L. and Nicholls, G. (2005), “Enhancing teaching quality through peer review of teaching”, Quality in Higher Education, Vol. 11 No. 2, pp. 137-149.

Lucas, L. (2006), The Research Game in Academic Life, Open University Press/SRHE, Maidenhead.


Marsh, H.W. (2007), “Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases and usefulness”, in Perry, R.P. and Smart, J.C. (Eds), The Scholarship of Teaching and Learning in Higher Education: An Evidence-based Perspective, Springer, Dordrecht, pp. 319-383.

Marsh, H.W. and Roche, L.A. (1997), “Making students’ evaluations of teaching effectiveness effective”, American Psychologist, Vol. 52, pp. 1187-1197.

Mayson, S. and Schapper, J. (2012), “Constructing teaching and research relations from the top: an analysis of senior manager discourses on research-led teaching”, Higher Education, Vol. 64, pp. 473-487.

Minelli, E., Rebora, G., Turri, M. and Huisman, J. (2007), “The impact of research and teaching evaluation in universities: comparing an Italian and a Dutch case”, Quality in Higher Education, Vol. 12 No. 2, pp. 109-124.

Murphy, T. and MacLaren, I. (2009), “Teaching portfolios and the quality enhancement project in higher education”, Educational Futures, Vol. 2 No. 1, pp. 71-84.

Nair, C.S. (2011), “Students’ feedback: an imperative to enhance quality in engineering education”, International Journal of Quality Assurance in Engineering and Technology Education, Vol. 1 No. 1, pp. 58-66.

Nair, C.S. and Bennett, L. (2011), “Using student satisfaction data to start conversations about continuous improvement”, Quality Approaches in Higher Education, Vol. 2 No. 1, pp. 17-22.

Nie, D.M. and Xu, J.S. (2006), “The contradictions & countermeasures of the teaching quality evaluation of college teachers”, Journal of Educational Science of Hunan Normal University, Vol. 5 No. 3, pp. 48-51.

Nygaard, C. and Belluigi, D.Z. (2011), “A proposed methodology for contextualised evaluation in higher education”, Assessment & Evaluation in Higher Education, Vol. 36 No. 6, pp. 657-671.

Palermo, J. (2013), “Linking student evaluations to institutional goals: a change story”, Assessment & Evaluation in Higher Education, Vol. 38 No. 2, pp. 211-223.

Pinheiro, R. (2013), “Bridging the local with the global: building a new university on the fringes of Europe”, Tertiary Education and Management, Vol. 19 No. 2, pp. 144-160.

Pounder, J.S. (2007), “Is student evaluation of teaching worthwhile?”, Quality Assurance in Education, Vol. 15 No. 2, pp. 178-191.

Rantanen, P. (2013), “The number of feedbacks needed for reliable evaluation: a multilevel analysis of the reliability, stability and generalisability of students’ appraisal of teaching”, Assessment & Evaluation in Higher Education, Vol. 38 No. 2, pp. 224-239.

Shin, J.C. (2011), “Teaching and research nexuses across faculty career stage, ability and affiliated discipline in a South Korean research university”, Studies in Higher Education, Vol. 36 No. 4, pp. 485-503.

Smith, C. (2008), “Building effectiveness in teaching through targeted evaluation and response: connecting evaluation to teaching improvement in higher education”, Assessment & Evaluation in Higher Education, Vol. 33 No. 5, pp. 517-533.

Soliman, I. and Soliman, H. (1997), “Academic workload and quality”, Assessment & Evaluation in Higher Education, Vol. 22 No. 2, pp. 135-157.

Stein, S.J., Spiller, D., Terry, S., Harris, T., Deaker, L. and Kennedy, J. (2013), “Tertiary teachers and student evaluations: never the twain shall meet?”, Assessment & Evaluation in Higher Education, Vol. 38 No. 7, pp. 892-904.

Stowell, J.R., Addison, W.E. and Smith, J.L. (2012), “Comparison of online and classroom-based student evaluations of instruction”, Assessment & Evaluation in Higher Education, Vol. 37 No. 4, pp. 465-473.


Taylor, J. (2007), “The teaching-research nexus: a model for institutional management”, Higher Education, Vol. 54 No. 6, pp. 867-884.

Taylor, J. (2008), “The teaching-research nexus and the importance of context: a comparative study of England and Sweden”, Compare: A Journal of Comparative and International Education, Vol. 38 No. 1, pp. 53-69.

The Guardian (2012), “British universities are in need of a teaching revolution”, available at: www.theguardian.com/higher-education-network/blog/2012/feb/15/uk-universities-teaching-revolution (accessed 13 May 2014).

Trigwell, K. (2011), “Measuring teaching performance”, in Shin, J.C., Toutkoushian, R.K. and Teichler, U. (Eds), University Rankings: Theoretical Basis, Methodology and Impacts on Global Higher Education, Springer, Dordrecht, pp. 165-184.

Weaver, M.R. (2006), “Do students value feedback? Student perceptions of tutors’ written responses”, Assessment and Evaluation in Higher Education, Vol. 31 No. 3, pp. 379-394.

Winchester, M.K. and Winchester, T.M. (2012), “If you build it will they come? Exploring the student perspective of weekly student evaluations of teaching”, Assessment & Evaluation in Higher Education, Vol. 37 No. 6, pp. 671-682.

Winchester, M.K. and Winchester, T.M. (2013), “A longitudinal investigation of the impact of faculty reflective practices on students’ evaluations of teaching”, British Journal of Educational Technology, Vol. 45 No. 1, pp. 112-124.

Zou, Y.H., Du, X.Y. and Rasmussen, P. (2012), “Quality of higher education: organisational or educational? A content analysis of Chinese university self-evaluation reports”, Quality in Higher Education, Vol. 18 No. 2, pp. 169-184.

Further reading

Newton, J. (2002), “Views from below: academics coping with quality”, Quality in Higher Education, Vol. 8 No. 1, pp. 39-61.

About the authors
Chenicheri Sid Nair is a Professor of Higher Education Development at the Centre for the Advancement of Teaching and Learning (CATL). His current role looks at the quality of teaching and learning at the University of Western Australia (UWA). Dr Nair is a Chemical Engineer by training, but his interest in helping students succeed in the applied sciences in higher education led him to further specialise in Science and Technology education. This led him to his many works in improving student life in the higher education system. His research work lies in the areas of quality in the higher education system, classroom and school environments, and the implementation of improvements from stakeholder feedback. Chenicheri Sid Nair is the corresponding author and can be contacted at: [email protected]

Jinrui Li is currently a Research Assistant at the University of Waikato and is co-editing a book on learner autonomy in Asian countries. Previously, she worked as a Lecturer in universities both in China and New Zealand. Dr Li has carried out research on peer feedback, tutors’ assessment feedback on writing and staff appraisal in universities. Her research interests include different levels of assessment in tertiary education.

Li Kun Cai is a Lecturer in the Foreign Language Faculty of North China University of Science and Technology. She is interested in various aspects of English language education, feedback and assessment.
