

The changing nature of teaching and unit evaluations in Australian universities

Mahsood Shah
Office of the Deputy Vice Chancellor Academic, RMIT University, Melbourne, Australia, and

Chenicheri Sid Nair
Centre for Advancement of Teaching & Learning, University of Western Australia, Perth, Australia

Abstract

Purpose – Teaching and unit evaluation surveys are used to assess the quality of teaching and the quality of the unit of study. An analysis of teaching and unit evaluation survey practices in Australian universities suggests significant changes. One key change discussed in the paper is the shift from voluntary to mandatory use of surveys, with the results used to assess and reward academic staff performance. The change in direction is largely driven by the introduction of performance-based funding as part of quality assurance arrangements. The paper aims to outline current trends and changes and their future implications, such as increased scrutiny of teaching and intrusion into academic autonomy.

Design/methodology/approach – The paper is based on an analysis of current teaching and unit evaluation practices across the Australian university sector. It presents the case of an Australian university that has introduced performance-based reward using various measures to assess and reward academic staff, such as the outcomes of student satisfaction surveys. An analysis of external quality audit findings related to teacher and unit evaluations is also presented.

Findings – The findings suggest a shift from voluntary to mandatory tools to assess and reward quality teaching. The case of an Australian university outlined in the paper, and the approach taken by seven other universities, is largely driven by performance-based funding. One of the key concerns for many in higher education is the intrusion into academic autonomy, with increased focus on outcomes and less emphasis on the resources needed to produce excellence in learning, teaching and research. The increased reliance on student happiness as a measure of educational quality raises the question of whether high student satisfaction strengthens academic rigour and student attainment of learning outcomes and generic skills, which are seen as key factors in graduate exit standards.

Practical implications – The renewal of quality assurance and performance-based funding using student satisfaction as a measure of educational quality will result in increased use of the student voice to assess learning and teaching outcomes. Such a direction will increase the accountability of academics for improving the student experience, and the measures will be used to assess academic staff performance.

Originality/value – The paper outlines the trends and changes in teacher and unit evaluations in Australian universities and their future implications. It also provides the case of an Australian university that has recently made teacher and unit evaluations compulsory, with the results used in academic staff annual performance reviews and reward linked to performance outcomes.

Keywords Teaching and unit evaluations, Performance-based funding, Quality assurance, Performance appraisal, Teaching, Australia, Universities

Paper type Case study


Teaching and unit evaluations have been used in most Australian universities for more than two decades. The main aim of such evaluations is to monitor student perceptions of teaching and unit quality and to identify areas of good practice and areas needing improvement. In some institutions the results of such evaluations are used to review academic staff performance and to identify professional development needs. Recent government policies in Australian higher education related to quality assurance and performance-based funding, using teaching quality measures such as student satisfaction or experience, have resulted in the shift of voluntary teacher and unit evaluations to mandatory tools in some universities to assess and reward academic staff performance. This paper attempts to analyse the trend and direction of teaching and unit evaluations in some Australian universities, in particular the shift from voluntary teacher evaluations to mandatory evaluation used to assess and reward academic staff performance. The paper provides the case of an Australian university that has made teacher and unit evaluations mandatory, with the outcomes used in the academic staff performance development and review (PDR) process. The paper finally argues that government policy to improve teaching quality via performance-based funding may serve political needs rather than enhance the educational experience of students. Further, the paper sheds some light on how the renewal of quality in Australian higher education and the new performance-based funding may have impacted on academic autonomy through increased scrutiny of teaching quality.

Introduction

Traditionally, most universities have had two separate tools aimed at two separate purposes. First, unit evaluations are used at the end of each teaching period and are aimed at measuring student satisfaction with the unit of study. The results of this survey are distributed to faculties and schools for review and improvement of the unit of study. Second, voluntary teacher evaluations are used by academic staff to evaluate student satisfaction with their teaching. The results of this survey are confidential to the teacher and are not used in the annual PDR unless the teacher wishes to use them in the performance review and academic promotion process. The two surveys serve different purposes, with the teacher evaluation in particular lacking accountability in terms of improvement and enhancement of the student experience.

The emergence of quality assurance in Australian higher education; changes in government policy since 2004, with the introduction of performance-based funding such as the learning and teaching performance fund (Commonwealth of Australia, 2004); and internal reviews within universities have resulted in a number of changes. They include:

• Merger of two separate instruments into a single survey tool covering both teacher and unit evaluation.

• University-wide policy on the use of teaching and unit evaluation results, with accountability at various levels.

• A move from voluntary to mandatory evaluation, conducted at the end of each teaching period.

• Linking the findings of the survey to the annual academic staff PDR and academic promotions as one of the measures to assess and reward academic staff (Bennett and Nair, 2010).

• Rewarding academic staff in terms of teaching awards (Bennett and Nair, 2010).

• Increased accountability placed on associate deans and academic staff to improve teaching quality outcomes.

• Implementation of a university-wide survey and improvement framework with a focus on data collection, analysis, reporting and closing the loop.

• Use of both traditional paper and online survey methodology, with increased emphasis on the online methodology (Bennett and Nair, 2010).

• Consistent use of the survey at all teaching locations, including offshore, across various modes of learning, and with university pathway colleges.

The paper aims to encourage debate on the changing nature of teaching and unit evaluations in Australian universities and the implications of the new performance-based funding and quality assurance policies on teaching quality to be introduced post 2012. It is worth noting that the Australian government’s 2012 plan to use student experience measures such as the new University Experience Survey (UES) and the Australian Graduate Survey (AGS), linked to performance-based funding, is currently on hold. The initial plan was to use these measures in 2012 linked to performance funding; however, due to budgetary considerations the government has decided not to tie any form of funding to these measures. It is clear, though, that both sides of politics are interested in assessing university performance using agreed metrics in both teaching and research, linking outcomes with monetary reward, and making the results publicly available on the new “My University” web site.

Drivers of change

Post 2013 may witness significant changes and an increased focus on how teaching and unit evaluations and other student surveys are conducted, analysed and reported, and on accountability at various levels within the university to act on the findings of the surveys. One of the key drivers of change was the introduction by the government, between 2004 and 2008, of a quarter of a billion dollars of performance-based funding called the learning and teaching performance fund. The introduction of performance-based funding resulted in the creation of league tables which ranked universities on learning and teaching measures such as student retention, progression, student satisfaction and graduate outcomes. The renewal of quality following the Bradley review of Australian higher education (Commonwealth of Australia, 2008) and the government’s vision for higher education, articulated in Transforming Australia’s Higher Education System (Commonwealth of Australia, 2009), will result in further changes. Some of the key changes related to student feedback will include:

• The establishment of the “My University” web site, which will provide public access to information on each university across various performance measures.

• Introduction of the new UES to measure first- and final-year student engagement and experience.

• Refinement of the current AGS, previously known as the Course Experience Questionnaire (CEQ) and Graduate Destination Survey (GDS), and a review of the standing of these surveys in the framework of national surveys.

• Changes in survey methodology and coding practices to ensure transparency and confidence for wider stakeholders, including universities.

• Use of student experience data in performance-based funding, possibly in the near future, with the results provided to the general public via the “My University” web site.

The development of the “My University” web site using various measures may result in the creation of a home-grown ranking of Australian universities. Between 2004 and 2008, both the government and the media ranked universities according to the amount of funding allocated. Some universities also used the ranking information in marketing and advertising materials. While universities at the top of the league table enjoyed the prestige and eliteness, questions were raised by the public about universities at the bottom. Shah and Nair (2011a) suggest that ranking in Australian higher education may have an impact on stakeholders’ views about individual universities. For example, students’, parents’ and employers’ perceptions of the universities at the bottom of the league table may be that they have low-quality academic staff, teaching resources, infrastructure and learning environments.

Performance-based funding and new accountability

Traditionally, university funding of student places through government-subsidised loan schemes in Australia was based on student enrolments. The emergence of the quality agenda in higher education has resulted in a renewed approach to institutional funding. In Australia, the government’s political interest and promise to revolutionise education resulted in the introduction of a demand-driven funding model, which means funding follows the student places actually taken up. Apart from the funding of student places, the government introduced performance-based funding in 2012 to assess and reward universities using various measures related to access and participation of disadvantaged students and student attainment of generic skills and learning outcomes. Though student satisfaction is not currently included in the performance-based funding formulae, indications are that such inclusion could take place in the near future. To achieve its higher education aspirations, the government established targets based on individual negotiations with public universities using a new mission-based compacts funding arrangement. Around the world there has been a shift in the public funding of universities with the introduction of performance-based funding: for example, the Research Assessment Exercise (RAE) in the UK, the use of research measures in the USA, and schemes in Norway, Sweden and Denmark (Sorlin, 2007) and Germany (Orr et al., 2007). Governments are holding universities accountable for social and economic development, with a shift from a public funding regime with limited government monitoring and scrutiny of how and where funds were used, to trust-based performance funding with an increased focus on accountability, transparency, monitoring, and reward or sanctions based on quality audits and performance assessment. The focus now is on the impact or value-add of the funding on the social and economic development of the country, and on using the knowledge-based economy to solve global and national issues.

The linking of millions of dollars in performance-based funding to reward universities will undoubtedly result in unprecedented changes within the Australian higher education sector. Past experience in Australia suggests that the government’s attempt to use performance-based funding between 2004 and 2008 was controversial, with a lack of trust within the university sector in the way performance was assessed and rewarded. Shah et al. (2011) suggest that the performance-based funding between 2004 and 2008 mostly favoured elite universities that have high student entry requirements and well-funded learning and teaching infrastructure and support mechanisms, with high student retention, progression and student experience. They argue that institutions with the highest numbers of disadvantaged students from underrepresented backgrounds were significantly disadvantaged in the controversial funding. One prominent Vice Chancellor, whose university has one of the highest proportions of disadvantaged students, stated that she did not believe that the teaching and learning performance fund genuinely measured “value add”. This university, with one of the nation’s most economically disadvantaged and culturally diverse student populations, gained nothing from the fund in 2005 (Armitage, 2006). Harris and James (2010) suggest that concerns with performance-based funding, and the value issues associated with the merit of such a scheme, are a long way from being resolved, the funding scheme remaining a contentious policy within the Australian higher education sector. They suggest that the influence of performance-based funding on improvements in undergraduate education is far from clear, despite the attention which has been drawn to the quality of teaching for the best part of a decade or more, with a lack of evidence of improvement.

That the benefits of performance-based funding accrued mostly to elite universities raises the question of the extent to which government policies enable institutional diversity, commitment to mission, and equity in funding and rewarding institutions. The Australian experience suggests that performance-based funding has discouraged diversity, student access and opportunity, and institutional commitment to providing education opportunities in their regions. Such policies have led to mistrust between policy makers and users of the quality agenda in higher education, particularly in relation to performance assessment and reward. According to Shah et al. (2011), performance-based funding and other government policies, such as external quality audits, in the last ten years have not contributed to the enhancement of student experience and student retention in Australian higher education.

A review of the literature on performance-based funding using teaching and research measures in the USA and UK suggests tensions between governments and universities. According to Watt et al. (2004), performance funding in South Carolina, USA, has increased tension between the universities and the Commission responsible for performance assessment and funding, and the once friendly and cooperative discussions between governments and universities have deteriorated. They suggest that performance funding does not reflect the unique missions of the universities to drive economic development and provide access to higher education. Findlow (2008, p. 322) suggests that the conflicting notions of quality and bureaucratic models of higher education accountability in the UK have been an impediment to successful innovation in higher education, with a lack of academic staff trust in quality. According to Harvey and Lee (1997, p. 2), the emergence of core journal lists which count most in journal rankings in the UK, and the assessment of academic staff research output and subsequent reward, pose a serious threat to “academic freedom and the diversity within the profession”. Various scholars have argued that universities are increasingly being held accountable for the quality of their performance, due to the influence of new public management and managerialism (Deem, 1998; Roberts, 2001) and, consequently, have to control and improve the quality of their output (Deem, 1998, 2001; Halsey, 1995; Pollitt and Bouckaert, 2000). The value of student feedback on teaching is also criticised by Crumbley and Reichelt (2009), who argue that students give low satisfaction ratings to academics who award low grades in assessments. The rigour of assessment marking thus has an impact on student evaluations of teaching, even where teachers are committed to using feedback to improve teaching quality.

The use of student survey results, and the linking of monetary reward to them between 2004 and 2008, has been criticised in the media by various scholars in Australia. Various researchers and critics (Coaldrake, 2005; Yorke, 1998) have argued the need for governments to undertake audits of the institutional practices used in student survey data collection, analysis and reporting, fearing shoddy practices arising from the large amount of funding linked to student survey results. Shah and Nair (2011a) argue that the use of student survey data and other measures in performance-based funding may also result in the diversion of institutional resources to areas which may attract funding, rather than ensuring academic rigour and enhancement in areas such as course development, approval, reviews, assessment moderation, use of external examiners, professional development of academics and quality of teaching. The use of performance-based funding to reward institutions using student survey results may also lead to the shift of voluntary teaching evaluations to mandatory evaluation used in performance development and review processes in order to assess and reward academic staff. Such an approach could result in internal university rankings of faculties, schools and courses, due to heavy reliance on quantitative measures such as student satisfaction.

Political imperatives or institutional improvement

The current landscape of higher education seems to suggest that universities are no longer viewed as ivory towers of intellectual pursuit and truthful thought, but rather as enterprises driven by political interest, with government policies driving changes and reforms (Powell and Owen-Smith, 1998, p. 267). There is an ongoing debate on the extent to which the introduction of quality and assessment of universities, and more generally of the management principles they represent, represents a negative or rather a positive change for higher education. Todnem By et al. (2008, p. 22) distinguish between the “right” and the “wrong” reasons for change. The right reasons involve initiatives that do not privilege any particular group of individuals (e.g. politicians, regulatory bodies or university managers) over others, and that are designed to assist “the further development of a society that is both competitive and just”. Wrong reasons, on the other hand, are behind changes that favour some of these groups and disregard what is best for the wider society. Previous research has shown that performance assessments can be understood both positively – as aimed at quality improvement – and negatively – as instrumental (Townley et al., 2003).

There are also voices arguing that higher education institutions must re-examine their long-standing privileges and agree to be held more accountable for what they do, as well as to have their outputs measured more objectively (Barry and Clark, 2001; Sizer and Cannon, 1999). This imperative is driven by the government and other stakeholders to ensure that universities play an active role in contributing to social and economic development in a global context and are able to solve global problems by transforming learners. A recent study undertaken by Trullen and Rodríguez (2011) suggests that quality assessments, when perceived by faculty as improvement-oriented, can generate positive outcomes that go beyond the assessment itself and reinforce the relationship of professionals with their organisations. The current trend, however, seems to be a shift from improvement-led quality assurance to a compliance regime with increased accountability to improve teaching and research outcomes.


The assessment of teaching and research performance linked to reward is not a new phenomenon in higher education in developed countries. For example, such an approach has been used in the UK, Sweden and The Netherlands. In Sweden, there are few sanctions for not achieving teaching and research standards, while in The Netherlands failure to publish internationally will lead to career stagnation and loss of research time. In the UK sanctions may be directly related to work conditions; staff on fixed-term contracts are especially vulnerable (Teelken, 2012). According to Blackmore (2009), standardised teaching evaluations encourage lecturers to focus on the narrow range of outcomes that are measurable, on style rather than substance, and to minimise discomfort by reducing contentious readings and watering down substance to produce “thin” pedagogies. Indeed, “the sign of an organisation with emotional and moral anorexia is one living on a diet of thin measurable outcomes that is slender spiritual nourishment” (Sinclair, 1995, p. 236); “the goal becomes not for self-improvement, but to improve your rating”, leading to a “fever of enhancement” (Sinclair, 1995, p. 318).

Methodology

This research used a qualitative research method in a case study context. The authors provide a case study of a small public university in Australia where teacher and unit evaluations were reviewed to align with the new PDR process, which is linked to new performance expectations of academic staff at all levels. The authors made contact with seven other Australian universities that have recently moved from voluntary to mandatory tools with the use of a single instrument. Telephone discussions took place with the relevant project managers who led the enhancement of teacher and unit evaluations. The authors prepared structured questions to collect feedback from the respondents.

Analysis undertaken by the authors as part of this paper suggests that in the last five years eight Australian universities have changed their approach to teaching and unit evaluations. The change is largely driven by government policy on performance-based funding using student satisfaction measures. The key changes in the eight universities include: merging two separate teacher and unit evaluation surveys into a single tool which is mandatory across the university for all units of study and all teachers involved in teaching; using the results of the survey in the annual PDR process, which was previously voluntary; and new accountability placed on associate deans (academic) and teaching staff to improve teaching quality and to improve staff and student engagement so as to optimise response rates.

In one Australian institution, the university has aligned the 13 core CEQ items with its internal unit and teacher surveys, and the results are used to assess academic staff performance. The university takes the view that there is a correlation between high student satisfaction on semester-based teacher and unit evaluations and the CEQ completed by graduates (which was used to assess and reward universities). However, there is a lack of evidence supporting this link. Analysis undertaken by the authors as part of this paper suggests that student agreement on the generic skills scale is higher in semester-based evaluations than among graduating students answering the same survey questions. In another university, the institution is using its strategic plan to drive change and improvement, and has set a minimum threshold for each faculty on all CEQ scales (good teaching, generic skills and the overall satisfaction item), along with other measures such as research output, for academic staff to qualify for reward and progression. The changes made in the eight universities are driven by the changing government policy on performance-based funding. In another Australian university, the Vice Chancellor issued an operational directive on the use of end-of-semester teacher and unit surveys. The directive places responsibility on the faculties to ensure that all units of study and teachers are evaluated and that the results are used in staff performance reviews. The operational directive states that:

Faculties will ensure that all subjects, including those offered offshore, have been surveyed via the student feedback survey, at least once a year and preferably in each major teaching period. As part of the probation process, staff will be required to undertake student evaluations of all the units in which they have major teaching duties in each teaching session, and to provide copies of such evaluation reports as required in the probation process (University X, 2009).

Case of an Australian university

The case study used in this paper is based on a small public university that has been experiencing changes and reforms as a result of a change in leadership. One of the most controversial changes implemented in the university is the new PDR process, which is aligned with the university’s strategic plan and its ten-year aspirations. The new PDR process outlines performance expectations of academic staff at various levels. The measures used in the performance expectations to assess and reward academic staff in the PDR process include: scores in the end-of-semester unit and teaching evaluation surveys; research publications; external research grants; and the supervision of higher degree research students.
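How such measures might be combined into a single appraisal score is not disclosed in the paper; the following minimal Python sketch, with invented weights and thresholds, merely illustrates the kind of composite scoring such a PDR process implies:

    def pdr_score(evaluation_mean, publications, grant_income, hdr_students):
        # Illustrative weights only: teaching 40, research output 30,
        # research income 20, supervision 10 (out of 100).
        teaching = min(evaluation_mean / 5.0, 1.0) * 40   # 5-point survey scale
        research = min(publications / 4, 1.0) * 30        # assumed 4 outputs expected
        income = min(grant_income / 50000, 1.0) * 20      # assumed $50k target
        supervision = min(hdr_students / 2, 1.0) * 10     # assumed 2 HDR students
        return teaching + research + income + supervision

    # Example: a staff member with a 4.1 survey mean, 3 publications,
    # $20,000 in grants and 1 HDR student scores 68.3 out of 100.
    print(pdr_score(4.1, 3, 20000, 1))

The point of the sketch is that once survey means enter such a formula, small shifts in student satisfaction translate directly into appraisal outcomes.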

Prior to 2008, the university did not have a systematic PDR process which linked academic staff performance with the strategic priorities of the university, nor was a PDR process used consistently across the university. While the university had student surveys, the evaluation of units and teachers was voluntary and the results were not linked to academic staff performance. The external quality audit of the university in 2003 recommended the development of a new performance management system for staff, and this was affirmed in the 2008 audit.

This university is one of the best examples of how performance-based funding using various measures, such as student satisfaction, is driving change. The case study demonstrates a typical example of how government politics intrudes on university and academic autonomy. In 2009, the university introduced its new enterprise agreement, which provides a 6 per cent salary increment to all academic and general staff. The enterprise agreement states that 4 per cent of the salary increment is unconditional for all staff, whereas the 2 per cent in 2011 and 2012 will be subject to the university securing government performance-based funding. The university’s new enterprise agreement clearly states the university’s position on how student feedback will play an important role in assessing and rewarding academic staff performance. The agreement states that:

. . . during the period of this agreement the university seeks in particular to improve student feedback on teaching and to increase research income, output and quality. It is acknowledged by the parties to this agreement that by the nominal expiry date of the agreement all academic staff, unless specifically exempted, will be actively engaged in high quality teaching and research, and seeking to attract income that will fund their research. The parties also acknowledge the historical antecedents of the university and the expectations that were in force when some staff entered the university or the xxxxx College. It is therefore understood that the period of this Agreement is one of transition to a new environment and that the nature of academic work in the university is changing with the consequence that at, or around, the nominal expiry date of this agreement some academic staff who are unable to meet the new expectations concerning teaching and research may be redundant due to the change in nature of the positions the university needs in its structure (University Y, 2009).

The university introduced the new teacher and unit evaluation survey in 2009, together with the new PDR process and academic staff performance expectations. Trend data from three teaching periods in 2009-2011, shown in Figure 1, suggest improvement on all three scales of the teaching and learning environment. However, the overall satisfaction item showed wide fluctuation during this period. The results suggest that the changes resulting from the new PDR, though modest, have been a driving force in improving the unit and teaching evaluations. However, staff at the university argue that this positive change is not necessarily due to the new PDR process, which is seen by many academics as a forced assessment and reward mechanism imposed on them, with a lack of focus on input factors such as staff professional development; support for early career academics in improving teaching and research; and resources and infrastructure in faculties to produce high quality outcomes. Previous research suggests that institutional actions or improvements made as a direct result of the student voice improve student satisfaction and student engagement with surveys (Nair et al., 2008). The 2008 external quality audit of the university made an explicit recommendation on the need for a holistic evaluation framework and effective reporting and improvement for all cohorts of students. Between 2008 and 2011, the university focused more on monitoring student satisfaction than on using the results to improve the student experience at university, course and unit level and the quality of support services.

Figure 1. Unit and teacher evaluation results

Limitations of the strategy deployed

The changes in the university, with academic staff accountability for improving teaching quality, are driven by the new performance-based funding and the university’s aspiration, outlined in its new strategic plan, to be in the top third nationally on educational measures including the CEQ. Individual staff PDR is linked to the key strategies of the university’s strategic plan. While the university has successfully implemented the new process, with evidence of improvement in teacher and unit evaluation results, there are issues and concerns about the new process. This section of the paper briefly discusses the limitations of the strategy deployed.

Validity of the instrument

The university uses the 13 core CEQ items from the national Australian Graduate Survey, which comprise two key scales and one item (good teaching, generic skills and the overall satisfaction item). While the AGS is a valid tool for measuring course-level experience, and has been used in Australian universities since the 1970s, it may not be a valid tool for measuring student experience at the level of an individual unit of study or teacher. For example, graduates are in a better position to assess the extent to which completing a three-year bachelor course has enabled the attainment of generic skills such as: the ability to work as a team member; sharpened analytic skills; developed problem-solving skills; improved written communication; confidence in tackling unfamiliar problems; and the ability to plan their own work.

Further, it is questionable whether the use of the 13 CEQ items is appropriate at the teacher and unit level in the first and second year of study. The attainment of generic skills is a staged process, and such measurement is more appropriate after the completion of the three-year course, with learning outcomes, course content, teaching methods, academic support and learning environment, peer networks, work placement and student assessment all contributing to the development of learning outcomes and generic skills. The strategy deployed by the university holds academic staff accountable for high student satisfaction on all three scales, on the view that students have attained the generic skills in the first semester of study, regardless of part-time or full-time enrolment. Patrick and Crebert (2003, 2004) suggest that graduates who have completed work experience as part of the course have a greater awareness of the value that generic skills provide for themselves and for their employers. Their study showed that 87.5 per cent of graduates who completed work experience as part of the course agreed that university undergraduate degrees provided sufficient opportunities to develop generic skills, compared with 51.2 per cent of graduates who had not completed a work experience component.
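As a rough indication of how such a gap could be assessed statistically, the sketch below applies a standard two-proportion z-test to the reported agreement rates; the group sizes are hypothetical, since the paper does not report them:

    import math

    def two_proportion_z(p1, n1, p2, n2):
        # z statistic for the difference between two proportions.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # With assumed groups of 80 graduates each, the 87.5 versus 51.2 per
    # cent gap yields z of roughly 5, far beyond chance at conventional levels.
    print(two_proportion_z(0.875, 80, 0.512, 80))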

Another example relates to the good teaching scale item: “lecturers were extremely good at explaining things”. This question may be irrelevant for units of study taught through other modes of learning, such as distance or online learning. The use of a course-level experience instrument to measure student experience of the teacher(s) and unit of study, and its use in annual performance review and reward, undermines academic professionalism and autonomy. Poor student experience scores restrict academic staff progression and reward at the teacher and unit-of-study level on the basis of unreliable and invalid instruments.

Survey methodology

An online survey methodology is used to collect teacher and unit evaluations after each teaching period, with response rates ranging from 39 to 43 per cent. The decision to use an online methodology was based on factors such as low cost, ease of data collection, accuracy and timely reporting. In 2010, the university offered financial rewards to students as a means to improve response rates, which resulted in an 8 per cent decrease in the overall response rate. The use of the online methodology has resulted in low response rates, with some units of study having fewer than five responses and no responses for units taught at offshore locations. The decision to use an online methodology was a rushed one, with no consideration of the research literature on such an approach. Factors not considered were whether online surveys could attract high response rates with a representative sample of students participating; whether online surveys yield higher student satisfaction than paper-based surveys; ways in which the university could engage students in surveys to optimise response rates; and whether rewards and incentives improve response rates. Various studies show little difference in student satisfaction ratings between online and paper-based surveys (Layne et al., 1999; Dommeyer et al., 2004; Stowell et al., 2011; Avery et al., 2006; Ardalan et al., 2007). On the other hand, studies by Kulik (2005), Nowell et al. (2010) and Stowell et al. (2011) suggest differences, with online surveys attracting lower satisfaction than paper-based surveys.

Various studies suggest that the change in methodology, with heavy reliance on online surveys, has resulted in concerns raised by teachers about low response rates (Sax et al., 2003). Nowell et al. (2010) argue that students will only complete evaluations online when they feel the benefits of completion outweigh the time and effort the survey takes. According to Shah and Nair (2011b), the use of rewards such as cash prizes failed to improve the response rate in one Australian university. Student evaluations play an important role in decisions on retention, staff promotion and tenure. Changing the methodology from paper to online administration may therefore create anxiety about how the change may affect the results of the evaluations. According to Stowell et al. (2011), part of the fear originates from the possibility that when evaluations are completed outside the classroom, teachers lose control over the conditions under which students complete them.

The use of an online survey methodology with lower response rates raises important questions about whether the university is doing justice to academic staff, given that the results of the survey are used in the PDR process which determines reward and progression.
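A simple calculation illustrates why such low response counts matter. The sketch below uses the standard formula for the margin of error of a sample mean (not anything reported in the paper) to show how unstable a unit mean becomes with only a handful of responses; the spread of 1.0 on a five-point scale is an assumed, typical value:

    import math

    def margin_of_error(std_dev, n, z=1.96):
        # Approximate 95 per cent margin of error for a mean from n responses.
        return z * std_dev / math.sqrt(n)

    for n in (5, 15, 40):
        print(f"n={n:>2}: mean rating +/- {margin_of_error(1.0, n):.2f}")
    # n= 5: +/- 0.88  (swamps plausible year-to-year differences)
    # n=15: +/- 0.51
    # n=40: +/- 0.31

A unit mean carrying an uncertainty of nearly +/- 0.9 on a five-point scale is a fragile basis for reward and progression decisions.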

Ranking and reliance on scales

The CEQ instrument comprises 13 core items: two scales plus one additional item on overall satisfaction. Six items belong to the good teaching scale, six to the generic skills scale, and one measures overall satisfaction. The data are reported as the average score on each of the three scales, with rankings. The university ranks its performance against selected comparators and the sector at university level, and it also ranks the performance of each faculty. Individual reports for teachers and units of study likewise report the average score on each of the three scales. The key limitation of the current reporting methodology is the lack of diagnostic analysis of each of the 13 items with trended, benchmarked performance. The current reporting does not help academic staff to identify the items on which students have rated them high or low, as the focus is on the average score on each scale and on ranking. The lack of analysis at item level limits academic staff professional development in targeted areas, such as quality management of student assessment. Triangulation between quantitative and qualitative analysis is also needed to make effective use of the student voice via the qualitative comments collected in the surveys.
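The reporting approach described above, and the diagnostic gap it leaves, can be made concrete with a small sketch. The item names and values below are invented; only the structure (six good teaching items, six generic skills items, one overall item, averaged into three reported figures) follows the description in the text:

    GOOD_TEACHING = ["gt1", "gt2", "gt3", "gt4", "gt5", "gt6"]
    GENERIC_SKILLS = ["gs1", "gs2", "gs3", "gs4", "gs5", "gs6"]

    def scale_scores(item_means):
        # Collapse 13 item means into the three reported figures.
        return {
            "good_teaching": sum(item_means[i] for i in GOOD_TEACHING) / 6,
            "generic_skills": sum(item_means[i] for i in GENERIC_SKILLS) / 6,
            "overall": item_means["overall"],
        }

    # Two faculties with identical good teaching averages but very different
    # item profiles: the scale average, and any ranking built on it, cannot
    # tell them apart, which is the diagnostic limitation noted above.
    faculty_a = {**{i: 75.0 for i in GOOD_TEACHING},
                 **{i: 70.0 for i in GENERIC_SKILLS}, "overall": 78.0}
    faculty_b = {**dict(zip(GOOD_TEACHING, [95, 95, 95, 55, 55, 55])),
                 **{i: 70.0 for i in GENERIC_SKILLS}, "overall": 78.0}
    print(scale_scores(faculty_a), scale_scores(faculty_b))

Reporting the 13 item means alongside the scale averages would recover the information the averages discard.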

Conclusion and future implications

It is clear that changing government policy related to performance-based funding is driving change within some universities. The introduction of the new Tertiary Education Quality and Standards Agency (TEQSA), and the renewal of quality in Australian higher education with the government’s plan to again introduce performance-based funding, will bring further reforms. The experience of performance-based funding as part of quality assurance between 2004 and 2008 showed limited evidence of improved student experience and retention in universities. In fact, government policy and its outcomes resulted in controversy, with a lack of trust between policy makers and users in some universities. Performance-based funding has benefited only elite and well-resourced universities, while many post-1987 and regional universities committed to providing access and opportunity to underrepresented students in higher education have had limited benefit.

The changes in government policy place increased accountability on institutions to improve the student experience, with increased emphasis on quantitative measures such as student satisfaction to gauge educational quality outcomes. The government’s new performance-based funding, linking millions of dollars to reward, will result in national rankings and possibly internal rankings within universities. Such changes will bring increased emphasis on academic staff performance and the use of various student experience measures in the annual PDR process.

The case of the Australian university outlined in this paper, and the approach taken by seven other universities, may be deployed in other institutions in the belief that performance funding will improve quality outcomes. One of the key concerns for many in higher education is the intrusion on academic autonomy, with increased focus on outcomes and less emphasis on the resources needed to produce excellence in learning, teaching and research. The increased reliance on student happiness as a measure of educational quality raises questions about whether high student satisfaction would strengthen academic rigour and student attainment of learning outcomes and generic skills, which are seen as key factors in graduate exit standards.

It remains to be seen whether government funding under the new policy will transform higher education and improve Australia’s reputation and competitiveness in higher education against comparators who are also renewing quality. It also remains to be seen how the government’s political agenda will affect university and academic autonomy in the years to come, in a context of increased regulation and scrutiny by external agencies.

References

Ardalan, A., Ardalan, R., Coppage, S. and Crouch, W. (2007), “A comparison of student feedback obtained through paper-based and web-based surveys of faculty teaching”, British Journal of Educational Technology, Vol. 38 No. 6, pp. 1085-101.

Armitage, C. (2006), “Teaching prize fund dreadful”, The Australian, 22 November.

Avery, R.J., Bryant, W.K., Mathios, A., Kang, H. and Bell, D. (2006), “Electronic course evaluations: does an online delivery system influence student evaluations?”, Journal of Economic Education, Vol. 37 No. 1, pp. 21-37.

Barry, J.J. and Clark, H. (2001), “Between the ivory tower and the academic assembly line”, Journal of Management Studies, Vol. 38 No. 1, pp. 87-101.

Bennett, L. and Nair, C.S. (2010), “A recipe for effective participation rates for web based surveys”, Assessment & Evaluation in Higher Education, Vol. 35 No. 4, pp. 357-66.

Blackmore, J. (2009), “Academic pedagogies, quality logics and performative universities: evaluating teaching and what students want”, Studies in Higher Education, Vol. 34 No. 8, pp. 857-72.

Coaldrake, P. (2005), “Let an umpire decide – the government’s latest university ranking system aims to improve teaching standards, but these critics claim it is more likely to retard real progress”, The Australian, August.

Commonwealth of Australia (2004), Learning and Teaching Performance Fund: Issues Paper, August, available at: www.dest.gov.au/archive/highered/learning_teaching/documents/ltp_issues_paper.pdf

Commonwealth of Australia (2008), Review of Higher Education: Final Report, August, available at: www.deewr.gov.au/HigherEducation/Review/Documents/PDF/Higher%20Education%20Review_one%20document_02.pdf

Commonwealth of Australia (2009), Transforming Australia’s Higher Education System, August, available at: www.deewr.gov.au/HigherEducation/Documents/TransformingAusHigherED.pdf

Crumbley, L.D. and Reichelt, J.K. (2009), “Teaching effectiveness, impression management, and dysfunctional behaviour: student evaluation of teaching control data”, Quality Assurance in Education, Vol. 17 No. 4, pp. 377-92.

Deem, R. (1998), “‘New managerialism’ and higher education: the management of performances and cultures in universities in the United Kingdom”, International Studies in Sociology of Education, Vol. 8, pp. 47-70.

Deem, R. (2001), “Globalisation, new managerialism, academic capitalism and entrepreneurialism in universities: is the local dimension still important?”, Comparative Education, Vol. 37 No. 1, pp. 7-20.

Dommeyer, C.J., Baum, P., Hanna, R.W. and Chapman, K.S. (2004), “Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations”, Assessment & Evaluation in Higher Education, Vol. 29 No. 5, pp. 611-23.

Findlow, S. (2008), “Accountability and innovation in higher education: a disabling tension?”, Studies in Higher Education, Vol. 33 No. 3, pp. 313-29.

Halsey, A.H. (1995), The Decline of Donnish Dominion, Clarendon Press, Oxford.

Harris, K. and James, R. (2010), “The course experience questionnaire, graduate destination survey, and learning and teaching performance fund in Australia”, in Dill, D.D. and Beerkens, M. (Eds), Public Policy for Academic Quality: Analyses of Innovative Policy Instruments, Springer, New York, NY.

Harvey, S. and Lee, F.S. (1997), “Research selectivity, managerialism, and the academic labour process: the future of nonmainstream economics in UK universities”, Human Relations, Vol. 50 No. 11, pp. 1427-60.

Kulik, J.A. (2005), Online Collection of Student Evaluations of Teaching, Office of Evaluations and Examinations, University of Michigan, Ann Arbor, MI.

Layne, B.H., DeCristoforo, J.R. and McGinty, D. (1999), “Electronic versus traditional student ratings of instruction”, Research in Higher Education, Vol. 40 No. 2, pp. 221-32.

Nair, C.S., Adams, P. and Mertova, P. (2008), “Student engagement: the key to improving survey response rates”, Quality in Higher Education, Vol. 14 No. 3, pp. 225-32.

Nowell, C., Gale, L.R. and Handley, B. (2010), “Assessing faculty performance using student evaluations of teaching in an uncontrolled setting”, Assessment & Evaluation in Higher Education, Vol. 35 No. 4, pp. 463-75.

Orr, D., Jaeger, M. and Schwarzenberger, A. (2007), “Performance-based funding as an instrument of competition in German higher education”, Journal of Higher Education Policy and Management, Vol. 29 No. 1, pp. 3-23.


Patrick, C.-J. and Crebert, G. (2003), “The contribution of work placement to generic skills development”, Proceedings from the Microelectronic Engineering Research Conference 2003, Griffith University, Brisbane.

Patrick, C.-J. and Crebert, G. (2004), “The contribution of work placement to generic skills development”, Proceedings from the 15th Annual Conference for the Australasian Association for Engineering Education and the 10th Australasian Women in Engineering Forum, Engineers Australia, Canberra, pp. 40-6.

Pollitt, C. and Bouckaert, G. (2000), Public Management Reform: A Comparative Analysis, Oxford University Press, Oxford.

Powell, W.W. and Owen-Smith, J. (1998), “Universities and the market for intellectual property in the life sciences”, Journal of Policy Analysis and Management, Vol. 17 No. 2, pp. 253-77.

Roberts, V. (2001), “Global trends in tertiary education quality assurance: implications for the Anglophone Caribbean”, Educational Management and Administration, Vol. 29 No. 2, pp. 425-40.

Sax, L.J., Gilmartin, S.K. and Bryant, A.N. (2003), “Assessing response rates and non-response bias in web and paper surveys”, Research in Higher Education, Vol. 44 No. 4, pp. 409-32.

Shah, M. and Nair, C.S. (2011a), “Renewing quality assurance at a time of turbulence: an attempt to reenergize quality in Australian higher education”, Perspectives: Policy and Practice in Higher Education, Vol. 15 No. 3, pp. 92-6.

Shah, M. and Nair, C.S. (2011b), “Developing an effective student feedback and improvement system: exemplars with proven success”, Australian Quality Forum 2011, Australian Universities Quality Agency, Melbourne, pp. 113-9.

Shah, M., Lewis, I. and Fitzgerald, R. (2011), “The renewal of quality assurance in Australian higher education: the challenge of balancing academic rigor, equity and quality outcomes”, Quality in Higher Education, Vol. 17 No. 3, pp. 265-78.

Sinclair, A. (1995), “The chameleon of accountability: forms and discourses”, Accounting, Organizations and Society, Vol. 20 Nos 2-3, pp. 219-37.

Sizer, J. and Cannon, S.S. (1999), “Autonomy, governance and accountability”, in Brennan, J., Fedrowitz, J., Huber, M. and Shah, T. (Eds), What Kind of University? International Perspectives on Knowledge, Participation and Governance, SRHE and Open University Press, Buckingham, pp. 193-202.

Sorlin, S. (2007), “Funding diversity: performance-based funding regimes as drivers of differentiation in higher education systems”, Higher Education Policy, Vol. 20, pp. 413-40.

Stowell, R.J., Addison, E.W. and Smith, L.J. (2011), “Comparison of online and classroom-based student evaluations of instruction”, Assessment & Evaluation in Higher Education, pp. 1-9.

Teelken, C. (2012), “Compliance or pragmatism: how do academics deal with managerialism in higher education? A comparative study in three countries”, Studies in Higher Education, Vol. 37 No. 3, pp. 271-90.

Todnem By, R., Diefenbach, T. and Klarner, P. (2008), “Getting organisational change right in public services: the case of European higher education”, Journal of Change Management, Vol. 8 No. 1, pp. 21-35.

Townley, B., Cooper, D.J. and Oakes, L.S. (2003), “Performance measures and the rationalization of organizations”, Organization Studies, Vol. 24 No. 7, pp. 1045-71.

Trullen, J. and Rodríguez, S. (2011), “Faculty perceptions of instrumental and improvement reasons behind quality assessments in higher education: the roles of participation and identification”, Studies in Higher Education, pp. 1-15.


University X (2009), Student Feedback Survey: Vice-Chancellor’s Directive.

University Y (2009), Enterprise Agreement 2009-2012.

Watt, C., Lancaster, C., Gilbert, J. and Higerd, T. (2004), “Performance funding and quality enhancement at three research universities in the United States”, Tertiary Education and Management, Vol. 10 No. 1, pp. 61-72.

Yorke, M. (1998), “Performance indicators relating to student development: can they be trusted?”, Quality in Higher Education, Vol. 4 No. 1, pp. 45-61.

Further reading

Nasser, F. and Fresko, B. (2002), “Faculty views of student evaluation of college teaching”, Assessment & Evaluation in Higher Education, Vol. 27 No. 2, pp. 187-98.

About the authors

Mahsood Shah is the Principal Advisor, Academic Strategy, Planning and Quality with the Office of the Deputy Vice Chancellor (Academic) at RMIT University, Melbourne, Australia. In this role, Mahsood works closely with faculties and schools and provides strategic advice to the DVC (Academic) on all aspects of academic strategy, academic quality, reviews and enhancing institutional learning and teaching outcomes. Mahsood has 18 years of work experience in tertiary education in various roles, with responsibilities related to strategy development, strategy implementation and review, quality assurance, leading external quality audits, review of academic and administrative units including review of academic programs, performance monitoring in all areas of the university including the development of IT-enabled management information capability, course accreditation with professional bodies, stakeholder surveys, student experience, and building institutional research capacity in universities. Mahsood Shah is the corresponding author and can be contacted at: [email protected]

Chenicheri Sid Nair is Professor of Higher Education Development at the Centre for the Advancement of Teaching and Learning (CATL). His current role is in the area of quality of teaching and learning. Prior to this he was Interim Director and Quality Advisor (Evaluations and Research) at the Centre for Higher Education Quality (CHEQ) at Monash University, Australia. In this role he headed the evaluation unit at Monash University, where he restructured the evaluation framework at the university. The approach to evaluation at Monash was noted in the first round of AUQA audits and is part of the good practice database. His research work is in the areas of quality in the Australian higher education system, classroom and school environments, and the implementation of improvements from stakeholder feedback. Recent book publications include Leadership and Management of Quality in Higher Education and Student Feedback: The Cornerstone to an Effective Quality Assurance System in Higher Education. He has extensive lecturing experience in the applied sciences in Canada, Singapore and Australia. He is an international consultant in quality and evaluations in higher education.
