Teaching Quality Evaluation Working Party
Final Report: A Comprehensive Teaching Evaluation Strategy for Massey University
April 2005
Executive Summary & Recommendation
In 2003, the then Assistant Vice-Chancellor (Academic) established a working party to undertake a systematic review of the evaluation of teaching quality across the University. The Teaching Quality Evaluation Working Party (TQEWP) was convened in 2004 and charged with two primary tasks: to design a comprehensive strategy for the evaluation of teaching quality at Massey University; and to review and revise the SECAT instrument so that it would be more responsive to the needs of staff and students (Appendix 1). In order to carry out these tasks, the working party considered information from three primary sources:

1) Staff and students at the University;
2) Current literature on teaching evaluation; and
3) Teaching evaluation tools and techniques used at other universities in New Zealand and Australia.

A comprehensive literature review highlighted the need for a multiple methods approach. The literature argues that a range of methods must be used in order to evaluate teaching effectively, and that effective evaluation is more likely to occur when it draws on several different sources. The literature also indicates that the focus of evaluation should be on teaching improvement, and as such both summative and formative systems were recommended.

The evaluation of the systems and tools currently used by other universities mirrors the findings from the literature. All the universities examined (Appendix 4) offered a choice of tools targeted at a variety of sources. The choice of a particular evaluation approach was made by the teacher, based on the type of information required and the pool of informants believed to be in the best position to supply that information. The evaluation tools offered could be used for both formative and summative purposes and were available at any time.
There were, of course, specific tools that were required in order to achieve standardisation, but generally teachers were encouraged to use any and all of the tools as and when they wished.

The information gathered from Massey University's staff showed that they were very keen to improve their teaching and were already operating informal systems to achieve this goal. There was a strong view that different areas of teaching could and should be evaluated with different tools involving different groups of people. There was also a belief that evaluations conducted at different times would be effective in identifying possible ways of improving teaching. Overall, staff seek a means by which to gather and analyse observations and comments on their teaching from a wide range of sources, reflective of and relevant to the different modes, practices and forms of teaching and learning across a wide range of disciplines.

Interviews held with representatives of the University's Students' Associations highlighted the importance of a transparent teaching evaluation system in which students are informed of actions taken as a result of the feedback they provide. 'Closing the feedback loop' was identified as critical to the collection of valid and reliable student data.
As a result of the information gathered from the three primary sources, the working party makes the following recommendation:

Recommendation: That the University adopt a comprehensive teaching evaluation strategy based upon best practices in teaching evaluation that incorporates the use of multiple methods and sources, using formative and summative aspects to assure and enhance the quality of teaching and learning.

Adoption of this recommendation will require:

1) Explicit lines of responsibility for teaching quality at all levels of the University.
2) Design and development of a Massey University website to identify and promulgate best practice tools for evaluating teaching.
3) The replacement of SECAT with a range of evaluation tools, including the Student Paper Rating Instrument (a standardised summative evaluation of paper quality which is compulsory, with the results made available to Massey staff and students) and the Student Teaching Rating Instrument (a flexible and formative tool for student feedback on teaching, with results that are confidential to the teacher).
Table of Contents
Executive Summary & Recommendation
1.0 Introduction
2.0 Context
    2.1 Internal drivers
    2.2 External drivers
3.0 Best Pedagogy in Teaching Evaluation
    3.1 Why evaluate?
    3.2 Best practices in teaching evaluation
        3.2.1 Using student ratings – important points
4.0 The Massey University Teaching Evaluation Strategy
    4.1 Roles and responsibilities for enhancing teaching quality
        4.1.1 Teaching staff
        4.1.2 Heads of Departments, Institutes and Schools
        4.1.3 Pro Vice-Chancellors
        4.1.4 Teaching and Learning Sub-Committee of Academic Board
        4.1.5 Students
        4.1.6 Support Services
    4.2 Transparency in the University's approach to teaching evaluation
        4.2.1 Evaluation purposes, tools, methods and appropriate outcomes
        4.2.2 Communication of the strategy
    4.3 Guidelines as to acceptable evidence of teaching quality for the purposes of confirmation and promotion
    4.4 Balancing formative teaching improvement with summative accountability requirements
        4.4.1 Replacing SECAT with two instruments based separately on student ratings of papers and teachers
5.0 Implementation of the Massey University Teaching Evaluation Strategy
    5.1 Phase 1: Website development / SPRI & STRI pilot / Ethics approval
    5.2 Phase 2: Addition of material and interactive features to the TQE website / First full trial of the SPRI & STRI
    5.3 Phase 3: Website interactivity enhanced / Analysis of the first year of STRI & SPRI implementation / Review of resource requirements
Members of the TQEWP Working Party and Reference Group
Appendix 1: TQEWP Terms of Reference
Appendix 2: TQEWP Issues Report
Appendix 3: List of submissions received during the consultation period
Appendix 4: Websites examined
Appendix 5: Teaching Quality Website Concept Map
Appendix 6: Student Paper Rating Instrument
Appendix 7: Example Teaching Evaluation Instrument
Appendix 8: Student Focus Group Feedback Summary
References
1.0 Introduction
In 2003, the then Assistant Vice-Chancellor (Academic) established a working party to undertake a systematic review of the evaluation of teaching quality across the University. The Teaching Quality Evaluation Working Party (TQEWP) was convened in 2004, with Professor Wayne Edwards as Chair, and charged with two primary tasks: to design a comprehensive strategy for the evaluation of teaching quality at Massey University; and to review and revise the SECAT instrument so that it is more responsive to the needs of staff and students (Appendix 1).

The Working Party initially met on three occasions to develop a framework for identifying issues and gathering feedback from staff and students. All staff were invited to provide submissions on the evaluation of teaching quality at Massey University, particularly with regard to any perceived 'gaps in the system', examples of best practice (internal or external to Massey), or areas where the existing systems are working well. A series of individual interviews was then held with interested staff and students on all campuses in order to explore their experiences with SECAT, obtain information about what might be considered valid evidence of teaching quality, and identify methods of teaching evaluation that were considered effective.

The outcomes of the consultation process were summarised in the Teaching Quality Evaluation Working Party Issues Report (June 2004, Appendix 2), which was distributed to all staff and students who provided written submissions and/or attended interviews with members of the Working Party (Appendix 3), together with College Boards. The purpose of the Issues Report was to provide a preliminary opportunity for staff and students to examine the outcomes of the consultation process, together with a brief discussion of the issues arising.
Feedback received in response to the Issues Report was then examined in the context of the current literature on teaching evaluation, and an exploration of the teaching evaluation tools used at other universities in Australia and New Zealand was conducted (Appendix 4). The information obtained from these processes was then triangulated in order to identify the elements to be addressed in the final report of the Working Party. The following report presents the outcomes of the TQEWP deliberations regarding the design of a comprehensive strategy for the evaluation of teaching quality at Massey University, and the review and revision of the existing SECAT instrument. The recommendation identified by the TQEWP provides the basis for the Strategy which incorporates student ratings as one element of a more holistic approach to teaching evaluation.
2.0 Context
Massey University’s Profile 2005-2007 refers explicitly to the further development and refinement of the University’s procedures for the evaluation of teaching quality (Objective B13). The use of appropriate measures for the collection of student perceptions of teaching quality is also stated as a University objective during the current planning period (B14). Thus the importance of carrying out a review in this area has already been signalled as a key priority for the University. That said, additional internal and external drivers have also precipitated the need for significant change to our teacher evaluation systems, although these should remain secondary to our commitment to ongoing improvement in an area critical to our success as an institution of advanced learning.
2.1 Internal drivers
Information obtained from the University's Academic Work Environment Survey and during the 2003 Academic Audit identified an enduring perception that teaching and research do not receive equal recognition at the University. It could be argued that initiatives such as teaching awards, the continuation of targeted assistance with the Fund for Innovation and Excellence in Teaching (FIET), and Training and Development Unit (TDU) programmes supporting teaching enhancement have gone some way to dispel this perception. However, in comparison to the supportive infrastructure available to staff for the achievement of their research goals, the supports provided for teaching improvement are not nearly as transparent.

The University's standardised tool for teaching evaluation is the Student Evaluation of Content, Administration and Teaching (SECAT). Findings from a SECAT Review conducted in 2000 confirmed that SECAT was a convenient tool for gathering student feedback, but that its one-size-fits-all approach did not provide the detailed information required for demonstrable teaching improvement.
In addition to these findings, information obtained from the SECAT Office and during the consultation phase of this investigation has shown that staff and students are dissatisfied with the measure for the following reasons:
• the inability of SECAT to adapt to different teaching contexts and disciplinary requirements (e.g., multiple delivery modes, multiple teachers, studio teaching, etc.);
• low response rates with questionable validity;
• the exclusion of papers at postgraduate level, special topics and those heavily weighted with practical components, resulting in the survey being administered inequitably amongst teaching staff;
• the availability of SECAT results to persons and groups outside of direct reporting lines;
• the lack of feedback to students; and
• in light of the above, the disproportionate weighting given to SECAT results in formalised processes such as PRP, confirmation and promotions.
2.2 External drivers
The key priorities in this STEP are for tertiary education organizations to work with the TEC, the NZQA and the MoE to take responsibility for, and actively work to improve, the quality of their teaching to ensure that all students and learners gain the best value possible from their participation in tertiary education (Ministry of Education: Statement of Tertiary Education Priorities, 2005, p. 2).
The advent of the Government's Tertiary Education Strategy (TES) and Statement of Tertiary Education Priorities (STEP) has clearly signalled the importance placed on the accountability of Tertiary Education Providers. Performance-based measures of research and teaching are an outcome of this accountability climate, and while the Performance-based Research Fund (PBRF) has been built from an existing knowledge base, the parallel teaching measure is likely to be more controversial. For this reason, the work of the TQEWP is timely, as it will enable the University to engage proactively in ongoing sector debates regarding teaching quality. Education Minister Trevor Mallard stated: "I'm impatient to see more effective quality assurance systems and greater responsiveness on the part of providers to what we know about quality teaching" (2004). The investigation, consideration and adoption of a comprehensive teaching evaluation strategy will ensure that Massey University is well positioned to respond to issues or challenges raised by external agencies.

The establishment of the PBRF has already had an impact upon perceptions of research and associated rewards at the University, with resources and services targeted toward the coordination and management of PBRF-related activities (e.g., http://pbrf.massey.ac.nz). In the current PBRF climate, it will be increasingly important to support staff in the achievement of goals relating to teaching—especially when the bulk of the University's funding is sourced from tuition subsidies, and the breadth of Massey's portfolio of qualifications means that academic staff, on average, spend approximately 60% of their time on teaching-related activities. Arguably, it follows that our ability to enhance and improve teaching practice will depend on the ease with which staff can collect and collate valid evidence of their teaching quality.
3.0 Best Pedagogy in Teaching Evaluation
3.1 Why evaluate?
“All members of the institution should be accountable for their activities and performance. The conduct and utilization of credible evaluation programmes have an important influence on the welfare and future excellence of the individual, the department and the institution” (Hoyt & Pallett, 1999).
The conduct of teaching evaluation within a university requires both a strategic and purposive approach in order to fulfil individual, faculty and organizational needs. However, it is imperative that the needs of one group are not elevated at the expense of the others. Marsh and Dunkin (1992, cited in Richardson, 2003) describe the primary purpose of teaching evaluation as diagnostic feedback about teaching effectiveness. Other purposes relating to administrative decision-making, research on teaching and student selection of courses were considered secondary. Unfortunately, as noted by Penny and Coe (2004):
“A frequent complaint from teachers, however, is that the principal purpose for collecting student ratings is not necessarily teaching improvement but rather, use of the data as a politically expedient performance measure for quality monitoring” (p. 215).
Thus, a teaching evaluation system that provides information useful to teachers for teaching improvement, in addition to summary data that can be used for secondary purposes is key—both formative and summative aspects are required. Centra (1993, cited in Hoyt & Pallett, 1999) suggested that observable teaching improvement occurs when a motivated teacher acquires new and valued knowledge. This knowledge might be gained from what Kane & Sandretto (2004) describe as “ongoing and purposeful reflective practice” which is “a means of interrogating and establishing teaching practices where subject knowledge, skills, interpersonal relationships with students, research and personality can complement each other and work in concert to develop excellence in teaching” (p. 303). Hoyt & Pallett (1999) suggest that this is achievable if “improvement efforts are supported by institutional policy and guided by comprehensive and valid appraisals” (p. 6).
3.2 Best practices in teaching evaluation
Published literature is almost unanimous on the use of multiple sources of data for effective teaching evaluation (Cashin, 1995; Braskamp & Ory, 1994; Kulik, 2001; Penny & Coe, 2004). Teaching is a complex activity subject to multiple influences and dimensions and, as such, no single indicator is able to provide a complete picture of the quality of teaching or information regarding teaching improvement (Cashin, 1995; Marsh & Roche, 1997).

The most common method of teaching evaluation is the use of student ratings, which many authors support as valid, reliable and credible (Cashin, 1995; Braskamp & Ory, 1994; Marsh, 1997; Menges & Austin, 2002). There are moderate to high positive correlations between student ratings and student results (Cohen, 1981), with Cashin (1995) finding that "the classes in which students gave the instructor higher ratings tended to be the classes where the students learned more" (p. 3). Cashin (1995) goes on to caution against the use of student ratings in the absence of other data sources, observing that student ratings must be interpreted by an evaluator "in combination with other kinds of data to make judgements about an instructor's teaching effectiveness" (p. 6).

The assertion that student ratings require interpretation by an independent evaluator is supported by other authors in the teaching evaluation field. Specifically, discussing student ratings with a teaching consultant or trusted colleague is thought to be effective in achieving teaching improvement (Levinson-Rose & Menges, 1981; Penny & Coe, 2004; Roche & Marsh, 2002, cited in Richardson, 2003). The literature on the use of student ratings is extensive and, while it is accepted that other methods of evaluation are integral to a teaching evaluation system, it is important to present some of the more pertinent findings in relation to the use of student feedback.
3.2.1 Using student ratings – important points
Aleamoni (1987, cited in Zepke, 1996) investigated what he described as the myths of student feedback. The first myth concerned the ability of students to provide reliable feedback on teacher effectiveness: Aleamoni cited a number of studies that found high correlations (0.70-0.87) between different students' ratings of the same instructor. Feldman (1978, cited in Cashin, 1995) found an average correlation of 0.71 between student and faculty views of effective teaching, and Cashin (1995) extended this argument with findings that correlated student ratings with those of both colleagues and administrators. In response to the perception that student ratings are simply a 'popularity contest', Aleamoni (1987) found that "while students praised lecturers for warmth, humour and accessibility, they criticised the same lecturers for poor organization and poor presentation of the material" (cited in Zepke, 1996, p. 5).

Cashin (1995) examined the relationship between student ratings and a number of variables relating to instructor and student characteristics. Notably, instructor variables such as age, teaching experience, gender, race, research productivity and personality were not related to student ratings. Similarly, students' age, gender, level, grade point average and personality were unrelated to their ratings of instructors. Variables that do affect student ratings are course level (Cashin, 1995); academic field (Cashin, 1995; Neuman, 2000); workload/difficulty (Cashin, 1995); and class size (Neuman, 2000). Neuman (2000) states:
“It has been consistently shown that humanities disciplines tend to receive higher ratings than science disciplines, while social science subjects tend to lie somewhere between humanities and sciences (Cashin, 1990; Cashin & Clegg, 1987; Cranton & Smith, 1986; Feldman, 1978). Such studies have also found that the higher the course level, the higher the ratings tend to be, and that smaller class sizes tend to receive higher rating results than large classes (Cranton & Smith, 1986; Feldman, 1984, 1978)” (p. 125).
While it is generally accepted that students are not qualified to judge particular features such as instructor knowledge or required curriculum content, these elements can and should be evaluated by faculty colleagues.
When considering what might be deemed best practice in teaching evaluation, the appropriate use of student ratings—that which aligns to the research base—is certainly a core component. However, this is only effective if it is used in a context that supports other methods of evaluation, and the use of evaluation for teaching improvement. As suggested by O’Neill (2004)1:
“teachers vary considerably in their pedagogical knowledge, skills and capacity for change. This suggests that any approach to the evaluation of teaching should be sufficiently flexible to accommodate the learning and development needs of teachers across the university—no one-size-fits-all approach will be useful.”
4.0 The Massey University Teaching Evaluation Strategy
The following sections outline the Massey University Teaching Evaluation Strategy in fulfilment of the TQEWP Terms of Reference (Appendix 1). The Strategy builds upon the published literature on teaching evaluation and the requirements of Massey staff and students identified during the consultation process. Specifically, the Strategy seeks to address the following five primary areas which are embodied in the Working Party’s recommendation:
1 Roles and responsibilities for enhancing teaching quality;
2 Transparency in the University's approach to teaching evaluation, with the evaluation purposes, tools, methods and appropriate outcomes communicated openly amongst staff, between staff and students, and across the University and its communities of interest;
3 Clear guidelines as to acceptable evidence of teaching quality for the purposes of confirmation and promotion;
4 Multiple evaluation tools and methods that balance formative teaching improvement with summative accountability requirements including: separation of student feedback on paper administration and delivery from that related to individual teachers; and the ability to aggregate information for priority student groups; and
5 The development and implementation of a comprehensive teaching evaluation strategy that is sustainable in terms of initial and ongoing resource allocation.
Recommendation: That the University adopt a comprehensive teaching evaluation strategy based upon best practices in teaching evaluation that incorporates the use of multiple methods and sources, using formative and summative aspects to assure and enhance the quality of teaching and learning.
4.1 Roles and responsibilities for enhancing teaching quality
Tertiary educators, tertiary education organizations and the central educational agencies share responsibility to support effective teaching and to invest in the tertiary education workforce at an individual, institution and system-wide level (Ministry of Education: Statement of Tertiary Education Priorities, 2005, p. 10).
In order to achieve a comprehensive teaching evaluation strategy that is valuable and valued by staff and students at the University, it is important to clarify the roles and responsibilities of key groups in the teaching evaluation process. In many cases the roles and responsibilities have been implicit which may have contributed to the confusion inherent in some staff and student submissions regarding their role in the evaluation process (TQEWP Issues Report, 2004, p. 2).
1 Submission to the TQEWP on behalf of the College of Education College Board, 6 August 2004.
4.1.1 Teaching staff
Handal (1999) argued that teachers are accountable to their institution, to the profession, and to their students to teach well and foster student learning as effectively as possible (cited in Penny & Coe, 2004, p. 221).
Consistent with the requirements of academic freedom, academic staff have the freedom to regulate the subject-matter of the courses they teach, and to teach and assess students in the manner they consider best promotes learning (Education Amendment Act, 1990, Section 161, Part 1). As active researchers, academic staff are able to encourage a more critical approach to learning given that they are often authors in their discipline, engaged in a personal journey of discovery, and can demonstrate ownership of the material they teach rather than merely transmitting knowledge generated by others. In respect of their academic freedom and research role, university teachers have a responsibility to reflect on, analyse and improve their curriculum, pedagogy and assessment through teaching evaluation.
4.1.2 Heads of Departments, Institutes and Schools
Heads of Departments, Institutes and Schools have a responsibility for leading, coordinating and ensuring the quality of the work of groups of teacher-researchers in cognate areas. As such, it is expected that they will encourage and enhance the ability of teaching staff to fulfil their requirements as university teachers, through existing processes such as Performance Review and Planning (PRP). Heads of Departments, Institutes and Schools also have a responsibility to acknowledge areas of excellence and take appropriate action in regard to the collective improvement of teaching quality.
4.1.3 Pro Vice-Chancellors
As leaders of their Colleges, Pro Vice-Chancellors have a responsibility to ensure that teaching is regularly evaluated, and that College procedures and strategies for the enhancement of teaching are put in place to support the collective improvement of teaching quality. In terms of maintaining current standards and contributing to the university-wide enhancement of teaching quality, Pro Vice-Chancellors also have a responsibility to report upon their teaching improvement strategies, and share collective areas of concern or best practices in an appropriate forum such as the proposed Teaching and Learning Sub-Committee of Academic Board.
4.1.4 Teaching and Learning Sub-Committee of Academic Board
Although a decision regarding the establishment and form of the Teaching and Learning Committee is yet to be made, the TQEWP suggests that such a committee would have a critical role in the enhancement of teaching quality across the University. In order to demonstrate academic freedom the University is required to act in a manner consistent with the maintenance of the highest ethical standards, and to permit public scrutiny to ensure the maintenance of those standards (Education Amendment Act, 1990, Section 161, Part 3). As a sub-committee of Academic Board the Teaching and Learning Committee would be well placed to initiate university-wide strategies for the enhancement of teaching and learning, promulgating good practices and proposing and developing policies as appropriate. In many respects the Committee ‘closes the loop’ required by Part 3 of the Education Amendment Act (1990), ensuring that Massey University exercises its academic freedom and autonomy in a manner consistent with the intentions expressed in the Act (1990).
4.1.5 Students
The University-Student Contract (University Calendar, 2005, p. 29) requires the University to use best endeavours to provide the student with tuition and supervision of a professional standard, and the student to use best endeavours to fulfil the requirements prescribed by the University for the course(s). As partners in the learning process students are uniquely positioned to provide informed feedback on aspects of course delivery, administration and teaching that help or hinder their learning. It follows that the University would be remiss if it did not require student feedback as part of teaching evaluation, primarily because such feedback has the potential to inform strategies that might enhance student learning. Although students are not obligated to fill out surveys or participate in focus groups, it is in their best interests to do so when these activities relate directly to the improvement of teaching quality.
4.1.6 Support Services
The primary responsibility of support services is to support students, academic staff and departments in the fulfilment of their teaching and research requirements. To achieve this effectively, communication must be open so that services are enabling and responsive, seeking to meet the needs of staff and students whilst taking account of established practices nationally and internationally. In some instances anecdotal feedback may be collected in relation to aspects of paper delivery, administration or teaching that should be shared constructively between services and teaching staff, either directly or via established communication channels. A diagrammatic representation of the responsibilities for enhancing teaching quality is presented in Figure 1.
4.2 Transparency in the University’s approach to teaching evaluation
4.2.1 Evaluation purposes, tools, methods and appropriate outcomes
Implementation of an appropriate strategy for teaching evaluation at Massey University requires clarity of purpose, clear guidelines for the use of evaluation tools and methods, and clear expectations for actions undertaken as a result of the evaluation. Consequently, the TQEWP proposes that:
1. The primary purpose of evaluating teaching is to enhance teaching quality for the improvement of student learning.
2. Teaching evaluation is an essential element of teaching practice and must be carried out systematically by all staff involved in teaching activity. Multiple evaluation tools and methods are required to ascertain teaching effectiveness and provide data that are credible, useful, and in support of the primary purpose of enhancing teaching quality for the improvement of student learning.
3. Appropriate elements of the teaching evaluations will be used for the purposes of quality monitoring, performance review and appraisal in accordance with the University’s commitment to accountability and quality assurance. As outlined in Section 4.1, responsibility for ensuring that the outcomes of teaching evaluations are acted upon is shared amongst teaching staff and their line managers.
Figure 1: Diagrammatic representation of the responsibilities for enhancing teaching quality
[Figure: an organisational chart linking Academic Board, the Committee on Teaching & Learning, Pro Vice-Chancellors, Heads of Department, Institutes & Schools, Teaching Staff, Students, and Supports & Services]
4.2.2 Communication of the strategy
Our review of the existing literature and of comparable universities’ teaching evaluation strategies reveals the need to establish a single source, or first point of contact, for all stakeholders in teaching quality. Previous sections of this report have established the need to address teaching evaluation with a 360-degree approach, drawing on multiple sources and a range of methods. The options and opportunities for identifying and improving teaching excellence are numerous, and together they form a complex network of question-forming and answer-finding scenarios. The TQEWP proposes that Massey University create a website as the home base for accessing information on teaching evaluation. The “Teaching Quality Website” would act as the interface between the teacher and the various sites, sources, people and places offering advice and critique on teaching quality. A concept map of the site developed by the TQEWP (Appendix 5) shows it offering the following resources:
• Statements about the University’s philosophy and mission regarding teaching excellence including links to relevant external sites noting sector-wide developments;
• Best practices for methods of evaluating teaching to include a full menu of options described and explained to assist with making evaluation choices;
• University requirements for teaching evaluation including interactive features for requesting formative and summative SECAT-type surveys;
• Links to University resources working in support of teaching evaluation such as those provided through the Training and Development Unit;
• Links to teaching portfolio requirements including suggestions and examples; and
• Links to relevant literature on teaching excellence, teaching evaluation and student surveying issues, updated annually.
The Site would be the main mechanism for communication of the University’s teaching evaluation strategy, serving to facilitate and nurture a culture of self-reflection upon teaching practice amongst a community dedicated to teaching and learning as companion activities. It does not replace the one-on-one dialogue between teacher and teacher, teacher and academic manager, or teacher and student that rests at the core of sound interaction; instead, it is the basis from which those practices and critical discussions can proceed. The TQEWP suggests that the website be created in a preliminary form as early as September 2005. The website could then be expanded to include a full compendium of best practices and supplementary material together with interactive features. As the website becomes established, all students (including extramural students), staff and university stakeholders will be able to review what defines quality teaching and how it can be measured in the context of a university environment. For this website to succeed as a first point of access it must be structured and organized according to contemporary web protocols for design, consistency of interface and ease of access. For it to mature as a valuable source of information to teachers, it must be maintained and updated on a regular basis to reflect changing modes of evaluation and new research.
4.3 Guidelines as to acceptable evidence of teaching quality for the purposes of confirmation and promotion
Critical to the evaluation of teaching quality is the principle of triangulating varied sources of data. Pedagogical, institutional and personal factors all impact on the choice of evaluation tool, and each tool is limited by its scope, audience and resource requirements. Consequently, triangulation of data yields a more complete picture, and is arguably a more ethical form of assessment than a single student rating. The TQEWP suggests that acceptable evidence of teaching quality for the purposes of confirmation or promotion should require information from three of the following sources: student feedback; student outcomes; peer review; and self-reflective practices. Thus, the acquisition, review and implementation of appropriate actions in relation to evidence of teaching quality would be expected to occur during the three-year confirmation period for academic staff. For evidence of sustained attention to teaching improvement, the process would then be repeated during each three-year period thereafter. Table 1 presents a summary of the evaluation methods and associated tools, which could be used individually to problem-solve particular aspects of teaching, or in combination as evidence of teaching quality.
Table 1: Overview of Teaching Evaluation Methods and Tools
1. Using students’ experience of teaching: End of paper student ratings via standard questionnaires, discussion groups, Small Group Instructional Diagnoses (SGID), research supervision evaluations, anecdotal and unsolicited comments.
2. Using peer perceptions: Peer review and moderation procedures using established techniques for observation and feedback, anecdotal comments.
3. Using information related to student learning: Use of student assessment data such as pass rates and grade distributions; fast feedback from classroom assessment techniques including Muddiest Points, Minute Papers, Confidence Logs, and Course Feedback Logs.
4. Using self-reflective practices: Self-review, personal observation, reflective journals.
5. Using programme evaluation data: Utilizing surveys such as the Course Experience Questionnaire (CEQ), Postgraduate Research Experience Questionnaire (PREQ) and Graduate Destination Survey (GDS) for comparative benchmarks of teaching quality.
6. Compiling a Teaching Portfolio: Using established practices and based on the outcomes of various teaching evaluation methods.
4.4 Balancing formative teaching improvement with summative accountability requirements
During the consultation period the Working Party received a number of comments—particularly from staff with an academic management role—regarding the importance of summative evaluations which are carried out at the conclusion of the paper (e.g., SECAT). While the Working Party recognises that such information is important for accountability and performance purposes, Table 1 shows that end-of-paper student ratings are only one of a series of tools utilized for teaching evaluation. If the primary purpose of teaching evaluation is to enhance teaching quality for the improvement of student learning, formative and summative evaluations must be balanced in a manner that recognises the importance of both approaches in the enhancement of teaching quality. Information obtained from the University’s teaching staff revealed a strong desire to separate student ratings of paper administration and delivery from those regarding individual teachers. The Working Party debated this approach at length, with particular consideration given to the related literature and the elements deemed necessary for a comprehensive strategy:
• Balancing formative evaluations for teaching development and improvement with summative evaluations that will yield comparative data and fulfil accountability requirements;
• Developing a supportive environment where teachers can experiment with teaching innovation without the risk of negative outcomes impacting on their confirmation, promotion or performance review;
• Recognising that differences exist between evaluating the quality of a paper and evaluating the quality of an individual’s teaching—and that both are important in enhancing student learning;
• Providing a more flexible approach to student surveying that supports standard and personalised question sets which are efficiently managed and effectively delivered in accordance with individual and University requirements; and
• Ensuring that students have an opportunity to receive information about the results of their evaluations.
The outcome of the TQEWP debates was that separating student ratings of a paper from those of an individual teacher was advantageous, and would better fulfil the requirements of teaching staff, academic managers, students and the university. However, the TQEWP was also aware that without added detail regarding the scope, structure and individual questions associated with separate instruments, the benefits would not necessarily be apparent to all staff and students. For these reasons some detail regarding the proposed Student Paper Rating Instrument and Student Teacher Rating Instrument is provided in the following sections, in accordance with the second term of reference for the TQEWP.
4.4.1 Replacing SECAT with two instruments based separately on student ratings of papers and teachers
The TQEWP utilized the literature on course and teacher evaluation to identify the essential elements of ‘quality’ papers, and the behaviours and characteristics exhibited by effective teachers. Individual questions from student surveys at Massey University and other institutions were then examined in order to extract those questions that matched the key areas and best suited
the Massey University context. The wording of each question was discussed and adjusted for construction, clarity and consistency over the course of several Working Party meetings. The resulting “Student Paper Rating Instrument (SPRI)” (Appendix 6) and “Student Teacher Rating Instrument (STRI)” (Appendix 7) represent the outcomes of the TQEWP deliberations in terms of the core questions comprising the instruments. The TQEWP also took the initiative to pilot the SPRI with 41 students, and their feedback informed amendments to the question set (Appendix 8). A summary of the differences between the two instruments is presented in Table 2. It is important to state that the TQEWP understood the preference of academic managers for summative numerical data on individual teacher performance, and believes that the dual instruments suggested will meet their requirements in addition to the more general requirements of teachers seeking feedback for teaching improvement.
Table 2: Comparison of the SPRI and STRI
Purpose
SPRI: Accountability, quality assurance
STRI: Teacher development and improvement
Design
SPRI: Standard – 19 questions + demographic information (Appendix 6)
STRI: Flexible – could include combinations of questions drawn from the recommended set (Appendix 7), item banks (such as those available for SECAT) or teacher-designed questions
Administration
SPRI: Compulsory for undergraduate papers – conducted on a rolling schedule so that each paper is surveyed at least once every three years. Optional for postgraduate papers – depending on the nature of the paper and the likely response rate.
STRI: Generated and administered upon request
Restrictions
SPRI: Not suitable for papers where the number of respondents is likely to be less than 10
STRI: None
Timing
SPRI: End of paper
STRI: Anytime
Dimensions investigated (relevant survey questions in brackets)
SPRI: Learning and value (1-4), Student motivation (4), Course components (5-7), Course organization (5-8), Group interaction (9), Assessment (10-14), Student diversity (15-16), Workload (17)
STRI (from the recommended question set): Enthusiasm (1-2), Course organization and preparation (3-6), Communication skills (7-10), Student rapport (11-16), Accessibility (17-19), Student motivation (20-23)
Results
SPRI: Results generated for each paper. Aggregates generated for department, College and University levels. Aggregates generated for priority student groups at College and University levels. Results available to all staff and students, with space for commentary by the relevant teaching staff.
STRI: At the conclusion of each semester, a list of the teachers for whom surveys were conducted will be sent to the relevant staff (such as department heads or programme directors as appropriate), but the actual results will remain confidential to the individual teacher.
5.0 Implementation of the Massey University Teaching Evaluation Strategy
In light of the substantive changes suggested in this report, the TQEWP agreed that some detail surrounding the implementation of the strategy would be critical to its acceptance by University staff. Although the strategy has not been costed in detail, preliminary discussions involving the staff responsible for the SECAT process suggest the Strategy would be relatively cost neutral in terms of infrastructure and support, assuming a phased implementation plan. The indirect costs in terms of academic staff time and workload may be another matter. However, submissions made to the Working Party did provide evidence that many staff were already engaged in evaluation processes other than SECAT. Consequently, the acceptance of the Teaching Evaluation Strategy—including a clear framework for evaluation activities and easily accessible support materials—may result in an efficiency gain for staff activity in this important area. Should the Strategy receive the approval of the University’s Academic Board and Council, it is proposed that implementation would occur in three stages spanning the period from July 2005 until
September 2007. Many of the resources required for implementation are available from the existing SECAT budget with cost savings staggered over the period due to the removal of copy-typing, deleting the requirement to survey all extramural papers every semester, and replacing the SECAT survey itself. Significant cost savings will also be made in the areas of printing and publications with online submission of survey requests and electronic distribution of survey results.
5.1 Phase 1: Website Development / SPRI & STRI (Recommended Questions) Pilot / Ethics Approval
Task: Development of TQE Website
Responsibility: TBA – could be carried out internally or by an external contractor (such as the company responsible for processing the existing SECAT)
Resourcing and funding source: $7,000 - $10,000; available from the existing SECAT budget if the copy-typing of results is suspended
Deliverables: Website “shell”, partially populated with support materials. Online form for submission of staff requests for a SECAT.
Targeted timeline: Sept 2005

Task: Piloting of SPRI & STRI
Responsibility: TDU & SECAT Administrator with assistance from the TQEWP
Resourcing and funding source: Survey design and administration carried out internally using existing SECAT resources
Deliverables: Formatted surveys with questions finalised and ready for submission to the Human Ethics Committee
Targeted timeline: February 2006

Task: Application for Ethics Approval
Responsibility: TDU & SECAT Administrator with assistance from the Office of the Assistant Vice-Chancellor (Academic)
Resourcing and funding source: None
Deliverables: Surveys receive approval from the relevant Campus Human Ethics Committee
Targeted timeline: May 2006
5.2 Phase 2: Addition of interactive features to the TQE website / First full trial of the SPRI & STRI / Population of the Website with relevant materials completed
Task: Addition of interactive features to the TQE Website
Responsibility: Web contractor
Resourcing and funding source: Estimated at up to $30,000, sourced partially from the existing SECAT budget (with funds redirected from copy-typing and the compulsory surveying of all extramural papers)
Deliverables: Addition of staff and student logins with related security features. Automated request forms with auto-complete functionality based on linkages to staff and paper information. Automated results generation and electronic distribution for the STRI and SPRI to include multiple papers and multiple years. Searchable question banks for the STRI.
Targeted timeline: End 2006

Task: First full trial of the STRI and SPRI
Responsibility: TDU & SECAT Administrator
Resourcing and funding source: Similar to the existing SECAT and drawn from the existing budget once SECAT is suspended
Deliverables: STRI and SPRI available to be used by University staff. Results generated and available on the TQE website (SPRI). Results returned to the individual teaching staff (STRI).
Targeted timeline: Semester 2, 2006

Task: Population of TQE Website completed
Responsibility: TDU & SECAT Administrator
Resourcing and funding source: Existing
Deliverables: Full set of resources and guidelines available on the TQE Website for use by university staff
Targeted timeline: August 2006
5.3 Phase 3: Website interactivity enhanced / Analysis of the first year of STRI & SPRI implementation / Review of resource requirements
Task: Website interactivity enhanced
Responsibility: Web contractor
Resourcing and funding source: Estimated at up to $40,000, sourced from the existing SECAT budget
Deliverables: Addition of Staff Profiles feature which will enable individual teachers to store ‘favourite’ STRI questions for use in future evaluations
Targeted timeline: June 2007

Task: Analysis of STRI & SPRI
Responsibility: To be advised
Resourcing and funding source: To be finalised
Deliverables: Appropriate analyses of the question sets conducted with areas for improvement identified
Targeted timeline: S2, 2007

Task: Review of Resource Requirements
Responsibility: TDU & SECAT Administrator
Resourcing and funding source: None
Deliverables: Revised budget for ongoing administration and support of the Teaching Evaluation Strategy
Targeted timeline: Sept 2007
Members of the Teaching Quality Evaluation Working Party
Wayne Edwards (Chair), Social & Policy Studies in Education, Hokowhitu
Don Houston, Technology & Engineering, Turitea
Barrie Humphreys, Human Resource Management, Turitea
Ruth Kane, Technology Science & Maths Education, Hokowhitu
Nik Kazantzis, Psychology, Albany
Shelley Paewai, Office of the AVC(Academic), Turitea
Tim Parkinson, IVABS, Turitea
Julieanna Preston, 3D Design, Wellington
Marise Ryder, TDU, Hokowhitu
Gordon Suddaby, TDU, Hokowhitu
Assistant Vice-Chancellor (Academic), Ex Officio
Members of the Teaching Quality Evaluation Reference Group
Bill Anderson, Learning & Teaching, Hokowhitu
Nanthi Bolan, Natural Resources, Turitea
Catherine Brennan, Sociology, Social Policy & Social Work, Turitea
Nigel Grigg, Technology & Engineering, Turitea
Gray Hodgkinson, 2D Design, Wellington
Helen Pennington, Psychology, Turitea
Karen Rhodes, Arts & Language Education, Hokowhitu
Matthijs Siljee, Art & Design Studies, Wellington
Ina Te Wiata, TDU, Hokowhitu
Kogi Naidoo, TDU, Hokowhitu
MUSA, Education Vice-President, Turitea
Appendix 1: Teaching Quality Evaluation Working Party and Reference Group
TERMS OF REFERENCE Revised 12 March 2004
The Teaching Quality Evaluation Working Party to be established from December 2003 will encompass three activities:
1. Design of a comprehensive strategy for the Evaluation of Teaching Quality at Massey University which will: assist staff to improve their teaching; be research-led and reflective of international best practice; be relevant to New Zealand culture/s, circumstances and contingencies.
2. In line with item 1, review and revise the SECAT instrument currently in place at Massey University so that it is more responsive to the needs of staff and students.
3. Liaise with colleagues at New Zealand Universities and internationally to benchmark quality teaching and advocate for appropriate strategies nationally and with government.
Broad timelines are as follows:
For item #1: Strategy to Evaluate Teaching Quality
Feb – April 2004: Working Party to meet with university stakeholders across Colleges and campuses and seek input on use of existing measures and overall strategies; consultation ongoing with colleagues nationally and internationally.
May – June 2004: Working Party proposal developed and draft issued for University-wide consultation on Massey-all.
July 2004: Discussion of proposals structured at College Boards, College Executives, Academic Committee, Academic Board, Campus Academic Advisory Forums, Student Academic Advisory Committee and others as relevant.
August – September 2004: Working Party revised proposal developed and submitted to October Academic Board as Early Notice and to November Academic Board for review and approval.
February 2005: Strategy and measures developed with phased implementation.
Full implementation in 2006.
For Item #2: Revision of SECAT
Feb – June 2004: University-wide consultation on revision needs for SECAT, including dissemination of principles to be applied in the revision process.
July – August 2004: Draft revised SECAT measure/s, administration and reporting processes.
September – October 2004: Consultation with College Boards and other university stakeholders (individual staff, students and student associations, Maori, unions and others as appropriate).
November 2004: Revised proposal submitted to Academic Board as Early Notice and to Academic Committee as formal Agenda item.
February 2005: Final proposal submitted to Academic Board for review and approval.
July 2005: Implementation of revised SECAT.
For Item #3: Benchmarking and Advocacy
Feb – August 2004: Ongoing liaison with colleagues at New Zealand universities and selected universities internationally.
S2 2004 or S1 2005: Make recommendations regarding structure, contributors and keynote speakers for the New Zealand Working Conference on the Evaluation of Teaching Quality in Higher Education to be hosted by Massey University.
By 2005: Negotiated agreement with selected universities to benchmark measures.
Ongoing: Advocacy and expert advice to the Ministry of Education and government on the evaluation of teaching quality.
Membership of the Working Group
Wayne Edwards (Chair), Social & Policy Studies in Education, Hokowhitu
Don Houston, Technology & Engineering, Turitea
Shelley Paewai, Office of the AVC(Academic), Turitea
Marise Ryder, TDU, Hokowhitu
Barrie Humphreys, Human Resource Management, Turitea
Ruth Kane, Technology Science & Maths Education, Hokowhitu
Nik Kazantzis, Psychology, Albany
Tim Parkinson, IVABS, Turitea
Julieanna Preston, 3D Design, Wellington
Gordon Suddaby, TDU, Hokowhitu
Assistant Vice-Chancellor (Academic), Ex Officio
Membership of the Reference Group
Bill Anderson, Learning & Teaching, Hokowhitu
Nanthi Bolan, Natural Resources, Turitea
Catherine Brennan, Sociology, Social Policy & Social Work, Turitea
Nigel Grigg, Technology & Engineering, Turitea
Gray Hodgkinson, 2D Design, Wellington
Peter Lind, Office of Teacher Education, Hokowhitu
Helen Pennington, Psychology, Turitea
Karen Rhodes, Arts & Language Education, Hokowhitu
Matthijs Siljee, Art & Design Studies, Wellington
Ina Te Wiata, TDU, Hokowhitu
Kogi Naidoo, TDU, Hokowhitu
Simon Carryer, MUSA, Turitea
Appendix 2: TQEWP Issues Report
Introduction
In 2003, the Assistant Vice-Chancellor (Academic) established a working party to undertake a systematic review of the evaluation of teaching quality across the University. The Teaching Quality Evaluation Working Party was convened in 2004, with Professor Wayne Edwards as Chair, and charged with two primary tasks: to design a comprehensive strategy for the evaluation of teaching quality at Massey University; and to review and revise the SECAT instrument so that it is more responsive to the needs of staff and students. The Working Party met on three occasions to develop a framework for identifying issues and gathering feedback from staff and students. All staff were invited to provide submissions on the evaluation of teaching quality at Massey University, particularly with regard to any perceived ‘gaps in the system’, examples of best practice (internal or external to Massey), or areas where the existing systems are working well. A series of individual interviews were then held with interested staff and students in order to explore their experiences with SECAT, obtain information about what might be considered valid evidence of teaching quality, and to identify methods of teaching evaluation that were considered effective.
Purpose of this Draft Issues Report
The purpose of this document is to provide a preliminary opportunity for staff and students to examine the outcomes of the consultation process, together with a brief discussion of the issues arising. It is hoped that the circulation of an ‘advance draft outline’ of the Working Party’s final report will assist with determining whether the Working Party is ‘on the right track’, and whether any critical issues have been overlooked. The intended audience of this Draft Issues Report is outlined in the table below.
Consultation Group and Timeline
Teaching Quality Evaluation Reference Group
Sent out: 21 June 2004; Comments due: 25 June 2004; Revision completed: 28 June 2004
Staff and students who provided written submissions and/or attended interviews with members of the working party
Sent out: 28 June 2004; Comments due: 7 July 2004; Revision completed: 9 July 2004
College Boards
Sent out: 12 July 2004; Comments due: 6 August 2004
Drafting of Final Report Begins
A Comprehensive Teaching Quality Evaluation Strategy for Massey University (first term of reference)
Overarching Issue Primary Areas to be Addressed Associated Issues
There is a need to demonstrate and be accountable for teaching quality to external communities of interest. The majority of the University’s funding is received from the Student Component.
Appropriate references should be made to the relevant legislation especially the interdependence of teaching and research, and academic freedom.
Research on teaching provides significant opportunities for publication in all discipline areas and hence, opportunities for reward under the PBRF system.
1. The need to advance a university-wide culture of enhancing teaching quality in accordance with the University’s commitment to teaching excellence
Responsibilities for enhancing teaching quality require clarification.
What is the role of the AVCs? PVCs? Is there advantage in a university-wide Learning & Teaching Committee or an ‘Office of Learning & Teaching’? What is the role of HoDs/Is/Ss? What is the role of academic staff? What is the role of students? What is the role of the support services?
The Working Party noted that many of the issues above were addressed, in part, by the Learning & Teaching Plan 1998. Consequently, a revision and update of the Learning and Teaching Plan might provide a useful way forward in the clarification of the issues above.
Recognition of quality teaching needs to be reinforced through the University academic reward processes such as promotions, and opportunities for career advancement.
Even though evidence of teaching quality is embedded in promotions and confirmation criteria, concerns about the equal weighting of teaching and research endure. Committees with responsibility for promotions and confirmation will need to become familiar with the University’s teaching evaluation strategy (such as the need for more than one evaluation measure) and continue to ensure that policy and practice are consistent.
Ongoing education of staff.
Developing a culture of teaching evaluation requires that teaching strategies, methods and outcomes are continually discussed at all levels of the University. While the Training and Development Unit is well positioned to facilitate discussions through seminars and training sessions, opportunities for continuing education in teaching will need to be advanced within the Colleges and at departmental level.
Need to recognise that teaching and learning processes are complex. Frameworks for the evaluation of teaching (Boyer, ‘excellence, relevance & access’) already exist and it will be important to use and adapt these to the Massey context instead of reinventing the wheel.
Misconceptions regarding the use of student feedback for the evaluation of teaching quality exist and these will need to be addressed with reference to the literature.
Any methodology for the evaluation of teaching quality should be based upon the systematic use of multiple measures, the triangulation of quantitative and qualitative data, and a focus on emerging patterns. The construction of a Teaching Portfolio is most likely to encourage self-reflective teaching practice while providing the means to present the outcomes of teaching evaluations.
The strategy must recognize the importance of formative and summative methods of teaching evaluation.
More than one evaluation measure will be necessary.
Evaluation methods must be supportive of teaching innovations and enable staff to ‘take risks’ and be creative with their teaching.
Evaluation methods must be as ‘non-threatening’ as possible. The strategy should be focused on the improvement of teaching quality. Reinforce that teaching improvement is an ongoing process and the outcomes are most likely to be evident in data collected over time.
Differences exist between evaluating the quality of a paper, and evaluating the quality of an individual’s teaching. Evaluation of ‘teacher specific’ aspects should be separated from the evaluation of ‘paper administration / delivery’.
Teacher-specific information would be confidential to individual teachers, although HoDs/Is/Ss would be informed of the papers that underwent a teacher-specific evaluation.
2. Key Features of a comprehensive teaching evaluation strategy
The outcomes of an evaluation of the paper administration / delivery must be available to students.
Feedback to students is a critical element of the evaluation strategy and can sometimes be achieved during the paper delivery (as a result of ‘fast feedback’ mechanisms), but should be systematically addressed for any summative paper evaluations. Thus, all administration / delivery related information (eg structure, composition) would be available to teachers, HoDs/Is/Ss and students in an appropriate format, as soon as practicable after the delivery of the paper has been completed.
The outcomes of teaching evaluation would form part of the PRP process.
Evaluation methods need to be transparent and accessible to all teaching staff.
The strategy must be transparent and applicable to all staff involved in teaching. Evaluation methods need to be flexible in order to accommodate a variety of delivery modes, and teaching & learning philosophies.
The ability to examine aggregate data for particular student groups (Maori, Pasifika, Learners with Disabilities, International, First Year Students), and other demographic characteristics (age, gender) should be incorporated in appropriate areas of the strategy.
Data related to particular student groups would be collected as part of a standardized student rating instrument, but aggregated results would be generated at College, Campus and University-wide levels only.
The evaluation of teaching quality needs to be appropriately resourced.
As part of its deliberations, the Teaching Quality Evaluation Working Party will investigate the resource implications of the teaching evaluation strategy proposed, and summarise these in their final report. Every attempt will be made to present the strategy in a ‘cost-neutral’ manner, utilizing cost reductions from the existing SECAT tool to provide support for other methods of teaching evaluation. That said, additional resourcing may be required for full implementation of a comprehensive teaching evaluation strategy.
Recognition of teaching evaluation in staff workloads.
Addressing the issues related to teaching quality will require staff to reflect upon and evaluate their own teaching, and to assist others with the improvement of their teaching practice. For some staff, this approach will more effectively recognize the time already spent on improving their teaching. For others, it may involve greater attention to their teaching activities. Either way, time spent on teaching improvement, individually or on a collaborative basis, must be a recognized element of staff workloads, which should be discussed and agreed in accordance with the department’s workload allocation model and the PRP process.
3 Acceptable evidence of teaching quality
Evidence of teaching quality should be drawn from multiple methods (eg a SECAT-type standardized survey, fast feedback, purposeful reflective practice, peer review) and more than one source (eg students, peers, unsolicited comments) over a sustained period.
Guidelines and/or principles will be required for ‘recording’ evidence of teaching evaluations and the improvements that may result. This information would be examined as part of the PRP process, for confirmation and for promotions. The evidence would also form the basis of a teaching portfolio.
4 Recommended formative and summative methods of teaching evaluation
Develop formalized structures for peer review / mentoring / moderation, using external representation where appropriate.
Ensure that validity issues are addressed through recognition of the limitations of the use of this evaluation method. For example, the focus should be on how the teacher uses the feedback rather than the raw feedback itself. Further, peer review / mentoring / moderation can be applied to different aspects of the paper – a departmental colleague may be best positioned to comment on the course content, study notes or assessment, while a staff member with expertise in teaching and learning may be used to comment on generic issues.
Advocate the use of Small Group Instructional Diagnosis (SGID) and other student focus groups for small or postgraduate classes.
External facilitators are required for the use of this method – although TDU already perform this function, wider use would need to involve educating staff within the Colleges.
Advocate the use of Fast Feedback.
Methods of fast feedback (GIFT, Minute Paper, Muddiest Point) are useful in all papers – particularly those with multiple contributors. These evaluations need to be clearly outlined and promulgated to teaching staff.
Continue with the application of a standardized questionnaire.
Recognise that anecdotal feedback also has value.
Unsolicited comments and ongoing interactions with students can, and should, contribute to evidence of teaching quality.
5 Ongoing Evaluation & Monitoring
Any changes to the existing systems for teaching quality evaluation should be evaluated 18 months post implementation to ascertain their added value and effectiveness.
Review and Revision of SECAT (second term of reference)
Overarching Issue Primary Areas to be Addressed Associated Issues
Capacity should be included for teachers to ‘design their own’ teacher-specific survey with the option of qualitative and quantitative questions, to be administered centrally on the teacher’s behalf at any time.
Guidelines on the design of questions would need to be available, and cautions given about the use of qualitative questions and the comments that may result.
In accordance with the issues addressed previously, teacher specific information would be confidential to individual teachers, although HoDs/Is/Ss would be informed of which papers had undergone a teacher-specific evaluation.
A ‘template’ teacher-specific survey which builds on the literature related to quality teaching should be available to teaching staff, and administered centrally on the teacher’s behalf at any time.
Teachers would have the option of ‘selecting’ questions from a pre-determined item bank and including one or two qualitative questions.
The “course” survey could be comprised of standard and optional questions selected from an item bank.
The results of the “course” survey would be ‘public’ and available to students.
6 The composition of the Student Rating Instrument (formerly SECAT)
A summative (end-of-paper) “course” survey would be available and compulsory for all papers over a given period of time (for example, once in every three offerings).
As with the existing SECAT, ‘course’ surveys could be requested by a third party (such as a representative of the Students’ Association), and consideration must be given to the appropriate procedures for this process.
A faster turnaround of centrally administered survey results is critical.
Copy-typing the qualitative results is expensive and largely responsible for the delays in results turnaround. When the costs are weighed against the benefits, copy-typing is unnecessary.
Presenting discrete data in continuous line graphs is inappropriate, so bar graphs, medians and modes would be used instead.
7 Presentation of the Student Rating Instrument Results
Results should be presented in an appropriate format.
In order to discourage punitive targets or inappropriate result comparisons, the questions and results would be presented in terms of satisfaction or acceptability of practice rather than a one-to-five rating.
Given that “course” survey results would be ‘publicly available’, the paper coordinator would have the option of including a short commentary on the results sheet. This commentary could contain information about important aspects of the learning environment, such as whether the paper was lecture-, practicum- or web-based.
Information about whether the course is a compulsory or elective paper for the student is already collected as part of SECAT.
Results should contain critical ‘contextual’ information, such as whether the course was compulsory or optional, numbers of students enrolled, numbers who responded.
Key demographic data (age, ethnicity, gender, student status (international, NZ Permanent Resident, First Year student etc)) would be included for collection at the end of the survey but results aggregates would be generated for College, Campus and University-wide levels only.
Appendix 3: List of Submissions Received During the Consultation Period
Name  Date Received  Department  Form of Submission  Interview Date (if applicable)
Edwards, Dr Howard 3/26/04 Information & Mathematical Sciences Note
Hendrickson, Dr Mark 3/29/04 Social and Cultural Studies Note
Todd, Mr Arthur 3/29/04 Information Systems Written
Murray, Mr Jhanitra 4/7/04 Psychology Verbal 6 May 2004
Clarke, Dr Dave 4/7/04 Psychology Written
Anon. 4/7/04 Psychology Written
Anon. 4/8/04 Psychology Written
Brinn, Dr Fran 4/8/04 Psychology Verbal 13 May 2004
Ronan, A/Prof Kevin 4/8/04 Psychology Written
Munford, Prof Robyn 4/14/04 Sociology, Social Policy & Social Work Written 7 May 2004
Alley, A/Prof Maurice 4/14/04 IVABS Written
Roche, A/Prof Mike 4/16/04 People, Environment & Planning Written
Stewart, Dr Terry 4/16/04 Natural Resources Written
White, A/Prof Gillian 4/16/04 Health Sciences / PVCs Office Written 13 May 2004
Anon. 4/19/04 Psychology Written
Blakey, Mrs Judy 4/19/04 Psychology Written
Anderson, Dr Bill 4/19/04 Learning & Teaching Written
Nulsen, A/Prof Mary 4/20/04 IVABS Written
Churchman, Mrs Rosalind 4/20/04 PVCs Office, Design, Fine Arts & Music Written 13 May 2004
Lockhart, Dr James 4/20/04 Management Note
Dept. Submission 4/20/04 Information Systems Written
Edwards, Ms Flora 4/20/04 Conservatorium of Music Endorsement
Tipping, Mr Simon 4/21/04 Conservatorium of Music Endorsement
Sayers, Ms Emma 4/22/04 Conservatorium of Music Endorsement
Morris, Mr Simon 4/22/04 Fine Arts Written
Toulson, A/Prof Paul 4/22/04 Human Resource Management Written
Rhodes, Dr Karen 4/22/04 Arts & Language Education Written
Vitalis, Prof Tony 4/23/04 Management Written
Collins, Mr John 4/23/04 Technology & Engineering Written 13 May 2004
Carryer, Mr Simon 4/23/04 MUSA Written
Morriss, Mr Stuart 4/23/04 Natural Resource Management Written
Barker, Mrs Liz 4/23/04 EXMSS 10 May 2004
Hardman, A/Prof Michael 4/23/04 PVCs Office, College of Sciences Written
Lapwood, A/Prof Keith 4/23/04 IVABS Written
Ciochetto, A/Prof Lynne 4/26/04 2D Design Written
Halford, A/Prof Dean 4/26/04 Fundamental Sciences Written
Scott, Prof Barry 4/26/04 Molecular BioSciences Written
Hendrickson, Dr Mark 4/26/04 Social & Cultural Studies Written
Jagath-Kumara, Dr Don 4/27/04 Information Sciences & Technology Written
Simpson, Dr Mary 4/27/04 Social & Policy Studies in Education Note
Prochnow, Dr Jane 4/27/04 Learning & Teaching Written
McPherson, Cluny 5/24/04 Social & Cultural Studies Written
Hazelhurst, Mr David Information Systems Verbal 13 May 2004
Sligo, A/Prof Frank Communication & Journalism Verbal 13 May 2004
Hemara, Mr Ross Arts & Design Verbal 13 May 2004
Shaw, Dr Richard Sociology, Social Policy & Social Work Verbal 6 May 2004
Campbell, Mr Andrew, Rhodes, K. & Jones, H. AUS Verbal 6 May 2004
Phibbs, Dr Suzanne Health Sciences Verbal 6 May 2004
Hunt, Dr Lynn Human Resource Management Verbal 6 May 2004
Hodgkinson, Mr Gray 2D Design Verbal 13 May 2004
Leach, Dr Linda Social & Policy Studies in Education Verbal 13 May 2004
Viskovic, Ms Alison Social & Policy Studies in Education Verbal 13 May 2004, 16 July 2004
Morgan, Prof Sally PVCs Office, Design, Fine Arts & Music Verbal 13 May 2004
O'Brien, A/Prof Mike Social & Cultural Studies Verbal 12 May 2004
Lind, Mr Peter Office of Teacher Education Verbal 7 May 2004
Weir, Mrs Kama Health & Human Development Verbal 14 May 2004
Guilford, Prof Grant 6/18/04 IVABS Written
Macdonald, Prof Barry 8/10/04 Humanities & Social Sciences College Board Written
Pennington, Ms Helen 6/22/04 Psychology Written
Te Wiata, Ms Ina 6/27/04 TDU Written
Churchman, Mrs Rosalind 7/1/04 PVCs Office, Design, Fine Arts & Music Written
Cullen, Prof Joy 7/28/04 Learning & Teaching Written
Carryer, Mr Simon 7/5/04 MUSA Written 5 July 2004
Campbell, Mr Andrew 7/14/04 AUS Verbal 14 July 2004
Prebble, Prof Tom 8/19/04 Social & Policy Studies in Education Written
Linzey, Ms Kate 7/30/04 Arts and Design Written
Wilson, A/Prof Bruce 8/4/04 College of Business College Board Written
Watson, Dr James 8/5/04 History, Philosophy & Politics Written
Churchman, Mrs Rosalind 8/5/04 College of Design, Fine Arts & Music College Board Written
Board Submission 8/5/04 College of Sciences College Board Written
O'Neill, John 8/6/04 College of Education College Board Written
Halford, A/Prof Dean 8/6/04 Fundamental Sciences Written
Watson, Dr James 8/9/04 History, Phil & Politics Notes
Group Submission 8/10/04 College of Education Written
Fielden, Ms Jan 8/13/04 Health Sciences Written
Macdonald, Prof Barry 8/16/04 Humanities & Social Sciences College Board Notes
Various 8/16/04 23 Turitea & Hokowhitu staff members attending TDU training course on evaluation Notes
Various 8/17/04 11 Wellington staff members attending TDU training course on evaluation Notes
Various 8/20/04 4 Albany staff members attending TDU training course on evaluation Notes
Appendix 4: Websites Examined
University of Auckland http://www2.auckland.ac.nz/cpd/evaluations/itembank.html
AUT http://intouch.aut.ac.nz/intouch/iru/knowledge_base/kb_sub.php?articleid=4
Lincoln University http://www2.auckland.ac.nz/cpd/evaluations/itembank.html
University of Canterbury http://www.stu.canterbury.ac.nz/
University of Otago http://hedc.otago.ac.nz/evaluation/download.asp?menuID=forms
Victoria University http://www.utdc.vuw.ac.nz/evaluation/index.html
Waikato University http://tldu.waikato.ac.nz/appraisal/index.shtml
Australian National University http://training.anu.edu.au/default.asp
Australian Catholic University http://www.acu.edu.au/index.cfm
Central Queensland University http://ses.cqu.edu.au/next/question.htm
Charles Darwin University http://www.cdu.edu.au/lr/eval/qspotform.html
Charles Sturt University http://csu.edu.au/division/celt/html/evalunit.html#infoect
Edith Cowan University
Flinders University http://www.flinders.edu.au/teach/SET/Resources/questions.doc
Griffith University http://www.gu.edu.au/centre/gihe/teachinglearning/evaluation/serb/home.html
James Cook University http://www.jcu.edu.au/office/tld/teacheval/optqitembank.doc
La Trobe University http://www.latrobe.edu.au/adu/student_eval.htm#2
Macquarie University http://www.cpd.mq.edu.au/TEDS/questionbank.htm
Monash University http://www.adm.monash.edu.au/cheq/evaluations/MonQueST/monquest_previews.html
RMIT University http://www.rmit.edu.au/browse;ID=7ckqtu6pwlzh
Queensland University of Tech http://www.talss.qut.edu.au/service/EVAL/index.cfm?fa=displayPage&rNum=621373
Southern Cross University http://www.scu.edu.au/services/tl/1feedback_eval.html
University of Adelaide http://www.adelaide.edu.au/ltdu/staff/evaluation/SELT.html
University of Ballarat http://www.ballarat.edu.au/aasp/acsupport/lts/setseu/docs/banks.doc
University of Canberra http://www.canberra.edu.au/celts/sfs/service.htm
University of Melbourne http://www.cshe.unimelb.edu.au/academic_dev.html
University of New England http://www.une.edu.au/tlc/evaluation/instruments.htm
University of New South Wales http://www.staffdev.unsw.edu.au/catalogue/cataloguefront.htm
University of Newcastle http://www.newcastle.edu.au/services/statistics/university_surveys/sec.html
University of Queensland http://www.tedi.uq.edu.au/EVAL/Itembank.html
University of South Australia http://www.unisanet.unisa.edu.au/learningconnection/staff/teachg/evaldata.doc
University of Sydney http://www.nettl.usyd.edu.au/use/
University of Tech - Sydney http://www.clt.uts.edu.au/contentssfs.html
University of Western Australia http://www.catl.osds.uwa.edu.au/etu
University of Wollongong http://cedir.uow.edu.au/CEDIR/programs/tsse.html
27
Appendix 5: Teaching Quality Website Concept Map
Teaching Quality Evaluation
• MU Philosophy, Strategy & Requirements
• Recommended Resources
• Order a Course Evaluation
• Course Results & Interpretation Guide
• Important Links
• Teaching Excellence Awards
• FIET
• Choosing the appropriate evaluation methods: Problem Solving Particular Aspects of Teaching; Teaching Development or Improvement; Confirmation or Promotion; Accountability or Quality Assurance Requirement
• Direct links to teaching evaluation approaches & methods:
  - Using students’ experience of your teaching: course and teaching questionnaires, discussion groups, SGID and research supervision evaluations
  - Using peer perceptions: internal and external peer review and moderation procedures, including techniques for observation and feedback
  - Using self-reflective practices: self-review, personal observation, reflective journals
  - Using information related to student learning: use of student assessment data, fast feedback
  - Accessing programme evaluation data: CEQ/PREQ/GDS information
• Compiling a Teaching Portfolio
Appendix 6: Student Paper Rating Instrument: Standard Questions to be Administered at the Conclusion of a Paper for Summative Feedback
In my case, this paper is: Compulsory / Elective
Response scale: Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree / Don’t Know
1 Overall, I was satisfied with the quality of the learning experience in this paper
2 This paper helped me develop my thinking skills (e.g. problem solving, analysis)
3 The content of the paper was structured in a way that assisted my learning
4 I was motivated to learn in this paper
5 This paper had clear aims and objectives
6 The support materials (e.g. handouts, study guides, and/or e-learning environments) were useful to my learning
7 It was clear how the parts of this paper (lectures, tutorials, assessment etc.) fitted together
8 I was satisfied with the information provided about the paper (e.g. Paper Outline)
9 I felt part of a group committed to learning in this paper
10 Assessment requirements were clear
11 The assessment allowed me to demonstrate what I had understood
12 Marking criteria and standards were clear
13 The assessment feedback helped me learn
14 My marked assessment was returned within a reasonable time (3 weeks)
15 The learning environment was free of unfair discrimination
16 The learning environment took account of the diversity (ethnicity, religion, gender, political beliefs) of student backgrounds
17 The workload for this paper was reasonable
The aspects of the paper that most helped my learning were:
The paper could be changed in the following ways to improve my learning:
Demographic Information
The following section will enable the University to obtain information about the quality of our paper offerings as rated by different student groups. Please answer the following questions as accurately as you can.
Gender
1 Female 2 Male
Age
1 Under 25 years 2 25 – 34 years
3 35 – 44 years
4 45 – 54 years
5 55 years and over
Ethnicity (Tick as many as apply)
1 NZ European/ European/ Pakeha
2 Maori 3 Pacific Peoples
4 Chinese 5 Indian
6 Other Asian 7 Other
Student status for your course
1 New Zealand Student (citizen or permanent resident)
2 International Student
Is English your first language?
1 Yes 2 No
Do you live with the effects of a significant injury, long-term illness or disability?
1 Yes 2 No
Your Level
1 1st Year Undergraduate 2 Undergraduate above 1st Year
3 Postgraduate
Notes on Results Presentation
As the results of the Student Paper Ratings will be available to all staff and students upon secure login to the Results Section of the Teaching Evaluation Website, it is important that the paper coordinator and teaching staff have an opportunity to comment on the outcomes of the survey. Once the results have been generated, they would be distributed to the paper coordinator and relevant teaching staff, who would be given a specified time (such as two weeks) to submit commentary to accompany the results as presented on the website.
Appendix 7: Example Teaching Evaluation Instrument
In my case, this paper is: Compulsory / Elective
For this paper, my teacher:
Response scale: Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree / Don’t Know
1 is enthusiastic about the subject area
2 is enthusiastic about encouraging student learning
3 comes to class well prepared
4 organises class time effectively and efficiently
5 organises and sequences the subject matter well
6 presents an appropriate amount of material for the time available
7 communicates clearly what is expected of me to be successful in this paper
8 presents the subject matter clearly
9 communicates effectively in class
10 makes good use of examples, illustrations, or other techniques to explain difficult concepts
11 encourages appropriate student participation
12 encourages me to learn
13 encourages relevant student discussion
14 shows genuine interest in assisting students’ learning
15 treats students fairly and with respect
16 is sensitive to student needs and concerns
17 is approachable to students
18 is helpful to students
19 is accessible to students
20 increased my interest in this subject
21 is responsive to student feedback
22 has high academic standards for this class
23 provides constructive feedback on my assessment
24 overall, is effective as a university teacher
25–30 optional questions
Student Teacher Rating Instrument Notes
It is envisaged that staff using the STRI could create their own questions, or select questions from item banks to explore particular issues of interest. The survey could be conducted at any time during paper delivery for either formative or summative feedback. The STRI questions presented here provide a ready-to-use instrument for staff wanting broad feedback on all aspects of their teaching.
Appendix 8: Student Focus Group Feedback Summary
• Focus groups were held with a 200 level class in the College of Business (six students) and a combined class of 100, 200 & 300 level students in the College of Design, Fine Arts & Music (35 students). Students who participated in the focus groups regarding the draft SPRI were generally positive about the proposed instruments. The issues that arose were as follows:
• Definition of ‘reasonable’ time in relation to assignment marking and return: some students noted that a reasonable time was the three-week turnaround stated in the study guide; others were satisfied as long as the assignments were returned in time to use marker comments to improve their performance in the next assignment.
• Students considered the question regarding a community of learners to be ambiguous and while they ‘sort of got it’, they were not sure exactly what was being asked.
• Similarly, the question relating to the “learning environment taking into account the diversity of students’ backgrounds” was confusing – some believed it was referring to EO issues (which was the aim of the question), others thought it referred to their prior knowledge of the subject.
• All students were happy to complete the demographic section and could see how it would assist the lecturer in improving the course to meet student learning needs. Most students preferred the use of age bands rather than having to write their age in the space supplied. There was some confusion amongst New Zealand students as to whether to fill in the NZ permanent resident or NZ student box.
• In programmes where one-on-one relationships were developed with lecturers or tutors, the use of a standardized survey regarding the paper was seen as inappropriate. In such cases, alternative methods of teacher evaluation (such as focus groups) would be more appropriate.
• The majority of students found the concept of dividing the paper and teacher surveys favourable.
References
Braskamp, L. A. & Ory, J. C. (1994). Assessing Faculty Work. San Francisco: Jossey-Bass.
Cashin, W. E. (1995). “Student Ratings of Teaching: The Research Revisited”. IDEA Paper No. 32: Centre for Faculty Evaluation and Development: Kansas State University.
Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, Vol. 51, p. 281-309.
Hoyt, D. P. & Pallett, W. H. (1999). “Appraising Teaching Effectiveness: Beyond Student Ratings”. IDEA Paper No. 36: Centre for Faculty Evaluation and Development: Kansas State University.
Kane, R., Sandretto, S. & Heath, C. (2004). “An investigation into excellent tertiary teaching: Emphasising reflective practice”. Higher Education, Vol. 47, p. 283-310.
Kulik, J. A. (2001). Student ratings: validity, utility, and controversy. In M. Theall, P. Abrami, & L. Mets (Eds.), The student ratings debate. Are they valid? How can we best use them? New directions for institutional research: No. 109. San Francisco: Jossey-Bass.
Levinson-Rose, J. & Menges, R. J. (1981). Improving college teaching: A critical review of research. Review of Educational Research, Vol. 51, p. 403-424.
Mallard, T. (2004). Speech to biennial conference for Teacher Education Forum for Aotearoa New Zealand (TEFANZ), Auckland College of Education, Epsom Ave, Auckland. 6 July 2004.
Marsh, H. W. (1987). Students’ evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, Vol. 11, p. 253-388.
Marsh, H. W. & Roche, L. A. (1997). “Making Students’ Evaluations of Teaching Effectiveness Effective: The Critical Issues of Validity, Bias, and Utility”. American Psychologist, Vol. 52, No. 11, p. 1187-1197.
Ministry of Education. (2005). Statement of Tertiary Education Priorities 2005-2007. New Zealand Government: www.minedu.govt.nz.
Menges, R. J. & Austin, A. E. (2002). Teaching in higher education. In V. Richardson (Ed.), Handbook of research on teaching, 4th ed. (p. 1122-1156). Washington, D. C.: American Educational Research Association.
Neuman, R. (2000). “Communicating Student Evaluation of Teaching Results: Rating Interpretation Guides (RIGs)”. Assessment & Evaluation in Higher Education, Vol. 25, No. 2, p. 121-134.
New Zealand Government (1990). Education Amendment Act. Wellington: Author.
Penny, A. R. & Coe, R. (2004). “Effectiveness of Consultation on Student Ratings Feedback: A Meta-Analysis”. Review of Educational Research, Vol. 74, No. 2, p. 215-253.
Richardson, J. T. (2003). “A review of the literature” in Collecting and Using Student Feedback on Quality and Standards of Teaching and Learning in Higher Education: HEFCE Report, retrieved 12 December 2004 from the World Wide Web: http://www.hefce.ac.uk/pubs/rdreports/2003/
Zepke, N. (1996). “Towards a Transformational Student Feedback Process”. Connections, Vol. 44, p. 3-9.