Evaluation of adjunct faculty in higher education institutions

This article was downloaded by: [University of Glasgow]
On: 20 December 2014, At: 02:24
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Assessment & Evaluation in Higher Education
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/caeh20

Evaluation of adjunct faculty in higher education institutions
Jill M. Langen, Baker College Center for Graduate Studies, Flint, Michigan, USA
Published online: 21 Oct 2009.

To cite this article: Jill M. Langen (2011) Evaluation of adjunct faculty in higher education institutions, Assessment & Evaluation in Higher Education, 36:2, 185-196, DOI: 10.1080/02602930903221501

To link to this article: http://dx.doi.org/10.1080/02602930903221501

Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions



Assessment & Evaluation in Higher Education, Vol. 36, No. 2, March 2011, 185–196

ISSN 0260-2938 print/ISSN 1469-297X online
© 2011 Taylor & Francis
DOI: 10.1080/02602930903221501
http://www.informaworld.com

Evaluation of adjunct faculty in higher education institutions

Jill M. Langen*

Baker College Center for Graduate Studies, Flint, Michigan, USA

The role that part-time faculty play in higher education is changing. No longer are part-time faculty used on an occasional basis at a few institutions. These individuals now play a critical part in the delivery of higher education to students. This study was developed to answer questions regarding how the performance of adjunct faculty is evaluated. The researcher gathered data on what sources of information were used to evaluate adjunct faculty, and how this information was used by various administrators during evaluation and reappointment decisions. The underlying goal was to develop a better understanding of current evaluation practices so that higher education administrators can ensure that quality learning opportunities are available in the classroom.

Keywords: adjunct faculty; evaluation; higher education

Introduction

A noteworthy and important change is taking place in higher education classrooms. Instead of a traditional full-time tenured professor fulfilling the role of educator, it is quite likely that students will see a part-time faculty member in the classroom, lecture hall or laboratory. According to the American Association of University Professors (AAUP) Contingent Faculty Index of 2006, the number of full-time tenured faculty positions declined by more than 2000 from 1995 to 2003 (AAUP 2006). According to the National Center for Education Statistics, 47.5% of faculty members at colleges that awarded federal financial aid in the fall of 2005 were part-time faculty (Lederman 2007). From 1976 to 1995 the number of part-time faculty increased 91%, while the number of full-time faculty, during the same period, increased only 27% (Sonner 2000). The University of Phoenix, the largest for-profit university in the USA, reported in 2001 that they employed only 45 full-time faculty, but employed 45,000 part-time faculty (Winston 1999).

With the dramatic increase in the use of adjunct faculty in higher education classrooms, it is critical that we understand how these faculty are being evaluated, and how these evaluation results are utilised. Without a clear and consistent process available to measure performance, it becomes increasingly difficult for administrators to ensure that quality learning opportunities are available in the classroom. This study examines how higher education academic administrators evaluate their adjunct faculty, and how these results are used. This research will specifically address how often administrators evaluate their adjunct faculty, what sources of information they use and what criteria they consider when evaluating and making reappointment decisions for adjunct faculty. As administrators become more knowledgeable about how they evaluate and assess their adjunct faculty, they can become more proficient at utilising this valuable asset.

*Email: [email protected]

Current guidelines and concerns

While there is much debate regarding the positive and negative impact of the transformation of who is teaching in the classroom, there is little direction or guidance regarding the role adjunct faculty should play in an educational institution. Few restrictions or recommendations can be obtained from accrediting agencies. Not one of the regional accrediting agencies for higher education institutions gives a specific percentage of faculty that must be full-time; however, the North Central Association of Colleges and Schools does state that, ‘it is reasonable to expect that an institution would seldom have fewer than one full-time faculty member for each major that it offers’ (Greive 2000, 35). The only general requirement made by the agencies is that the part-time faculty be qualified, with professional experience (Greive 2000). With little incentive to limit the use of part-time faculty, and multiple motives for doing so, it is not surprising that increasingly the person delivering higher education is a part-time faculty member.

The reasons for the increased use of part-time faculty are varied. According to Rouche, Rouche, and Milliron (1995), adjunct faculty bring real-world experience and expertise to the classroom, while providing the institutions greater flexibility in adding or deleting classes. In addition, this increased flexibility comes at a lower cost. According to Benjamin (2002), the core principle for utilising part-time faculty is cost savings, but he stresses that cost savings is not the same as cost effectiveness. Educational institutions must ensure that they not only continue to provide quality learning opportunities in the classroom, but that they also have assessment procedures to guarantee this.

There is a growing concern regarding what higher education institutions provide to students, and a public demand for accountability and assessment. Recently, John Boehner, chairman of the House Committee on Education and the Workforce, asked the question, ‘what are they (students, parents and taxpayers) getting for their money?’ (Burd 2003, 1). In fact, much of the 2006 Commission on the Future of Higher Education hearings focused on the fact that the public is demanding results from America’s colleges and universities. ‘Half of the states in the United States consider higher education performance when allocating resources, with 70% expected to have such programs within the next 5 years’ (Simpson and Siguaw 2000, 199). This seems to indicate that the concern for performance exists at the state level as well.

While much of the discussion regarding results and assessment revolves around student outcomes, teaching evaluation is inevitably a critical component of the assessment process. Many experts stress the crucial role an effective performance evaluation system plays in assuring quality instruction in higher education institutions (Greive 2000; Pluchinotta 1986). There is an old adage: to achieve a goal, you must inspect what you expect. As Rouche, Rouche, and Milliron (1995) affirm, if a college expects learning to happen in a classroom, then it would seem necessary to review the processes of the classroom experience.

Stoops (2000) summarises recent research by stating that successful evaluation programmes must visibly support the values of the institution, be supported by the administration, accepted by faculty and evaluate those aspects of the faculty role that are considered important and are rewarded. Welfel (2000) points to a report published by the AAUP, entitled Guidelines for Good Practice: Part-time and Non-tenure Track Faculty (1997), which established guidelines that specifically state that faculty appointments should include a description of the required duties, that performance of faculty should be regularly reviewed based on those duties and that compensation and promotion decisions should be based on job performance.

Student evaluations

Given these guidelines, it is surprising that aside from utilising student evaluation tools (SETs) there is no widely accepted protocol for the evaluation of adjunct faculty. Student evaluation of instructors, an instrument type originally designed as formative evaluation, has now become accepted as a summative evaluation instrument (Stoops 2000). According to Germain and Scandura, ‘it has now become common practice in universities and colleges for students to “grade” the professors that grade them’ (2005, 1). Just how common this practice is has been studied by Seldin (1998), who reports that the number of institutions using student ratings to evaluate instructors has risen from 29% in 1973, to 68% in 1983, to 86% in 1993. Simpson and Siguaw (2000) point to a study by the Carnegie Foundation for the Advancement of Teaching that states approximately 98% of the universities surveyed currently use systematic student evaluation of classroom teaching. It becomes clear that this tool is heavily relied upon.

One reason that student evaluations are widely utilised is that they are so easy to administer and score. In addition, students are in the best position to rate their increased knowledge and comprehension of the subject matter, and to rate issues such as teacher punctuality, teacher enthusiasm and test comprehension (Scriven 1995). While there are many practical reasons to utilise SETs, much research has been done regarding the reliability of this tool, and its ability to generate valid responses.

A study conducted by Tang (1997) found that students were fairly reasonable in considering the important aspects of the learning process when evaluating a professor’s teaching effectiveness. According to Tang’s results, the 12 factors that predicted overall teaching effectiveness were: (1) instructor presents material clearly, (2) instructor answers students’ questions, (3) instructor treats students in a courteous and professional manner, (4) instructor appears to be well prepared for each class, (5) student’s expected grade for the course, (6) the clarity of grading criteria, (7) assignments were returned within a reasonable amount of time, (8) instructor is accessible to talk with students outside of class, (9) class sessions were relevant to course subject matter, (10) classes students have missed, (11) course requirements are clear, and (12) classes end on time. Many researchers have shown that student ratings are positively correlated with student learning and achievement (Arreola 1995; Rouche, Rouche, and Milliron 1995; Watchel 1998). The preponderance of evidence supports the use of SETs and the process of having students evaluate their instructors.

Other studies question the construct validity of evaluations because there is no clear definition of effective teaching criteria (Germain and Scandura 2005). While researchers seem to agree that good faculty evaluations will reflect the amount learned in class, some studies have shown that faculty evaluations are influenced by other factors. Likeability, gender, grade expectations, class size, appearance and even age have the possibility of affecting instructor ratings (Germain and Scandura 2005; Stoops 2000). Arubayi states, ‘the literature on validity, though extensive, remains very fluid and not perfectly conclusive’ (1987, 270). Several studies thus indicate that SETs may not accurately assess an instructor’s performance, which raises concern over heavy reliance on them.

Perhaps there is such a large variance in opinion over the validity of student evaluations because the rating forms themselves vary to such a great extent. It would be misleading to generalise that student ratings are a good or bad indicator of teacher merit, since many forms present questions that could influence the respondent, offer questions that require the students to compare the instructor with other instructors, ask whether a student would recommend the course to a friend, or are structured in a leading format. There are other problems that could arise with the form design, such as the length of the questionnaire, or with the context of how and when evaluations are administered. Being rushed to complete the form or being cajoled for a positive rating would make the responses invalid. Common interpretation errors include the use of only averages regardless of distribution, central tendency errors, the use of factor analysis without logical validation and the failure to establish appropriate comparison groups (Scriven 1995). These studies appear to indicate that while SETs may have value, they are not always valid or reliable assessment tools.

The research shows that while student ratings are useful, they should never be the lone basis for evaluating teacher effectiveness (Arreola 1995; Rouche, Rouche, and Milliron 1995). Arreola (1995) supports the idea that there are four major dimensions of managing a course: (1) content expertise, (2) instructional delivery skills, (3) instructional design skills, and (4) course management skills. While students may be an excellent source for determining dimensions such as instructional delivery skills and instructional design skills, with their limited experience they are not qualified to judge content expertise or course management. Given this information, the question arises: how do administrators determine how well their adjunct faculty are managing the courses they are teaching?

Literature review

Decision-making theory

Before an analysis of the evaluation process can take place, a basic understanding of the judgement and decision-making process must be developed. Any practical activity involves both a deciding process and a doing process. It is a mistake to assume that decision-making in administrative organisations is limited to comprehensive policy decisions. Instead, the decision-making process continues indefinitely as the organisation continues on with the cycle of doing, or of implementing and living with the established policies (Simon 1997). This process clearly occurs with the hiring and reappointment of adjunct faculty.

Many individuals have studied and written about decision-making theories in an attempt to better understand both the context in which decisions are made, and the process involved in decision-making. The two theories most applicable to this research are rational choice theory (RCT) and status quo theory (SQT).

Silver and Mitchell (1990) describe RCT as a theory which asserts that decision makers thoroughly evaluate each alternative available to them before selecting the alternative with the greatest expected value. Shultz (1996) believes that the economic and utility analysis associated with RCT can be applied to personnel decisions. The monetary benefits of personnel decisions can be simplified to the idea that utility is simply the benefits of a decision less the costs of the decision.
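The utility calculus described above reduces to a simple net-value comparison. The sketch below is purely illustrative of the rational-choice rule; the option names and figures are hypothetical and do not come from the study:

```python
# Illustrative sketch of the rational-choice rule described above:
# utility = benefits of a decision minus its costs, and the decision
# maker selects the alternative with the greatest expected value.

def utility(benefits: float, costs: float) -> float:
    """Net utility of a personnel decision."""
    return benefits - costs

def best_alternative(alternatives: dict) -> str:
    """Return the alternative with the highest net utility.
    `alternatives` maps a name to a (benefits, costs) pair."""
    return max(alternatives, key=lambda a: utility(*alternatives[a]))

# Hypothetical reappointment options: (benefits, costs) in arbitrary units.
options = {
    "reappoint": (10.0, 4.0),   # net utility 6.0
    "replace": (12.0, 9.0),     # net utility 3.0
    "leave unstaffed": (0.0, 2.0),  # net utility -2.0
}
print(best_alternative(options))  # -> reappoint
```

Under RCT every option's net value is computed before choosing; the satisficing behaviour discussed later in the article is precisely a departure from this exhaustive comparison.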

In practice, many decisions that managers face do not involve deliberation over adopting a new goal, plan or action, but rather revolve around the continuation of an existing course of action. These ‘how are we doing’ questions are considered progress decisions. Status quo theory explains how some decisions involve the rule-following concept. This model applies to situations where decision makers avoid cost-benefit analysis and in-depth analysis, often due to time and resource constraints. In these circumstances, decision makers base their decisions on rules or heuristics. Heuristics are ‘principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations’ (Tversky and Kahneman 1974, 1124).

Recommended practice

Keeping in mind these decision-making models, it becomes necessary to look at the recommended processes and procedures for faculty evaluation. Almost all current writings concerning faculty evaluation agree that, whatever the institution’s evaluation process includes, there should be multiple sources of information (Stoops 2000). These sources most often include peer evaluation, self-appraisals, student appraisals, department chairperson or supervisor appraisals and teaching portfolios. Boyer (1990) refers to the Carnegie Foundation for the Advancement of Teaching report, which suggests that a triad approach be used, with student, peer and portfolio evaluations utilised to evaluate instructors’ performance.

A study conducted by Yon, Burnap, and Kohut indicated that:

peer observation reports are seen as an important component in evaluating teaching effectiveness, though perhaps not the best indicator of effective teaching. Despite flaws in peer observation instruments, the results from classroom observation are seen as valid and are used in deliberations about faculty teaching performance. (2002, 104)

According to Scarlett and Turner, the recommended teaching portfolio ‘provides a natural bridge into the evaluation process’ (1996, 92). The portfolio could include items such as syllabi, narratives of why the course is structured the way it is, activities, assignments and self-reflections. It could also include samples of students’ work (Scarlett and Turner 1996). Appling, Naumann, and Berk (2001) suggest that the portfolio include a reflective analysis of the instructor’s philosophy of teaching, goals, teaching approach, desired outcomes, teaching methods, creative teaching techniques and evidence of methods including course syllabi, handouts, use of technology, student work samples and performance on examinations. A final piece that can be included is self-appraisal. Shrauger and Osberg (1981) reviewed the literature on self-appraisals and found that self-appraisals were as predictive as other common assessment methods used in psychological evaluation.

If it can be assumed that the evaluation of full-time tenure-track faculty involves a logical and thorough decision-making process (and if not, it at least provides a basis for comparison), the decision-making process for evaluation of part-time faculty should be just as logical and thorough. The evaluation process for full-time tenure-track faculty typically includes multiple sources of information, including outside evaluations, a review committee and documentation of scholarship and service. If part-time instructors are to play a critical role in a higher education institution, the decision-making process should be as detailed, consistent and thorough as for full-time instructors. In reality, part-time instructors are often evaluated on the basis of enrolment numbers, availability to teach and lack of problems that occur during the semester (Charfauros and Tierney 1999). As Benjamin states, when higher education institutions move away from full-time faculty they ‘replace precise input standards based on faculty qualifications, appointment policies and performance standards with vaguely defined requirements for institutionally developed student “outcomes” measures’ (2002, 4).

This study sheds some light on the decision-making process that administrators go through regarding adjunct faculty. Benjamin’s (2002) claim that vaguely defined requirements are used when administrators decide who will teach at their institution may well be true. Time and budget constraints likely encourage administrators to rely on the status quo decision-making process.

Methodology

The first step in a strategic analysis process is to ask the important question, ‘where are we now?’ The purpose of this study was to answer that question, and gather information regarding the current state of evaluation processes for adjunct faculty. Specifically, this research reveals what sources of information administrators currently use to evaluate adjunct faculty, and what criteria are used during evaluation and reappointment decisions.

Because of the number and variety of higher education institutions, it was important to be able to gather information from a large sample, as well as compare the results based on demographic characteristics of the institution and professional characteristics of the academic administrator. For this reason, it was beneficial to conduct a quantitative study, utilising a survey to gather information regarding what kinds of tools and information administrators use to evaluate their adjunct faculty, as well as how the evaluation results are utilised. A qualitative pilot study, using a focus group interview with a small sample of higher education administrators, was used to ensure that a useful and complete survey tool was sent to the larger population being utilised in the research study.

The population utilised for this large quantitative research project included higher education academic administrators in Michigan. Michigan has a large and diverse population of higher education institutions, including 94 two- and four-year public institutions, two- and four-year private non-profit institutions and two- and four-year for-profit institutions. While they are not governed by a state higher education board, Michigan higher education institutions are typical of other states in regard to the characteristics of their institutions. According to state higher education profiles, Michigan higher education institutions are representative of institutions across the USA (Barbett et al. 1994). The state of Michigan has no remarkable or outstanding differences from the national profile in areas such as student demographics, institutional demographics, labour intensiveness, degrees awarded, or faculty expenditures. There are no policies or regulations that would make Michigan higher education hiring and staffing practices notably different from any other institution across the USA.

This study used a stratified sample based on the institutional categories described above, sampling 25% of all institutions. The website of every third institution, in each of the six categories, was mined to develop a list of deans, associate deans, assistant deans, department chairpersons and programme directors. If an institution did not have a directory or contact information available, the next institution on the list was contacted. The process continued until the appropriate number of institutions in each category was reached. The final number of participating institutions for this study included six two-year public, four four-year public, one two-year private non-profit, 13 four-year private non-profit, one two-year for-profit and one four-year for-profit institution. While the population was stratified at the institutional level, the contacted participants were not stratified, resulting in a population study.
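The every-third-institution selection described above is a form of systematic sampling within strata, with a substitution rule for institutions lacking contact information. A minimal sketch follows; the institution names and the `has_directory` predicate are hypothetical stand-ins, not the study's actual data:

```python
# Sketch of the systematic sampling procedure described above: walk a
# category's institution list, taking every third entry, and substitute
# the next institution on the list when one lacks a usable directory.

def sample_every_third(institutions, has_directory):
    """Select every third institution, moving to the next one on the
    list whenever the selected institution has no contact directory."""
    selected = []
    i = 0
    while i < len(institutions):
        j = i
        # Skip forward past institutions without contact information.
        while j < len(institutions) and not has_directory(institutions[j]):
            j += 1
        if j < len(institutions):
            selected.append(institutions[j])
        i += 3  # advance to the next "every third" position
    return selected

# Hypothetical category list: twelve two-year public institutions.
two_year_public = [f"institution_{n}" for n in range(12)]
picked = sample_every_third(two_year_public, lambda inst: True)
print(picked)  # -> institutions 0, 3, 6 and 9: a one-in-three selection
```

This is one plausible reading of the procedure; the study does not specify how substituted institutions affected subsequent selection positions.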

Of the 750 surveys sent to administrators with valid and available addresses, 155 responses were received. A 21% response rate is an acceptable level for data analysis (Creswell 2005). The majority of respondents (52%) were from four-year public institutions. Two-year public institutions accounted for 25% of respondents, and four-year private institutions accounted for 18% of respondents. Two-year private, four-year for-profit and two-year for-profit institutions each accounted for 2% or less of respondents.
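As a quick check of the figures above, 155 responses out of 750 surveys corresponds to a response rate of roughly 21% after rounding:

```python
# Response-rate arithmetic from the survey figures reported above.
surveys_sent = 750
responses = 155

response_rate = responses / surveys_sent * 100
print(f"{response_rate:.1f}%")  # -> 20.7%, reported in the text as 21%
```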

The majority of survey participants were department chairpersons (51%) and programme directors (22%). Only 12% of respondents were deans. Associate deans and assistant deans each accounted for 3% or less of respondents. A category titled ‘other’ included the remaining 11% of respondents.

The majority of the individuals who responded to the survey had little experience in their position, with an average of only 1.8 years in their current role. This could be a concern, as 65% of respondents had responsibility for evaluating the adjunct faculty in their department. Respondents from a wide variety of academic fields answered the survey, including business administration, health sciences, computer information and technology, engineering, music, general education, arts, human services, education, religion and construction.

Results

Evaluation frequency

The first research question addressed how administrators evaluate their adjunct faculty. The study revealed that 20% of the higher education institutions studied do not require part-time instructor evaluation on a scheduled basis, and 7% do not require any evaluation of their adjunct faculty. Without such evaluation information, it is difficult for an administrator to make adjunct faculty evaluation and reappointment decisions that support the goal of providing quality learning opportunities. This is a concern, and indicates that some administrators are not gathering the information necessary to make rational, data-driven decisions. However, the fact that 63% of higher education institutions evaluate their adjunct faculty on a scheduled basis is positive news.

Sources of information

The first group of questions asked administrators to rate how heavily they relied on various sources of information when evaluating their adjunct faculty, when evaluating for formative purposes and when evaluating for summative purposes. A six-point scale was used, with a rating of six indicating the source was highly relied upon and a rating of one indicating the source was rarely relied upon. The results of these questions are shown in Table 1.


SET results received the highest administrator rating, with a mean rating of 5.51. The second highest rated source was classroom observation, with a mean rating of 4.39.

Eighty-seven per cent of respondents gave SETs a strong reliance rating as a source of information used for evaluating adjunct faculty. Only 58% of respondents gave the second highest rated tool, classroom observation, a strong reliance rating. This indicates a substantial preference for the use of SETs when evaluating adjunct faculty. Self-evaluations appeared to be the least relied upon tool, with the majority of respondents giving it an N/A rating.

In addition to being asked to rate their reliance on various sources for overall evaluation purposes, administrators were also asked to rate their reliance on these sources of information for formative evaluations and summative evaluations.

The results of this study show very little difference between higher education administrators’ source reliance ratings for overall evaluation purposes, summative purposes and formative purposes. In all three situations, SET results were relied upon more than any other source of information, and classroom observations received the second highest reliance rating. In addition, syllabus reviews and informal faculty feedback were among the top five relied upon sources of information for all three situations. Self-evaluations received the lowest or second to lowest rating in all three categories. Some differences were noted, however. Peer evaluations received a higher rating for summative evaluation purposes (4.35 mean rating) than for formative evaluation purposes (3.97 mean rating) and overall evaluation purposes (3.85 mean rating). Further research is needed to discover why peer evaluations have more variance in administrator reliance for evaluation purposes than any other information source.

Administrators were also asked to rate the accuracy of various sources of information. Again, a six-point scale was utilised, with a rating of six indicating a high level of accuracy and a rating of one indicating a low level of accuracy. Table 2 shows the results of this question.

Interestingly, when asked which of the sources were the most accurate, administrators gave classroom observation the highest mean rating (5.00), while SET results received a mean rating of 4.65. It may seem irrational to rate classroom observations as the most accurate evaluation source yet continue to rely more heavily on SET results, but there could be circumstances that restrict administrators’ use

Table 1. Mean reliance rating for sources of information.

Source                           Evaluation    Formative     Summative
                                 Mean (SD)     Mean (SD)     Mean (SD)

SET results                      5.51 (.93)    5.30 (1.01)   5.38 (1.00)
Classroom observations           4.39 (1.82)   4.73 (1.53)   4.84 (1.52)
Syllabus reviews                 4.33 (1.52)   4.06 (1.60)   4.06 (1.61)
Review of teaching materials     4.20 (1.48)   3.98 (1.58)   4.01 (1.71)
Informal faculty feedback        4.00 (1.45)   4.38 (1.50)   4.34 (1.52)
Peer evaluation                  3.85 (1.65)   3.97 (1.49)   4.35 (1.49)
Grade reviews                    3.65 (1.55)   3.40 (1.65)   3.56 (1.68)
Informal student feedback        3.53 (1.63)   3.82 (1.44)   3.60 (1.63)
Instructor self-evaluation       2.96 (1.48)   3.50 (1.65)   3.34 (1.65)


of classroom observations for evaluations. Research has shown that administrators resort to satisficing, whereby a merely satisfactory course of action is chosen because of resource limitations (Holton and Naquin 2005; Morrell 2004; Simon 1997). The fact that gathering evaluation information through classroom observations is more time-consuming and expensive than using SET results may affect how evaluation information is gathered.

Another possible explanation for the reported difference between the accuracy rating and the reliance rating of classroom observations is that administrators rely more on classroom observation than they realise. When asked how frequently they refer back to various sources of information, classroom observation received the highest mean rating (4.04) and was the only source to have a mean rating over 4.0. This indicates that administrators may rely more on classroom observations than they initially reported.

Other sources also showed a difference between the accuracy rating and the reliance rating. For example, the mean rating for the accuracy of syllabus reviews was 3.98, but the mean reliance rating of syllabus review for overall evaluation was 4.33. This would indicate that administrators rely on syllabus reviews at a level higher than their accuracy rating would suggest. Other sources, including informal faculty feedback, informal student feedback, SET results, grade reviews and review of teaching materials, also received higher reliance ratings than accuracy ratings.

Evaluation factors

The next group of survey questions addressed what factors were most important to administrators for evaluation and reappointment decisions. Administrators were asked to rate the importance of various factors using a six-point scale, with six representing the highest level of importance and one representing the lowest level of importance. Their responses are displayed in Table 3.

The results displayed a great deal of similarity between factors considered for evaluation purposes and for reappointment decisions. Teaching performance received the highest mean rating for both evaluations (5.64) and reappointment (5.69). Work experience (5.31/4.98), positive SET results (5.04/5.14) and availability (4.97/5.04) were the next three highest rated factors for both evaluation and reappointment decisions. For both purposes, the factors with the lowest mean rating were research

Table 2. Accuracy ratings for sources of information.

Source                           Mean (SD)

Classroom observation            5.00 (1.13)
SET results                      4.65 (1.24)
Peer evaluation                  4.20 (1.40)
Review of teaching materials     4.20 (1.40)
Syllabus review                  4.19 (1.56)
Informal faculty feedback        3.97 (1.36)
Informal student feedback        3.50 (1.39)
Grade review                     3.43 (1.56)
Instructor self-evaluation       3.32 (1.45)


activity (2.22/2.47) and salary rate (2.50/2.46). This is not surprising, as scholarship is rarely required of adjunct faculty. A possible explanation for the low rating given to salary rate is that adjunct faculty may be paid on a similar scale across the institution, so it becomes a null factor. Further research would be needed to support this idea. The consistency of factor ratings for these two purposes is a positive sign for the evaluation process.

It is concerning that while SET results were rated as a highly accurate measure of teaching performance, they were assigned almost exactly the same level of importance as availability. Ninety-five per cent of respondents gave teaching performance a strong importance rating, but only 79% of respondents gave SETs a strong importance rating. Surprisingly, 72% of respondents gave relevant work experience a strong importance rating and 75% gave instructor availability a strong importance rating. This raises the question: are administrators reappointing faculty because they are excellent teachers or because they are available to teach the class?

A final item should be noted regarding the respondents and the departments in which they work. When asked which definition of adjunct faculty most closely matched the one used in their department, 1% of respondents described adjunct faculty as non-teaching faculty who are not full-time employees. The percentage of respondents indicating that adjunct faculty do not have teaching responsibilities was not large enough to affect the results.

Future studies and recommendations

The area of adjunct faculty evaluation is ripe for investigation. Only a limited number of research studies have been conducted in this area, yet the importance of adjunct faculty and their impact on what a higher education institution provides to students is considerable. Further investigation is needed regarding the evaluation process for adjunct faculty: how and why it is conducted, what tools are used, what factors are considered and why different institutions have different processes. It would be particularly helpful to have a study that included a larger number of for-profit institutions to confirm this study’s findings. The constraints administrators face during both the evaluation and reappointment processes also need to be researched. While this study revealed a great deal of information, it also raised many questions.

Table 3. Importance of evaluation and reappointment factors.

Evaluation factor                 Mean (SD)     Reappointment factor              Mean (SD)

Teaching performance              5.64 (.69)    Teaching performance              5.69 (.63)
Work experience                   5.31 (1.07)   SET results                       5.14 (1.04)
SET results                       5.04 (1.04)   Availability                      5.04 (1.16)
Availability                      4.97 (1.24)   Work experience                   4.98 (1.22)
Collegiality                      4.63 (1.19)   Collegiality                      4.61 (1.19)
Commitment to institution         4.12 (1.53)   Commitment to institution         4.20 (1.58)
Service                           3.70 (1.67)   Length of time with institution   3.70 (1.59)
Length of time with institution   3.54 (1.49)   Service                           3.44 (1.63)
Administrative duties             3.13 (1.55)   Administrative duties             3.40 (1.66)
Salary                            2.50 (1.55)   Research                          2.47 (1.64)
Research                          2.22 (1.50)   Salary                            2.46 (1.52)


Far too little is still understood about the evaluation process for adjunct faculty and how this evaluation information is utilised.

The data developed from this study clearly show that a wide variety of evaluation practices for adjunct faculty exists in higher education. While many of the practices do not support established decision-making theory, even those that follow recognised theories such as the status quo tendency (SQT) are not fully understood. If, as indicated, evaluations and reappointment decisions are made using status quo tendencies, the heuristics and rules involved should be clearly defined. The factors considered when developing these heuristics, whether they include costs, accuracy, time requirements or other factors, need to be clearly identified. Once guidelines for how sources of information for adjunct faculty evaluation should be used are established, they should be followed consistently and assessed for their effectiveness. Without effective tools for gathering evaluation information, and without a clear picture of which performance factors are important, evaluation will be nothing more than a haphazard gathering of data with little meaning or usefulness. The quality of education and instruction provided by higher education institutions is too important to be evaluated with ineffective assessment tools.

Notes on contributor

Jill Langen is dean of the MBA programme at Baker College Center for Graduate Studies. She has a BBA in marketing, an MBA and a PhD in leadership in higher education. Her research interests relate to adjunct faculty and the administrative issues associated with this faculty base.

References

American Association of University Professors. 2006. AAUP Contingent Faculty Index. http://www.aaup.org/AAUP/pubsres/research/conind2006.htm (accessed August 22, 2007).

Appling, S.E., P.L. Naumann, and R.A. Berk. 2001. Using a faculty evaluation triad to achieve evidence-based teaching. Nursing and Health Care Perspective 22: 247–51.

Arreola, R.A. 1995. Developing a comprehensive faculty evaluation system: A handbook for college faculty and administrators designing and operating a comprehensive faculty evaluation. Boston: Ankar.

Arubayi, E.A. 1987. Improvement of instruction and teacher effectiveness: Are student ratings reliable and valid? Higher Education 16, no. 3: 267–78.

Barbett, S.F., R.A. Korb, M. Black, and M.L. Hollins. 1994. State higher education profiles: A comparison of state higher education data for fiscal year 1991. 7th ed. Washington, DC: National Center for Education Statistics. http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED376779&ERICExtSearch_SearchType_0=no&accno=ED376779 (accessed January 29, 2008).

Benjamin, E. 2002. How over-reliance on contingent appointments diminished faculty involvement in student learning. Peer Review 5, no. 7: 4.

Boyer, E.L. 1990. Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Burd, S. 2003. Republican lawmakers call for more accountability in higher education. Chronicle of Higher Education 49. http://chronicle.com/weekly/v49/i37/37a02301.htm (accessed June 19, 2006).

Charfauros, K.H., and W.G. Tierney. 1999. Part-time faculty in colleges and universities: Trends and challenges in a turbulent environment. Journal of Personnel Evaluation in Education 13, no. 2: 141–51.

Creswell, J.W. 2005. Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson Education.

Germain, M., and T.A. Scandura. 2005. Grade inflation and student individual differences as systematic bias in faculty evaluations. Journal of Instructional Psychology 32, no. 1: 58–67.

Greive, D.E. 2000. Leadership issues of the new millennium. In Managing adjunct & part-time faculty for the new millennium, ed. D.E. Greive and C.A. Worden, 19–44. Elyria, OH: Info-Tec.

Holton III, E.F., and S. Naquin. 2005. A critical analysis of HRD evaluation models from a decision-making perspective. Human Resource Development Quarterly 16, no. 2: 257–80.

Lederman, D. 2007. Inexorable march to part-time faculty. Inside Higher Education. http://www.insidehighered.com/news/2007/03/28/faculty (accessed May 16, 2007).

Morrell, K. 2004. Decision making and business ethics: The implications of using image theory in preference to rational choice. Journal of Business Ethics 50: 239–52.

Pluchinotta, J. 1986. Considerations on policy formulation regarding the utilization of part-time/adjunct faculty within private, independent institutions of higher education. PhD thesis, University of New Mexico.

Rouche, J.E., S.D. Rouche, and M.D. Milliron. 1995. Strangers in their own land. Washington, DC: American Association of Community Colleges.

Scarlett, C., and S. Turner. 1996. A performance management system for adjunct faculty: Selection, orientation, development and evaluation. Paper presented at the 16th Alliance/ACE annual conference, 83–97. ERIC Document No. ED420511.

Scriven, M. 1995. Student ratings offer useful input to teacher evaluations. Practical Assessment, Research & Evaluation 4, no. 7. http://PAREonline.net/getvn.asp?v=4&n=7 (accessed November 16, 2005).

Seldin, P. 1998. How colleges evaluate teaching. AAHE Bulletin 7, no. 50: 3–7.

Shrauger, J.S., and T.M. Osberg. 1981. The relative accuracy of self-predictions and judgments by others in psychological assessment. Psychological Bulletin 90: 322–51.

Shultz, K.S. 1996. Utility analysis in public sector personnel management: Current issues and keys to implementation. Public Personnel Management 25, no. 3: 369–78.

Silver, W.S., and T.R. Mitchell. 1990. The status quo tendency in decision making. Organizational Dynamics 34, no. 13: 37–45.

Simon, H.A. 1997. Administrative behavior: A study of decision-making processes in administrative organizations. 4th ed. New York: Free Press.

Simpson, P.M., and J.A. Siguaw. 2000. Student evaluations of teaching: An exploratory study of the faculty response. Journal of Marketing Education 22, no. 3: 199–213.

Sonner, B.S. 2000. A is for ‘adjunct’: Examining grade inflation in higher education. Journal of Education for Business 76, no. 1: 5–8.

Stoops, S.L. 2000. Evaluation of adjunct faculty in a process for effectiveness. In Managing adjunct & part-time faculty for the new millennium, ed. D.E. Greive and C.A. Worden, 221–46. Elyria, OH: Info-Tec.

Tversky, A., and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185: 1124–31.

Watchel, H.K. 1998. Student evaluation of college teaching effectiveness. Assessment & Evaluation in Higher Education 23, no. 2: 191–211.

Welfel, E.R. 2000. Ethical issues for adjunct faculty and their managers. In Managing adjunct & part-time faculty for the new millennium, ed. D.E. Greive and C.A. Worden, 2101–17. Elyria, OH: Info-Tec.

Winston, G.C. 1999. For-profit higher education: Godzilla or chicken little. Change 31, no. 1: 12–9.

Yon, M., C. Burnap, and G. Kohut. 2002. Evidence of effective teaching: Perception of peer reviewers. College Teaching 50, no. 3: 104–10.
