
The Status of Management Oriented Evaluation in Public Administration and Management Graduate Programs¹

ANNA-MARIE MADISON

Anna-Marie Madison, PhD, Assistant Professor, Human Services Management Graduate Program, University of Massachusetts-Boston, Boston, MA.

Evaluation Practice, Vol. 17, No. 3, 1996, pp. 251-259. Copyright © 1996 by JAI Press Inc. ISSN: 0886-1633. All rights of reproduction in any form reserved.

ABSTRACT

This paper details an assessment of the evaluation course content of graduate programs in public affairs and administration. The research describes the distribution of evaluation courses in these programs by academic unit, program focus, and subject matter content. The findings reveal that 85% of the programs provide at least one evaluation course and that evaluation is a core requirement in 49% of the programs. Analysis of course syllabi demonstrates that these courses are more likely to focus on program outcomes and policy impact than on management issues concerning the effects of organizational structures, processes, and internal resource allocations on organizational performance. The author argues that more courses in these programs should emphasize such management issues to enable students to use evaluation more effectively as a management tool.

INTRODUCTION

Stufflebeam (1971) and Wholey (1983) were among the first evaluators to focus on using evaluation to improve the management and outcomes of public programs. Stufflebeam's focus on decision-oriented evaluations to develop, improve, and defend the worth of public education programs led to the development of the CIPP model for educational accountability. The CIPP model provides a systematic methodology for examining the context, inputs, process, and products of a program to inform decisions that improve overall program performance. Wholey, in contrast, focuses on results-oriented evaluation as a tool to improve public sector performance. Wholey introduced results-oriented evaluation as a systematic evaluation approach designed to help managers identify realistic results-oriented objectives and performance indicators, as well as to identify the effect of management structures, processes, and resource allocation decisions on results attainment.

Results-oriented evaluation activities include linking evaluation to the management functions of defining agency goals and objectives, setting priorities, defining performance measures and performance targets, assessing performance results, and using performance information to improve performance and results and to identify and abandon unproductive activities (Wholey, 1983, p. 5).
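Read as a process, Wholey's list chains objectives to indicators, indicators to targets, and targets to an assess-and-act step. The following is a minimal sketch of that chain as a data structure; it is purely illustrative, all names and figures are hypothetical, and it is not drawn from the article or from Wholey's own materials.

```python
# Illustrative sketch of the results-oriented chain:
# objectives -> performance indicators -> targets -> assessment -> action.
# All names and numbers are hypothetical examples, not from the article.
from dataclasses import dataclass

@dataclass
class PerformanceIndicator:
    name: str       # what is measured, e.g., a placement rate
    target: float   # the agreed performance target
    actual: float   # the observed result for the reporting period

    def met(self) -> bool:
        return self.actual >= self.target

@dataclass
class AgencyObjective:
    description: str
    indicators: list[PerformanceIndicator]

    def assess(self) -> str:
        """Use performance information to flag the objective for
        improvement, or for abandoning unproductive activities."""
        if all(ind.met() for ind in self.indicators):
            return "on target"
        return "review: improve, or abandon unproductive activities"

# Hypothetical usage
objective = AgencyObjective(
    description="Place job-training graduates in employment",
    indicators=[PerformanceIndicator("placement rate", target=0.60, actual=0.52)],
)
print(objective.assess())  # -> review: improve, or abandon unproductive activities
```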

Over the last 20 years, Wholey and other evaluators have developed and tested evaluation strategies to improve management in various public agencies and have demonstrated that results-oriented evaluations at various stages of program development can help managers to produce demonstrable improvements in overall performance at federal, state, and local levels of government (Wholey, Hatry, & Newcomer, 1994; Wholey & Hatry, 1992; Wholey & Newcomer, 1989; Wholey, Abramson, & Bellavita, 1986; Wholey, 1983).

To institutionalize results-oriented management and management-oriented evaluation, professionally trained managers with knowledge and skill in these methods are required. Graduate public administration and management programs tend to attract entry-level and middle managers who are currently working in the public arena or will be in the future. This generation of managers must be able to use performance measurement technology to assess and improve management (Bouckaert, 1993; Cohen, 1993) and must be willing to initiate the structural, procedural, and ideological administrative reforms necessary to institutionalize evaluation as a management tool (Kim & Wolff, 1994). It is through their graduate education that they are most likely to gain these skills.

The study reported here examines the current status of evaluation course work in public administration and affairs programs. This paper raises the following two questions: (1) What types of evaluation courses are required or elective, and how do they vary by program focus? (2) What is the subject matter content in the various types of evaluation courses, and how does that content vary by program focus? This paper assesses the evaluation course content in 140 Master of Public Administration and Public Affairs programs. The research describes the distribution of evaluation courses by the academic unit in which they are located, the graduate program focus, and the subject matter content.

STUDY DESIGN

The study population consisted of all master's programs listed in the National Association of Schools of Public Affairs and Administration (NASPAA) 1992 Directory. NASPAA is the accrediting organization for public affairs and administration graduate programs. NASPAA member institutions collectively establish guidelines for the educational content of graduate programs and monitor compliance with these guidelines through a peer review process. NASPAA's membership is comprised of both public and private educational institutions and includes large, medium, and small campuses across the United States with varied student populations.

This study was conducted by the survey research method. Address labels for designated program representatives were purchased from NASPAA; questionnaires were mailed to the 205 NASPAA member institutions. The person designated as the institution's representative was asked to have the professors who teach evaluation courses on a regular basis complete the questionnaire. The designated respondents also were asked to submit course syllabi for all evaluation courses considered appropriate for administration and management students. While 14% of the programs offered more than one evaluation course, no program sent more than one syllabus; thus, each syllabus represents a different program in the analyses.

Study respondents were instructed to omit basic research design and methods courses from the category of evaluation courses. An evaluation course was defined on the survey as any course that included evaluation in the title and/or 50% or more evaluation content. Evaluation was defined as "the systematic application of scientific methods to assess and improve decision making and program and organizational performance." The initial mailing and a follow-up mailing of the questionnaire yielded 149 returned questionnaires; 140 were satisfactorily completed, representing an actual return rate of 68%.
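As a check on the arithmetic, the reported 68% is evidently computed from the satisfactorily completed questionnaires rather than from all returns:

$$\frac{140\ \text{completed}}{205\ \text{mailed}} \approx 68.3\%, \qquad \text{whereas}\ \frac{149\ \text{returned}}{205\ \text{mailed}} \approx 72.7\%.$$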

The responding schools represented a cross-section of public affairs and administration schools by size, geographical region, and focus. The distribution of respondents by region was as follows: South, 30%; Midwest, 25%; West, 17%; Northeast, 21%; and Southwest, 7%. This compared well with the total NASPAA membership distribution by region: South, 29%; Midwest, 26%; West, 17%; Northeast, 20%; and Southwest, 8%. The representation of the responding institutions by size (using graduate program student enrollment as an indicator) was as follows: small, 68%; medium, 24%; and large, 8%. This distribution also compared well with the total NASPAA membership distribution by size: small, 74%; medium, 16%; and large, 10%. The majority of the programs identified themselves as having a generalist or generic management/administration focus.

FINDINGS

Academic Unit and Program Focus

Forty-seven percent of the programs were located in Political Science Departments in Arts and Sciences Colleges; the remaining 53% were free-standing programs in their own Schools of Public Affairs and Administration, or located in public administration departments in other professional schools. The majority of the programs had a generalist or generic public administration/management focus (54%). Non-generalist programs included ones focusing on urban management, 23%; public policy, 19%; and others, 4%. The "other" category included programs such as criminal justice, state and county administration, health administration, economic development, and nonprofit administration.

Evaluation Course Offerings and Requirements

The number of evaluation courses offered by the graduate programs ranged from zero to four. Seventy-one percent of the respondents offered one course in evaluation; 10% offered two; 4% offered three or more; and 16% did not offer a specific evaluation course. Some of the respondents in the latter group noted that evaluation courses were available to students through other departments, such as education, economics, psychology, and other social and behavioral science units.

The data presented in Table 1 indicate that public policy programs were the most likely to have evaluation courses among their core requirements, with 57% of these programs reporting that they require an evaluation course, as compared to 49% of all programs.

TABLE 1
Percentage Distribution of Evaluation Course Requirements by Program Focus

Program Focus                    Core Requirement    Required for Some Specializations¹
Generalists/Generic (n = 78)           47%                        48%
Urban Management (n = 32)              48%                        28%
Public Policy (n = 25)                 57%                        21%
Other (n = 5)                          50%                        33%

Notes: n = 140 institutions.
1. "Some specializations" refers to areas of specialization in management, such as budgeting and finance, human resource management, etc.

Forty-seven percent of the general/generic programs and 48% of the urban management-focused programs require evaluation as a core requirement. Thirty-seven percent of the programs require evaluation for some specializations, but not as a core requirement, and 14% offer evaluation as an elective.

The data also reveal variances in course requirements for evaluation courses according to the academic unit in which they are located. Programs located in political science departments require evaluation as a core course more often than those located in public administration departments or in schools of public affairs. Sixty percent of the political science programs require evaluation as a core course, as compared to 24% of the free-standing programs in their own Schools of Public Affairs and Administration and 16% of the public administration departments.

Course Content

Content analysis of course syllabi was conducted to discern the content of the evaluation courses. Course syllabi were used instead of the catalogue course descriptions because syllabi are assumed to be more reflective of the actual course content. Nonetheless, the author acknowledges that course syllabi are not a perfect indicator of course content. Some of the problems inherent in syllabi content analysis are that some instructors provide less detail on the syllabus than others and that instructors make adjustments to the syllabus throughout the course. Instructors sometimes cover topics that were not included in the syllabus and/or omit some of the topics included. Finally, the syllabus does not always indicate how much actual time is spent on each topic.

Even though the syllabi have limitations, for the purpose of empirical observation, they are considered more reliable than the course catalogue; yet, they are considered less reliable than direct participant observation and student assessment of the course content. Due to budgetary and other constraints, it was not possible to conduct participant observations or to collect student assessments of the course content.

Although syllabi were not uniform, all included the primary textbooks and other reading material, topics to be covered, and course objectives and requirements. The assigned readings, topical outlines, and project descriptions allow one to make some assumptions about the focus of the course, the body of knowledge upon which the course is based, and the possible learning outcomes. For example, if the primary textbook for the course is an experimental design research methods text, supplemental readings include articles on large-scale studies of program impact, and the class projects focus on measurement of program impact or the cost-effectiveness of the program, one can assume that the course focus is not on management issues such as performance monitoring and internal resource allocation.

The content of the course syllabi included a broad range of course objectives, topics, course readings, and class projects. This content was used to cluster the courses into three categories of evaluation content: (1) traditional quantitative program evaluation, with an emphasis on the assessment of program outcomes and a heavy concentration on experimental and quasi-experimental design; (2) management-oriented program evaluation, with an emphasis on evaluation as an information gathering, measurement, and analysis activity to improve the performance of public programs; and (3) policy analysis, with an emphasis on measurement of the projected impact of various public policy options. The difference between the first and third categories is that the first emphasizes programs, whereas the third emphasizes policy, with particular focus on economic and political cost factors.
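The kind of inference just described, reading a syllabus's textbooks, topics, and projects as signals of one of the three categories, can be expressed as a simple keyword screen. The sketch below is purely illustrative: the keyword lists are invented, and the study's actual coding was a qualitative content analysis, not an automated one.

```python
# Hypothetical sketch of keyword-based syllabus coding into the three
# categories described above. Keyword lists are invented for illustration.
CATEGORY_KEYWORDS = {
    "traditional quantitative program evaluation": [
        "experimental design", "quasi-experimental", "program outcomes",
        "sampling", "statistical procedures",
    ],
    "policy analysis": [
        "cost benefit", "cost effectiveness", "forecasting",
        "policy impact", "decision models",
    ],
    "management-oriented program evaluation": [
        "performance monitoring", "resource allocation",
        "organizational performance", "management processes",
    ],
}

def classify_syllabus(text: str) -> str:
    """Return the category whose keywords appear most often in the syllabus."""
    text = text.lower()
    scores = {
        category: sum(text.count(kw) for kw in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

# Hypothetical usage
syllabus = "Topics: quasi-experimental design, sampling, and program outcomes."
print(classify_syllabus(syllabus))  # -> traditional quantitative program evaluation
```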

Traditional quantitative program evaluation. Traditional quantitative program evaluation courses represent 31 percent of the evaluation courses (see Table 2). The common objective of these courses is to provide knowledge in the application of scientific methods to analyze the outcomes of government programs. The course descriptions indicate that outcomes refer to goal attainment and/or the impact of the program on the identified public problems. The content of these course syllabi emphasizes the application of social science research methods in evaluation, with heavy emphasis on scientific procedures and methodological rigor. The topics include: design and measurement, sampling procedures, statistical procedures, and data analysis. Prerequisites in statistics or research methods were required in three-fourths of the courses.

The course syllabi indicate that, in most cases, little emphasis is placed on the contextual environment in which evaluation is implemented. Neither the readings, the topics for discussion, nor the project descriptions indicated any attention to the contextual environment of evaluation. Political and administrative problems that affect successful use of evaluation also were not included among the topics discussed or readings assigned in the majority of these courses. Finally, the course objectives failed to give attention to the use of evaluation to answer process questions that may be relevant to managers.

TABLE 2
Percentage Distribution of Evaluation Course Content by Program Focus

Program Focus                             Traditional Quantitative   Policy      Management-Oriented
                                          Program Evaluation         Analysis    Program Evaluation
General/Generic Administration (n = 70)          76%                    9%              15%
Urban Management (n = 22)                         71%                    8%              21%
Public Policy (n = 27)                            17%                   83%               0%
Overall Course Content                            31%                   42%              27%

Note: n = 119 syllabi.


Policy analysis. Forty-two percent of the evaluation courses are represented in the policy analysis category (see Table 2). The policy analysis course syllabi emphasize scientific methods, with heavy attention given to the use of advanced statistical procedures for projecting and forecasting policy outcomes. These course syllabi include topics such as: (1) theoretical models and conceptual approaches to policy analysis, (2) policy decision models, (3) cost-benefit and cost-effectiveness analysis, and (4) scientific techniques and measures for projecting policy impact. The course syllabi tend to stress the fiscal aspects of policy options; therefore, economic modeling and cost analysis are key components of these courses. Proficiency in advanced statistics is required as a prerequisite to taking these courses.

A subcategory of courses described as policy analysis covers the policy process, with little attention given to the technical aspects of policy analysis. This subcategory, which represents about one-third of the policy analysis courses, includes topics such as: (1) problem analysis, (2) policy agenda and goal setting, (3) the political context of the policy process, including policy formulation and adoption, and (4) evaluation of policy outcomes. Policy evaluation was the last component of these course syllabi and was allocated ten percent of the total course time or less. These courses are more frequently found in public administration programs located in political science departments.

Management-oriented program evaluation. Management-oriented program evaluation courses emphasize the application of research methods as an analytic tool to better understand both management processes and structures. Evaluation is used as a means to strategically manage resources to improve organizational performance. Twenty-seven percent of the evaluation courses are classified in this category. One common characteristic of the management-oriented program evaluation syllabi is an emphasis on the application of evaluation to management problem-solving. According to the syllabi, the objective of these courses is for students to acquire the technical analytic skills and knowledge required to answer questions concerning the effect of management structures, processes, and resource allocation decisions on organizational performance. The topics presented in the management-oriented program evaluation syllabi include technical and contextual aspects of evaluation. Although not all management-oriented program evaluation courses include both of these components, 60% included both.

The contextual topics included the political and organizational context of evaluation. The technical topics include: (1) measurement validity and reliability, (2) statistical procedures and data analysis, and (3) interpreting and reporting data. Approximately 50% of the management-oriented program evaluation syllabi indicated that a prerequisite course in research methods was required; one-third require both a research methods course and a statistics course as prerequisites.

Evaluation Course Content by Program Focus

Analysis of evaluation course content by program focus was conducted to better understand evaluation content in the context of programs. The distribution of evaluation content by program focus presented in Table 2 revealed that 71% of the urban management programs emphasized traditional quantitative program evaluation content, with only 21% emphasizing management-oriented program evaluation course content. Eighty-three percent of the programs having a public policy focus emphasized policy analysis evaluation course content, and 76% of the programs having a general/generic administration focus placed emphasis on traditional quantitative program evaluation.

DISCUSSION

The present status of evaluation course work in over 60 percent of NASPAA-accredited public administration and related programs indicates that evaluation is not being taught as a management tool in the majority of the programs. According to the course syllabi, less than one-third of the programs provide evaluation content that is oriented to management issues. Thirty-one percent focused on traditional quantitative program evaluation, and 42 percent focused on policy analysis. The course syllabi indicate that policy analysis and traditional quantitative program evaluation courses are more likely to address program impact, cost-effectiveness, and policy issues. Evaluation practitioners and scholars note that evaluation questions related to program impact and the fiscal and political costs related to programs are more likely to be of concern to external users of evaluation (such as legislative leaders, executive policy decision makers, and top administrative leaders) than to managers (Chelimsky, 1987; Epstein, 1992; Oman & Chitwood, 1984; Nyhan & Marlowe, 1995). On the other hand, managers are more interested in evaluations that will help them identify the strategic options available to attain more effective, efficient management of public resources (DuPont-Morales & Harris, 1995; Oman & Chitwood, 1984; Rothman, 1980).

Overall, 95% of the general/generic administration programs and 75% of the urban management programs required at least some of their students to complete an evaluation course; however, few of these courses focus on management-oriented evaluation. As the numbers in Table 2 show, such content is presented in only 27% of all programs. Thus, students in almost three-quarters of the programs do not receive an orientation to evaluation as a tool to inform management decisions and improve program outcomes. This finding raises questions concerning whether graduates will be proficient in the use of evaluation as a management tool. One of the objectives of professional education in public affairs and administration is to provide leadership for the judicious, competent management of public resources (Jennings, 1989). If we are to achieve this objective, we must expand the use of management-oriented evaluation in our graduate programs.

Given the current state of evaluation education in public administration and management, the author offers some suggestions based on experience teaching evaluation in a public administration program. The first step in expanding the evaluation content in public administration and management programs is to identify the knowledge and basic skills managers need to become more effective users of evaluation (Anderson & Ball, 1978; Covert, 1992; Mertens, 1994). Management-oriented evaluation should include proficiency in the application of scientific measurement techniques to establish realistic, measurable goals and objectives; measurable performance indicators; and measurement criteria for the assessment of overall management results. These skills alone will not prepare managers to use evaluation effectively; training in evaluation should extend beyond basic methods competencies. Technical skills should be merged with behavioral knowledge concerning the political and administrative context of evaluation, including organizational and fiscal constraints. Course content should also address the evaluation process, particularly as it relates to team building and working with internal and external evaluators.


One approach to the inclusion of management-oriented evaluation in the public affairs and administration curriculum would be to provide a performance monitoring and evaluation course separate from program evaluation courses. An alternative approach would be to integrate management-oriented evaluation content into existing courses in program evaluation. Such courses would present evaluation as an ongoing process of performance monitoring and fine-tuning to improve performance, as well as an activity to assess overall performance and results.

A third, more innovative approach to the incorporation of performance monitoring and evaluation into the curriculum would be to combine performance monitoring and evaluation, strategic planning, and performance budgeting into one comprehensive three-part seminar. This seminar could be offered over a two- to three-semester period. Such a course would introduce an evaluation paradigm that more clearly articulates the relationships among strategic planning, performance evaluation, and performance budgeting. This course would also demonstrate the link between managerial accountability (DuPont-Morales & Harris, 1995) and performance budgeting as tools for fiscal decision-making. An integrated seminar could be team taught. The team approach would allow public affairs and administration professors to demonstrate that performance monitoring and evaluation are team processes that interface with other aspects of management. Integration of performance monitoring and evaluation, strategic planning, and performance budgeting into one comprehensive course presents a systematic approach to the planning, implementation, monitoring, and evaluation of public programs.

The relevance of management-oriented evaluation content to public affairs and administration is indisputable. Within the last decade, evaluation as a management tool has become a staple in municipal government (Poister & Streib, 1989), and with the passage of the Government Performance and Results Act of 1993, evaluation has become a mandated management tool in the federal government. Managers who effectively use evaluation to improve organizational performance will meet this challenge more successfully than those who do not.

NOTES

1. This study was funded by a University of North Texas faculty research grant.

REFERENCES

Anderson, S. B., & Ball, S. (1978). The profession and practice of program evaluation. San Francisco, CA: Jossey-Bass.

Bouckaert, G. (1993). Measurement and meaningful management. Public Productivity & Management Review, 17(1), 31-43.

Chelimsky, E. (1987). Linking program evaluation to user needs. In D. Palumbo (Ed.), The politics of program evaluation (pp. 72-99). Newbury Park, CA: Sage.

Cohen, S. A. (1993). Defining and measuring effectiveness in public management. Public Productivity & Management Review, 17(1), 45-57.

Covert, R. W. (1992). Successful competencies in preparing professional evaluators. Paper presented at the annual meeting of the American Evaluation Association, Seattle, WA.

DuPont-Morales, M. A., & Harris, J. E. (1995). Strengthening accountability: Incorporating strategic planning and performance measurement into budgeting. Public Productivity & Management Review, 18(3), 231-239.

Epstein, P. D. (1992). Measuring the performance of public services. In M. Holzer (Ed.), Public productivity handbook (pp. 161-193). New York: Marcel Dekker.

Jennings, E. T. (1989). Accountability, program quality, outcome assessment, and graduate education for public affairs and administration. Public Administration Review, 49, 438-446.

Kim, P. S., & Wolff, L. (1994). Improving government performance: Public management and the national performance review. Public Productivity & Management Review, 18, 73-87.

Mertens, D. M. (1994). Training evaluators: Unique skills and knowledge. In J. W. Altschuld & M. Engle (Eds.), The preparation of professional evaluators: Issues, perspectives, and programs (New Directions for Program Evaluation, No. 62). San Francisco, CA: Jossey-Bass.

Nyhan, R. C., & Marlowe, H. A. (1995). Performance measurement in the public sector: Challenges and opportunities. Public Productivity & Management Review, 18(4), 333-348.

Oman, R. C., & Chitwood, S. (1984). Management evaluation studies. Evaluation Review, 8(3), 282-305.

Poister, T., & Streib, G. (1989). Management tools in municipal government: Trends over the past decade. Public Administration Review, 49, 240-247.

Rothman, J. (1980). Using research in organizations. Newbury Park, CA: Sage.

Stufflebeam, D. L. (1971, Fall). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education.

Wholey, J. S. (1983). Evaluation and effective public management. Boston: Little, Brown.

Wholey, J. S., Abramson, M. A., & Bellavita, C. (1986). Performance and credibility: Developing excellence in public and nonprofit organizations. Boston: Lexington Books.

Wholey, J. S., & Hatry, H. P. (1992). The case for performance monitoring. Public Administration Review, 52, 604-610.

Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.). (1994). Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass.

Wholey, J. S., & Newcomer, K. E. (1989). Improving government performance: Evaluation strategies for strengthening public agencies and programs. San Francisco, CA: Jossey-Bass.