
American Journal of Community Psychology, Vol. 12, No. 2, 1984

Making Evaluation Viable: The Response of Graduate Programs in Evaluation

Jonathan A. Morell¹

Program in Evaluation and Applied Social Research, Hahnemann University

Evaluation brings the power of social science to bear on assessing the effectiveness and efficiency of programs. By so doing evaluation can play an important role in developing powerful and practical responses to ever-changing needs. If that role is to be played well there must be a cadre of well-trained evaluators, and an important responsibility falls to graduate evaluation programs in training that cadre.

Understanding the relation between evaluation and its training programs first necessitates understanding the general functions of graduate education. Those functions include preparing students to use specialized knowledge; giving graduates a chance for employment; furthering the field by socializing neophytes into a particular belief system; and providing a context for research and scholarship. These are considerable responsibilities for graduate training programs, and how they live up to these responsibilities can have a profound effect on the viability of an entire field.

Crucial to understanding graduate training programs is an appreciation of the fact that they are "open systems" which must continually adapt to a changing environment.³ Adaptation is particularly important in evaluation because of its traditionally heavy focus on social and educational services, areas which have been profoundly affected by changes in funding levels and funding structures.

¹All correspondence for reprints should be sent to Jonathan A. Morell, John F. Kennedy CMH/MRC Research and Evaluation, 112 N. Broad Street, Philadelphia, Pennsylvania 19102.

²A detailed analysis of evaluation as a scientific and technological endeavor can be found in Morell (1979).

³An excellent discussion of the "open system" concept can be found in Katz and Kahn (1969).



Understanding the future of evaluation training also necessitates an appreciation of the dichotomy between what a field is and what its practitioners do. It may be difficult to define precisely what medicine is, but many physicians feel they spend too little time doing it and too much time doing other things. Similarly in science, social work, engineering, and evaluation: we may not know precisely what our fields or professions are, but whatever they are, we are forced to spend too much time doing other things. There is a difference between a person's job - the set of roles and tasks he or she is paid to perform - and a person's core professional acts.⁴ I believe that if evaluators are to be successful their training must not only deal with core professional acts, but must also treat the entire range of activities which are likely to confront evaluators. Further, there must be an attempt at synthesis - an attempt to show students how a set of skills interacts and combines to guide the work life of a successful evaluator.

A final important general aspect of training programs is the inherent irrelevance of any graduate training program to practical application (Morell & Flaherty, 1978). Training programs rely heavily on classroom experience, a process particularly well suited to the construction of simplified, and hence artificial, models of reality. There is a great advantage in this, because such models allow one to isolate important elements of a phenomenon and to see the relationships among those elements. But those models are also problematic because they are not representative of practical settings. Professional education's heavy emphasis on field placement activity is a response to this problem, and is intended to help students understand the practical application of a body of abstract knowledge. Unfortunately the diverse nature of evaluation and the rapidly changing demands placed on evaluators make it difficult to design placement activities which have an enduring relevance.⁵

So far this discussion has focused on graduate training in general. Before we can turn to the specifics of evaluation we must have a sense of where the field came from and where it is going. Evaluation in its modern form goes back not much farther than two decades.⁶ This period was marked by three developments. First is the application of social research methodology to large-scale social programs. Second is a recognition that evaluation can be used as an aid to decision makers whose responsibilities include the day-to-day viability of a program or an organization. The third development is parallel to the other two and affects both: evaluators' realization that the political, organizational, and psychological factors which influence information use must be incorporated into their work. Otherwise evaluation will not be used.⁷

⁴The notion of a core professional act derives from the sociology of professions, where there is much discussion about the importance of bodies of specialized knowledge which a group may possess. These and related issues are well covered in Becker (1970), Bucher and Strauss (1961), Goode (1969), and Wilensky (1964).

⁵The place of practical experience in evaluation training is discussed in greater depth in Korn, Keiser, and Stevenson (1982) and Weeks (1982).

⁶An excellent history of evaluation can be found in Cronbach and Associates (1980, Chap. 1).



For the most part the intellectual leaders in all this activity were social scientists with backgrounds in psychology, education, and sociology. By interest and inclination their tasks ran to the social and human services, and to work in the nonprofit-making sector.

More difficult is determining where our field is going. Here I attempt some prognostication based on my experience, discussions with colleagues, and a knowledge of where job opportunities and field placement work seem to be developing for students in Hahnemann University's Program in Evaluation and Applied Social Research. More work will certainly be done in the private sector, assessing the need for and effectiveness of human resource development programs, efforts to increase productivity, and the like. Health care will remain an important area, especially topics which relate to cost containment and the development of new ways to organize health care. The evaluation of energy programs may become important. Evaluation in the defense field will increase. Programs for the elderly are likely to maintain reasonably consistent funding and needs for evaluation. Issues of program accountability and program cost are likely to become increasingly important, to the detriment of evaluation with a heavy "research" component. Finally, the shift in program responsibility to state and local levels will increase needs among state and local governments for evaluation services. Although this need will increase, funds for the requisite evaluation will be scarce, thus placing a premium on small-scale projects and the development of appropriate methodologies for such projects. Opportunities to compare semicomparable programs across different sites will decrease because of the greater number of organizations instituting those programs. The labels given to people who do evaluation, and their official roles in tables of organization, will become more and more diverse. A large proportion of evaluation has always taken place without being called evaluation. This state of affairs will become more common because the areas which have begun to develop some tradition of recognizing evaluation - education, mental health, and the like - have been profoundly affected by recent budget cutbacks.


⁷For those interested in promoting information use, the following works make a good starting point: Havelock (1979), Morell (1982), and Weiss (1980, 1981).


Given the responsibilities and dynamics of training programs, and given developmental trends in evaluation, how should evaluation training programs adjust their curricula? I believe the first step is to map out evaluation training needs relative to what evaluators are likely to do in their jobs. For this task I recommend a decision tree model developed by Ingle and Klauss (1980), and reproduced here as Figure 1. The Ingle and Klauss model is based on the assumption that different evaluation roles require different levels of proficiency in various skills. The advantage of this approach is that it helps predict what evaluation students may do as professionals, identifies skills needed in various evaluation roles, and provides an estimate of skill level as a function of role.
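The logic of such a contingency model can be made concrete with a brief sketch. The Python fragment below is purely illustrative: the role names, skill names, and proficiency levels are hypothetical placeholders rather than the content of the Ingle and Klauss figure. It shows only the general idea of treating required skill level as a function of evaluation role, which is the feature of the model that matters for curriculum planning.

    # Illustrative sketch only: roles, skills, and levels are hypothetical
    # placeholders, not the actual content of the Ingle and Klauss (1980) model.
    from enum import IntEnum

    class Level(IntEnum):
        """Required proficiency in a skill for a given evaluation role."""
        NONE = 0
        AWARENESS = 1
        WORKING = 2
        EXPERT = 3

    # Hypothetical role-by-skill matrix: each evaluation role demands a
    # different profile of skill levels.
    REQUIRED_PROFICIENCY = {
        "evaluation manager": {
            "research design": Level.WORKING,
            "statistics": Level.AWARENESS,
            "organizational politics": Level.EXPERT,
            "report writing": Level.WORKING,
        },
        "methodologist": {
            "research design": Level.EXPERT,
            "statistics": Level.EXPERT,
            "organizational politics": Level.AWARENESS,
            "report writing": Level.WORKING,
        },
        "internal program evaluator": {
            "research design": Level.WORKING,
            "statistics": Level.WORKING,
            "organizational politics": Level.WORKING,
            "report writing": Level.EXPERT,
        },
    }

    def skills_to_emphasize(role, minimum=Level.WORKING):
        """List the skills a curriculum should stress for students headed toward a role."""
        profile = REQUIRED_PROFICIENCY[role]
        return sorted(skill for skill, level in profile.items() if level >= minimum)

    if __name__ == "__main__":
        for role in REQUIRED_PROFICIENCY:
            print(role + ": " + ", ".join(skills_to_emphasize(role)))

A program planner could then compare such profiles across roles to decide which skills deserve required coursework and which can be left to electives or field placements.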

The second step in shaping an evaluation training program is to assess the types of research and investigatory skills needed by evaluators, and the attendant theoretical grounding that must accompany those skills. In order to make this determination I propose a model which identifies the investigations which evaluators may undertake and the relationships among those investigations (Morell, 1981). This model is depicted in Figure 2. By virtue of interest or job one might find oneself being called an evaluator and working at any point in the model. Evaluators must be cognizant of relationships among different parts of the model, and must consider those relationships when initiating and developing projects. Whenever possible, choices should be made which make a project more, rather than less, immediately relevant to improving the quality of a program for particular target groups. As an example, one might have an opportunity to help develop more effective means of psychotherapy. Some research might lead to psychotherapies which would be extremely effective if they could be applied, but which are not likely to find such application: they may be particularly expensive, their delivery may require considerable organizational change in delivery organizations, or they may be perceived by patients as unacceptable. Contrast this with an effort to improve forms of psychotherapy which are already known to patients and to provider groups, and which can be delivered economically through existing channels. Research on either type of psychotherapy is potentially useful (or irrelevant, or harmful), but the evaluator will choose the latter research over the former.

These two models (Morell and Ingle-Klauss) yield an estimate of the universe of skills and theory which might be imparted to students in evaluation training programs. It now remains to choose from among that universe and to specify what will and will not be included in a specific program's curriculum. Two considerations are important in making these choices. First is the personal interest, expertise, and contact network of the program's faculty. Without enthusiastic support a training program cannot succeed, and that support is most likely when the curriculum and the interests of the students coincide with those of the faculty. In theory, one might begin program development with students, curriculum, or faculty, but in most cases the faculty are the most fixed of the three elements.


[Figure 1 (Ingle & Klauss, 1980): decision tree model relating evaluation roles to needed skills and levels of proficiency. The figure is not legible in this reproduction.]


[Figure 2 (Morell, 1981): model of the investigations evaluators may undertake and the relationships among those investigations. The figure is not legible in this reproduction.]


Second, one must consider the potential job market for graduates and the related issue of field placement opportunities. Jobs and field placements are related because field placements often give students a knowledge of systems, a set of personal contacts, and special expertise which can be instrumental in obtaining employment. Work opportunities influence program curriculum because there is an implicit contract between a training program and its students to the effect that a reasonable possibility of employment awaits upon graduation. From a more selfish point of view, satisfied graduates are invaluable in establishing the credibility of a training program. They also play an important role in placing future generations of students and in providing research opportunities for faculty.

When analyzing job markets it is important to consider situation-specific factors as well as more general issues such as social needs, funding levels for general types of programs, and the like. As an example, our program in Evaluation and Applied Social Research at Hahnemann works under special constraints. Many of our students are returning women or people interested in a midlife career change, and thus tend to have family roots and/or existing employment in the Delaware Valley. This makes it difficult for them to move, and requires that we train them for jobs available in the area. Thus when we look beyond labels to find where people with evaluation backgrounds may work, we are particularly sensitive to local conditions. Other programs may not have this constraint but may well have others.

There is a continually changing set of relationships among society's needs for evaluation, the needs of training programs, the needs of students, the field of evaluation, and the work that evaluators do. People responsible for evaluation training programs must be able to react to changes in that set of relationships and, whenever possible, to anticipate events. Above all, those responsible for evaluation training must be sensitive to the influence their activities can have on the field, and must take a proactive role in developing evaluation training programs which meet the needs of the diverse groups who have a stake in program evaluation.

REFERENCES

Becker, H. S. The nature of a profession. In H. S. Becker (Ed.), Sociological work. Chicago: Aldine, 1970.

Bucher, R., & Strauss, A. Professions in process. American Journal of Sociology, 1961, 66, 325-334.

Cronbach, L. J., and Associates. Toward reform of program evaluation. San Francisco: Jossey-Bass, 1980.

Goode, W. J. The theoretical limits of professionalization. In A. Etzioni (Ed.), The semi-professions and their organization. New York: Free Press, 1969.


Havelock, R. G. Planning for innovation through dissemination and utilization of knowledge. Ann Arbor: Institute for Social Research, University of Michigan, 1979.

Ingle, M. D., & Klauss, R. Competency-based program evaluation: A contingency approach. Evaluation and Program Planning, 1980, 3, 277-287.

Katz, D., & Kahn, R. L. The social psychology of organizations. New York: Wiley, 1969.

Korn, J. H., Keiser, K. W., & Stevenson, J. F. Practicum and internship training in program evaluation. Professional Psychology, 1982, 13, 462-469.

Morell, J. A. Program evaluation in social research. Elmsford, N.Y.: Pergamon Press, 1979.

Morell, J. A. Evaluation in prevention: Implications from a general model. Prevention in Human Services, 1981, 1, 7-40. (Theme issue on evaluation)

Morell, J. A. Threats to the utility of social science for practical decision making. American Behavioral Scientist (Special Issue: Values and Applied Social Science), 1982, 26(2), November/December.

Morell, J. A., & Flaherty, E. W. The development of evaluation as a profession: Current status and some predictions. Evaluation and Program Planning, 1978, 1, 11-17.

Weeks, E. C. The value of experiential approaches to evaluation training. Evaluation and Program Planning, 1982, 5(1).

Weiss, C. H. Knowledge creep and decision accretion. Knowledge, 1980, 1, 381-404.

Weiss, C. H. Measuring the use of evaluation. In J. Ciarlo (Ed.), Utilizing evaluation: Concepts and measurement techniques. Beverly Hills: Sage, 1981.

Wilensky, H. L. The professionalization of everyone? American Journal of Sociology, 1964, 70(2), 137-158.