Policy Sciences 16 (1984) 281-289
Elsevier Science Publishers B.V., Amsterdam - Printed in The Netherlands
Assessing Faculty Research Productivity in Graduate Public Policy Programs*
MICHAEL FARBER, PATRICIA POWERS and FRED THOMPSON
Graduate Program in Public Policy and Administration, Columbia University, New York, NY 10027, U.S.A.
ABSTRACT
This study assesses the faculty research productivity of 23 graduate public policy programs, and is based upon data from the Social Sciences Citation Index. The results clearly show that a handful of policy programs have outstanding research records: UC Berkeley, Princeton, Michigan, Chicago, Duke, Carnegie-Mellon, Rand, Syracuse, and, of course, Harvard. Most of the rest also have quite commendable records by this criterion.
Introduction
We have long been able to assess the reputation and scholarly productivity of graduate academic departments. We can measure both reputation and productivity and know many of the pitfalls associated with these measures [1]. We know, too, that reputation, publication, and citations tend to be highly correlated [2] indeed, the relationship between these three variables is so consistent and so reliable that many students of the sociology of science conclude they must all be measuring the same thing. And we know
* Inquiries regarding this study should be addressed to Professor Fred Thompson, Graduate Program in Public Policy and Administration, School of International and Public Affairs, Columbia University, New York, New York 10027, where Michael Farber is currently a second-year student. Patricia Powers graduated in 1983 and is currently employed by ICF in Washington, D.C.
0032-2687/84/$03.00 © 1984 Elsevier Science Publishers B.V.
that scholarly productivity/reputation is only one criterion by which the distinction of a program should be judged. Of equal or perhaps greater importance are the ability and motivation of the student body, the quality of instruction, and the coherence and effectiveness of the curriculum. These variables may not be well correlated with faculty research productivity/reputation and should also be considered by anyone interested in evaluating program quality.
Our objective here is the more modest one: evaluating the relative faculty productivity of graduate public policy schools and public policy programs located at comprehensive schools of public affairs. To accomplish this objective, we could have counted either a faculty's publications or citations of its publications.
We chose to count citations rather than publications. In the first place, many sociologists of science believe the citation is simply the best unit of analysis for studies of scholarly productivity. Not only do citations tend to reflect total scholarly productivity - books as well as articles; they register something about the "impact" or "influence" of scholarly publication. As Cole and Zuckerman (1982: 12) observe:

This is so because recognition of the cognitive contributions of others in footnotes or references is well-established practice in science. Thus the number of times a particular paper has been cited is a rough indicator of the number of different occasions on which other authors have taken note of it. ... Citation counts for individual authors are ... significantly related to a variety of forms of scientific recognition such as prizes or awards, as well as to independent peer assessments of the significance of scientists' contributions. In fact, citation counts are a better predictor of influence or impact of contributions by individual scientists, as they are measured by awards, than are publication counts.
In the second place, the alternative, counting publications, appeared to be either impractical or inconclusive. A comprehensive publication count (books, articles in books, journal articles, etc.) was on the face of it not feasible. And although it has been shown that publication counts drawn from key journals in a field correlate highly with more comprehensive publications measures (so long as the usual statistical requirements are met - i.e., the sample set must be representative of the field under examination and sample size must be sufficient to avoid small numbers problems), we did not believe that it would be possible to identify satisfactorily a set of key journals.
The fact of the matter is that policy analysts come from a variety of disciplines, study a variety of subject matters, and publish frequently in many different journals. Any publication count from any small number of journals would necessarily be subject to a large random error [3].
Method and Findings
Before we could count citations, it was necessary to identify public policy faculty and, prior to that, to identify public policy programs. We solved the problem of program identification by self-selection. That is, we chose to focus our attention on the institutional members of the Association for Public Policy Analysis and Management
(APPAM). These are all programs that have sought to identify themselves as public policy schools. Furthermore, while APPAM comprehends several public service management programs located in business schools (e.g., Boston University or Yale's School of Organization and Management) as well as a couple of programs that might better be classified as more traditional public administration programs (e.g., Penn State University or the University of Pittsburgh's School of Public and International Affairs), it includes many of the comprehensive programs offering a public policy option [4] and all of the major free-standing public policy schools. To identify their faculty, we asked each APPAM member institution to provide us with a list of their "core public policy faculty, including those on temporary or service leave, but excluding those holding temporary or visiting appointments, together with a statement of the research interests of each faculty member listed." Most of the APPAM member institutions responded to our request, although perceptions of core faculty varied considerably (UC Berkeley, University of Maryland, Pennsylvania, Rochester, Penn State, UC Irvine, and Columbia listed ten or fewer core faculty; Harvard, over seventy), and about half failed to provide statements of research interests.
Next, drawing on the Social Sciences Citation Index (SSCI), we traced the cumulative citation histories for each scholar listed over a five-year span beginning in 1978 and running through midyear 1982. These citation data are nearly exhaustive, since the Source Index from which they are derived covers all of the major and most of the minor social science journals.
Our enumeration of cumulative citations is largely complete. However, we made certain modifications in the data in an attempt to minimize some of the more common errors in citation counting. First, self-citations were excluded wherever possible. Second, we made an effort to limit homonym problems by checking the fields represented in the citing journals listed under the name of a given scholar. Both of these modifications may have resulted in underestimates in some cases. But we felt that such errors were smaller than those that would have occurred had we not made these adjustments. We also tried to check cases in which frequent citation was associated with sparse publication to make sure the references really were to the scholars in our sample.
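The two adjustments just described can be sketched in a few lines. This is a minimal illustration only: the record format, names, and journal list are invented, and the field check here is a crude mechanical stand-in for the manual inspection actually performed.

```python
# Hypothetical SSCI-style citation records: (cited author, citing author,
# citing journal). All names and journal titles are invented.
records = [
    ("brown_j", "green_a", "Policy Sciences"),
    ("brown_j", "brown_j", "Public Policy"),      # self-citation
    ("brown_j", "white_k", "Journal of Botany"),  # likely a homonym
]

# Journals taken to represent the scholar's field; a citation arriving from
# far outside it is treated as a probable homonym.
FIELD_JOURNALS = {"Policy Sciences", "Public Policy", "Policy Analysis"}

def countable(cited, citing, journal):
    if cited == citing:                # first adjustment: drop self-citations
        return False
    if journal not in FIELD_JOURNALS:  # second adjustment: drop homonym suspects
        return False
    return True

adjusted_count = sum(countable(*rec) for rec in records)  # 1 of 3 records survives
```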
We did not, however, adjust for the first-name problem. The SSCI enumerates citations only for sole authors or the first-named author in the author set. This means that citation counts are underestimated for scholars who collaborate extensively and
whose names tend not to be first in author sets. However, this bias does not seem too
serious - even for individual authors, the disparity between SSCI counts and complete
counts is not great. Aggregated to the program or school level, the bias introduced by the first-name problem should be minimal. Nevertheless, it is possible that, for this reason and others, our citation counts are less reliable for small programs (e.g., those with few faculty members) than for larger ones.
Finally, we computed program productivity rankings based on the mean logarithmic five-year citation count of the faculty members in each participating APPAM
program. Note that the use of the logarithmic mean involved two subjective judgments. The mean was employed in part to adjust for differences in definitions of core faculties and in part because we simply did not believe that program scale per se should enter into an evaluation of faculty productivity. Individual citation counts were adjusted to base 10 logs to reduce the weight given the few individuals in our sample who have had an extraordinary impact on their peers and therefore to reflect more accurately the productivity of a public policy program's total faculty (in several cases, less than 5 percent of the public policy faculty accounted for over half of the program's citations). Of course, we recognize that these decisions are open to question, as are our justifications of them. Had we been located at a large program, at Harvard, for example, rather than at a small one, our decisions might have been different. Consequently, we have also reported rankings based on sample means and total citations.
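The three scores discussed above (total, mean, and logarithmic mean) can be computed as follows. This is a sketch with made-up citation counts; the +1 inside the log is our assumption for handling zero-citation faculty, a case the text does not address.

```python
import math

# Hypothetical five-year citation counts for one program's core faculty;
# one scholar dominates, as in the skewed cases described in the text.
faculty_citations = [3, 12, 850, 40, 7]

total = sum(faculty_citations)          # basis of the total-count ranking
mean = total / len(faculty_citations)   # basis of the mean ranking

# Logarithmic mean: average the base-10 logs of the individual counts,
# damping the weight of the single heavily cited scholar. The +1 guards
# against log10(0) and is our assumption, not the authors' stated procedure.
log_mean = sum(math.log10(c + 1) for c in faculty_citations) / len(faculty_citations)
```

Because the mean of logs is pulled down less by one outlier than the log of the mean, a program carried by a single star scores lower on this scale than its raw mean would suggest.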
The sensitivity of these rankings to our decision to focus attention on logarithmic mean citation counts is demonstrated by comparison with the rank order of programs by total citation counts: 1. Harvard, 2. Chicago, 3. Princeton, 4. Carnegie-Mellon, 5. Michigan, 6. Rand Graduate Institute, 7. Syracuse, 8. Arizona, 9. UC Berkeley, 10. Duke. Note that two highly regarded smaller programs (Duke and UC Berkeley) drop to the bottom half of the top ten and two (Maryland and UC Irvine) drop out of it altogether. Nevertheless, we stand behind this decision with one caveat: it appears to discriminate against Harvard. For example, had we considered only the fifty most frequently cited faculty members at the Kennedy School, in which case its faculty population would still be nearly 50 percent larger than that of the next largest program surveyed, it would rank first on our logarithmic mean scale and third in terms of the mean number of citations per faculty member. We would surmise that such a ranking might more accurately reflect the relative research productivity of the faculty of the Kennedy School than does the ranking shown in Table 1.
Discussion
This is, of course, not the first attempt to assess the research productivity or reputations of public policy programs or of individual scholars in the field. Consequently, it may be useful to contrast our findings with those of two recent studies - Morgan et al. (1981) and Robey (1982).
Morgan et al. attempted to assess both the reputation and the scholarly productivity of a very large number of public administration and public affairs programs in the United States. To assess reputation, they asked the representatives of the member institutions of the National Association of Schools of Public Affairs and Administration to rank their compeers. To assess productivity, they counted articles, drawing their sample from both public administration and public policy journals. Sixteen of the programs appearing in Table 1 were ranked by Morgan et al. on reputation and fifteen on productivity.
From our standpoint, the most surprising aspect of their work is the complete
Table 1
Faculty Productivity Rankings: APPAM Member Institutions a

Institution                Logarithmic mean     Institution             Mean
 1. UC Berkeley                 4.1              1. Chicago              150
    Princeton d                 4.1              2. Princeton d          126
 3. Michigan, Ann Arbor         4.0              3. UC Berkeley           71
 4. Chicago                     3.5                 Michigan              71
 5. Duke                        3.2              5. Harvard               68
 6. Harvard                     2.8              6. Rochester             66
 7. Carnegie-Mellon             2.6              7. Carnegie-Mellon       59
    UC Irvine b                 2.6              8. Duke                  57
    Maryland                    2.6              9. Syracuse c            55
    Rand c                      2.6             10. Rand c                47
    Syracuse c                  2.6             11. Columbia              42
12. Columbia                    2.5                 Maryland              42
    Pennsylvania                2.5             13. Arizona c             33
14. Rochester                   2.4                 Pennsylvania          33
15. Arizona c                   2.2             15. UC Irvine b           28
16. Penn State d                2.0             16. Penn State d          20
    SUNY Albany c               2.0                 SUNY Albany c         20
18. UT Austin                   1.8             18. UT Austin             19
    Washington                  1.8             19. Minnesota             15
20. Minnesota                   1.7                 Washington            15
21. UT Dallas                   1.5             21. UT Dallas             11
22. SUNY Stony Brook            1.3             22. Pittsburgh d          10
23. Pittsburgh d                1.2             23. SUNY Stony Brook       9

a APPAM institutional members excluded from this analysis are: Boston University; Brookings Institution; Center for Naval Analysis; Mathematica Policy Research, Inc.; Stanford University; Tulane University; Vanderbilt University; Yale University. Several of these are not degree-granting programs. Others did not respond to our request for information.
b Public management program in Graduate School of Management.
c Public policy faculty only.
d All public affairs faculty.
absence of any statistical relationship between the reputational and the productivity rankings of the public policy programs included in their analysis (see Table 2). All eight of the public policy programs ranking in their top twenty by reputation rank lower on productivity - four of these programs do not even rank in the top twenty. And one very distinguished public policy program - the School of Urban and Public Affairs at Carnegie-Mellon University - simply does not appear in their productivity rankings. On the other hand, of the remaining twelve programs ranked in their top twenty by reputation, seven are ranked higher on productivity than on reputation. Furthermore, if the public policy programs are excluded from Morgan et al.'s rankings, we find their productivity scores do correlate rather well with their reputational rankings - although by no means perfectly.
We are inclined to attribute these surprising results to an imbalance in the set of
Table 2
Morgan et al.'s Reputational Rankings and Publication Scores: APPAM Member Institutions a

Institution                Reputation       Publications
 1. UC Berkeley                3                 25
    Princeton                  5                  9
 3. Michigan, Ann Arbor        7                 15
 4. Chicago                not ranked b      not ranked
 5. Duke                      12                 13
 6. Harvard                    2                 22
 7. Carnegie-Mellon            8             none reported
    UC Irvine                 14                 14.8
    Maryland                  15                 18
    Rand                   not ranked        not ranked
    Syracuse                   1                 33
12. Columbia                  11                 11
    Pennsylvania           not ranked        not ranked
14. Rochester              not ranked        not ranked
15. Arizona                not ranked        not ranked
16. Penn State                16                  1.3
    SUNY Albany               11                  2.5
18. UT Austin                  4                 11.5
    Washington                13                 14.8
20. Minnesota                  9                  8.5
21. UT Dallas              not ranked        not ranked
22. SUNY Stony Brook       not ranked        not ranked
23. Pittsburgh                 6                 16.5

a These rankings are revised to reflect the omission of all non-APPAM member institutions ranked by Morgan et al.
b The Morgan et al. study dealt only with NASPAA members. Most of the unranked APPAM member institutions were simply not affiliated with NASPAA.
journals from which they drew their sample of articles and to the futility of assessing both public policy and public administration programs according to a single criterion. For example, five of their top ten programs by reputation are public policy programs, two others are comprehensive programs offering public policy options, while only three of their top ten could be considered primarily public administration programs (i.e., 50 percent, 20 percent, 30 percent), but over two-thirds of the articles comprising the sample set they used to assess faculty productivity were drawn from public administration journals. This is clearly not a representative sample.
From our perspective, we would also question Morgan et al.'s decision to include the Policy Studies Journal in their sample. The Policy Studies Journal accounted for fully half of the public policy articles they surveyed. Well over 75 percent of these articles were authored by political scientists. However, political scientists comprise less than 25 percent of the faculty of public policy programs. The quality of the Policy Studies
Journal is not in question, but during the period examined by Morgan et al. it was not representative of the composition of the faculties of public policy schools.
Having accused Morgan et al. of selecting an unrepresentative sample, we should hasten to add that there was not much they could have done to improve it. Efforts to balance their sample would have either been defeated by the paucity of policy journals or, if they had sought balance via a reduction in the number of public administration journals surveyed, achieved at the cost of severe small numbers problems. Based upon our analysis, it appears that they could have stratified their sample, assigning extra weight to articles published in Policy Analysis and Public Policy. It turns out that there is a rather strong correlation (r² = 0.50) between their public policy article counts and our total citation scores. And it appears that this correlation would have been substantially increased had they substituted Policy Sciences for the Policy Studies Journal. But while stratification might have been justified on statistical grounds, it would have been unacceptable on any other.
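The r² figure quoted in the text is simply the squared Pearson correlation between two program-level score lists. A self-contained sketch; the paired values below are invented for illustration, not the actual Morgan et al. article counts or our citation scores.

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

# Invented paired scores for seven programs: public policy article counts
# (per Morgan et al.'s method) vs. total five-year citation counts.
article_counts  = [25, 9, 15, 13, 22, 33, 11]
citation_totals = [120, 95, 60, 44, 150, 70, 38]

fit = r_squared(article_counts, citation_totals)  # always falls in [0, 1]
```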
Instead, we believe Morgan et al. would have been wiser to have limited the scope of their study to the more traditional training programs for public service, perhaps including the comprehensive schools of public administration but excluding public policy programs. Of course, this criticism also applies to our work. Indeed, the pitfalls associated with comparing dissimilar programs using an evaluative criterion appropriate for one but perhaps not the other are well illustrated by our rankings of two outstanding schools of public administration - Syracuse and Pittsburgh. Although our productivity scores generally correlate fairly well (r² = 0.25) with Morgan et al.'s reputational rankings, both of these schools rank significantly lower on the basis of citation counts than would be predicted by their reputations. Surely this is because our productivity measure is somehow inappropriate to these schools.
Robey (1982) used a methodology similar to ours to identify the thirty most frequently cited members (or past members) of the Policy Studies Organization.
Contrasting his results with ours, we find two remarkable anomalies. First, only two of the thirty individuals identified by Robey (1982: 446) as "major contributors to public policy" appear in our faculty sample: Graham Allison, Harvard, and Joseph P. Newhouse, Rand. Second, none of the five most frequently cited scholars identified as core faculty members of public policy programs (B. J. L. Berry, Carnegie-Mellon; A. J. Coale, Princeton; J. S. Coleman, Chicago; Frederick Mosteller and Howard Raiffa, Harvard) are found on Robey's list, although they are cited as often as those ranking at the head of his list.
We draw two inferences from this surprising lack of overlap, one trivial and one important. First, Robey should, perhaps, have entitled his article "The Most Frequently Cited Members of the PSO, the Majority of Whom Are Political Scientists." Second, most of the major contributors to the field of policy research and policy analysis are not members of the core faculties of schools of public policy. Probably no more than 10 percent of the academically-employed portion of our community is affiliated with public policy programs. We would guess that a similar number is
located at graduate schools of business and that a somewhat smaller number is at other
professional schools (e.g., law, public health, education, public administration, etc.). But undoubtedly the majority continues to be housed in departments in the traditional social sciences (see Schneider et al., 1982: 102-103).
What is the point of this exercise? More than anything else it was motivated by pure
curiosity and a conviction that others might be curious about the same things we were.
But our results may also be useful to prospective doctoral students. It seems clear that there are a handful of public policy programs with outstanding research records: UC Berkeley, Princeton, Michigan, Chicago, Duke, Carnegie-Mellon, Rand, Syracuse, and, of course, Harvard [5]. If one wishes to earn a PhD in public policy, one should attend one of these schools. However, one should also understand that this may not be the best course of study leading to a career in public policy research; careful consideration should also be given to training in the traditional public policy disciplines: economics, political science, law, or OR/MS.
Notes
1 For a non-technical survey of this literature we would recommend Jones (1980).
2 This is true of individual scholars as well as of departments and programs.
3 Ellwood identifies five distinct models of graduate education for public service in the United States: 1) programs located in political science departments, 2) public service management programs located in business schools, 3) free-standing public administration programs oriented to teaching POSDCORB skills, 4) public policy programs, 5) programs that comprehend two or more of the other types, often offering both public policy and public administration tracks, e.g., Indiana University's School of Public and Environmental Affairs and the Maxwell School of Syracuse University. For our purposes, the principal boundary dividing these models lies between public administration (e.g., American or VPI) and public policy (e.g., the Kennedy School at Harvard or the Graduate School of Public Policy, UC Berkeley). As he explains, graduate public policy programs are different from other kinds of training programs for public service in the United States, not only in their curricula, which tend to stress the development of formal analytical skills, but also in the interdisciplinary composition of their faculties. Whereas the faculties of other kinds of public affairs programs are largely drawn from a subdiscipline of political science - public administration - the faculties of schools of public policy are: "... dominated by economists (... 31 percent of all their unit faculty), followed by political scientists (... 23 percent), lawyers (... 12 percent), and sociologists (... 6 percent). The public administration faculty so evident in the other models is almost totally absent from unit faculties, averaging no more than 2 percent" (Ellwood, 1982: 68).
4 The major omissions here are the University of Indiana and the University of Southern California.
5 Of course, not all of these programs offer a PhD in something called public policy. Maryland may also be moving into this group. The citation scores of its faculty show a dramatic upward trend over the five years we surveyed.
References
Cole, Jonathan R. and Harriet Zuckerman (1982). The Productivity Puzzle: Persistence and Change in Patterns of Publication of Men and Women Scientists, Working Paper #87-2, Center for the Social Sciences at Columbia University, September.
Ellwood, John William (1982). A Morphology of Graduate Education for Public Service, Woodrow Wilson School of Public and International Affairs, Princeton University, Spring.
Jones, Lyle V. (1980). "The assessment of scholarship," Directions for Program Evaluation 6: 1-20.
Morgan, David R., Kenneth J. Meier, Richard C. Kearney, Steven W. Hays, and Harold B. Birch (1981). "Reputation and productivity among U.S. public administration and public affairs programs," Public Administration Review 41 (6): 666-673.
Robey, J. S. (1982). "Major contributors to public policy analysis," Policy Studies Journal 10 (3): 442-447.
Schneider, J. A., N. J. Stevens and L. G. Tornatsky (1982). "Policy research and analysis: An empirical profile, 1975-1980," Policy Sciences 15 (2): 99-114.