
Information Processing & Management, Vol. 25, No. 2, pp. 205-214, 1989. 0306-4573/89 $3.00 + .00. Printed in Great Britain. Copyright © 1989 Pergamon Press plc

ON THE RANKING OF PSYCHOLOGICAL JOURNALS

PATRICK DOREIAN Department of Sociology, University of Pittsburgh, Pittsburgh, PA 15260, U.S.A.

(Received 28 September 1987; accepted in final form 18 July 1988)

Abstract: Attempts to rank journals common to a particular discipline usually take one of two forms. A citation network among the journals can be used to generate an "objective" measure while surveys of discipline members can be used to generate a "subjective" measure. While these rankings are often compared with each other, the comparison is a limited objective. If a richer set of variables tapping the evaluative bases for journals is obtained, it is possible to use them to account for the distribution of journal standing. This is done for a set of psychological journals and provides a basis for understanding the dynamics of journal evaluation and standing.

Journals form a central institution of science [1,2] and as science is stratified [3] it is scarcely surprising that there is a persistent effort to evaluate the standing of journals in their distinct disciplines. Science grows and accumulates through work done on the research fronts of scientific disciplines, where "each group of new papers is 'knitted' to a small, select part of the existing scientific literature but connected rather weakly . . . to a much greater part" (emphasis added) [4, p. 512]. Many of the research productions on these fronts take the form of articles appearing in the professional journals. Moreover, these articles are linked by citation ties that generate citation networks [4]. Each citation is a public expression of a transaction where acknowledgement is exchanged for useful information [3]. A published paper becomes important to the extent that its content is seen to be important and publicly acknowledged.

Discussing stratification in science, Zuckerman argues "(D)isciplines, publication in particular journals, types of research, organizations, and rewards are also ranked" (emphasis added) [5, p. 237]. Concerning the journals, she adds ". . . there seems to be an informal consensus on the order of journals to which submissions should be made so that the most demanding get them first and the least last" [5, p. 249]. It follows that papers are submitted and, if necessary, resubmitted while descending through the ordered journals, until published (or withdrawn). The more demanding journals tend to get, and publish, the better papers, which also reinforces the informal consensus. Journals high in the ordering maintain their position to the extent that they publish important or useful papers.

Scientists become prominent to the extent that they are acknowledged as contributors of important work. In turn, this prominence is, in large part, a function of the extent to which scientists publish in important journals. Prominence is a cumulative process over the life course of a scientist [5,6], conferring many material and scholarly benefits, through a process that "consists in the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark" [6, p. 58]. The allocative processes in science [5] of selective recruitment and socialization, allocation of resources, differential access to publication, and the allocation of honorific awards rest, in large part, on a peer review system of, among other things, scientific productions. Thus, one motivation for the interest in establishing quantitative measures of journal importance is not hard to discern: they provide a concrete foundation for making scientific and administrative decisions affecting the careers of scientists. It inevitably follows that these measures become controversial.

TWO BASES FOR MEASURES OF JOURNAL STANDING

There are two broad strategies that can be adopted if measures of journal standing are sought. On the presumption that scientists are able to provide an evaluation of a journal in terms of its importance, or significance, one strategy is to ask the practicing scientists for their evaluations of the journals in their fields. Even though the standards by which significance can be judged are ambiguous [7], there are undoubtedly social psychological mechanisms whereby scientists become socialized about their journals [8] that lend some intuitive justification for soliciting their views. Such surveys have been conducted in a variety of fields including sociology [9,10], economics [11], political science [12], and geography [13].

Following this tradition, Mace and Warner [14] published ratings of a set of psychology journals, as did Koulack and Keselman [15]. Mace and Warner considered 64 psychological journals, used a five-point rating scale, and obtained responses from a set of departmental chairs of psychology departments granting doctoral degrees. They reported the mean rating of each of the 64 journals, and used them to construct an overall ranking. They also reported the number of responses on which each journal rating was based and its variability. Controversy soon followed. Hohn and Fine [16] began with "we suspect that many, like us, perused the rank ordering of journals to see where their favorites fell. Since ours were ranked near the bottom of the list, we felt moved to perform an objective analysis of the study" (emphasis added). While their criticisms were legitimate, they were also self-serving. Some of the journals were not comparable, some "important" journals were omitted, and there was a marked bias with respect to the choice of the respondents. Gynther [17] also objected to the use of departmental chairs, the exercise of "value judgments" made by them as they responded, and a static orientation ignoring the rise and fall of journals over time. His objection to a bias against clinical psychology is consistent with Hohn and Fine's concern over the omission of certain journals. A significant correlation between the mean ratings and the number of responses on which the ratings were based casts doubt on the validity of the ratings [18].

Koulack and Keselman [15] attempted to overcome these deficiencies by considering 100 journals and selecting a random sample from the directory of the American Psychological Association (APA) for a survey. As a result, they published an overall ranking of the 100 journals, separate rankings according to the psychologists' categories of work, and another ranking disaggregated by area of special interest. Their results did not satisfy Levin and Kratochwill [19], who resurrected the issue of familiarity as a biasing response that clouded any assessment of journal quality.

This brief review, if nothing else, indicates considerable disagreement over the methods of obtaining "subjective" rankings and their value once they are obtained. At face value, there will always be questions concerning the selection of journals, the selection of respondents, and the rating instruments employed in these studies.

The second broad approach to establishing a measure of journal quality, or standing, is "objective" and found in the citations made in articles contained in the journals. The citation from one article in a given journal to another article in a second journal can be viewed both as an article-to-article citation and a journal-to-journal citation. The network of articles to which Price [4] drew attention also can be aggregated into citation counts between journals [20]. While much of the current work uses Garfield's insight as a point of departure, such an approach has a rather long pedigree dating at least back to 1936, when Cason and Lubotsky [21] constructed such a citation network for 28 psychological journals. Almost 20 years later, Daniel and Louttit [22] published citation data for a network of 19 psychology journals. Xhignesse and Osgood [23] published citation matrices for 21 psychology journals for both 1950 and 1960.

Given the construction of such a citation matrix, there are various ways in which a measure of standing can be obtained straightforwardly. Pinski and Narin [24] provide an integrated methodology for doing this. Included are measures of the influence a journal has within the network. The vector of influence weights is an eigenvector of a matrix obtained directly from the journal-to-journal citation matrix. Doreian [25,26] presents another eigenvector analysis based explicitly on input/output models and applies the method in several citation networks. The correlations between the two eigenvector-based measures for a set of geographical journals are 0.97 and 0.96 at two time points.
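The eigenvector idea can be sketched in a few lines. The following is a minimal illustration, not Pinski and Narin's or Doreian's exact formulation: it assumes a hypothetical 3 × 3 citation-count matrix and takes the influence weights as the dominant eigenvector of a row-normalized citation matrix, so that a journal is weighted by the weights of the journals citing it.

```python
import numpy as np

# Hypothetical journal-to-journal citation counts: C[i, j] = citations
# from journal i to journal j (the matrix and its values are invented).
C = np.array([[10., 40., 15.],
              [25., 30.,  5.],
              [ 5., 20., 12.]])

# Row-normalize so M[i, j] is the fraction of journal i's outgoing
# citations that go to journal j.
M = C / C.sum(axis=1, keepdims=True)

# Influence weights satisfy w = M'w: a journal gains weight from being
# cited by heavily weighted journals. Take the dominant eigenvector.
eigvals, eigvecs = np.linalg.eig(M.T)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()  # fix the sign and scale the weights to sum to 1
print("influence weights:", np.round(w, 3))
```

Because the normalized matrix is row-stochastic and positive, the dominant eigenvalue is 1 and its eigenvector is nonnegative, which is what licenses reading it as a set of weights.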

The network of ties among articles is foundational in the construction of the Science Citation Index and the Social Science Citation Index. With the construction of these large computerized databases, the way is clear to construct, in principle, very large journal-to-journal citation matrices. This is done in the Journal Citation Reports, where indices such as Garfield's [20] impact index are routinely published.

The two broad strategies (objective vs. subjective) of constructing measures of standing are fundamentally different. There are, of course, variations within each broad approach. Whenever representatives from each group are considered jointly, it is an effort to provide some cross-validation between them or to demonstrate the inadequacies of the subjectively based measures. Porter [27] demonstrated that the Science Citation Index's measure of the citations per article correlated poorly (Spearman's ρ of .27 and .11) with ranks obtained by Koulack and Keselman. Buss and McDermott [28] selected three source journals in a two-year period and examined the citations in all of the articles contained in those sources. They extracted the citation rates for the journal set considered by Mace and Warner [14] and, predictably, found a low (Spearman's ρ = .45) rank order correlation between the two indexes.

White and White [29] used a 10% sample of the pages of the Social Science Citation Index to eliminate the necessary subjective component of earlier studies, specifically those of Mace and Warner [14] and Koulack and Keselman [15]. The average number of citations per published article for 57 psychology journals over the two-year sampling period had poor Spearman rank correlations with Koulack and Keselman's index (0.39) and Mace and Warner's index (0.56). The correspondence between the distributions is, at best, modest. However, objective measures are not without problems. Rushton and Roediger [30] argue that too few journals were included, the sampling method was unreliable, and the Social Science Citation Index was inferior to the Science Citation Index. Using impact factors from the Social Science Citation Index for 80 psychology journals, they found only a modest correspondence (Kendall's τ = .52) between their ranking and that obtained by White and White. Even within citation-based methods it seems that inconsistent results are obtained. Furthermore, when the subjectively based measures are compared with the objectively based measures the inconsistency is more extreme.

A NEW RESPONSE

When subjectively based measures of journal standing are considered in conjunction with the objectively based measures, the research agenda focuses on the relation between these variables in an attempt to demonstrate the congruence between them, or the lack of such congruence. This is a very limited objective. Assuming that the many methodological problems can be overcome, all that results is a pair of distributions that may or may not correspond. While a ranking of journals may serve an administrative purpose in tenure reviews (Are these publications in good journals?), or for awarding grants and contracts, it provides little insight into the dynamics of science or, indeed, into the basis for its own existence.

There is another way of linking citation-based measures to subjectively generated survey data. Rather than simply generate a ranked distribution of journals it seems more fruitful to explore the underlying bases of such an overall evaluation. Given these evaluative bases, it is reasonable to account for the variability in the objectively based measures of standing in terms of them, based on data from the scientists of a discipline. No single study has attempted to do this, but a prototype as an approximation to it can be obtained by joining data from two published studies.

Xhignesse and Osgood [23] assembled journal-to-journal citation matrices for a set of 21 psychological journals for 1950 and 1960. In a separate but closely related study, Jakobovits and Osgood [31] used a semantic differential in a study of 20 journals of interest to psychologists. The semantic differential was constructed by Osgood et al. [32] as a method of measuring the meaning of an object for an individual. Objects are rated on a series of bipolar seven-point rating scales. Responses to subgroups of scales can be aggregated and interpreted in terms of underlying dimensions. The sampling frame was the APA membership, from which a stratified sample of nondivisional members, single division members, and multiple division members was selected. The percentage distribution of the sample across the sampling categories was nondivision members (19%), single division members (51%), and multiple division members (29%). The corresponding distribution of usable responses was 17%, 51%, and 32%, indicating no serious bias.

Factor analysis was used to construct a set of scales that tapped dimensions of assessment of the journals [31, pp. 793-795]. One factor measures the extent to which the journals are seen as valuable-worthless and good-bad, and is taken as an underlying dimension measuring the value of the journals. Another factor taps the scientific-unscientific and the rigorous-loose connotations and measures the extent to which the journals are seen as rigorous. A third factor taps the dimensions personal-impersonal and interesting-dull and measures the extent to which the journal content is seen as interesting. The actual scores are derived from the bipolar scales and are used as variables that are labeled value, rigor, and interest.
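Jakobovits and Osgood derived the scores by factor analysis; purely as a sketch of the aggregation step, hypothetical ratings on the six bipolar scales named above can be averaged within each factor grouping (an unweighted mean stands in here for the factor-analytic weights, and all numbers are invented).

```python
import numpy as np

# Hypothetical seven-point ratings of one journal by four respondents
# on the six bipolar scales named above (7 = first pole, 1 = second).
ratings = {
    "valuable-worthless":      [6, 7, 5, 6],
    "good-bad":                [6, 6, 5, 7],
    "scientific-unscientific": [7, 6, 6, 7],
    "rigorous-loose":          [6, 5, 6, 6],
    "interesting-dull":        [3, 4, 2, 3],
    "personal-impersonal":     [2, 3, 3, 2],
}

# Scale groupings follow the factor descriptions in the text.
factors = {
    "value":    ["valuable-worthless", "good-bad"],
    "rigor":    ["scientific-unscientific", "rigorous-loose"],
    "interest": ["interesting-dull", "personal-impersonal"],
}

# Simple unweighted factor score: mean of the constituent scale means.
for name, scales in factors.items():
    score = np.mean([np.mean(ratings[s]) for s in scales])
    print(f"{name:8s} {score:.2f}")
```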

The correspondence between the journals in the Xhignesse and Osgood study and the Jakobovits and Osgood study is not exact. Table 1 contains the values of these variables for the 16 journals common to the two studies, together with Doreian's [26] measure of standing for 1950 and 1960, which was computed from Xhignesse and Osgood's data. The extent to which the variables measuring value, rigor, and interest can be used, together with the standing in 1950, to account for the variability in the standing of the journals in 1960 is the concern of this article.

HYPOTHESES

Science cumulates knowledge, with journals a primary formal communication medium. Within a given field or subfield, journals are differentially important. As specialty members share an informal consensus over the ordering of journals, it follows that journals of value to the field will also have high standing. Value and journal standing are positively correlated. In science a great emphasis is placed on rigor, substantively and methodologically, in the creation of authenticated, or jointly accepted, knowledge as the foundation for further work. As psychology claims to be a science, journals seen as rigorous will have high standing while those lacking rigor will have low standing. Together, the value and rigor of journals should have some predictive utility with regard to the standing of journals, although it must be recognized that value and rigor are likely to be positively correlated also. For these variables, it seems that journals are valuable to the extent they are rigorous. A discipline that is cumulative, except, perhaps, during a genuine scientific revolution, builds in a coherent fashion. As a result, journal standing at one point in time will be highly correlated with the standing of journals at close subsequent points in time. There is no reason to expect that interesting journals will have high standing. Indeed, it is reasonable to speculate that journals seen as interesting, as measured with the semantic differential scale, will be seen to lack rigor and will have lower value in the disciplinary community.

ANALYSIS

Let STANDING denote the journal standing (in 1960), VALUE denote the variable measuring the extent to which the journals are viewed as valuable, RIGOR denote the extent to which the journals are seen as rigorous, INTEREST the extent to which journals are seen as interesting, and let STAND50 denote the journal standing at the prior point in time. Table 2 gives the product moment correlations for these variables. According to this table, STAND50, VALUE, and RIGOR all have positive correlations with STANDING while INTEREST has a mild negative correlation with journal standing. Fig. 1 gives the scatterplot matrix for all pairs of variables. The scatterplots with standing as the dependent variable (Fig. 1, bottom row) indicate the presence of an outlier, and some pairs of possible predictors (Fig. 1, top three rows) appear highly correlated.

Table 2. Correlation matrix for standing and evaluative bases

            Standing    Value    Rigor   Interest   Stand50
Standing      1.000
Value         0.791     1.000
Rigor         0.584     0.875    1.000
Interest     -0.207    -0.506   -0.846     1.000
Stand50       0.869     0.795    0.727    -0.464     1.000
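The entries of Table 2 are ordinary product-moment correlations, computable in one call once the variables are columns of a data matrix. A minimal sketch with placeholder values, since the 16 × 5 data matrix of Table 1 is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder 16 x 5 data matrix in the column order STANDING, VALUE,
# RIGOR, INTEREST, STAND50 (the Table 1 values are not reproduced).
X = rng.normal(size=(16, 5))

labels = ["STANDING", "VALUE", "RIGOR", "INTEREST", "STAND50"]
R = np.corrcoef(X, rowvar=False)  # product-moment correlation matrix
for lab, row in zip(labels, R):
    print(f"{lab:9s}" + " ".join(f"{r:7.3f}" for r in row))
```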

[Fig. 1. Scatterplot matrix for all pairs of the variables VALUE, RIGOR, INTEREST, STANDING, and STAND50.]


Estimation in the form of ordinary least squares (OLS) is quite unreliable in this body of data. When journal standing is regressed on value, rigor, and interest, the equation accounts for 68% of the variance in journal standing with all variables insignificant. When journal standing is regressed on value, rigor, and prior standing, the equation accounts for 85% of the variance in standing with all variables significant. However, the sign of the coefficient for rigor is negative while the correlation between rigor and journal standing is positive. On the basis of an extensive set of diagnostic procedures, two distinct data analytic problems can be detected: there are influential data points in Table 1, and there are serious collinearities among the set of potential regressor variables.

Five regression equations were considered (these two and three others; see Table 3). Using the regression diagnostics of Belsley et al. [33] and Doreian and Hummon [34], the following journals were influential in one or more equations: Psychological Bulletin (4), Psychological Review (3), and Journal of Experimental Psychology (2). The figures in parentheses give the number of equations for which the data point may be viewed as problematic.

For any of these data points, but particularly so for Psychological Bulletin, there are at least three possible treatments: (1) such a data point can be excluded; (2) distributions containing the influential points as outliers can be winsorized [35]; or (3) the raw data values for each variable can be replaced by their ranks on that variable with a view to obtaining a more robust regression.

Ideally, a regression equation provides a summary of an empirical relation in a specific data set, to which each data point contributes equally. When some data points contribute far more than others, this ideal is not met. If the problematic data point(s) can be viewed as not properly belonging to some population (from which the sample is drawn) they can be removed. That cannot be done here as the journals are part of a coherent system. Further, removal of data points should also have a substantive rationale, and one does not exist here. Winsorizing represents a compromise between throwing out data points and keeping them untreated in a data set. It also rests on the observation that virtually all distributions are normal in the middle and departures from normality are found in the tails of distributions. Using ranks also represents a compromise between removal and failure to deal with a data problem. In addition, for this data set it is unlikely that the metric has the precision shown in Table 1, and ranks represent the "informal consensus on the order of journals," rather than precise numeric measurements.
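The three treatments can be made concrete. A minimal sketch with invented values (not the Table 1 data), using only numpy:

```python
import numpy as np

# Illustrative values with one extreme point (the last); these are
# invented numbers, not the Table 1 measurements.
standing = np.array([0.9, 1.1, 1.3, 1.6, 1.8, 2.0, 2.2, 2.5, 9.7])

# (1) Exclusion: simply drop the offending point (rejected in the
#     text because the journals form a coherent system).
excluded = standing[:-1]

# (2) Winsorizing: replace the most extreme value by the next data
#     point in from the tail, rather than removing it [35].
order = np.argsort(standing)
winsorized = standing.copy()
winsorized[order[-1]] = standing[order[-2]]

# (3) Ranks: replace raw values by their ranks, the robust treatment
#     adopted in the analysis (no ties occur in this example).
ranks = np.argsort(np.argsort(standing)) + 1

print("winsorized:", winsorized)
print("ranks:     ", ranks)
```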

Serious as the influential data point problem is, a more consequential problem is found in the presence of collinearities. As mentioned earlier, when journal standing is regressed on value, rigor, and interest, no individual coefficient is significant while the regression as a whole is significant. Following the Belsley et al. [33] diagnostics, collinearity is serious for this equation, which is confirmed by the ridge traces [36]. The OLS estimator for y = Xβ + ε is b = (X'X)^(-1)X'y which, with collinearity, leads to very inefficient (high standard error) estimates. The ridge estimator is b(k) = (X'X + kI)^(-1)X'y, where 0 < k < 1, and trades a small amount of bias for a rapid reduction in the standard errors. Figure 2 shows the ridge traces for each of the coefficients when journal standing is regressed on value, rigor, and interest. All traces show considerable movement from the ordinary least squares estimates of the standardized coefficients. The trace for value drops but then stabilizes. The trace for rigor quickly changes sign, as it should, and stabilizes, but at a figure considerably less than for value. The ridge trace for interest decays to a value close to zero. The ridge traces of Fig. 2 clearly indicate that interest has no predictive utility with regard to standing and that rigor fares only marginally better.

Table 3. Prediction equations for ranked variables

Ranked Standing = 0.374 Ranked Value + 0.623 Ranked Prior Standing      R² = 0.97
                  (0.149)*             (0.149)

Ranked Value = 0.591 Ranked Rigor + 0.403 Ranked Prior Standing         R² = 0.95
               (0.155)              (0.155)

Ranked Rigor = 0.930 Ranked Prior Standing                              R² = 0.87
               (0.095)

*Figures in parentheses are standard errors.

[Fig. 2. Ridge traces for standing on value, rigor, and interest, for the model STANDING = b0 + b1 VALUE + b2 RIGOR + b3 INTEREST; the standardized coefficients for VALUE, RIGOR, and INTEREST, and R², are plotted against the ridge parameter k from 0 to 1.]
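A minimal sketch of the ridge trace computation described above, under the usual standardized-variable convention in which R = X'X/n is the predictor correlation matrix; the data are placeholders constructed to be collinear, since the Table 1 values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder standardized data: 16 journals, three deliberately
# collinear predictors standing in for VALUE, RIGOR, INTEREST.
z = rng.normal(size=16)
X = np.column_stack([z + 0.2 * rng.normal(size=16) for _ in range(3)])
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = X[:, 0] - 0.5 * X[:, 2] + 0.3 * rng.normal(size=16)
y = (y - y.mean()) / y.std()

def ridge(X, y, k):
    """Ridge estimator (R + kI)^(-1) r with R = X'X/n the predictor
    correlation matrix and r = X'y/n; k = 0 recovers OLS."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X / n + k * np.eye(p), X.T @ y / n)

# The ridge trace follows the standardized coefficients as k grows.
for k in (0.0, 0.05, 0.1, 0.3, 0.6, 1.0):
    print(f"k = {k:4.2f}  b = {np.round(ridge(X, y, k), 3)}")
```

Plotting the printed coefficients against k reproduces the kind of traces shown in Fig. 2: unstable coefficients move sharply as k leaves zero and then settle.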

When prior standing (STAND50) is included as a control, then it and value are the only variables having predictive value for standing, using ridge regression. Given the collinearities among the predictors, these variables bear consideration: it is useful to consider the regression of value on rigor and interest. This also suffers from collinearities and influential data points. Also, interest is not only of no utility in predicting journal standing but also of no utility in predicting the value of journals. From the consideration of collinearity problems, a very simple model emerges. Journal standing can be predicted from the value the journal has and its own prior standing, while the value of a journal can be predicted well from its rigor and prior standing. The extent to which journals are seen as interesting is irrelevant for understanding their standing.

However, the problem of influential data points must still be considered. For these data, the journals were considered as part of a coherent network so that eliminating data points is very suspect and is not pursued here. (Even if pursued, it does not deal with collinearities.)

Similarly, winsorizing the distribution of journal standing is also ineffective in the face of the collinearities present in the data. The strategy of using the ranked distributions, in place of the raw variables, is a robust procedure that tackles successfully both the influential data point problem and the collinearity problems. Table 3 contains a set of regression equations where the ranked variables are used. When influential data point diagnostics are considered, no point is flagged as unduly influential, and the condition indices do not point to the presence of significant collinearities. It follows that these equations are reasonably robust.
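Condition indices in the sense of Belsley et al. [33] are the ratios of the largest singular value of the column-scaled design matrix to each of its singular values; values near 1 indicate no collinearity, while large values signal trouble. A minimal sketch (unit-length column scaling follows their recipe; intercept handling is omitted for brevity):

```python
import numpy as np

def condition_indices(X):
    """Scale each column of X to unit length, then return the ratios
    of the largest singular value to every singular value."""
    Xs = X / np.linalg.norm(X, axis=0)
    s = np.linalg.svd(Xs, compute_uv=False)
    return s[0] / s

# Two nearly collinear columns yield one large condition index.
rng = np.random.default_rng(2)
a = rng.normal(size=16)
X = np.column_stack([a, a + 0.01 * rng.normal(size=16)])
print(np.round(condition_indices(X), 1))
```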

The top panel of Table 3 contains a regression linking the ranked standing to ranked value and ranked prior standing. The regression in the second panel of Table 3 links ranked value to ranked rigor and ranked prior standing. Both equations are estimated without a constant term as the specification of an intercept with ranked variables is a rather odd specification. (Even when included, the intercept is insignificant in each equation, which is consistent with the specification and indicates that nothing is lost by omitting it.) These equations also have an appealing interpretation. Given the equation in the top panel of Table 3, a journal ranking first in value and first in prior standing ought to rank first if the prediction equation is taken seriously. When these ranked values are substituted into that equation, the predicted rank on journal standing is 0.997, which is very close to the value of 1. Similarly, when journal ranks of 16 are substituted into this equation, the predicted rank is 15.95. A journal ranking 16th on both independent variables ought to rank 16th on the measure of journal standing. For the equation in the second panel of Table 3, substitution of ranks of 1 for ranked rigor and ranked prior standing generates 0.991 as the predicted rank of value. Again, this is close to 1 and when the ranks of 16 are substituted for each of the ranked independent variables, the predicted rank is 15.86, also close to the value of 16. In terms of examining which predictors are significant, the equations generated through the use of ranked variables correspond exactly to the regression suggested by the use of ridge regression for the raw variables. Although ranking has a metric only in relation to the journals in the data set, the ranked variables are used as they are not plagued by influential data point problems. The third panel of Table 3 shows a regression of ranked rigor on ranked prior standing. In accordance with the simple hypotheses stated at the outset, the coefficient linking ranked rigor to ranked prior standing is positive.
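These substitution checks are simple arithmetic and can be reproduced directly from the rounded point estimates in Table 3 (small discrepancies from the quoted 0.991 and 15.86 for value presumably reflect coefficient rounding):

```python
# Point estimates from Table 3 (ranked-variable regressions, fitted
# without an intercept).
def pred_standing(rank_value, rank_prior):
    return 0.374 * rank_value + 0.623 * rank_prior

def pred_value(rank_rigor, rank_prior):
    return 0.591 * rank_rigor + 0.403 * rank_prior

# A journal ranked 1 (or 16) on both predictors should be predicted
# to rank near 1 (or 16).
print(pred_standing(1, 1), pred_standing(16, 16))  # 0.997, 15.952
print(pred_value(1, 1), pred_value(16, 16))        # 0.994, 15.904
```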

DISCUSSION

The regressions shown in Table 3 can be linked in the heuristic diagram shown in Fig. 3. Ranked prior standing is taken as exogenous and can predict ranked rigor. Having ranked prior standing as exogenous is a specification dictated by the temporal ordering of the variables. A more general model would have prior standing dependent on even earlier value, and perhaps rigor. Indeed, the model contained in Fig. 3 could be projected forward and backward in time. Journals of high standing in the discipline are seen as more rigorous at subsequent points in time. Journals that are high in standing at one point in time and are seen as being rigorous are also valuable to the disciplinary members. Finally, those journals that are high in standing at one point in time, and are seen as valuable to the scientists in a field, enjoy high standing at close subsequent points in time.

It is clear that these results are fragile and suggestive only, given the limited size of the data set used to estimate these equations. Even though they are old, the studies by Xhignesse and Osgood [23] and Jakobovits and Osgood [31] are closely related and pertain to the same point in time. Joining the two data sets is of value even though only 16 journals are common to both. Another weakness in the analysis is that the standing variables come from one type of study, while the evaluative bases come from another. It follows that ranked prior standing will emerge as a more potent predictor. However, ranked prior standing need not correlate highly with ranked standing, and the prior endogenous variable has to be used as a control. To the extent that the simple predictive model, together with its interpretation, is fruitful it provides hypotheses that could be tested for a much larger sample of journals.

[Fig. 3. Heuristic diagram for predicting the ranked variables: ranked prior standing predicts ranked rigor; rigor and prior standing predict ranked value; value and prior standing predict ranked standing.]


In terms of the initial discussion, this analysis does not resolve the issue of whether the objective measures of standing are better than, or inferior to, the subjective measures. Instead, the analysis is directed towards accounting for the measure of standing, obtained by either method, in terms of evaluative bases that discipline members utilize in assessing their journals. While using data for 1950 and 1960 rules out getting a subjective measure for that period, the model discussed in this article can be used equally well for subjective measures of discipline standing. Understanding why journals have their standings and how they are evaluated is an important task that takes us well beyond the debate of which particular methods of measuring journal standing are better according to some implicit criterion. It points also to a way of joining data from multiple sources and using them in a complementary way to support each other.

Acknowledgements: The author appreciates the comments of Esther Sales on earlier versions of the article, and Norman Hummon for making the regression with omitted variables software available.

REFERENCES

1. Hagstrom, W.O. The scientific community. New York: Basic Books; 1965.
2. Ziman, J. Public knowledge. Cambridge: Cambridge University Press; 1968.
3. Cole, J.R.; Cole, S. Social stratification in science. Chicago: University of Chicago Press; 1973.
4. Price, D. Networks of scientific papers. Science, 149: 510-515; 1965.
5. Zuckerman, H. Stratification in American science. Sociological Inquiry, 40(Spring): 235-257; 1970.
6. Merton, R.K. The Matthew effect in science. Science, 159: 56-63; 1968.
7. Medawar, P.B. Induction and intuition in scientific thought. Philadelphia: American Philosophical Society; 1969.
8. Burt, R.S.; Doreian, P. Testing a structural model of perception: Conformity and deviance with respect to journal norms in elite sociological methodology. Quality and Quantity, 16: 109-150; 1982.
9. Glenn, N.D. American sociologists' evaluation of 63 journals. American Sociologist, 6: 298-303; 1971.
10. Burt, R.S. Stratification and prestige among elite experts in methodological and mathematical sociology circa 1975. Social Networks, 1: 105-158; 1978.
11. Hawkins, R.G.; Ritter, L.; Walter, I. What economists think of their journals. Journal of Political Economy, 81: 1017-1032; 1973.
12. Giles, M.W.; Wright, G.C. Political scientists' evaluation of 63 journals. PS, 8(3): 254-256; 1975.
13. Lee, D.; Evans, A. American geographers' rankings of American geography journals. Professional Geographer, 36(3): 292-300; 1984.
14. Mace, K.C.; Warner, H.D. Ratings of psychology journals. American Psychologist, 28: 184-186; 1973.
15. Koulack, D.; Keselman, H.J. Ratings of psychology journals by members of the American Psychological Association. American Psychologist, 30: 1049-1053; 1975.
16. Hohn, R.L.; Fine, M.J. Ratings and misratings: A reply to Mace and Warner. American Psychologist, 28: 1012; 1973.
17. Gynther, M.D. On Mace and Warner's journal ratings. American Psychologist, 28: 1013; 1973.
18. Boor, N. Unfamiliarity breeds disdain: Comments on department chairmen's ratings of psychological journals. American Psychologist, 28: 1012-1013; 1973.
19. Levin, J.R.; Kratochwill, T.R. The name of the journal fame game: Quality or familiarity? American Psychologist, 31: 673-674; 1976.
20. Garfield, E. Citation indexing: Its theory and application in science, technology and humanities. Philadelphia: ISI Press; 1979.
21. Cason, H.; Lubotsky, M. The influence and dependence of psychological journals on each other. Psychological Bulletin, 33: 91-103; 1936.
22. Daniel, R.S.; Louttit, C.M. Professional problems in psychology. New York: Prentice-Hall; 1953.
23. Xhignesse, L.V.; Osgood, C.E. Bibliographical citation characteristics of the psychological journal network in 1950 and in 1960. American Psychologist, 22: 778-791; 1967.
24. Pinski, G.; Narin, F. Structure of the psychological literature. Journal of the American Society for Information Science, 30(May): 161-168; 1979.
25. Doreian, P. A revised measure of standing of journals in stratified journal networks. Scientometrics, 11: 71-80; 1987.
26. Doreian, P. Measuring the relative standing of disciplinary journals. Information Processing and Management, 24(1): 45-56; 1988.
27. Porter, A.L. Use lists with caution. American Psychologist, 31: 674-675; 1976.
28. Buss, A.R.; McDermott, J.R. Ratings of psychology journals compared to objective measures of journal impact. American Psychologist, 31: 675-678; 1976.
29. White, M.J.; White, K.G. Citation analysis of psychology journals. American Psychologist, 32: 301-305; 1977.
30. Rushton, J.P.; Roediger, H.L. An evaluation of eighty psychology journals based on the Science Citation Index. American Psychologist, 33: 520-523; 1978.
31. Jakobovits, L.A.; Osgood, C.E. Connotations of twenty psychological journals to their professional readers. American Psychologist, 22: 792-800; 1967.
32. Osgood, C.E.; Suci, G.J.; Tannenbaum, P.H. The measurement of meaning. Urbana, IL: University of Illinois Press; 1957.
33. Belsley, D.A.; Kuh, E.; Welsch, R.E. Regression diagnostics: Identifying influential data points and collinearities. New York: Wiley; 1980.
34. Doreian, P.; Hummon, N.P. Regoo plots as a regression diagnostic. Social Science Methodology Conference, 1988 May 29-June 3; Dubrovnik, Yugoslavia.
35. Koopmans, L.H. An introduction to contemporary statistics, 2nd ed. Boston: Duxbury Press; 1987: 61. Winsorizing a distribution is one way of dealing with outliers. If, say, an analyst thought that 5% of the data points required trimming (removal), then instead of removing them, these data points are replaced by the value of the next data point that would have been removed if any larger percentage (than 5%) required trimming.
36. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12: 55-67; 1970.