Journals in the Discipline: A Report on a New Survey of American Political Scientists
Published online by Cambridge University Press: 15 April 2003
- © 2003 by the American Political Science Association
Along with books, scholarly journals constitute the primary media through which political scientists communicate the results of their research to their discipline. However, not all journals are created equal. There is a hierarchy of scholarly journals in political science, with some journals being highly respected and others less so. Articles published in the most highly regarded journals presumably go through a rigorous process of peer review and a competition for scarce space that results in high rejection rates and a strong presumption of quality. Articles published in these journals pass a difficult test on the road to publication and are likely to be seen by broad audiences of interested readers. Other journals publish research findings that are of interest to political scientists, to be sure, but articles published in these journals either pass a less-rigorous test or are targeted to narrower audiences.
The purpose of this paper is to report on new findings relating to how political scientists in the United States evaluate the quality and impact of scholarly journals in their discipline. Based on a survey of 559 political scientists who are on the faculties of both Ph.D.- and non-Ph.D.-granting departments, we consider subjective evaluations of the scholarly quality of 115 journals of interest to political scientists, as well as the degree to which political scientists are familiar with journals and hence likely to be exposed to the findings reported in articles published in those journals. Following the work of Garand (1990) and Crewe and Norris (1991), we also create a journal impact rating that combines information about subjective evaluations of journal quality with information about respondents' familiarity with those journals.
While some research on journal quality in political science has focused on the citation rates of scholarly journals (Christenson and Sigelman 1985), perhaps the most widely cited approach for evaluating journal quality and impact is one based on subjective evaluations of journals, as measured in surveys of political scientists (Giles and Wright 1975; Giles, Mizell, and Patterson 1989; Garand 1990; Crewe and Norris 1991). Giles and Wright (1975) pioneered this approach with their initial study, which examined political scientists' subjective evaluations of 63 political science journals; Giles, Mizell, and Patterson (1989) followed up with a reassessment of the evaluations of 78 journals, including 56 journals included in the first survey.
Garand (1990) notes that the rankings of journals reported by Giles et al. (1989) include some interesting anomalies. In particular, some journals with very narrow audiences and foci are ranked highly by Giles et al. based on the high evaluations received from their relatively narrow readerships. The result is that some journals are ranked highly, even though a large majority of political scientists are not familiar with them and “not necessarily because they are highly visible and broadly recognized for the quality of the scholarship contained within their pages” (Garand 1990, 448).1
For example, the American Journal of International Law and the Journal of Politics were both given approximately the same evaluation by those respondents rating them. However, over 90% of respondents reported being familiar with the Journal of Politics, while fewer than 20% reported familiarity with the American Journal of International Law. As Garand suggests, the Journal of Politics is likely to have a broader level of visibility and potential impact on the profession, since a broader range of political scientists is likely to be exposed to its contributions. The American Journal of International Law may have an important impact on scholars of international law, but far fewer political scientists are likely to be exposed to work published in this more specialized journal.
In this paper we follow the approach adopted by Giles and colleagues in collecting data on journal evaluations, as well as the approach adopted by Garand in creating a measure of journal impact. Our rationale is simple: we suggest that a journal's impact is a function of both the quality of research published in its pages and the degree to which its findings are disseminated broadly to the political science profession. Two journals with equally strong evaluations will have different impacts on the profession, depending on how many political scientists are familiar with and exposed to their articles.
We realize that an effort to rate the quality and impact of scholarly journals is controversial, particularly given recent debates about what constitutes a valued contribution in political science and the role of journals in reflecting the values of the discipline. Admittedly, the notion of combining evaluations and familiarity into an impact rating reflects subjective values about journal publication, but we suggest that these underlying values are not unreasonable ones. Our intention is not to denigrate the contributions published in journals with relatively narrow foci and/or readerships. Rather, we merely point out that articles published in such journals, even if they are of high quality, will be seen by a smaller number of political science colleagues and are less likely to have as strong an impact on the political science discipline. We also suggest that there is some value in having research read by numerous scholars, especially when the broad readership crosses subfield boundaries. The potential for cross-fertilization that occurs when research findings are subjected to the scrutiny of numerous scholars from different subfields is likely to enhance the quality of research. Arguably, the research of scholars in a given subfield is improved when it is read and evaluated by scholars from American politics, comparative politics, political theory, and international relations. This is more likely to occur in journals with wide readerships.
Data and Methodology
In order to measure subjective evaluations of journal quality and familiarity with political science journals, we developed a questionnaire that was mailed to a sample of 1,400 American political scientists during the spring and summer of 2001. The sample was drawn from the membership of the American Political Science Association (APSA). Excluded from the sample were members with a non-U.S. mailing address, members indicating employment in a nonacademic position, and members who indicated that they did not have a Ph.D. In previous research, Giles and colleagues sampled only political scientists in Ph.D.-granting departments, but in this study we also include political scientists who teach in non-Ph.D.-granting departments. In an effort to include scholars at both Ph.D.- and non-Ph.D.-granting institutions, we cross-checked university affiliations against the Guide to Graduate Studies, and the membership list was divided into those indicating an affiliation with a Ph.D.-granting institution and those either indicating an affiliation with a non-Ph.D.-granting institution or for whom the affiliation was unclear. Random sampling was used to identify 800 potential respondents within the Ph.D. group and 600 within the non-Ph.D. group. Responses were received from 559 respondents. The response rate was 47% among the Ph.D. sample and 23% among the non-Ph.D. sample. The overall response rate was 40%.2
The Ph.D. group is based on university affiliations clearly indicated in the membership list. The non-Ph.D. group consists of those clearly indicating an affiliation with a non-Ph.D.-granting institution and those not providing information on affiliation. Some of the latter are actually affiliated with academic institutions, some with non-Ph.D.-granting institutions, and some are not affiliated with academic institutions at all. Note that we excluded from consideration only members who clearly indicated a non-academic affiliation. We believe that the lower response rate within the non-Ph.D. subset may partially reflect the inclusion of non-academic and student respondents for whom the survey would have less relevance. In any event, this stratified sampling ensured the inclusion of respondents from non-Ph.D. institutions, and since respondents were asked on the survey to indicate the Ph.D.-granting status of their home institution, an accurate indicator was available for the analysis.
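For readers who want the sampling design in concrete terms, the following is a minimal sketch of the stratified draw described above, run on a synthetic membership list; the data frame, column names, and stratum proportions are our own illustrative assumptions, not the authors' instrument or code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2001)

# Synthetic stand-in for the APSA membership list (the actual frame is not
# reproduced here); 'phd_granting' flags a clear Ph.D.-granting affiliation.
members = pd.DataFrame({
    "member_id": np.arange(10_000),
    "phd_granting": rng.random(10_000) < 0.45,
})

# Independent simple random samples within each stratum, mirroring the
# 800 Ph.D.-group and 600 non-Ph.D.-group draws described above.
sample = pd.concat([
    members[members["phd_granting"]].sample(n=800, random_state=1),
    members[~members["phd_granting"]].sample(n=600, random_state=1),
])
print(sample["phd_granting"].value_counts())  # 800 True, 600 False
```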
The questionnaire includes a wide range of items, including descriptive information about respondents and information about their views toward 115 political science journals. We made an effort to be inclusive in the list of journals that we asked respondents to evaluate. We included many of the journals found in earlier surveys, and after compiling a preliminary list we asked colleagues in our home departments (and from all subfields) to suggest names of other important journals that should be included on our list. Armed with our list of journals, we asked our political scientist respondents to “assess each journal in terms of the general quality of the articles it publishes,” using a scale from 0 (poor) to 10 (outstanding). We also asked respondents to indicate whether or not they were familiar with each journal. These items on journal evaluation and journal familiarity provide the basis for our analysis.
We also included some additional items of interest to this study. First, we asked respondents a series of descriptive items, including current institutional affiliation, highest degree attained, doctoral institution, age, sex, race, academic rank, and whether or not they are currently chair of their home department. Second, we asked respondents to indicate their substantive subfields, chosen from American politics, comparative politics, international relations, judicial politics, political theory and philosophy, methodology, public administration, and public policy; respondents were permitted to indicate up to three subfields. Third, we are interested in the degree to which journal evaluations range across different methodological approaches to the discipline, so we asked respondents to indicate up to two approaches from a list that included quantitative, qualitative, mixed (quantitative and qualitative), normative theory, and formal theory.
We are also interested in alternative ways of thinking about journal evaluations, so we included two additional sets of relevant items in the survey. First, we asked respondents the following question:
Assume that you have just completed what you consider to be a very strong paper on a topic in your area of expertise. Indicate the first journal to which you would submit such a manuscript. Assuming that the paper is rejected at your first choice, please indicate the second journal to which you would submit the manuscript.
Respondents were permitted to list up to three journals to which they would send a high quality paper that they had written. While hypothetical, we believe that this exercise presents the respondents with a more realistic context for assessing journals than does the 0–10 journal evaluation item and may yield a more valid rank ordering of journals.
Second, we are also interested in which journals political scientists read regularly for the best research in their fields of study. We asked respondents the following question: “Which journals do you read regularly or otherwise rely on for the best research in your area of expertise?” Respondents were permitted to list up to five journals.
Measuring Journal Impact
A key concept in this paper is journal impact, which we conceptualize as a function of both the strength of evaluations that political scientists give to a particular journal and the degree to which political scientists are familiar with a journal, and hence likely to be exposed to the findings reported in that journal. This suggests the need to weight journal evaluations by the proportion of respondents who are familiar with a given journal. This can be done by multiplying the journal evaluation and journal familiarity measures, but like Garand (1990), we find that this measure is more strongly related to journal familiarity (r = 0.987) than to journal evaluation (r = 0.553). Given this, we utilize the approach adopted by Garand (1990):
Journal Impact = Journal Evaluation + (Journal Evaluation * Journal Familiarity)
This measure has a theoretical range from 0 to 20. A journal that achieves a perfect evaluation of 10.0 and that is familiar to all political scientists (i.e., familiarity = 1.00) would have a score of 20, while a journal that earns a 0 on its evaluation and/or has no political scientists familiar with it (i.e., familiarity = 0.00) would draw a score of 0. This impact measure is almost equally correlated with familiarity (r = 0.877) and evaluation (r = 0.821), so it appears to do well in giving journals relatively equal credit for having strong evaluations and strong familiarity among political scientists.
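As a concrete illustration, the sketch below computes the impact measure for a handful of hypothetical journals; the journal names and numbers are invented for illustration and are not drawn from Table 1.

```python
import pandas as pd

# Invented journal-level aggregates: 'evaluation' is the mean 0-10 rating
# among respondents familiar with the journal; 'familiarity' is the
# proportion of all respondents familiar with it.
journals = pd.DataFrame({
    "journal": ["General A", "Regional B", "Specialty C", "Specialty D"],
    "evaluation": [8.4, 6.2, 8.9, 5.1],
    "familiarity": [0.95, 0.46, 0.18, 0.30],
})

# Garand's (1990) measure: evaluation + evaluation * familiarity,
# equivalently evaluation * (1 + familiarity), with range 0 to 20.
journals["impact"] = journals["evaluation"] * (1 + journals["familiarity"])
print(journals.sort_values("impact", ascending=False))
```

Note how the widely known "General A" outranks the better-evaluated but little-known "Specialty C"; that trade-off between quality and visibility is precisely what the measure is designed to capture.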
We should note that there is considerable stability in journal impact, journal evaluation, and journal familiarity from the 1989 Giles et al. survey to the present survey. There are 66 journals represented in both the 1989 and 2001 surveys, and this permits us to assess the stability in evaluations from one survey to the next. In Figures 1–3 we present the scatterplots for the relationship between journal impact, journal evaluation, and journal familiarity in 2001 and the same variables measured in 1989. As one can readily see, there is considerable stability in these three journal characteristics over time. We have also estimated simple regression models that depict 2001 measures of journal impact, journal evaluation, and journal familiarity, respectively, as functions of the 1989 measures of the same variables. Our results verify the strong relationship between the 2001 and 1989 measures; the R2 values are 0.886, 0.767, and 0.836, respectively, for the impact, evaluation, and familiarity models. Clearly, journals with a strong impact in 1989 are likely to have a strong impact in 2001 as well, and the same can be said for the journal evaluation and journal familiarity measures. These results suggest a high level of reliability in our impact, evaluation, and familiarity measures.
[Figures 1–3: Scatterplots of 2001 journal impact, evaluation, and familiarity against the corresponding 1989 measures]
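A minimal sketch of one of these stability regressions follows, using synthetic data in place of the 66 matched journals, whose actual scores are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(66)

# Synthetic stand-in for the 66 journals rated in both the 1989 and 2001
# surveys; the real values come from Giles et al. (1989) and Table 1.
panel = pd.DataFrame({"impact_1989": rng.uniform(2, 18, 66)})
panel["impact_2001"] = 1.5 + 0.85 * panel["impact_1989"] + rng.normal(0, 1.2, 66)

# Bivariate OLS of the 2001 measure on its 1989 counterpart; the paper
# reports R-squared values of 0.886 (impact), 0.767 (evaluation), and
# 0.836 (familiarity) for the corresponding models.
fit = sm.OLS(panel["impact_2001"], sm.add_constant(panel["impact_1989"])).fit()
print(f"R-squared: {fit.rsquared:.3f}")
```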
Empirical Results
In Table 1 we report the impact scores, mean evaluation ratings, and proportion familiar for each of the 115 journals of interest to American political scientists, ranked according to journal impact. In terms of journal impact, there are few surprises here. The top 10 journals represent what most political scientists would say are the most visible, rigorous journals in political science or related disciplines. The American Political Science Review, American Journal of Political Science, and Journal of Politics stake out the top three rankings; these journals are the most prominent “general” journals in the profession. These journals are followed by World Politics, International Organization, and the British Journal of Political Science, three journals that focus on international and comparative politics or that have an international audience. The bottom group in the top 10 journals includes three journals representing related disciplines, the American Sociological Review, the American Economic Review, and the American Journal of Sociology, as well as a leading comparative politics journal, Comparative Politics. All in all, the top 10 journals reflect the flagship journals of political science and related disciplines, as well as the leading journals in the fields of comparative politics and international relations.

The second tier of journals includes broad-based regional journals (such as Political Research Quarterly, Polity, and Social Science Quarterly) as well as more specialized subfield journals, such as Comparative Political Studies, International Studies Quarterly, Public Opinion Quarterly, Legislative Studies Quarterly, Political Theory, Public Administration Review, American Politics Quarterly, and Political Analysis. These journals are generally well regarded by those able to offer evaluations, and they are familiar to relatively high proportions of respondents.
The third tier comprises journals that are either reasonably well regarded or reasonably well known, but not both. For instance, the Annals of the American Academy of Political and Social Science is familiar to about 38% of respondents, but its mean rating of 5.726 on a 10-point scale falls somewhat below the mean evaluation for all journals. Publius, Review of Politics, Presidential Studies Quarterly, and Policy Studies Journal similarly score above average in terms of familiarity but somewhat below average in terms of their subjective evaluations. On the other hand, several journals are very well regarded by the political scientists who offered an evaluation but are familiar to only a small proportion of respondents; these journals include World Development, History of Political Thought, American Journal of International Law, Journal of Law and Economics, Journal of Latin American Studies, and Political Geography, among others.
Finally, in the bottom quartile are journals that are below average in both their evaluations and familiarity. This tendency is best reflected in the bottom five journals, which include the Journal of Black Studies, Social Science Journal, Simulation and Games, China Studies, and Politics and Policy.
Journal Evaluations
While the impact measures have a great deal of face validity, the evaluations of political science journals contain quite a few interesting surprises. In Table 1 we report the mean evaluations for all 115 journals, but in Table 2 we present rank-ordered mean evaluations for the top 30 journals. These figures represent the means for the 10-point evaluation scale for each journal.

Based on mean evaluations, the three leading journals ranked by political scientists are not political science journals at all! The American Economic Review (mean = 8.350) is ranked first, followed by the American Sociological Review (8.163) and the American Journal of Sociology (7.912). It is astounding to think that the journals most positively evaluated by political scientists are actually in the fields of economics and sociology. We suspect that for most political scientists this does not reflect broad exposure to articles published in these journals. While sizeable proportions of political scientists are generally familiar with these journals, most are unlikely to have regular contact with their articles. Rather, we suspect, political scientists recognize these journals as the flagships of their respective disciplines, and hence rate them highly in recognition of their status in those disciplines.
The next group of journals includes a combination of more specialized subfield journals and some of the general journals that cover broader subject matter. Subfield journals World Politics, International Organization, Journal of Political Economy, Comparative Politics, and Political Theory all earn spots in the top 10 evaluated journals, along with broad-based journals like the Journal of Politics and the American Journal of Political Science. It appears that scholars give strong evaluations to the quality of articles published in the leading specialty journals in their respective subfields, as well as to the articles published in the leading general journals.
Perhaps the biggest surprise is the relatively low mean evaluation given to the American Political Science Review, the journal that scores the highest in terms of its disciplinary impact. The APSR achieves a mean evaluation of only 7.074, which ranks it 17th out of 115 journals. This is a strikingly low standing, given that the APSR is generally regarded as the flagship journal of the profession. The relatively low mean partly reflects the wide variance in the distribution of evaluations of the APSR, which is depicted in Figure 4. The standard deviation of this distribution is 2.62, among the highest for the journals in our study, which suggests substantial disagreement among political scientists on how the APSR should be evaluated. Over 50% of respondents give the APSR a rating of 8 or above, while fully 26% give it a rating of 5 or below. We explore the sources of this variation in the analysis described below.
[Figure 4: Distribution of respondent evaluations of the American Political Science Review]
Journal Familiarity
Besides respondents' evaluation of the quality of articles, journal impact is also a function of the degree to which political scientists are familiar with and exposed to the research published within a journal's pages. In Table 3 we display the proportion of respondents who report being familiar with each of the 115 journals in our survey.

There are only six journals for which a majority of respondents indicate familiarity. The American Political Science Review leads the field, with almost all respondents (95.1%) indicating that they are sufficiently familiar with the journal to offer a rating. This suggests that, even with a slightly lower mean evaluation than expected, the APSR is a major player in the distribution of research findings in the political science discipline. In fact, the lofty impact rating of the APSR is due primarily to the fact that it combines a strong evaluation with a level of familiarity among political scientists far ahead of that of any other journal.
Three other journals—the American Journal of Political Science (75.3%), the Journal of Politics (71.7%), and PS: Political Science and Politics (70.3%)—are familiar to over 70% of political scientist respondents. There is then a further drop-off, with slightly over 50% of respondents familiar with World Politics (54.8%) and the British Journal of Political Science (54.1%). Several journals are familiar to more than 40% of respondents; these are primarily well-known specialty journals, such as Comparative Politics (45.9%), International Organization (44.5%), and American Politics Quarterly (44.2%), or broad-based (mostly regional) journals such as Political Science Quarterly (49.8%), Political Research Quarterly (48.9%), Polity (41.5%), and Social Science Quarterly (40.5%). After these top 13 journals, there are a series of mostly specialty journals that are familiar to between one-quarter and two-fifths of political scientist respondents. Beyond these top 30 journals, most journals are familiar to relatively small proportions of American political scientists.
Preferred Journal Submissions
As mentioned above, we asked respondents to indicate the journals to which they would submit a “very strong paper” that they had written in their area of expertise. This question is designed to give respondents an alternative way of thinking about the comparative status of political science journals. In Table 4 we list the first, second, and third preferences, as well as the total number of mentions across all three preferences. We list here only those journals that have at least 25 total mentions and 10 mentions in at least one of the three preference slots.

The American Political Science Review is the most frequently mentioned journal. A total of 161 respondents mention the APSR as their first choice and a total of 201 respondents as their first, second, or third choice. The first mentions far outpace those of any other journal in the list and are almost four times the 42 first-preference mentions for the American Journal of Political Science. This means that the APSR is the strongest choice as the journal to which scholars would want to submit their best work.
Three other journals have 100 or more mentions—the Journal of Politics, American Journal of Political Science, and World Politics. Although the JOP finishes second in total mentions, it is clear from the pattern of mentions that the AJPS is the more preferred outlet for political scientists' best work, insofar as the AJPS has many more first and second mentions than the JOP, which has the most third-place mentions. This suggests a rank-ordered preference of APSR, AJPS, and JOP as the top journals to which scholars would prefer to send their best work.
The second group is dominated by highly regarded specialty journals with strong subfield followings, including World Politics (100 total mentions), Comparative Politics (64), International Organization (52), International Studies Quarterly (44), Political Theory (35), and Comparative Political Studies (30). The specialty journal Public Administration Review (29) and two regional journals, Political Research Quarterly (27) and Polity (26), finish the list.
What is not reported in Table 4 is the diversity of first preferences offered by respondents. Respondents listed a total of 112 different journals as the preferred journals to which they would submit their best work. Of these, 33 are cited by more than one respondent, so there are a number of journals that are of interest to multiple scholars. Of course, this also means that there are 79 journals listed by single respondents as the journal to which they would submit their best manuscripts. Overall, it would appear that political scientists would prefer to submit their best work to a variety of political science journals, though there are a small number of journals that draw the interest of a sizeable number of respondents.
Preferred Reading Sources
We also asked respondents to identify which journals they “read regularly or otherwise rely on for the best research” in their areas of interest. These results are presented in Table 5. We list here only those journals that have at least 25 total mentions across the three preference slots.

Careful readers will see that there is substantial similarity in journal reading and journal submission preferences. Here again, the American Political Science Review, American Journal of Political Science, Journal of Politics, and World Politics are in the top four positions, indicating that political scientists both submit their best work to these journals and go to these journals for the best research in their fields of study. The second tier of journals is very similar, with International Organization, Comparative Politics, International Studies Quarterly, Political Research Quarterly, International Security, Comparative Political Studies, Political Theory, Public Administration Review, and Polity appearing on both lists. The only exception is the Legislative Studies Quarterly, which is fairly well read but is not among the leading journals to which individuals send their best work.
A Discipline Divided?
Thus far we have focused our attention on general patterns of journal impact, evaluation, and familiarity for our complete sample of American political scientists. However, casual conversations among political scientists reveal considerable disagreement about the leading journals in the discipline. In particular, there appears to be disagreement about which journals are the leading outlets for scholars in different subfields of political science. Many scholars see general journals such as the American Political Science Review, American Journal of Political Science, and Journal of Politics as the leading journals in political science, regardless of subfield specialty or methodological approach. Other scholars see these journals as being dominated by the field of American politics and/or by quantitative methodologies, and they identify broad subfield journals (such as World Politics, Comparative Politics, Comparative Political Studies, or Political Theory) as the primary outlets for their research. Still other scholars see very specialized journals as the leading journals in their fields; for such scholars a publication in Latin American Research Review, Studies in American Political Development, Publius, Europe-Asia Studies, Journal of Asian Studies, or Middle East Journal is more likely to reach the scholarly audiences of interest, and is more important, than a publication in either the general journals or the broad subfield journals.
Subfield Differences
Are subfield cleavages reflected in our journal evaluations? Do scholars differ in their evaluations of journals, depending on whether they are in American politics, comparative politics, international relations, or political theory? There are several different ways of looking at this question. First, in Table 4 we report results on the preferred journals to which respondents would submit a high-quality manuscript. In Table 6 we break these results down by subfield, reporting submission preferences for respondents in the fields of American politics, comparative politics, international relations, and political theory.3
These preferences are ordered by first preferences, rather than by total preferences. In addition, it should be noted that, because of relatively small sample sizes, we do not report data for respondents who report their primary fields as political methodology, public policy, public administration, or judicial politics.

These results suggest a fair amount of variation in preferred journal outlets across fields. In American politics, the preference ordering is unambiguous; scholars report a clear preference for the American Political Science Review and a slight preference for the American Journal of Political Science over the Journal of Politics. Relatively few American politics scholars indicate a preference for other journals as one of their first three choices, suggesting that these three journals are the premier journals for Americanists.
The APSR is the first choice of scholars in the fields of international relations and political theory, but this preference is not dominant in these fields. In international relations, the APSR is followed closely by International Organization as a first preference, and World Politics and the International Studies Quarterly have strong followings as the second and third choice journals, respectively. International Security has some support as a first preference, but it drops off quickly as a second and third preference. In political theory, the APSR is also a first preference for scholars seeking to submit their best work, with Political Theory a close second as a first preference. The Journal of Politics and Polity also have some support as second and third preferences. Clearly, in international relations and political theory, the APSR has some prominence as a publication outlet for scholars' best research, though once scholars in these fields get past their first choice they quickly move to other journals, particularly those in their subfields.
The field of comparative politics is somewhat of an outlier. World Politics is the top choice for comparative politics scholars, followed by Comparative Politics and the American Political Science Review, which are tied for second. World Politics and Comparative Politics are also strong second and third choices as outlets for comparative politics scholars, as is Comparative Political Studies, with the APSR dropping out as a second and third submission choice. These results suggest that some comparativists see the APSR as a viable outlet for their best work, but most focus on general subfield journals as a first choice and then move almost completely to subfield journals as second and third choices.
A second way of looking at subfield differences is to focus on journal reading preferences of respondents. In Table 7 we report the preferences for journal reading, again broken down by subfield. In American politics, the pattern is much the same as for submission preferences, with the APSR, AJPS, and JOP finishing in the first three positions, followed distantly by the Political Research Quarterly, Legislative Studies Quarterly, and Public Opinion Quarterly.

The ordering in the other three subfields gives the APSR and the general regional journals a much smaller role. In the field of international relations International Organization stakes out a strong position. Along with the APSR, International Organization is the first reading preference of international relations scholars, but it is also well positioned as a second choice and beyond. The APSR drops off very quickly after its strong showing as a first preference. Other journals are well read by international relations scholars, including International Studies Quarterly, World Politics, and International Security. In the field of political theory, scholars cite only two journals regularly—Political Theory and the APSR. Finally, in comparative politics World Politics and Comparative Politics play a somewhat dominant role as a source of reading by scholars in the field. The APSR is close in terms of first preferences but falls off after that. Comparative Political Studies and International Organization are also regularly cited as journals to which comparative politics scholars regularly go for reading in their field.
Third, in Table 8 we consider the possibility that the subjective evaluations of journals vary across subfields. Here we report the mean evaluation of selected journals that rank among the top 20 in terms of journal impact (see Table 1), both in total and for respondents in the fields of American politics, comparative politics, international relations, and political theory, respectively. We also report results from an analysis of variance that tests the null hypothesis that the mean evaluations are equal across subfields.

As one can see, for several journals there is a considerable difference in mean evaluations across subfields. For the American Political Science Review, American Journal of Political Science, and Journal of Politics, there is a consistent pattern of difference in mean evaluations. American politics scholars rate these journals highly, with scholars from comparative politics, international relations, and political theory rating these journals below the level of the overall mean. World Politics also generates some differences across subfield, with American politics and comparative politics respondents rating this journal higher than others. Finally, there is a weak relationship between subfield and journal ratings for Comparative Politics, Comparative Political Studies, and International Security, though the differences are not particularly stark.
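The test behind Table 8 is a standard one-way analysis of variance. The sketch below shows the form of the test on invented ratings, since the individual-level survey responses are not reproduced here; group sizes and means are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Invented 0-10 evaluations of a single journal from four subfields.
american    = np.clip(rng.normal(8.0, 1.6, 120), 0, 10)
comparative = np.clip(rng.normal(6.4, 2.0, 80), 0, 10)
ir          = np.clip(rng.normal(6.7, 2.0, 90), 0, 10)
theory      = np.clip(rng.normal(6.0, 2.2, 50), 0, 10)

# One-way ANOVA of the null hypothesis that mean evaluations are equal
# across subfields.
f_stat, p_value = stats.f_oneway(american, comparative, ir, theory)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```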
Methodological Differences
It is possible that the observed differences among American political scientists from different subfields are actually a result of differences in methodological approach. Some journals, such as the American Political Science Review, American Journal of Political Science, Journal of Politics, Journal of Conflict Resolution, and Comparative Political Studies, are thought of as favoring research that takes a more quantitative approach, while other journals, such as Comparative Politics, Political Science Quarterly, and Political Theory, are thought of as being less quantitative in nature. Insofar as the distribution of methodological approaches differs across subfields, it is possible that subfield differences in journal evaluations are really a function of those methodological differences.
In Table 9 we report the mean evaluations for a group of journals selected from among those in the top 20 journals in terms of journal impact, broken down by respondents' methodological approach.4
We focus here on those who report taking quantitative, mixed, and qualitative approaches to their research. Two other approaches, formal theory and normative theory, are excluded because of small sample sizes, though these two groups are included in the analysis of variance results reported in this table.

It is noteworthy that many (but not all) of the journals for which there is greater support among qualitative scholars are in the fields of comparative politics and international relations. This suggests that there may be differences among the subfields in the distribution of methodological approaches, and that these differences might account for the effects of subfield on journal evaluations. In order to account for this possibility, we estimate a series of regression models in which the evaluations of selected journals are depicted as a function of a set of subfield variables and a set of methodological approach variables. The results are presented in Table 10. We have estimated our model for all of the top 20 journals in terms of journal impact, but because of space limitations we present the results only for a representative group of journals.
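The models in Table 10 are linear regressions with dummy-variable sets for subfield and methodological approach. The following sketch illustrates the specification on synthetic data, with excluded categories chosen to match the paper (normative theorists and comparative politics scholars); the variable names and effect sizes are our assumptions, not the authors' data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 400

# Synthetic respondents; the approach effects loosely echo the APSR model
# reported below (quantitative ~ +2.9, mixed ~ +1.6, qualitative ~ +0.6
# relative to the excluded normative group).
df = pd.DataFrame({
    "approach": rng.choice(["normative", "qualitative", "mixed", "quantitative"], n),
    "subfield": rng.choice(["comparative", "american", "ir", "theory"], n),
})
bump = df["approach"].map({"normative": 0.0, "qualitative": 0.6,
                           "mixed": 1.6, "quantitative": 2.9})
df["apsr_eval"] = np.clip(5.0 + bump + rng.normal(0, 1.5, n), 0, 10)

# Journal evaluation as a function of approach and subfield dummies;
# Treatment() sets the reference (excluded) category for each dummy set.
model = smf.ols(
    "apsr_eval ~ C(approach, Treatment('normative'))"
    " + C(subfield, Treatment('comparative'))",
    data=df,
).fit()
print(model.params)
```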

The results in Table 10 suggest that the evaluations of some journals are driven more by methodological considerations than by subfield. For three of the journals—the American Political Science Review, American Journal of Political Science, and Journal of Politics—the patterns of evaluations are driven largely by methodological approach. Simply put, quantitative political scientists evaluate these journals significantly more favorably than do those who adopt a non-quantitative approach, even controlling for variables representing respondent subfield. For example, looking at the estimates for the APSR evaluation model, we find that quantitative political scientists rate the APSR almost three points higher on the 11-point evaluation scale (b = 2.895, t = 6.069) than those who adopt a normative approach, which represents the excluded group. Respondents who report that they mix quantitative and qualitative approaches are also substantially more supportive of the APSR (b = 1.629, t = 3.365). Qualitative political scientists are slightly more positive toward the APSR than normative theorists, though the difference is not statistically significant (b = 0.642, t = 1.342). What we see here is that the more quantitative one's approach to political science, the more likely one is to evaluate the APSR favorably. Coefficients for two subfield variables achieve statistical significance; both political theorists and public administration scholars are significantly more positive in their evaluations of the APSR than are comparative politics scholars, who represent the excluded subfield group. But it is clear that the methodological approach variables are the more important determinants of evaluations of the APSR.
The same can be said about the AJPS and, to a lesser extent, the JOP. In both cases quantitative respondents are much more favorably disposed toward the journals, with respondents who mix quantitative and qualitative modes of analysis also evaluating these journals positively. There are some subfield effects for both journals, but for both the AJPS and JOP these effects are smaller in magnitude than the methodological approach effects.
On the other hand, in Table 10 we report results for journals that are rated more favorably by qualitative scholars. For World Politics, Comparative Politics, Political Science Quarterly, and (to some extent) International Organization, the coefficients for the qualitative approach variable are positive and significant, indicating that qualitative researchers have substantially more favorable views toward these journals than respondents who adopt a normative approach. A case in point is World Politics, in which the coefficients for quantitative, mixed, and qualitative approaches are all positive and significant. What is noteworthy, however, is that the coefficient for those adopting a qualitative approach (b = 1.597, t = 3.607) is almost twice the magnitude of the coefficient for those adopting a quantitative approach (b = 0.852, t = 1.871).5
The coefficients for the mixed and qualitative variables are each significantly different than the coefficient for the quantitative variable (results not shown).
It is also the case that some journals draw relatively equal evaluations from quantitative, mixed, qualitative, and other scholars. In Table 10, this appears to describe most closely International Organization and Comparative Political Studies; the former is slightly better evaluated by qualitative scholars, while the latter is slightly better evaluated by quantitative scholars, though in neither case is the effect a strong one. For both of these journals the primary differentiation in evaluation occurs among the subfield variables, with comparative politics scholars exhibiting stronger evaluations than other scholars from other subfields. Among the other journals ranked in the top 20 in terms of impact, several others appear to be undifferentiated in terms of methodological approach, including the British Journal of Political Science, American Sociological Review, American Economic Review, PS: Political Science and Politics, International Studies Quarterly, and Political Theory. For these journals, respondents appear to be similar in their evaluations, regardless of methodological orientation.
What do all of these results suggest about “a discipline divided” in terms of journal evaluations? Our results suggest a definitive answer: simply, it depends. Some journals appear to stimulate patterns of evaluations that are based on political scientists' methodological orientations. We suspect, but have no firm empirical evidence, that this reflects the degree to which a given journal identifies with a specific methodological approach. Some journals are identified, correctly or incorrectly, as favoring quantitative research; for these journals, the evaluations of quantitatively-oriented scholars will be more favorable, and the evaluations of qualitatively-oriented scholars will be less so. Other journals are identified as favoring a qualitative approach, and evaluations will again depend on whether the evaluator is oriented toward the quantitative approach or the qualitative approach. Some journals avoid being characterized as quantitative or qualitative, and these journals are likely to generate similar evaluations among both quantitatively- and qualitatively-oriented scholars.
The distribution of methodological orientations differs by subfield, and this can have an effect on the overall distribution of evaluations of various journals. We have estimated a series of models in which the various methodological approach variables are depicted as a function of the subfield variables; for the sake of brevity, these results are not shown, but they are of interest nonetheless. On average, comparative politics scholars are, along with those in the field of normative theory, the least likely to adopt a quantitative approach, and they stand alone in their increased likelihood of adopting a qualitative approach. Simply, comparativists are less quantitative and more qualitative in their orientations than most other political scientists. No doubt this shapes the relative evaluations that scholars of different subfields give to various journals.
Do quantitative scholars within each subfield differ in their journal preferences from their qualitative subfield colleagues? In order to explore this, we have also estimated a series of models in which we depict evaluations of various journals as a function of subfield variables and interaction variables for subfield and quantitative orientation. The coefficients for the interaction variables indicate the degree to which quantitative political scientists in each subfield are more or less favorably inclined toward a given journal than qualitatively-oriented political scientists in the same subfield. Based on these results (not shown), it is also the case that scholars with at least some quantitative orientation (either quantitative or mixed quantitative and qualitative) are more supportive of quantitatively oriented journals such as the APSR or AJPS, regardless of subfield. For instance, quantitative comparativists have more positive evaluations of the APSR and AJPS than qualitative comparativists, a pattern that is also observed for American politics and international relations scholars. If, however, quantitative comparativists are a relative rarity among comparative politics scholars, it follows that comparativists will on average exhibit lower support for quantitatively oriented journals than scholars representing other subfields where there is a higher share of scholars who adopt a quantitative approach.
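A brief extension of the earlier sketch shows this interaction specification; it reuses the synthetic df and the smf import from the sketch above, and the quant indicator (quantitative or mixed = 1) is our own simplification of the survey's approach categories.

```python
# Reuses df and smf from the preceding sketch. The subfield x quantitative
# interactions ask whether quantitative scholars in each subfield rate the
# journal differently from their non-quantitative subfield colleagues.
df["quant"] = df["approach"].isin(["quantitative", "mixed"]).astype(int)
inter = smf.ols(
    "apsr_eval ~ C(subfield, Treatment('comparative')) * quant",
    data=df,
).fit()
print(inter.params.filter(like=":quant"))  # interaction coefficients
```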
Conclusion
What do these results suggest about scholarly journals in political science? Our results suggest that political scientists use, publish in, and read a wide range of scholarly journals, but that not all journals are created equal. Some journals are widely read by political scientists, while others are read by small groups of specialists. Some journals are very positively evaluated by scholars who are familiar with the work published in their pages, while other journals are not so well regarded. Some journals are read by broad audiences that cross subfield boundaries, while other journals are read almost exclusively by scholars working within specific subfields. Ultimately, some journals have a major impact on the political science discipline, while other journals labor in relative obscurity.
In this paper we report results from a survey of 559 political scientists in both Ph.D.- and non-Ph.D.-granting departments conducted during the spring and summer of 2001. Our core findings are similar to those reported in previous studies. The American Political Science Review, American Journal of Political Science, and Journal of Politics continue to rank as the top three journals in terms of their impact on the political science discipline, measured in a way that takes into account both scholars' evaluations of the quality of work published in these journals and their familiarity with those journals. These three journals are followed in the impact rankings by a combination of highly regarded subfield journals (World Politics, International Organization, Comparative Politics), respected flagship journals in related disciplines (American Economic Review, American Sociological Review), and general journals with broad readerships (British Journal of Political Science, PS: Political Science and Politics, Political Research Quarterly). Publications in these journals are likely to draw the attention of large numbers of political scientists and pass a rigorous peer evaluation before being accepted for publication. Ultimately, publications in these journals represent a feather in one's proverbial hat or, in this case, in one's vita.
We also introduce some new, alternative ways of looking at journal impact, primarily by asking scholars which journals they would prefer to send their best work to and which journals they read for the best work in their fields. Here again, the general disciplinary hierarchy is relatively undisturbed, with the journals that rate highly on the impact rankings also holding prominent positions on the submission and reading preference lists. Not only do journals such as the American Political Science Review, American Journal of Political Science, World Politics, International Organization, and Comparative Politics rate highly in terms of journal impact, but they also are the journals that political scientists read and to which they want to submit their best research.
When one looks below the surface, however, one finds some disagreement about the relative impacts of scholarly journals in the discipline. For one thing, journals earn a high impact rating by being both well evaluated and familiar to large numbers of political scientists. Some journals do very well on the journal impact rankings because they do particularly well on one of these dimensions but not on the other. The result is that some journals are ranked very highly in terms of mean evaluation but not so highly in terms of familiarity, and vice versa. A case in point is the American Political Science Review, which earns an evaluation score that ranks it 17th on that dimension but is ranked first, by a wide margin, in terms of familiarity to political scientists. In the end, the APSR is ranked first in terms of journal impact, in large part because it is so widely read by political scientists, including those who evaluate it unfavorably.
Moreover, we find considerable variation in journal impact, evaluation, and familiarity among scholars of different subfields and methodological approaches. Among American politics scholars, the preference ordering is clear, with the APSR, AJPS, and JOP earning top-tier status. In comparative politics, international relations, and political theory, journals such as the APSR and JOP have a prominent (but by no means dominant) role, but there is much greater impact attributed to broad subfield journals and more specialized journals within each subfield. The result is that, for international relations scholars, International Organization, International Studies Quarterly, or World Politics join the APSR as first-tier research outlets. For comparative politics specialists, World Politics, Comparative Politics, and Comparative Political Studies are leading outlets, along with the APSR for some comparativists. For political theorists, Political Theory and the APSR are in the first tier, along with broad-based journals such as the Journal of Politics and Polity.
We also find that methodological approach is a major source of cleavage in political scientists' assessments of journals. Quantitative scholars tend to evaluate certain journals more highly than qualitative scholars, and there are also journals that draw the interest of qualitative scholars but not much interest among quantitative scholars. The methodological divide seems to be particularly stark for journals that are identified as favoring research with a particular methodological orientation.
All of this raises questions about the current status of the political science discipline. Are there scholarly outlets where political scientists subject their work to the scrutiny of others who do not share their subfield or methodological orientation? Should such discipline-wide journals exist, particularly given the seemingly balkanized nature of the discipline? Should scholars of American politics see the work of comparative politics scholars, who read the research findings of international relations scholars, who follow the work of political theorists? Is there value in such cross-fertilization across subfields and methodological approaches?
References
Christenson, James A., and Lee Sigelman. 1985. “Accrediting Knowledge: Journal Stature and Citation Impact in Social Science.” Social Science Quarterly 66:964–75.
Crewe, Ivor, and Pippa Norris. 1991. “British and American Journal Evaluation: Divergence or Convergence?” PS: Political Science and Politics 24:524–31.
Garand, James C. 1990. “An Alternative Interpretation of Recent Political Science Journal Evaluations.” PS: Political Science and Politics 23:448–51.
Giles, Micheal W., and Gerald C. Wright Jr. 1975. “Political Scientists' Evaluations of Sixty-Three Journals.” PS 8:254–56.
Giles, Micheal W., Francie Mizell, and David Patterson. 1989. “Political Scientists' Journal Evaluations Revisited.” PS: Political Science and Politics 22:613–17.
Table 1. Political Scientists' Impact, Evaluation, and Familiarity Ratings of 115 Selected Journals, 2002
Table 2. Political Scientists' Subjective Evaluations, Top 30 Highest-ranked Journals, 2002
Table 3. Journal Familiarity, Top 30 Highest-ranked Journals, 2002
Table 4. Respondent Preferences for Journal Submissions of High-quality Manuscript
Table 5. Respondent Preferences for Journal Reading
Table 6. Respondent Preferences for Journal Submissions of High-quality Manuscript, by Subfield
Table 7. Respondent Preferences for Journal Reading, by Subfield
Table 8. Mean Evaluations of Selected Political Science Journals, by Respondent Subfield
Table 9. Mean Evaluations of Selected Political Science Journals, by Respondent Methodological Approach
Table 10. OLS Regression Results for Models of Journal Evaluations, Selected Journals