
Under the Influence? Intellectual Exchange in Political Science

Published online by Cambridge University Press:  28 March 2008

David Carter
Affiliation:
University of Rochester
Arthur Spirling
Affiliation:
University of Rochester

We thank two anonymous referees for helpful comments on content and structure.

Type: THE PROFESSION

Copyright: © 2008 The American Political Science Association

Intellectual exchange is central to progress in any discipline, including political science. The transfer of knowledge, ideas, and techniques takes place in many forums (e.g., advisor-student meetings, conferences, department lounges) and it is no simple task to systematically identify or quantify this interchange. In general though, the fruition of a successful or insightful idea is a published journal article or book. The way in which the author(s) of a published piece of work acknowledges previous or contemporary work that contributed to its development is via references or citations. Thus, while we cannot easily keep track of the entire process of intellectual exchange that leads to publication, citations inform us of other (usually published) work that influenced and contributed to the articles and books that make up the research output of the field.

Peer-reviewed journals are one of the most prestigious places in which scholars publish their research.1

Obviously, this statement does not apply to book manuscripts. We focus on peer-reviewed journals and leave the influence of books and book publishers to another study. For political scientists' evaluations of various publishers, see Goodson, Dillman, and Hira (1999).

Almost all cutting-edge research in political science that becomes highly influential is published, in some version, in the top peer-reviewed journals, such as the American Political Science Review, American Journal of Political Science, International Organization, Comparative Politics, and the Journal of Politics. Understanding the role and importance of different journals in communicating knowledge is thus a crucial part of understanding how the field progresses. This is the task we undertake here.

As we explain below, we are hardly the first scholars to evaluate journals and attempt to quantify their contributions to the discipline. Current techniques and measures typically utilize raw counts of citations which, we contend, are poor proxies for the true variables of interest, or they rely on political scientists' assessments of various journals which are, by definition, subjective and relatively expensive to obtain. By contrast, in this paper, using commonly available citations data and the way that journals cross-reference each other, we show a way to systematically assess their contribution to intellectual exchange. As we show, this simple approach that uses information about the total pattern of citations between journals, first used in statistics by Stigler (1994), greatly expands our understanding of how the field communicates and advances.

Influence in the Discipline

Political scientists are evidently concerned with influence in the discipline. For instance, the November 2006 edition of the American Political Science Review (APSR) was solely devoted to the evolution of political science and listed the 20 most-cited articles in the history of the journal. While these 20 articles are undoubtedly very influential, a list of raw counts still does not tell us much about the exchange of knowledge throughout the discipline. And, certainly, a list of raw counts such as the one published in the APSR does not tell us which journal had the most influence on the discipline in 2004 or 2005.2

See Stigler (1994) for a number of other problems specific to the examination of data on citations of individual authors.

Citation counts are used in a variety of ways to evaluate the quality of journals. Christenson and Sigelman (1985) utilize citation counts in an early study of journal quality. Additionally, Journal Citation Reports, which is available from the ISI Web of Knowledge, calculates a variety of measures, such as the “impact factor” and “immediacy index,” as well as reporting the raw number of citations for each year. Both the impact factor and the immediacy index measure how frequently a journal's “average” article is cited, with the former using “recent” citations (i.e., from the previous two years) and the latter using only citations from the current year. The common problem with measures based upon raw citation counts is that a citation is treated as a standard unit regardless of the parties involved. Hence, in terms of impact, there is no difference between Journal A's citation by highly esteemed and well-read Journal B, and A's citation by third-rate Journal C. Yet most political scientists would consider B's citation of A more prestigious. For similar reasons, problems may be caused by journals that self-cite a great deal.
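For concreteness, the two ISI measures just described can be written out explicitly. The notation below is ours, following the standard Journal Citation Reports definitions rather than anything reproduced in the original:

```latex
\text{Impact Factor}_{Y} \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},
\qquad
\text{Immediacy Index}_{Y} \;=\; \frac{C_Y(Y)}{N_Y}
```

where $C_Y(t)$ denotes the number of citations made in year $Y$ to the articles a journal published in year $t$, and $N_t$ the number of citable articles it published in year $t$. Both numerators count every citation equally, which is precisely the "standard unit" problem noted above.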

Most work by political scientists that evaluates the relative quality of journals relies on rankings provided by political scientists themselves through surveys. Studies by Giles and Wright (1975) and Giles, Mizell, and Patterson (1989) are examples. Noting that field journals with a narrow readership obtained “surprisingly” high rankings in scholars' subjective evaluations, Garand (1990) pioneered an approach, later used by Crewe and Norris (1991) and Garand and Giles (2003), that supplements the subjective quality ranking with an additional measure accounting for the discipline-wide familiarity of the journal.3

Their final impact score is calculated as follows: Journal Impact = Journal Evaluation + (Journal Evaluation × Journal Familiarity).

To repeat, this method is nonetheless subjective. Moreover, surveys are relatively expensive and typically time consuming to disseminate, complete, and assess.

Hence the potential utility of a simple, cheap, systematic method that estimates the contribution of journals to the exchange of knowledge in the discipline. Such a method would complement current practices, and allow discipline-wide comparison of whether, for example, political scientists' subjective assessments correspond closely to the actual exchange of ideas among the top journals in the discipline.

Data

We utilize data published by Journal Citation Reports and available online from ISI Web of Knowledge from 2003–2005. The data include total citation counts for 84 political science journals in the given year, as well as a variety of measures based upon these counts. Key to our enterprise, the data record multidirectional citations; that is, they count all citations from and all citations to a journal in each given year.4

The articles cited can be published in any year. Any given article can only be cited once in a paper, regardless of how many times it is referred to in the text.

To keep the analysis below meaningful and tractable, we limit it to the pattern of intellectual exchange among 16 top journals from 2003–2005. We chose journals that scholars believe to be among the field's top journals, as reported by Garand and Giles (2003).5

American Economic Review (AER), American Journal of Political Science (AJPS), American Journal of Sociology (AJS), American Political Science Review (APSR), American Sociological Review (ASR), British Journal of Political Science (BJPS), Journal of Politics (JOP), Political Research Quarterly (PRQ), PS: Political Science and Politics (PS), International Studies Quarterly (ISQ), Public Opinion Quarterly (POQ), World Politics (WP), International Organization (IO), Comparative Politics (CP), Comparative Political Studies (CPS), and Political Science Quarterly (PSQ). See also Table 1.

This list provides a nice baseline for comparison while also giving us a list of prestigious journals that are comparable forums for intellectual exchange.6

Our analysis would not be of much interest if we included a list of journals that do not exchange any citations.

One of the more interesting findings of the recent survey of political scientists by Garand and Giles (2003) is that three of the top nine general journals (American Economic Review, American Sociological Review, and American Journal of Sociology) are not even political science journals (298–9).7

A recent article by Giles and Garand (2007) does not use survey data and exclusively examines journals that are primarily in the discipline of political science.

They posit that these three journals fare so well because political scientists recognize them as “the flagships of their respective disciplines,” not necessarily because they are familiar or contribute in a meaningful way to intellectual exchange in the discipline. In the following section, we assess this claim as well as some widely held beliefs about journal quality with actual data on intellectual exchange.

Method: The Intellectual Exchange Model

In Table 1 we present the citation counts for the journals in our study broken down by the journal that cited them for 2005. Clearly, there is much variation in these counts, and in which journals cite which journals; our concern here is to summarize these data in a straightforward way that captures the diffusion of ideas, information, and influence across journals.

Cross-Citations in Political Science for Selected Journals, 2005

As noted above, we conceive of each citation as an intellectual exchange between journals: when Journal B cites (an article in) Journal A, we say that A influences B; when the opposite pattern occurs, we say that B influences A. One could quibble with the term “influences,” but we believe the central idea is a sound one: if a journal makes reference to another it is clear that the journal being cited has made an impact on the citer. The citer may be extending the work of an article that appeared in the cited journal, making reference for further reading or completeness, or even attempting to wholly repudiate the cited findings. The nature of the case makes no difference to our argument; as we explain, what does matter is who takes part in the intellectual exchanges and with whom. Of course, the unit of analysis is the journal, rather than the article, and we assume that, in general, journals are influential because they publish influential articles.

To construct our measure, suppose that each journal has a power to influence which, though it exists and affects its relations with other journals, cannot be directly observed. In this sense, it is no different than a citizen's propensity to vote or a nation's resolve in war crisis negotiations. One way to measure this power is to use what we call the Intellectual Exchange Model. The idea is to estimate a score for each journal i, denoted E_i, which has the property that for any citation involving two journals as featured in Table 1, the (log) odds that it is A influencing B, rather than B influencing A, is given by E_A − E_B. More formally:

\log \frac{\Pr(A \text{ influences } B)}{\Pr(B \text{ influences } A)} = E_A - E_B

or, equivalently,

\Pr(A \text{ influences } B) = \frac{\exp(E_A)}{\exp(E_A) + \exp(E_B)}
The interpretation is very simple: the larger the relative value of E_i for each journal, the more influential it is. Once we have the scores, we can gauge how much more influential one journal is than another by computing the probabilities above for any pair. Since the strength of this approach may not be obvious from these equations, suppose as above that Journal A is cited by both Journal B and Journal C. Suppose also that Journal B is very well cited itself, but C is not. Then in our model, B's citation of A will contribute more to A's score than will C's citation of A. Otherwise put: it is not simply the quantity of citations that matters, but the quality.

This approach is known as the Bradley-Terry (Bradley and Terry 1952) model in statistics and has seen numerous uses in that discipline—one of which is extremely close to the current application (Stigler 1994). Such a model can be fit with any number of software packages; here we chose R (R Development Core Team) using the BradleyTerry library (Firth 2005).
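To make the estimation concrete, here is a minimal sketch of a Bradley-Terry fit in plain Python rather than the R package named above. The three journals and their citation counts are entirely hypothetical, invented for illustration; the fitting routine uses the standard minorization-maximization (MM) updates for the Bradley-Terry likelihood, not the authors' actual code or data.

```python
import math

# cites[i][j] = number of times journal i is cited BY journal j,
# i.e., the number of exchanges in which i "influences" j.
# These counts are hypothetical, not the Table 1 data.
journals = ["A", "B", "C"]
cites = [
    [0, 40, 30],   # A cited 40 times by B, 30 times by C
    [20, 0, 25],   # B cited 20 times by A, 25 times by C
    [10, 5, 0],    # C cited 10 times by A, 5 times by B
]

def fit_bradley_terry(wins, iters=1000):
    """Fit Bradley-Terry strengths p_i by the MM algorithm:
    p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j),
    where W_i is i's total 'wins' and n_ij the total exchanges
    between i and j. Strengths are normalized to sum to 1."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n) if j != i
            )
            new_p.append(w_i / denom)
        total = sum(new_p)
        p = [v / total for v in new_p]
    return p

p = fit_bradley_terry(cites)

# Scores on the log scale, with the last journal as the zero baseline
# (the paper uses PS: Political Science and Politics as its baseline).
E = [math.log(v) - math.log(p[-1]) for v in p]

def prob_influences(e_a, e_b):
    """P(a given exchange is A influencing B) = exp(E_A)/(exp(E_A)+exp(E_B))."""
    return math.exp(e_a) / (math.exp(e_a) + math.exp(e_b))

print(dict(zip(journals, E)))
print(prob_influences(E[0], E[1]))
```

With these invented counts, A dominates its head-to-head exchanges and so receives the largest score; the pairwise probability formula then recovers exactly the quantity interpreted in the Findings section below.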

Findings

In Table 2 we report the influence scores for each of the journals in our study for 2005. We include the ranking for each individual year in the sample as well as the overall ranking across the three years. Notice that, because the power to influence is relative to other journals, we assign one of the publications a baseline or reference score of zero. We choose PS: Political Science and Politics (PS) for this purpose in all models. Additionally, in the fifth column we provide the rankings reported by Garand and Giles (2003) for comparison. From the overall results (including all years) in Table 2 we conclude that the American Economic Review (AER) is the most influential journal (of the 16 in the sample), followed by the APSR and World Politics (WP).

Rank Order of Scores for Journals from the Intellectual Exchange Model

Examination of individual year results demonstrates that this pattern is generally stable, although the exact rankings fluctuate slightly. The biggest surprise from the aggregate (and individual year) results is the poor performance of the American Journal of Political Science (AJPS) and JOP relative to the rankings provided by Garand and Giles (2003). Although these two journals are regarded as two of the top three by scholars in the discipline, the results in Table 2 indicate that journals such as WP and International Organization (IO) are more influential. The influence score of Public Opinion Quarterly (POQ) also consistently ranks higher than in the survey performed by Garand and Giles (2003), while the British Journal of Political Science (BJPS) does considerably worse. Political Science Quarterly (PSQ) is also surprisingly influential, although its performance is inconsistent across the three years in the sample.

As noted above, the scores in Table 2 have a direct interpretation outside of the rank order. Consider, for example, the probability that any particular citation involving the APSR and Political Research Quarterly (PRQ) has the APSR influencing PRQ. This is:

\Pr(\text{APSR influences PRQ}) = \frac{\exp(E_{\text{APSR}})}{\exp(E_{\text{APSR}}) + \exp(E_{\text{PRQ}})}

which, evaluated at the estimated scores in Table 2, is a near certainty. By contrast, the BJPS and Comparative Political Studies (CPS) are much more evenly matched in terms of influence. Indeed the probability that an intellectual exchange between them has the BJPS influencing the CPS, rather than the other way round, is:

\Pr(\text{BJPS influences CPS}) = \frac{\exp(E_{\text{BJPS}})}{\exp(E_{\text{BJPS}}) + \exp(E_{\text{CPS}})}

which, at those scores, is close to a coin toss.

An objection here might be the inclusion of journals that are usually considered part of economics or sociology and hence outside of the discipline per se.8

Notice that we assume authors in all disciplines (and all journals) behave similarly in only citing work that is actually germane to their endeavor. From Table 1 we see no reason to suppose otherwise.

In response, we report our model findings for political science journals only in Table 3—it is readily apparent that the rank order of Table 2 is unperturbed by reducing the sample of journals.

Rank Order of Scores for Political Science Journals from the Intellectual Exchange Model

A potentially more interesting issue concerns the effect of a journal's origins and general audience on its influence. That is, we could suppose that a journal's influence in political science is in part determined by the discipline it serves. In Table 4 we re-estimate our model, making the scores a (linear) function of the discipline within which a journal predominantly operates. Hence, we denote the AER as an economics journal, while the American Journal of Sociology (AJS) and American Sociological Review (ASR) are from sociology.
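In symbols, this specification constrains each journal's score to be a linear function of discipline indicators. The rendering below is our reconstruction from the description in the text, with political science as the omitted base category:

```latex
E_i \;=\; \beta_{\text{econ}}\,\mathbb{1}[\,i \in \text{economics}\,]
      \;+\; \beta_{\text{soc}}\,\mathbb{1}[\,i \in \text{sociology}\,]
```

so that the log odds that journal $i$ influences journal $j$ in any given citation remain $E_i - E_j$, exactly as in the unstructured model.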

Effect of Political Science, Sociology, and Economics Focus on Journal's Influence

The interpretation of coefficients in Table 4 is not unlike that for a (multinomial) logistic regression, relative to a base category of political science. In particular, economics journals have an estimated advantage in influence of 1.33 over political science journals, while the influence of sociology journals in political science is much smaller (0.21). Both coefficients are statistically significant. This suggests that, for our sample years and journals, economics journals are much more influential than sociology journals in the discipline of political science and, in fact, more influential than political science journals broadly construed. This finding stands in some contrast to the argument put forth by Garand and Giles (2003).

Discussion

Riker (1982, 753) observed that “[political] science involves the accumulation of knowledge.” A part of this accumulation is the regular interchanges and diffusion of findings, ideas, innovations, models, and techniques. In this paper, we set out to measure this tendency as it applies to journals in the discipline. Breaking away from simple unitary counts of citations or expensive and subjective surveys, we showed a way to compare journals based on their performance as exchangers of information. The method is straightforward to compute and yields what we believe to be a valid and objectively justifiable rank ordering with the APSR, the AER, and WP as the top performing political science publications. We went on to show that, though sociology journals are clearly part of the exchange process in political science, they do not outperform economics journals as a source of ideas or influence. More tentatively, one might argue that this is evidence that the discipline increasingly adheres to the “Rochester School,” which emphasizes rational choice modeling of human political behavior in a style strongly reminiscent of that found in economics (Amadae and Bueno de Mesquita 1999). Of course, the temporal domain of our study only includes three years; we look forward to publishing updates to our findings as they occur.

Author Bios

David Carter is currently a Ph.D. candidate in the department of political science at the University of Rochester.

Arthur Spirling is currently a Ph.D. candidate in the department of political science at the University of Rochester.

References

Amadae, S. M., and Bruce Bueno de Mesquita. 1999. “The Rochester School: The Origins of Positive Political Theory.” Annual Review of Political Science 2: 269–95.
Bradley, R. A., and M. E. Terry. 1952. “Rank Analysis of Incomplete Block Designs I: The Method of Paired Comparisons.” Biometrika 39: 324–45.
Christenson, James, and Lee Sigelman. 1985. “Accrediting Knowledge: Journal Stature and Citation Impact in Social Science.” Social Science Quarterly 66: 964–75.
Crewe, Ivor, and Pippa Norris. 1991. “British and American Journal Evaluation: Divergence or Convergence?” PS: Political Science and Politics 24: 524–31.
Firth, David. 2005. “Bradley-Terry Models in R.” Journal of Statistical Software 12.
Garand, James C. 1990. “An Alternative Interpretation of Recent Political Science Journal Evaluations.” PS: Political Science and Politics 23: 448–51.
Garand, James C., and Micheal W. Giles. 2003. “Journals in the Discipline: A Report on a New Survey of American Political Scientists.” PS: Political Science and Politics 36: 293–308.
Giles, Micheal W., and James C. Garand. 2007. “Ranking Political Science Journals: Reputational and Citational Approaches.” PS: Political Science and Politics 40 (October): 741–52.
Giles, Micheal W., Francie Mizell, and David Patterson. 1989. “Political Scientists' Journal Evaluation Revisited.” PS: Political Science and Politics 22 (September): 613–7.
Giles, Micheal W., and Gerald Wright. 1975. “Political Scientists' Evaluations of Sixty-Three Journals.” PS: Political Science and Politics 8: 254–7.
Goodson, Larry, Bradford Dillman, and Anil Hira. 1999. “Ranking the Presses: Political Scientists' Evaluations of Publisher Quality.” PS: Political Science and Politics 32: 257–62.
R Development Core Team. 2006. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. www.R-project.org.
Riker, William H. 1982. “The Two-party System and Duverger's Law: An Essay on the History of Political Science.” American Political Science Review 76: 753–66.
Stigler, S. 1994. “Citation Patterns in the Journals of Statistics and Probability.” Statistical Science 9: 94–108.