Scholars have recently focused increasing attention on the balance among alternative methodological approaches in empirical research and publications in political science.1
See, for example, Stephen Walt's critique of formal modeling and rational choice theory, "Rigor or Rigor Mortis? Rational Choice and Security Studies," International Security 23 (Spring 1999): 5–48, and the responses to Walt by Bruce Bueno de Mesquita, James Morrow, Lisa Martin, Emerson Niou, Peter Ordeshook, Robert Powell, and Frank Zagare, and Walt's rejoinder in the symposium, "Formal Methods, Formal Complaints: Debating the Role of Rational Choice in Security Studies," International Security 24 (Fall 1999): 56–130.
One exception to the dearth of data on this subject is Lisa Martin's survey of four years of publications by method in seven international relations journals in "The Contributions of Rational Choice: A Defense of Pluralism," International Security 24 (Fall 1999): 81. Also, the American Political Science Review has periodically provided in PS data on the fields and methods of the articles it receives for review. For the most recent such review, see Ada Finifter, "American Political Science Review Editor's Report for 1999–2000," PS (December 2000): 921–928.
To address these issues, we undertook a survey of the methods used in empirical research in over 2,200 articles in 10 top journals in political science in the United States, and a companion survey of the methodological courses required and offered in graduate programs at the top 30 political science departments in the United States. Our results challenge widely held assumptions about the prevalence of alternative research methods. In particular, four key findings emerge from our data.
First, formal modeling does not appear to be increasingly prevalent in published journal research. Among the top journals that publish work from different methodological approaches, formal modeling was more common in articles published in 1985 than in those published in the last several years. In general, the proportion of articles using each of the three leading approaches has remained relatively stable since 1975. This is in sharp contrast to the "behavioral revolution" of the 1960s and early 1970s, during which the proportion of research using statistics and formal modeling rose sharply and the proportion using qualitative methods dropped markedly.
Second, there is a disjuncture between the high proportion of research performed with qualitative methods, the relatively low proportion of courses offered in these methods, and the even lower proportion of departments that require them. There is arguably a similar but smaller gap in the number of departments that offer or require formal modeling.3
It is possible that the gap between teaching in and research using formal models is smaller than our data suggest if many departments, like our own at Georgetown, rely to some extent on game theory or modeling courses in economics departments.
Third, qualitative or case study research in American Politics is in steep decline in most of the top journals. In the top seven multimethod journals, the proportion of articles presenting case studies in American Politics fell from 12% in 1975, to 7% in 1985, to 1% in 1999–2000.
Fourth, the mix of articles in APSR has been highly unrepresentative of the substantive and methodological mix in other journals. In the late 1990s, APSR had almost twice the top 10 journals' average proportion of articles using formal modeling and of articles on American Politics, while it had about half the average proportion of articles in International Relations and less than one-fourth the average proportion of articles using case studies. Perhaps most striking, of the International Relations articles in APSR, only one in 20 used case studies, compared to more than four in 10 of the International Relations articles across the top 10 journals.
We detail these results below and conclude with recommendations for maintaining a productive balance of methods in departments, curricula, and journals, and for increasing multimethod collaboration among researchers.
We coded 2,207 journal articles in four separate data sets by date, subfield, journal, abbreviated name of one author, and methods (articles using more than one method were so coded). The subfields primarily included American Politics, International Relations, and Comparative Politics; we included only those articles in Political Theory that presented a formal model. Most of the subfield codings were easy to establish; some articles at the intersection of two fields (such as comparative foreign policy, U.S. foreign policy institutions, or U.S. politics in comparative perspective) were more difficult to code, but they were few enough that they did not greatly affect the results. We focused on articles presenting empirical research; we did not include review articles, research notes, or correspondence, nor did we include analytical essays that discussed a theory or concept without any empirical work (with the exception of formal models that did not present any empirical tests, which we included in the sample).
The methodological codings were more difficult to establish. We chose a broad measure of formal modeling, statistics, and case studies, rather than finer variants of these or other methods, in order to quickly code a large number of articles. Under formal modeling, we included articles whose formal model was largely verbal rather than mathematical and those that made use of simple game theory, in addition to more complex models with mathematical appendices. We coded as statistical methods those articles using regression analysis; in a few cases, we coded articles that used only descriptive statistics as case studies. Simulations, experiments, and surveys that used statistical methods to analyze their results were included in the sample and recorded as using statistics (again with the exception of a few cases in which the authors presented only descriptive statistics of survey results and analyzed them primarily through qualitative methods). Articles coded as case studies included those with careful case selection and research design as well as those that were less rigorous but that involved detailed historical analysis of a few cases. When articles used cases mostly for illustrative purposes rather than as empirical tests, we usually included them in the sample and coded them as case studies; analytical essays whose only case illustrations were shorter than a few paragraphs were not included in the sample.
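As a rough illustration of this coding scheme (a minimal sketch, not the instrument we actually used; the records, field names, and method labels below are hypothetical), the following shows how article records carrying one or more method codes yield method proportions that can sum to more than 100%:

```python
# Minimal sketch of the article-coding scheme described above (hypothetical data).
# Each record carries date, journal, subfield, and a set of method codes;
# multimethod articles count toward each method they use, so the resulting
# proportions can sum to more than 100%.
from collections import Counter

articles = [
    {"year": 1998, "journal": "APSR", "subfield": "American Politics",
     "methods": {"statistics", "formal model"}},
    {"year": 1998, "journal": "WP", "subfield": "International Relations",
     "methods": {"case study"}},
]

method_counts = Counter(m for a in articles for m in a["methods"])
proportions = {m: n / len(articles) for m, n in method_counts.items()}
print(proportions)  # e.g., {'statistics': 0.5, 'formal model': 0.5, 'case study': 0.5}
```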
In selecting which journals to code, we began with James Garand's September 1990 ranking of political science journals in PS, which incorporates journals' evaluation rankings and their familiarity to political scientists (Garand 1990, 448–451). We selected from among the top-ranked journals to get a mix of subfields, and we excluded policy journals (such as Foreign Affairs) and journals whose focus is narrower than one subfield (such as Public Opinion Quarterly). We also excluded sociology journals, such as American Sociological Review. The resulting list of journals included American Journal of Political Science (AJPS), APSR, Comparative Political Studies (CPS), Comparative Politics (CP), International Organization (IO), International Studies Quarterly (ISQ), Journal of Conflict Resolution (JCR), Journal of Politics (JoP), Political Science Quarterly (PSQ), and World Politics (WP). This group includes two journals that predominantly publish studies of American Politics (JoP and AJPS), two that cover Comparative Politics (CP and CPS), three that focus on International Relations (ISQ, IO, JCR), two that publish from several fields (APSR and PSQ, each of which publishes predominantly American Politics), and World Politics (WP), which is about two-thirds Comparative Politics and one-third International Relations.
We first surveyed all 10 journals, starting with their last issue in 1998 (chosen because our survey work began in 1999) and working backward until we had sampled 100 articles for each journal (the “1998 Basic Survey,” n=1000).
Second, in order to get a clearer picture of time-series trends in methods, we selected the seven of these journals that published work using a range of methods: AJPS, APSR, CPS, IO, ISQ, JoP, and WP. (We dropped CP and PSQ from the multimethod sample due to their predominance of qualitative work, and we dropped JCR due to its dearth of such work.) To add data on the latest trends, we surveyed all the relevant articles in these seven multimethod journals for 1999 and 2000 (the "Recent Survey," n=337). Third, we surveyed 30 articles for each of these journals starting with their last issue in 1985 and working back, and 30 for each starting at the end of 1975 and working back (the "Decade Survey," n=420; we chose 1985 and 1975 for decade comparisons to the Basic Survey data, since the Basic Survey began with journals published in 1998 and went back 100 articles, which for most journals included the years 1995–1998). Fourth, we surveyed the articles in APSR every other year from 1965 to 1993 (the "APSR Survey," n=450; we stopped at 1993 because subsequent years were included in the Basic and Recent surveys) to assess trends in this key journal. We had two coders independently code several hundred articles to refine the coding protocol, but generally each coding was made by one individual, as is common in such large literature surveys.4
The authors coded most of the cases; we also want to thank Muqtedar Khan for his help in coding articles, and Michael Bailey for his help in analyzing the quantitative results of the surveys. Neither of these individuals is responsible for any of the opinions or factual assertions in this article.
I. Changes in the Proportion of Methods over Time. In the Basic Survey of 1,000 articles in 10 journals from 1998 and earlier, 49% of the articles sampled used statistics, 46% used case studies, and 23% used formal modeling (the total is greater than 100% due to articles that used more than one method). The seven multimethod journals from this data set had a somewhat lower proportion of case studies, a higher proportion of articles using statistics, and a similar proportion using formal modeling. A comparison of these seven journals in 1975, 1985, 1998, and 1999–2000 indicates that the proportion of articles using formal modeling has actually dropped in the last 15 years, from 34% in 1985 to about 22% in the late 1990s (Table 1).5
We used a weighted average of the journals for the 1999–2000 survey data to compensate for the different number of articles per journal in this sample.
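For illustration, one standard form of such weighting (the note above does not spell out the formula, so the exact form is an assumption) is the article-count-weighted mean: if journal $j$ contributes $n_j$ articles to the 1999–2000 sample and $p_j$ is the proportion of its articles using a given method, then

$$\bar{p} = \frac{\sum_j n_j\, p_j}{\sum_j n_j},$$

which is equivalent to pooling all sampled articles and computing the overall proportion directly.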
The longer time-series comparison, based on the APSR Survey only, suggests that, apart from the sharp rise in formal modeling in the mid-1980s, the mix of methods has been relatively stable since 1975. This contrasts with the sharp rise in the use of formal models and statistics in APSR and the precipitous drop in the publication of case studies in this journal from the mid-1960s to the mid-1970s (Table 2).
Thus, it appears that formal modeling's representation in the field has been decreasing rather than increasing since the mid-1980s, while usage of case study methods and statistics has remained relatively stable since the mid-1970s. Within this overall context, however, the data on methods across subfields tell a more complex story, with considerably different mixes of methods among the subfields and sharper trends over time.
II. Differences in the Proportion of Methods across Subfields over Time. As indicated in Tables 3, 4, and 5, articles in American Politics used statistical methods and formal modeling more frequently than those in other subfields, and the proportion of articles in American Politics using case studies was far lower than in the other subfields. Articles in Comparative Politics used statistics roughly 20% more frequently than those in International Relations, but the mix of methods in these two fields converged in the late 1990s, and the usage of formal modeling in these two fields was similar throughout 1975–2000.
Methodological trends in the subfields were fairly steady from 1975 to 2000 and tracked those in political science as a whole. Probably the most striking finding here is the drop in case studies in American Politics to 1% of articles in 1999–2000. This figure comes from a sample of 118 American Politics articles published in 1999–2000; a measurement over such a short period could represent an aberration, but it fits the longer-term decline of case studies in American Politics.
III. Differences Among Journals by Method. The top journals vary in their emphasis on different subfields in ways that are often clearly indicated in the journals' titles and mission statements. They vary just as greatly, however, by their emphases on different methods, even though these emphases are not always clearly stated in the journals' front matter. Tables 6, 7, and 8 present the data from the 1998 Basic Survey on the percentage of articles in each journal using the alternative methods.
One reading of our data is that each of the leading methods is alive and well in at least some of the top journals, and that methodological pluralism and stability have improved since the behavioral revolution. A more troubling reading, however, is that the post-behavioral accommodation among different methodological approaches has taken place not only through some true methodological integration and collaborative work, but also through an unhealthy amount of “dining at separate tables.”
Two examples illustrate the extent to which methodological communities have become segregated despite common subject-area interests. First, two leading journals in comparative politics, CPS and CP, have very different methodological profiles, with the former emphasizing formal and statistical work and the latter emphasizing qualitative work. Second, our survey of 100 articles in International Security (IS), working back from its December 1998 issue, shows that only four used statistics and none used formal modeling. This stands in sharp contrast to the heavy emphasis on formal and statistical work and the relative lack of case study research in JCR, which addresses many of the same substantive issues as IS does.6
The first issue of IS proclaimed that the journal would "accommodate the broad range of methodologies" relevant to international security ("The Editors," Foreword, International Security 1 (Summer 1976): 2). The data indicate, however, that it is the least methodologically diverse of all the journals surveyed.
It is not necessarily unproductive for journals and their editors to specialize in one method or another. More worrisome, however, is the fact that articles in each of these journals typically cite previous articles in the same journal and those in journals using similar methods, but they seldom cite articles from counterpart journals or from other sources using dissimilar methods. We compared the Winter 2000/2001 issue of IS and the December 2000 issue of JCR and found that, out of 599 total footnotes in IS, 46 cited earlier articles in IS but only one cited an article in JCR. Out of 165 footnotes in JCR, nine cited earlier articles in JCR, but there was not a single reference to any article in IS.
Also of interest with regard to the balance of methods among journals is APSR, which as an official journal of the American Political Science Association is in some sense the flagship journal of the field. The data indicate that APSR is, on average, unrepresentative of the work across the top journals by both method and field. In the 1998 Basic Survey, APSR had nearly twice the average proportion of articles on American Politics in the top 10 journals (46% versus an average of 25%) and almost twice the average proportion of articles using formal modeling (44% versus an average of 23%). At the same time, it had about half the average proportion of articles in International Relations (21% versus an average of 37%) and less than one-fourth the average proportion of articles using case studies (10% versus the average of 46%). The clearest difference from other top journals was APSR's relative lack of International Relations research combined with its dearth of qualitative research: only 5% of APSR articles in the 1998 Basic Survey on International Relations used case studies, compared with almost nine times this proportion (44%) of articles on this subject across the top 10 journals. APSR was also unrepresentative in 1999–2000, when only 2% of its articles in any subject used case studies, compared to 30% for the top seven multimethod journals.
These data are consistent with APSR's self-study on the methods and fields of articles submitted to the journal for possible publication (Finifter 2000, 924). Of the 578 articles in American Politics submitted from 1996 to 2000, comprising 37% of the overall submissions, only four, or less than 1%, used "Small N" methods. Articles submitted in International Relations constituted 10% to 13% of the articles submitted each year; only four such articles submitted from 1996 to 2000, or 2%, used qualitative methods. Submissions in Comparative Politics were more reflective of the articles published in other journals, comprising about 23% of the submissions to APSR; of these articles, about 11% used qualitative methods. These data allow competing interpretations, however. It is even possible to infer from them that APSR is accepting articles using qualitative methods at a higher rate than those using other methods. This inference is not reliable, however, because the articles sampled in the present survey do not include articles in Political Theory except for those that use formal models. Also, there may be selection effects at work: because APSR publishes very little qualitative work, perhaps only the very strongest qualitative articles, or those by senior scholars, get submitted to the journal. If one goal of APSR is to be more reflective of other top journals, it will have to achieve a more even mix of submitted articles, rather than publishing a higher proportion of the submitted articles that use underrepresented methods or deal with underrepresented topics.
IV. Trends in Articles Using More than One Method. The surveys also illuminate trends in articles using more than one method. In the survey samples, the proportion of such articles was very steady from 1975 to 2000, at between 15% and 19% of those surveyed. The composition of multimethod work was also consistent over time; the largest share involved a combination of formal modeling and statistics (between 50% and 90% throughout the period surveyed). The combination of statistical and case study methods was the second most common, comprising between 33% and 50% of the multimethod work (except for 1985, when it dipped to 10%). The least common combination was formal modeling and case studies, which during the period accounted for between 5% and 16% of the multimethod work. As there is no obvious conceptual reason that formal models and case studies cannot be combined more frequently, this kind of multimethod work may deserve encouragement, as does multimethod work in general.7
One example of combining formal models and case studies is Robert Bates, Avner Greif, Margaret Levi, Jean-Laurent Rosenthal, and Barry Weingast, Analytic Narratives (Princeton, NJ: Princeton University Press, 1998). For examples of sophisticated recent work that combines formal, statistical, and case study methods, see Kenneth A. Schultz, Democracy and Coercive Diplomacy (Cambridge: Cambridge University Press, 2001) and H. E. Goemans, War and Punishment: The Causes of War Termination and the First World War (Princeton, NJ: Princeton University Press, 2000).
The aim of this survey is to identify the basic approach to, or the general tendency in, the teaching of methodology in American political science graduate programs. More specifically, we are interested in the relative emphasis on the three main methods: qualitative (QL), quantitative (QN), and formal models (FM). To do so, we took as our sample the top 30 departments according to the 1998 ranking. We assembled the data through emails, phone calls, and web sites. After compiling draft charts, we circulated them to all the departments (contacting directors of graduate studies or someone they designated) to solicit corrections. The response rate was 66%. After correcting our data, we recirculated them to all the departments; at that point, only two modifications had to be made.
Our data distinguish between course offerings and required courses. The top 30 departments offered 236 methodological courses, ranging from two at Princeton to 16 at Illinois. The average is eight courses. Forty-seven courses (20%) were general in nature: Philosophy of Science (PoS), Scope and Methods (S&M), and Research Design (RD). The remaining 189 courses were divided among the different methods in the following manner: quantitative (104, 55%), formal models (55, 29%), and qualitative (30, 16%). All the departments offer courses in quantitative methods, whereas 21 of them (70%) offer courses in formal models, and 20 (66%) offer courses in qualitative methods. (Seventy percent offer courses in Scope and Methods, 43% in Research Design, and 27% in Philosophy of Science. This last figure may be understated, since political science students can take courses in PoS offered by departments of philosophy.)
As for methodological requirements, the average number of courses required in graduate programs is three, ranging from zero at Berkeley to seven at Illinois. Among the general courses, only one department (Wisconsin) requires Philosophy of Science, 12 (40%) require Scope and Methods, six (20%) require Research Design, and four (13%) require languages. As for the three different methods, the data are a bit more complicated to present, since departments usually require a set number of courses but leave their content, to varying degrees, optional. Twenty departments (66%) require courses in quantitative methods, and seven more offer them as optional (90% in total). Only two departments (6.6%) require courses in qualitative methods, and eight more offer them as optional (33% in total). Similarly, courses in formal models are required in only two departments (6.6%) and optional in five more (23%).
We cannot rule out some measurement error in having three co-authors survey such a large number of articles. However, our samples are sufficiently large and our measurement criteria sufficiently clear that our survey results merit policy recommendations.
These include the following:
There are some early indications that leading scholars recognize and are moving to address the imbalances evident in our data. Several departments indicated in our curriculum survey that they had recently added a course in qualitative methods or were planning to do so. In addition, Lee Sigelman, the current editor of APSR, notes in his preface to the first issue under his editorship that "the rich theoretical, methodological, and substantive variety of our discipline has not been reflected nearly as well as it should be in our premier research journal" (Sigelman 2002, ix). Sigelman has expanded the board of the journal, reached out to a wider set of reviewers, and actively encouraged the submission of qualitative research. While it is too early to assess fully the success of Sigelman's efforts, the first year of the journal under his editorship included two case study articles in American Politics, an article on qualitative methods, and a lead essay by Robert Jervis, a preeminent qualitative scholar in International Relations.
As with methodological trends in political science in the past, journal editors and department chairs, as well as individual scholars, can make a large difference. As in their own research, these scholars' decisions on curricula and on journal submissions and publications need to be based on evidence rather than assumptions.