
NSF Funding Unbiased, Necessary for Political Science

Published online by Cambridge University Press:  15 April 2003

James E. Campbell
University at Buffalo, SUNY; NSF Political Science Program Director, 1992–1994

William Mishler
University of Arizona; NSF Political Science Program Director, 1982–1984, 1990–1991

FORUM
© 2003 by the American Political Science Association

A recent article in PS (December 2002) by Canon, Gabel, and Patton (hereafter, CGP) purports to assess the utility of external funding for research in political science. Briefly summarized, CGP report that fewer than 30% of published articles in prominent journals acknowledge external support, and only about half of those acknowledge National Science Foundation support. They conclude from this that, “Unlike the natural sciences, political science does not require any significant funding—or even any funding—to conduct valuable research and publish it in the highest quality journals” (748). CGP further report that NSF funding appears to vary by subfield and methodological approach in ways the authors attribute to “NSF bias in the early 1990s favoring quantitative, rational choice, or formal theory approaches and support for American politics research” (749). These charges are not only unjustified by the available evidence, they are irresponsible. The authors have done an injustice not only to the political scientists who served as Program Officers at NSF, but even more to the many distinguished political scientists who have served on the program's review and oversight panels, to the hundreds of scholars the program called upon to assist in evaluating proposals, and to those scholars who received NSF grants during this period whose success CGP denigrate as the product of biased assessment. Moreover, if CGP's charges were to be accepted as true within NSF and Congress, they could jeopardize future funding for political science at NSF. As NSF Political Science Program Officers from 1990–1994, we take these charges very seriously and welcome the opportunity to set the record straight by correcting the serious errors in analysis and inference committed by CGP.

CGP examine the extent of external funding from all sources (NSF and otherwise) for articles published in eight journals and find that the pattern of NSF-funded publications differs from that of non-funded articles and of articles funded by other sources, including the Ford Foundation and the MacArthur Foundation, among others. Despite their judicious caveats regarding alternative interpretations of their findings and the limits of their analysis, CGP very injudiciously and wrongly conclude that their data suggest NSF bias. This charge not only is untrue, it also is unsupported (indeed, it is unsupportable) by their data. As they admit in the course of the analysis, CGP have no information whatsoever on the pool of proposal submissions to NSF (or to most of these eight journals or to any of the other funding sources) with which to calculate relative acceptance rates across subfields. They also have no information whatsoever regarding the merits of the submitted proposals or the reviews of these proposals by others working in the relevant fields. They have no information on the distribution of NSF-funded research reported in other outlets (e.g., books and other journals). Their conclusion of NSF bias depends in part on their using other funding agencies as a baseline for comparison, but these other agencies (which include the SSRC, NEH, and USSR Academy of Sciences!) have their own biases, some of them explicitly so (such as SSRC and NEH). CGP's conclusions in these regards are reckless. Based on what they have presented, it is impossible for the authors to know (or even speculate in an informed way) about NSF bias, and it is irresponsible to draw the conclusions that they have.

The claim that NSF is biased is one of the oldest canards in the profession and has repeatedly been tested and disproved by NSF data (Mishler 1984; Sigelman and Scioli 1987). It also has been examined and rejected in periodic evaluations, both quantitative and qualitative, of the Political Science Program by oversight committees, whose distinguished members, unlike CGP, have access to all of the reviews and proposals, including those that were not funded. The report of the 1998 Committee of Visitors is online at www.nsf.gov/sbe/ses/polisci/cov_report.htm. Based on their extensive, two-day review, the Committee concluded:

There was no detectable bias in the reviews. Most notably, there was no evidence of an “Old Boy's Club” or “invisible college” that favored some people, topics or approaches over others. Indeed, we were struck by the overwhelmingly professional nature of the reviews; only an exceptionally few deviated from this norm and they were so apparent that they had a virtually self-negating character.

Importantly, NSF is the only funding source in political science of which we are aware that evaluates both the substance and procedures of its proposal review process and reports the results to the community.

NSF goes to great lengths to avoid bias and ensure the integrity of its review process—more so than any other foundation or journal and more transparently as well. Proposals normally are reviewed by three to five ad hoc reviewers specializing in the field and, then, by a panel of political scientists representing different subfields and perspectives and serving staggered two-year terms. The mantra at NSF is to spend its limited budget on the most meritorious projects (the best science) that are submitted, regardless of area or approach. It is true that a successful proposal at NSF must demonstrate its scientific rigor (it is, after all, the National Science Foundation and not the National Endowment for the Humanities), but these are generous bounds that encompass a wide assortment of projects and perspectives. It also is true that NSF is prohibited by statute from supporting history and philosophy, which are the province of NEH. The discipline of political science remains divided between science and the humanities. NSF's mandate is to support social science; NEH is supposed to support work in the humanities, including humanities-oriented work on government and politics. (We have never seen a report of NEH activities, review processes, or awards, though we think it safe to say they are "biased" against scientific inquiry, as they should be.) Additionally, unlike other federal agencies, NSF prefers research that goes beyond the application of existing knowledge to social problems and that promises significant additions to theory and fundamental knowledge.

CGP offer equally flawed and nonsensical research in support of their "conclusion" that "valuable research in political science does not require much if any funding" (748). This conclusion is based on the observation that the majority of research in good political science journals did not report any funding. The flaws in the logic here are so astounding as to be embarrassing. In a typical year, the NSF Political Science Program makes about 55 awards (the number was smaller, however, in the early 1990s). Excluding dissertation grants and grants to support various workshops and conferences, such as the Ralph Bunche Summer Institute, the Political Science Program at NSF supports about 40 research grants per year. Assuming that every grant produces three refereed journal articles over a three-year period, and assuming also that all NSF-supported articles are sent only to the eight journals surveyed by CGP (extreme estimates considering that award levels are quite small and that significant amounts of NSF research are published in books and book chapters), NSF-funded research should produce about 120 articles in a three-year period. If the average journal publishes eight or nine articles per quarter (based on CGP's table 1, the average is 8.7), or about 35 per year, then these eight journals should publish about 840 articles in a three-year period. These "back-of-the-envelope" calculations suggest that NSF-supported research could fill a maximum of about 14% of the research space in these eight journals if all NSF-supported articles were published here and only here. This is virtually identical to the percentage of NSF-supported articles in these journals reported by CGP. This means, among other things, that the supply of NSF-supported research is so small that the eight elite journals of necessity must accept far more non-NSF-supported research simply to fill their pages. The small ratio of NSF-supported research in leading journals says absolutely nothing about the quality or competitiveness of non-NSF research compared to NSF-supported research.
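The back-of-the-envelope arithmetic above can be sketched explicitly; the grant and publication counts below are the assumptions stated in the text, not measured values:

```python
# Maximum share of space in the eight surveyed journals that
# NSF-funded research could occupy, given the stated assumptions.

research_grants = 40                 # NSF research grants per year (excl. dissertations, workshops)
articles_per_grant = 3               # assumed refereed articles per grant over three years
journals = 8                         # journals surveyed by CGP
articles_per_journal_per_year = 35   # ~8.7 per quarterly issue
years = 3

nsf_articles = research_grants * articles_per_grant            # 120 articles
total_articles = journals * articles_per_journal_per_year * years  # 840 articles

max_share = nsf_articles / total_articles
print(f"{max_share:.1%}")  # prints 14.3%
```

Even under these generous assumptions, NSF-supported work could fill only about one-seventh of the available journal space, which matches the share CGP actually observed.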

Moreover, the level of NSF-supported research in these eight journals is almost certainly much higher than the CGP "guesstimate." As previously noted, CGP count only NSF support that is formally acknowledged in the article. This is the smallest possible estimate of NSF support, in part because it fails to consider research where NSF support was indirect and almost certainly not formally acknowledged. For example, much of the voluminous work on American elections and public opinion that is published in the top journals would be impossible without the availability of American National Election Study data. The ANES, of course, has depended upon tens of millions of NSF dollars over the years. Yet virtually all of these articles cite only the ANES, and not NSF, in their acknowledgements. Such articles are not included in CGP's "count." Similarly, in international relations, NSF has invested heavily in various large databases over the years, including COW and MIDs among others. The number of published articles in IR that use these data is higher by an order of magnitude than the number that received direct NSF support. Many, if not most, of these publications would have been impossible without the data that NSF provided. NSF also has supported myriad databases whose secondary analysis has been vital to the publication of numerous important articles on public law, public policy, comparative politics (where NSF has supported a very large percentage of the large data projects on post-Soviet, post-Communist citizens, parties, and legislatures), and a host of other areas. While Jim Gibson faithfully acknowledges NSF support in his publications on public opinion in Russia, the many others who have published secondary analyses of Jim's data appropriately cite Jim and his data—not NSF. Yet, without NSF's support, Jim's data would not be available for secondary analysis.

Of course, the indirect contributions of NSF to published research are not limited to the supply of data. NSF also has invested heavily in the development of methods and statistics that are widely used in published work without acknowledgement to NSF, except perhaps by the methods' originators. For example, Gary King's work on maximum likelihood methods, on the treatment of missing data, and on methods of cross-level inference is widely used in publications by diverse scholars who could not have done their work (or done it as well) without these methods but who may not even be aware of the role of NSF support in providing these tools. Similarly, Diana Mutz's current award in support of Time-Sharing Experiments in the Social Sciences (TESS) is providing wonderful opportunities for numerous investigators to generate their own data, without additional cost to them, on a variety of subjects using survey research methods with experimental designs. Much important work is likely to come from this NSF-supported project, most of which probably will acknowledge TESS, but little of which likely will acknowledge the critical role of NSF.

None of this is to deny that important work can be and is done in political science without funding. Political scientists are very innovative in finding ways to do excellent research with little if any support. However, there are a great many projects that could never have been undertaken without NSF support, direct or indirect. And much if not most of the work published without extramural support undoubtedly could have been done even better had the scholars involved had additional resources.

In sum, it appears from the evidence that most of the articles produced by NSF-supported research appear in the leading journals, which could publish even more NSF-supported work were the supply not so limited. Indeed, it is likely that significant portions of the remaining articles in the elite journals depend indirectly on NSF-supported theory, data, or methods. That CGP do not recognize this is testimony to the inherent flaws in their research design.

A simple “mind experiment” can reinforce these points. Consider, for example, how differently the major journals would look without NSF support for political science, direct and indirect. No doubt we would have just as many journals publishing just as many articles. Certainly all of the elite journals would still be filled with research and still would be publishing the (relatively) best work produced by the discipline. But does this mean (as CGP imply) that the quality of research in the profession would be just as good as it is today without NSF support? To the contrary, we believe, the political science landscape would be dismal. Many of our richest databases would be either non-existent or severely limited in scope and duration. Our methods and theories would lag substantially behind where they are today. Moreover, to the extent that economics and sociology were funded by NSF but political science was not, we would expect that research in political science would be even more dominated by the theories and methods of other disciplines than already is the case.

In summary, Canon, Gabel, and Patton are wrong in claiming that there is bias in NSF funding and wrong in shortchanging the importance of NSF funding to the advancement of political science research. The National Science Foundation has one of the fairest, most rigorous, and most transparent peer review systems in all of academe. NSF Program Officers work hard at community outreach in order to maximize the number, quality, and diversity of proposal submissions, consistent with NSF's legislative mandate. Its reviewers and panelists are carefully vetted for conflicts of interest and strongly encouraged to fund the best research regardless of other considerations. As a result, NSF has made major contributions, both direct and indirect, to the development of political science over the past three decades. NSF-funded research has significantly enriched both our theories and our methods. It has increased and strengthened the human capital in our discipline by virtue of its heavy investment in graduate student research and training and its strong support of young investigators; and NSF has contributed greatly to the infrastructure of our discipline by virtue of its substantial investments in both equipment and data. It is no wonder that NSF-supported research is consistently published in the leading outlets, including the most prestigious journals. The Political Science Program at NSF is a valuable asset to the discipline. It welcomes scientifically rigorous research proposals from a variety of perspectives and in all subfields. This was the case in the early 1990s when we participated in the Program's management, and we are certain that it remains the case today. To assert the contrary without any semblance of meaningful data is irresponsible.

References

Canon, Bradley C., Matthew Gabel, and Dana J. Patton. 2002. "External Grants and Publication: Sources, Outlets, and Implications." PS: Political Science and Politics 35 (December): 743–750.
Mishler, William. 1984. "Trends in Political Science Funding at the National Science Foundation, 1980–84." PS: Political Science and Politics 17 (Fall): 846–53.
Sigelman, Lee, and Frank P. Scioli Jr. 1987. "Retreading Familiar Terrain: Bias, Peer Review, and the NSF Political Science Program." PS: Political Science and Politics 20 (Winter): 62–69.