No—this is not what it seems. This is not a discourse on statistics, but it is a discussion of a type of methodology—that of how, when they are writing, political scientists can better approach their audience. Put differently, this article is related to the more general proposition that scholars need to demonstrate not only that they can conduct research but also that they can communicate it effectively to a wide audience. And, although critical of some social science writing, it carries a suggestion for increasing your range of publication options.
Today's political science graduate students are often well versed in statistics, and they feel comfortable talking statistical language; moreover, they know that to obtain a job, they must be able to respond at job talks to questions about methods and statistics, and they often respond in technical language in order to seem au courant. Certainly in presenting papers to other political scientists at professional meetings, they continue to speak in the same way, with full statistical sails flying, as terms like “autocorrelation,” “endogeneity,” and “heteroskedasticity” trip off their tongues, and “logit” and “probit” are a basic part of their lingua franca. Certainly to the extent that these papers are intended for ultimate submission to leading general political science journals, it is important that methodology and statistical analysis be adequately explained and that tables of data—all those beta values and the like—be presented. For some, there seems to be no other way. While perhaps toning down the statistics when teaching undergraduates, political scientists seem to believe that the “one true way” is to let one's statistics fly full force.
But wait a minute! What if a potential audience doesn't understand sophisticated—and perhaps not even “unsophisticated”—statistics? What if those who work in a related field—people who would be interested in the substance of your research—lack training in statistics and might have no more than a master's degree, if that? Does one simply write them off as unable to understand what you as a political scientist have discovered?
This is apart from the related question of whether one should try to reach the lay public. See, for example, the assignment for a basic economics course in which the instructor asks students to use principles of economics but, in writing, to “[i]magine yourself talking to a relative who has never had a course in economics” (Frank 2005). In going on to say that the best papers “typically … do not use any algebra or graphs,” he implicitly uses an approach similar to the one prescribed here and, more important, indicates that the problem drawing our attention also affects other disciplines.
To listen to many colleagues, the implicit answer does seem to be that, at least unconsciously, one does write off those “others.” To remove the statistics from one's research presentations, I am told, would mean “we wouldn't be true social scientists”; the statistics “must be available for readers to understand what we've done.” I wish to suggest otherwise.
To appreciate my argument requires accepting the premise that there most certainly is good social science of value to practitioners working for the government or private agencies, and even to the educated lay public with an interest in politics, policy, and government. That premise applies to a wide variety of journals where practitioners are part of the audience, but most particularly to those which deal with policy analysis in general, specific substantive areas of policy, or aspects of bureaucratic and judicial administration.
If you say, “Well, of course, I accept the idea that research about politics and government ultimately has to inform people practicing politics and governance,” what then? As an editor of a peer-reviewed journal with a primary audience of practitioners, few of whom are trained in statistics, I have some suggestions that I have implemented—perhaps over the anguished screams of some authors, or at least despite their out-of-earshot mutterings of dark imprecations—and which seem to have “worked,” and I have some other observations to share.
It is not my intent here to argue the point that we should be engaging the rest of the world, perhaps to strengthen public policy discourse, although I do not doubt we could have that effect if we communicate clearly with others. It is also not my intent to argue for more “applied research” because practitioners seek studies that address their concerns. It is to argue that even if we continue to engage in “normal science”—in order to satisfy our peers and obtain tenure and promotion—we can make that normal science more useful for practitioners by making it more accessible. To the extent that we do base our research on real-world problems, whether it be disaster mitigation or campaign contributions to judicial elections, there is all the more reason to have the resultant written product speak in terms that are understandable by those who find themselves in the crucible of experience and who are wondering what to do next. However, I assume that most political scientists will continue to do standard basic social science research, and thus I speak primarily about how to reach an “applied audience.”
The principal prescription offered here is two-fold and simple: keep all but the most basic statistics (percentages, ratios, perhaps chi-squares) out of the manuscript and ban all “stat talk.” That means no more mention of words which will quickly make readers' eyes glaze over: no more “logit”—except perhaps in a footnote; no more terms like “dummy variable”; and no more extensive description of what was coded “1” and what was coded “0.” And it also means no tables with mind-numbing numbers that are meaningless to the reader. By mind-numbing, I do not mean simple cross-tabulations, which most readers should be able to follow—but even there, must one always say “bivariate correlation”? My argument also means not presenting every table possible, nor, in those tables, carrying each number out to what a colleague calls the “false precision” of four decimal places. Thus, even when presenting statistics, there can be simpler ways. Someone familiar with logit has suggested that “rather than reporting coefficients and what may be statistical significance, one could present predicted probabilities,” which are far easier to understand.
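To illustrate that last suggestion, here is a minimal sketch, in Python with statsmodels, of one way to report predicted probabilities rather than logit coefficients. The variables (an incumbency indicator, campaign spending, and an election outcome) and the data are invented purely for illustration; they are not drawn from any study discussed here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented example: predicting whether a candidate wins (1) or loses (0).
rng = np.random.default_rng(0)
n = 500
incumbent = rng.integers(0, 2, n)           # 1 = incumbent, 0 = challenger
spending = rng.normal(100, 25, n)           # campaign spending, in thousands
true_logit = -4 + 1.2 * incumbent + 0.03 * spending
win = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Fit the logit model "in the back room."
X = sm.add_constant(pd.DataFrame({"incumbent": incumbent, "spending": spending}))
model = sm.Logit(win, X).fit(disp=False)

# Report predicted probabilities at substantively meaningful scenarios
# instead of raw coefficients.
scenarios = pd.DataFrame({
    "const": 1.0,
    "incumbent": [0, 1, 0, 1],
    "spending": [75, 75, 125, 125],
})
scenarios["predicted_prob_of_winning"] = model.predict(scenarios)
print(scenarios)
```

The resulting scenario table can be described in a sentence or two of plain English, with the coefficients themselves left in the back room.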
If complicated and sophisticated statistical analysis has to be performed so you can be sure of your findings, you should, of course, still carry it out. But performing it and expounding on it in technical language are not the same; the former does not compel the latter. Thus results can, and should, be presented in clear English prose, not in “stat talk.” A social scientist who has long worked in the business world (and thus has clients who are practitioners, albeit of a different sort) recently remarked that clearly stated findings can be “informed by backroom statistical tricks of regression, multidimensional scaling, and the like” without presentation of the latter. Another practitioner who has a background in academia but who now works primarily in the public world talks of using factor analysis “to screen hundreds of variables to ensure that I don't miss an important interrelationship” but of presenting only “a table or two that shows the final result.” Indeed, one of these practitioners observes, “visuals” are important—but such “graphics” do not equate to lists of correlation coefficients. This point is reinforced by a colleague who observes that we jump too often to more sophisticated models when we could tell the same story—to more people—in frequency distributions, cross-tabulations, and scatterplots.
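As a concrete illustration of that last observation, the sketch below, again in Python (using pandas, with invented data and category labels), shows how a simple cross-tabulation reported as row percentages can carry a finding on its own, without any model output.

```python
import pandas as pd

# Invented data: how judges in several regions reached the bench.
df = pd.DataFrame({
    "region": ["Northeast", "South", "South", "West",
               "Northeast", "West", "South", "West"],
    "selection": ["elected", "appointed", "elected", "appointed",
                  "appointed", "elected", "elected", "appointed"],
})

# Row percentages: within each region, what share of judges
# were elected versus appointed. No model, no coefficients.
table = (pd.crosstab(df["region"], df["selection"], normalize="index") * 100).round(0)
print(table)
```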
To those who would retort that a manuscript of the type described, if sent out for review, would appear to have been written without the required statistical work having been undertaken, the response is that one can make available, to the journal editor and to reviewers, a memorandum discussing the statistical work. Doing so also signals to an editor that you understand the journal's audience, something too few authors appear to have taken the trouble to learn.
What, you say, about the social scientists in the journal's audience who would wonder about the statistical analysis and wish to see the statistical results in all their “guts and glory”? A simple solution, like having business clients “only hear [statistical] details if they ask,” is to provide footnotes. For methods, one could use, “Further explanation of statistical methodology is available from the author on request,” and, for results, it would be appropriate to say, “A full set of statistical tables is available on request from the author.” A less severe alternative, which might be treated as the minimum, is to place discussion of the measurement of variables and of the basics of one's methodology in an appendix, thus still keeping the body of the text clear of “statistical debris.” In addition, in this increasingly electronic world, the author could post the data on a website—the author's or one maintained by the journal, an arrangement analogous to news media indicating that “more on the story” is available online.
There is still another, important possibility, suggested by a senior colleague who, unlike the author, is a sophisticated user of statistics. His advice to junior colleagues is that if you must “strip” the statistics to publish in a journal with a practitioner audience, you should prepare two manuscripts: one, as suggested here, with little or no statistical talk, for the practitioner-oriented journal, and another, reporting the same study but revealing the methodology and all the statistics, to be sent to mainline political science journals. Thus not only do you satisfy the editor and readers of the former journal who, benighted souls that they are, “don't want stats,” but you get a “twofer”—two articles from the same research, with the resulting increase in citations, particularly if the journal aimed at practitioners reaches a larger readership.
As an extension of this advice, someone who has written the occasional non-technical article observes that such articles can be developed simply to describe interesting trends; they do not necessarily have to contain a strong theoretical rationale, nor does one have to deduce hypotheses directly from theory. They can, in short, be more problem-based and/or data-derived without having to flow from theory; they allow presentation of data in a “practical” sense, perhaps while one is still struggling with the theoretical aspects of the project.
We should, however, note that some members of the profession dislike it when an author publishes two versions of the same article, even when they appear in very different formats for distinct audiences in different types of peer-reviewed journals or in a peer-reviewed journal and an edited volume. While this concern should be acknowledged, the “twofer” device seems acceptable to many. That being said, going much beyond the two versions suggested here is generally unwise, regardless of differences in audience; in short, don't slice the onion too many different ways, or you will cry.
Another, related prescription can be added to the two-fold one presented here of limiting statistics and banning statistical language: Pay more attention to developing the “So what?” point of an article. Many articles seem to be written primarily (or even exclusively) for those in a subfield or, indeed, in a small corner of a sub-subfield. Yet other audiences could be reached if more were done to explain why findings matter and to say more about their implications, and that is not difficult to do. In some instances, it can be done simply by devoting a little attention in the first few paragraphs, and again in the concluding statement, to what the research means for the intended audience, for example, by stating that “This study of changes in jurisdiction has implications for court administration because of the shifts it brings in courts' caseloads.”
Moreover, many readers of this article undoubtedly have had the feeling that the “Discussion” portion of manuscripts is often small—even minuscule—in relation to the long sections devoted to theory or literature review, the extended treatment of variables, data, and measurements, and the relatively short set of findings that precedes the “Conclusion.” With the space saved by limiting statistics and the complicated talk about them, one would have room for an expanded discussion section, which would make the article far more accessible to others in or out of that subfield, and perhaps even to people clear across the discipline and those outside it.
Although my focus here is on the problems posed by statistics in journal articles, the general argument about making one's work accessible to a broader audience can be extended to the language we use for discussing our theories, our “theoretical jargon.” Thus, just as I argue we should avoid “logit” and “dummy variable,” we should likewise avoid “Foucaultian” and “post-modern,” as well as “rent-seeking” and “incentivize” (recently heard at a conference). A reviewer has generously supplied “Hamiltonian or Madisonian,” “the median voter,” and “constructivist perspective.” A story helps make the point: Recently, a practitioner reviewing a manuscript brought me up short by asking what was meant by a “nontraditional nominee,” a term those in the law-courts area use regularly but which apparently means nothing outside our little group. By the way, it means a woman or a member of an ethnic or racial minority; in short, someone other than a “traditional white male nominee.”
What may be a “catch-phrase” for us, showing we are “in the know,” may alienate our audience by suggesting that we wish to fence them out, and it may also limit our readership to a small circle of people who, for example, love or hate constructivists. This problem was noted recently by Jacobs and Skocpol (2006, 28): “Increasingly insular and self-referential bodies of research emerged, with little or no relevance to broader public debates.” Of course, some people outside academia like “big words” that force the rest of us to go hunting in the dictionary, but those folk are beyond reach. Perhaps the best-known judge given to such language is Bruce Selya of the First Circuit, who regularly composes clauses like “After careful consideration of the relevant legal authorities and perscrutation of the amplitudinous record …” (Ungar v. Palestine Liberation Organization, 402 F.3d 274, 276 (1st Cir. 2005)).
There certainly are times when a technical term, like “random sample,” saves much discussion, but, even when using terms with self-evident import, like “principal-agent,” we should quickly supply clear, simple definitions. That is part of a more general point—that our essential arguments should be available to a wide audience. The introduction of an article, stating the topic, and the conclusion should be accessible to the reader who is “completely non-statistical.” This should be true not only of journals with a large component of practitioners in their audience but also of articles in journals—for example, the American Journal of Political Science—in which statistical analysis predominates.
The suggestion that we use accessible language leads to a further suggestion of potential advantages beyond additional publications: we may benefit by obtaining new ideas that improve our research. Many social scientists sincerely seek valid measures and more effective specification of the measures they use (“operationalization,” if you prefer). Presenting their work to another set of political scientists is likely to reaffirm use of the same measures, but feedback from different audiences might provide previously unconsidered ideas for measures. For example, at a recent conference, lawyers and judges made valuable suggestions to political scientists studying judicial selection as to which aspects of the selection process require more attention and measurement if a complete picture of the process is to be provided. Feedback from such an audience can also be helpful even if it does no more than say, “Yes, you have it right; you are looking in the right place.” Moreover, unless one simply doesn't care whether one's concepts and categories comport with real-world usage, those who are in the trenches can tell you quickly whether you are working with categories that make sense in terms of their workaday practice.
So there you have the pitch, relatively short if not so sweet: keep it clear, and keep it non-technical. If you want your research to be read, understood, and accepted by both scholars and practitioners, you need to be willing to accommodate what potential readers bring to your work—their “skill sets,” if you will—so that, having the potential to benefit from your research, they can in fact do so.
Biography
Stephen L. Wasby is professor emeritus at the University at Albany, SUNY, and visiting scholar at the University of Massachusetts Dartmouth. He is editor-in-chief of Justice System Journal. E-mail: wasb@albany.edu