
Is This the Curriculum We Want? Doctoral Requirements and Offerings in Methods and Methodology

Published online by Cambridge University Press:  06 August 2003

Peregrine Schwartz-Shea
Affiliation: University of Utah


Type: SYMPOSIUM

Copyright: © 2003 by the American Political Science Association

Ask political scientists what it is they study and you will get a range of answers reflecting the breadth of research communities affiliated with the discipline. (As of 2002 the APSA has 36 organized sections and 50 related groups.) For a variety of purposes, the fact that we share little more than an institutional label may not matter much; but for the purpose of doctoral education, this very institutional label forces us to grapple with what we have in common. The years devoted to doctoral education represent the key point of socialization to the academy at large, and the profession in particular, and program-wide doctoral requirements enact departments' judgments about what constitutes a “political scientist” above and beyond the substantive fields of American politics, comparative politics, international relations, and political theory.1

These are the major fields offered by the 57 programs in this sample: American Politics 96.5% (55); Comparative Politics 94.7% (54); International Relations 96.5% (55); Political Theory 78.9% (45). A “major field” means that students may write a dissertation in that field. Some programs offer a field, often methodology or political theory, only as a minor field. Other major field offerings include Methodology 43.9% (25); Public Policy 31.6% (18); Public Law 21.1% (12); Public Administration 17.5% (10); Formal Theory 15.8% (9); and Political Economy 10.5% (6).

What are these requirements? In this paper, I report aggregate statistics on doctoral program requirements and offerings based on my extended study of 57 doctoral programs in the United States (Schwartz-Shea 2001). For each department, three sets of questions guided the investigation: (1) Are there any program-wide requirements? Or, are requirements decided by each field? (2) What are the curricular definitions of “methods” and “methodology?” That is, are “methods” courses exclusively quantitative or are courses in “qualitative” methodology offered or required as well? (3) To what extent are philosophy of science and the scope and/or history of the discipline offered or required at the doctoral level?

This evidence base provides a starting point for addressing the title of the paper: Is this the curriculum we want? More specifically, to what extent is there a core to the discipline of political science that is transmitted to its future scholars? Does this core transmit what is most important or most central to the discipline? Does this required core, as well as the additional offerings in methods and methodology, prepare future scholars to understand the range of approaches that members of the discipline find important for the study of politics? In what follows, I first describe the data and methodology of the study, offer background relevant to understanding curricular choices, and then present the aggregate statistics. In the final sections, I discuss the findings and then offer an argument for methodological pluralism inclusive of qualitative research based on an interpretive epistemology.

Data Set and Methodology

Fifty-seven doctoral programs were selected based on the criteria of graduate program size and three sets of rankings:2 the 2001 US News and World Report (US News) ranking,3 the National Research Council (NRC) ranking (as published in PS, June 1996), and the ranking system of Ballard and Mitchell (1998), also published in PS. The sample includes 43% of the 132 doctoral programs identified in the 1998–2000 APSA Graduate Program and Faculty Guide (pp. 351–352) and 90% of the top 50 NRC programs. It also includes approximately 61% of the graduate student population as estimated using the program size numbers from the Guide.4

Debates over the validity of program rankings are intense, with some defending reputational systems, others championing “objective” publication-based ones (e.g., Ballard and Mitchell 1998), and still others questioning the whole enterprise. For additional discussion, see Schwartz-Shea (2001, 11). Whatever one's views on rankings, this selection of programs covers a substantial portion of disciplinary programs and the doctoral student population. As important, the statistics reported here depend on judgments about precise program rankings only when statements are made, for example, about “the top ten programs.”

According to the web site active during the period of data collection (http://www.usnews.com/edu/beyond/gradrank/gbpolisci.htm), the 2001 US News Graduate Rankings for Political Science were based on data from 1998.

In the Guide, the figure for the “Number of Students Now in Ph.D. Programs” was not available for five of the 57 schools used in the study. These numbers were obtained during the June 2000–August 2001 data collection period of this project. To calculate the percentage, I used the denominator given in the Guide, “Totals Adjusted for Missing Data,” of 7,765.

Small program size accounts for the five top-50 NRC schools not included in the data set, whereas the distinct ranking system of Ballard and Mitchell accounts for the selection of particular programs ranked lower or not ranked in the NRC and US News systems. For the list of doctoral programs included and their associated codes, see Appendix A.
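As a quick arithmetic check on the sample figures just given (using only numbers reported above; the student headcount covered by the sample is implied rather than stated):

\[ \frac{57}{132} \approx 43\%, \qquad \frac{50 - 5}{50} = 90\%, \qquad 0.61 \times 7{,}765 \approx 4{,}737 \text{ students.} \]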

For each program, two primary sources were used: (1) department program descriptions and course listings from university bulletins available through the database CollegeSource; (2) department web sites. If program information was incomplete or there were inconsistencies between these two sources, email communication with staff and faculty was used for clarification. In a few cases, graduate program guides were obtained by mail. These sources were used to create an extensive record for each program: 121 variables as well as a template-based description including idiosyncratic descriptors and synthetic judgments, e.g., the likely implications of particular concatenations of requirements. Initial coding was conducted by a research assistant5 and was then checked and corrected by me after completion of the template-based description. Completing and verifying the data for each program required approximately two hours from the research assistant and four hours on my part.

Thanks to doctoral student Hasan Kosebalaban for his careful work.

For the most part, coding was straightforward, but two areas merit discussion. First, the meaning of “qualitative methods” must be carefully defined because the term is used in a variety of ways in the discipline. To capture this diversity, “qualitative methods” should be understood as an umbrella term encompassing positivist and interpretive epistemological perspectives on the access to (e.g., interviewing, selection and construction of cases) and analysis of (e.g., content analysis or semiotics) “words.” Courses in qualitative methods cover topics like field research, in-depth interviewing, textual analysis, comparative case studies, ethnography, historical research, and assorted interpretive techniques (from metaphor analysis to deconstruction). Courses in “quantitative methods” cover standard statistical topics (descriptive statistics and measures of association) and forms of multivariate analysis as well as experimental methods, survey research, and econometrics. Though the “quantitative/qualitative” terminology is useful for the purposes of curriculum assessment, it has been criticized on several grounds.6

Authors as diverse as King, Keohane, and Verba (1994, 5) and Flyvbjerg (2001, 196) decry the distinction as spurious, though on quite distinct grounds. For further discussion, see Schwartz-Shea and Yanow 2002 (480–2).

I agree with much of this criticism and admit that the construction of these crude categories of courses contributes to the reification of the misleading “quantitative/qualitative” distinction. Nevertheless, for the purposes of curriculum review, these categories provide a reasonable means to discern patterns that are not particularly subtle.

A second coding issue involves the extent of program coverage of particular topics, such as qualitative methods or philosophy of science. Such content may be covered in a “stand-alone” course focusing on the particular topic or in a “catch-all” course that covers a smattering of topics. Catch-all required introductory courses are common, an attempt to cover some combination of philosophy of science, scope of the discipline, history of the discipline, design, and methods. The coverage of stand-alone courses can be inferred from course titles and catalog descriptions. Identifying and inferring content from catch-all courses is tricky because they have a variety of titles (from “Scope and Methods” to “The Nature of Political Science” to “Design and Methods”) that may or may not match the content of the catalog descriptions. In a few such cases I was able to obtain syllabi. Though syllabi are better indicators of instructor intent, catalog descriptions formalize the general outlines of course content and thus may guide future course instructors.

Inferences about precisely what is covered are less reliable for catch-all than for stand-alone courses, though for any particular content, e.g., philosophy of science, coverage will obviously be greater in stand-alone than in catch-all courses. For the purposes of counting whether particular content was required, I chose a generous decision rule: mere mention of the relevant content in the course description meant the program received credit for covering the topic. Thus, the depth of coverage of specific content is overestimated in the areas of history of the discipline, philosophy of science, and qualitative methods. In addition, the data on program requirements are generally more reliable than those on course offerings, as new courses are often taught under temporary specialty catalog numbers before achieving individual catalog titles and numbers, and courses in the catalog may not have been taught in years.7

The appearance of new courses is particularly significant when it comes to qualitative methods; i.e., my descriptive study has a moving target due to the Perestroika movement (which began in October 2000) within the discipline. For example, in response to the Perestroika email discussions, graduate students at Yale University requested changes in the methods offerings of their program. However, given the substantive complexity and collective action problems involved in changing departmental requirements, it seems likely that the descriptive statistics reported here are still accurate within a few percentage points.
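For concreteness, the generous decision rule described above can be expressed as a short sketch of the counting logic. This is a hypothetical illustration only: the keyword lists and the sample catalog entry are invented for the example and are not the study's actual coding instrument.

# A minimal sketch (hypothetical) of the "generous" coding decision rule:
# a program is credited with covering a topic if any course description
# in its catalog merely mentions that topic.

TOPIC_KEYWORDS = {
    "philosophy of science": ["philosophy of science", "epistemology"],
    "qualitative methods": ["qualitative", "ethnography", "interviewing"],
    "quantitative methods": ["statistics", "regression", "econometrics"],
}

def program_covers(topic, course_descriptions):
    """Credit the program if any description mentions a topic keyword."""
    keywords = TOPIC_KEYWORDS[topic]
    return any(
        keyword in description.lower()
        for description in course_descriptions
        for keyword in keywords
    )

# Example: a catch-all "Scope and Methods" description earns credit for
# philosophy of science even if actual coverage is a single week.
catalog = [
    "Scope and Methods: survey of the philosophy of science, "
    "research design, and introductory statistics."
]
print(program_covers("philosophy of science", catalog))  # True

Note that the sketch reproduces exactly the overestimation discussed above: a catch-all description earns full credit for a topic it may treat only briefly.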

Given the research questions I posed, the data set provides a strong evidence base for answering descriptive questions at a rough level of precision. Three caveats should be kept in mind. First, the data set is cross-sectional so that longitudinal inferences are not warranted. Second, a major premise of the study is that course requirements and offerings reflect substantive judgments about the value of particular content in the doctoral curriculum. But for a variety of reasons (from problems of collective action to institutional constraints), the graduate curriculum does not perfectly reflect what members of the discipline most value. The third issue concerns the relative importance of curriculum in doctoral education; a study of curriculum cannot capture the significant roles of graduate advising and mentoring, much less the informal interaction and socialization that occurs over a multi-year process. Despite these caveats, this study provides a more detailed and complete picture of doctoral education in political science in the United States than heretofore existed.

Requirements, Field Neutrality, and Offerings

Requirements may apply only to students majoring or minoring in a particular field or to all students admitted to the program. A program (or for that matter a field) requirement can take one of two general forms: competency or specific course requirements. The former is the more flexible form because students may fulfill the competency by using prior course work, taking an examination, or perhaps by taking a course or two from a specified list or of a specified type inside or outside of the department. A competency approach to requirements allows for diverse student background and interest and husbands faculty teaching resources. The presumption is that how the competency is achieved matters little; e.g., language requirements are competency based. In contrast, the requirement that students take specified courses puts a premium on uniformity of content for all students in the program—producing a “cohort experience” of the material; faculty teaching resources are devoted to the topic and to the socialization process involved in seminar exchanges. These two approaches place different emphasis on flexibility to meet diverse student needs and interests versus assuring uniformity in substantive content and in professional socialization. Though both competency and specific course requirements communicate what a faculty values, specific course requirements are especially powerful in this regard because of the resources they consume, their role in the socialization process, and what they reveal about the core of a discipline.

Ideally, program requirements should be as “field neutral” as practical; that is, no single field should be so burdened by a program requirement that students choosing that field take considerably longer to complete the degree. For example, language requirements may be essential to those with a major field in comparative politics but less so to those whose major field is American politics. One response to this difficulty is a research “tools” requirement allowing those in comparative politics (or political theory) to choose a language while those in other fields, say formal theory or American politics, may choose training in statistical-quantitative methods. The principle of field neutrality supports this “tools” approach,8 so that those who favor language as a program requirement must articulate the worth of language study for all fields. If taken to its extreme, however, the principle of field neutrality implies that there should be no program requirements at all, only field requirements.

The language of “tools” and “toolkits” is often contested by those who want to emphasize the connections between “methods” and “methodology” and between methodology, ontology, and epistemology.

Course offerings demonstrate faculty capacity and interest. I have analyzed elsewhere the considerable program variation in regard to program offerings (Schwartz-Shea 2001). That variation, however, is not related in any simple way to department size9 or to NRC ranking. Thus, it would be misleading to say that more highly ranked programs offer “better” requirements, structure, or offerings. For example, while it is the case that some departments emphasize quantitative methods while others emphasize a balance between quantitative and qualitative methods, highly ranked programs occur in both of these categories.10

It is the case that NRC ranking is associated with faculty size. Average faculty size for the 57 programs is 32 (with a standard deviation of 10). The top 19 NRC programs average 40 faculty (s = 10), the middle 19 NRC programs average 29 faculty (s = 7), and the 19 programs that are unranked by NRC or ranked in the lowest third (of this selection) average 28 faculty (s = 11). But analysis shows that programs with fewer faculty still manage to offer, at a comparable level, those courses which are, historically, an accepted part of the discipline, i.e., philosophy of science, scope of the discipline, research design, and quantitative-statistical methods. In contrast to that set of courses, the ability to offer specialty methods courses is negatively related to ranking/faculty size, and this pattern is even clearer in the offerings of game theory and qualitative methods courses. Thus, doctoral students attending larger programs are likely to have a greater array of possibilities when it comes to those “newer” emphases within the discipline. Yet, when substantive content is widely accepted, programs with fewer resources still find a way to offer it. For example, 89% of the top 19 programs (average faculty size of 40) offer three quantitative-statistical courses whereas fully 58% of the lower-tier schools (average faculty size of 28) manage to do so.

See especially Table 6, page 71 (Schwartz-Shea 2001).

Findings

Table 1 reports aggregate statistics for department requirements including language competency. As of 2001, only 16% of departments value a language competency sufficiently to make it a program-wide requirement. The influence of the “tools” conceptualization is evident in the 16% of programs that have chosen the field-neutral approach of allowing a choice between language and statistics. It is important to note that the “tool” alternative to foreign language is conceived of exclusively in quantitative terms. This preference is echoed in the percentages requiring quantitative and qualitative methods, a lopsided 66% and 9% respectively. Two substantive areas that might have been expected to gain more discipline-wide support are philosophy of science and scope/history of the discipline. Both are inclusive of all fields and philosophy of science is notable as an area that brings together interests shared by political theory and the empirical fields. Nevertheless, more than half of the discipline does not value these areas sufficiently to require them of all students. In sum, a key inference from Table 1 is that the closest political science comes to a core curriculum, to be transferred from generation to generation, is quantitative methods.

Table 2 presents another way to address the question of whether there is a core transmitted to all doctoral students in the discipline. This table shows the percentage of departments requiring specified courses for all students in the program, from no courses required to as many as four, five, or six. It reveals considerable variation in departmental judgments about the need for program structure: 30% require no specific courses, 33% require one or two courses, and 37% require three or more courses. Significantly, there is a weak relationship between the number of courses required and NRC ranking: only two11 of the top 20 programs require three or more courses, and seven12 of the top 10 programs require no courses. Thus, a significant proportion of the most highly ranked programs has given up on the notion of a substantive core that should be transmitted to all doctoral students regardless of field. For those programs with requirements, as shown in Table 1, a course in quantitative methods constitutes the core, or a part of the core, curriculum.

Programs 36 and 43.

Programs 2, 7, 17, 29, 42, 47, and 56, though both 42 and 56 require field or research seminars.

Though the dominance of quantitative methods in program requirements is clear from this evidence, it is also important to examine program offerings. Table 3 shows the number of course offerings in seven substantive areas for the sample of 57 programs. Examining “first course” coverage in Table 3 indicates significant coverage in all the substantive areas except qualitative methods. In other words, 70% or more of programs have at least some coverage of philosophy of science, scope/history of the discipline, research design, quantitative methods, specialty methods offerings, and game theory. It should be emphasized, however, that the depth of coverage in philosophy of science and scope/history of the discipline13 is overestimated by the coding decision rule that credits coverage in catch-all courses. Second courses are stand-alone courses, and these percentages are considerably lower.

Judging by titles, stand-alone scope and stand-alone history courses are quite rare. If this content is covered at all, it is in a catch-all course such as “Scope and Methods,” in which the emphasis is often on methods. History is far less often a part of the curriculum. Indeed, the total number of courses with significant emphasis on the history of the discipline is between three and six, whereas the “Scope and Methods” title is quite common, as indicated by the 70% in Table 3.

Like the counts for philosophy of science and scope/history courses, the count for the first qualitative methods course on the books (44%) is overestimated by the decision rule that credits coverage in catch-all courses. The percentage of programs offering a stand-alone course is considerably lower, at 2%. Neither figure, however, includes titles indicating a field emphasis, e.g., “Textual Analysis in Political Theory” or “Comparative Methods.” When such courses are included in the category of qualitative methods, the percentage of programs offering a first qualitative course rises to 65%. Though this percentage is considerably higher, course titles matter because a field-specific title narrows the range of clientele whereas a general title invites students of any field to consider the method as applicable. In contrast to qualitative courses, the titles of quantitative courses are rarely field-specific.

Another way to think about the differences in course offerings in quantitative and qualitative methods is to count the total number of offerings (i.e., the sum of multiple offerings per program). When this is done, the ratio of quantitative to qualitative courses is seven to one. Including field-specific courses reduces the ratio to four to one. Though aggregate statistics do not reveal what is available to students in particular programs, this aggregate imbalance is further evidence of the dominance of quantitative methods in the curriculum. As important, specialty methods courses are predominantly quantitative (survey methods and experimental research) whereas there is only one course14 on interviewing, a method common across the fields.

Program 36.

Discussion

Is this the curriculum we want? I will hazard the guess that a great many faculty would not judge the aggregate curricular patterns reported above as adequately preparing future scholars to understand the range of approaches that members of the discipline can and do use in the study of politics. The effects of the dominance of quantitative methods, verified here at the curricular level, have been debated for some time. Notably, even those who agree with a requirement in quantitative methods criticize the dominance of quantitative methods as lacking field neutrality (e.g., Van Evera 1997, 3), as contributing to the marginalization of political theory in the discipline (e.g., Walsh and Bahnisch 2000), and as producing methods-driven, trivial research (e.g., Theodoulou and O'Brien 1999, 10). What the evidence reported here also reveals is the extent to which highly ranked programs seem to have withdrawn from these debates and taken recourse in a let-the-fields-do-what-they-may structure. Such department decisions contribute to the “disciplinary fragmentation” (Easton and Schelling 1991), “excessive specialization,” and “lack of respect” (Jervis 2001) that characterize the discipline.

One other significant effect needs to be noted. A quantitative methods core, and the language of variables and hypotheses it engenders, elides huge areas of non-positivist research visible in other social sciences yet still highly relevant to the discipline. In particular, doctoral students interested in qualitative methods of the interpretive epistemological persuasion find very little in the curriculum to satisfy their needs. Moreover, when programs lack a philosophy of science requirement, these systematic interpretive approaches to research are misunderstood at best and, at worst, denigrated as unscientific and not appropriate for researchers in the discipline of political science. Whatever one's ultimate position on epistemological issues, graduate students' capacity for autonomous choice depends on a familiarity with the basic alternatives. Yet many students taking a required quantitative methods course will lack this familiarity, given the low prevalence of stand-alone philosophy of science courses and the fact that less than half of the programs require even catch-all coverage. Without this basic grounding, such epistemologically unaware researchers have no basis for cross-specialty communication with researchers in a variety of fields who use interpretive approaches, including law and politics (Brigham 1996), theories of bureaucracy (Ferguson 1984), public policy (Yanow 1996), and budgeting (Czarniawska-Joerges 1992).

The lack of respect in cross-specialty communication has been summarized by Lisa Anderson in especially colorful terms. She writes:

Judging from the way most American doctoral students are trained today, disciplines are as much gangs, with handshakes and colors, initiation ceremonies and secret passwords, as they are research traditions. Their members are jealous of their territory and quick to resort to “trash talk” when confronted with the work of their rivals (Anderson 2000, 8).

Whether such a pessimistic reading of the status quo is warranted, it seems clear that the time has come to reconsider the nature of doctoral education in the discipline. I first offer three general areas for debate and then more specific arguments about the teaching of quantitative and qualitative methods.

First, to belabor the obvious, departments should re-evaluate their graduate requirements to engage the question of what makes political science a discipline rather than a collection of fields. Second, the debate should be conducted with a self-consciousness concerning the implicit stereotypes about the natural sciences, the social sciences, and the humanities that often imperil genuine communication across epistemological divides. Understanding the constructed nature of disciplinary divisions and their on-going evolution (Wallerstein 1999) may help to loosen the imagination of faculty to consider how to construct a social science for the twenty-first century. This discussion should include attention to the role of philosophy of social science in the curriculum. What should a program-wide requirement in philosophy of social science look like? When was the course, if offered, last updated? Could team teaching improve departmental communication across epistemological divides, thereby providing a model of such communication for graduate students? Third, are there ways to re-conceptualize the relationship between political theory and the empirical fields so that a necessary division of labor does not stultify our understanding of political life? Just as there are currently debates about the field divisions between comparative politics and international relations or whether American politics is better conceptualized as a part of comparative politics, conversations about this divide can challenge the standard approach to curricular design.

Finally, the debates must call into question taken-for-granted assumptions about quantitative and qualitative methods. A typical argument for the status quo is that a quantitative requirement is necessary, as are the two- to three-course sequences, because quantitative methods are particularly difficult for students. Social science students, even doctoral students, often avoid mathematical topics and hence training must be required and, ideally, it should be intensive. Moreover, this required, intensive training is warranted—so the argument goes—because of the general applicability of statistical research tools to political phenomena. Qualitative methodologies, in contrast, are easier because they are word-based, less generally applicable, and more task dependent. Precious curricular hours should be devoted to quantitative methodologies so students receive the necessary training under faculty guidance. Qualitative methodologies can be picked up on one's own as needed for particular research projects.

This argument cannot be addressed in sufficient depth here to convince a skeptic but the general lines of a rebuttal can be laid out. Are quantitative methods really more difficult to learn than qualitative methods? Though it may be the conventional wisdom that “math is hard,” such a claim constitutes insufficient grounds for the imbalance in requirements and offerings of the curricular status quo. An obvious question is, hard for whom? Clearly, there are those for whom mathematics is difficult just as there are those for whom it is easy. And there are those who are adept at the manipulation of words and those who find that task extremely frustrating. Given that the basis of ordinary least squares regression is the formula for a straight line, perhaps regression could be learned on one's own and the curricular hours devoted instead to the subtleties of metaphor analysis. The point is that our understanding of these pedagogical issues is asymmetric at this time—a function of the experiences of those who teach quantitative methods with comparably little input from those who teach qualitative methods.
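To spell out the formula alluded to above (a standard textbook formulation, included here only for illustration): bivariate ordinary least squares fits the line

\[ y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \]

choosing the intercept and slope that minimize the sum of squared residuals,

\[ \hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}. \]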

It is indeed the case that qualitative methodology is more diverse than quantitative methodology, encompassing a wide range epistemologically (from positivist to interpretive) and in terms of specific techniques of data access (e.g., interviewing, ethnography, field research) and data analysis (e.g., deconstruction, metaphor analysis). But this is not to agree that quantitative tools are the “more general” or “more useful” if what is meant by that is “more applicable to a wide range of phenomena.” As the Chronicle of Higher Education's regular feature “Deconstruct This” attests, it is possible to produce “readings” of a wide array of events, texts, and policy documents. For those allergic to deconstruction, let me be clear. Though methodologies borrowed from the humanities can be useful in the social sciences, that does not mean they will or should be used in the same ways. (In particular, there are different attitudes toward evidence in the social sciences as compared to the humanities.) Like quantitative analyses, qualitative analyses can be systematic, rigorous, and subtle. And like quantitative analyses, qualitative analyses can be superficial, misapplied, and a waste of time. Rather than being subject to a priori judgments about their worth, qualitative methodologies need to be more widely recognized as useful to the study of politics and thus deserve to be taught as part of what political scientists do. As important, qualitative methodologies may be indispensable for accessing the kinds of “local or practical knowledge” that quantification overlooks or erases (Schmidt 1993; Scott 1998; Flyvbjerg 2001).

The Challenge and Promise of Methodological Pluralism

Methodological pluralism means no a priori methodological commitments, only a commitment to addressing substantive questions using the conceptual and methodological resources most appropriate to answer those questions. Appropriateness should be the most important standard for judging quality research, not whether the methods “are what the field uses” or whether the methods are “new” or “advanced.” That said, methodological pluralism need not mean, as Laitin (forthcoming) suggests, that scholars do not seek to improve specific techniques: experts in quantitative methodologies have invented improved methods for dealing with dichotomous variables, pooled time-series data, the sampling of proportionally smaller populations, and the wording of survey questions; experts in qualitative methodologies have invented new methods for dealing with the role of the self in ethnography, for systematizing the development of access to research sites, and for working with qualitative software in the development of themes from in-depth interviews. Methodological pluralism does mean avoiding the a priori privileging of particular approaches that is evident in the curricular status quo.

If we endorse this meaning of methodological pluralism, if we give up on the “quantitative core,” how are we to speak to each other across the epistemological and ontological divisions that crisscross the discipline? It is the case that positivists have a shared language of variables and hypotheses, and their standards of reliability and validity do provide a means of communication and a means for judging some of the significant problems in the discipline. Recently, there has been a concerted effort to extend these standards to qualitative research, e.g., King, Keohane, and Verba (1994) and Brady and Collier (forthcoming). Qualitative research, however, should be understood to include other word-based approaches grounded in an interpretive epistemological position. Interpretive researchers have developed appropriate languages and standards for these methodologies (Lincoln and Guba 1985; Erlandson et al. 1993; Brower et al. 2000) though these standards are not well known in political science. Thus, as Cartwright (1999) admits, there are no common standards for judging quality research beyond “use the method that allows you to answer your research question.” And this difficulty is further exacerbated by the often different goals of positivist and interpretive researchers. Whereas positivist qualitative research emphasizes causality, interpretive qualitative research emphasizes meaning-making—focusing on the interpretive acts of both situated participants and researchers, rather than seeking to present context-free generalizations.

It is undeniable that communication between those endorsing positivist research and those endorsing interpretive research is and will be difficult—not only because their standards differ but because they often have different visions of the purposes of social science. Instead of denying the existence of interpretive qualitative research, we in the discipline should be grappling with how to educate ourselves and build the understanding that will allow better communication on the question of cross-epistemological standards. What may make this effort both feasible and promising is a recommitment to problem-driven research (Shapiro 2002), because it is the passionate commitment to understanding substantive issues that makes cross-epistemological communication necessary. But genuine grappling, in contrast to claims of a priori methodological superiority, will require a professoriate minimally conscious of itself as a discipline and of the varieties of approaches to social problems. A curricular starting point for this journey would be a program-wide, discipline-wide requirement of a stand-alone course in philosophy of science, covering interpretive as well as positivist positions.15

Portions of this paper were presented at the 2001 Annual Meeting of the American Political Science Association, San Francisco, and the 2001 Annual Meeting of the Western Political Science Association, Las Vegas. Support provided by the Faculty Fellow Program and the Tanner Humanities Center at the University of Utah. The support is much appreciated. Thanks also to those who provided feedback on earlier drafts, particularly Matthew Burbank but also Mark Button, Jefferson Gray, and John Francis.

References

American Political Science Association. 1998. APSA Graduate Program and Faculty Guide, 1998–2000. Washington, DC: American Political Science Association.
Anderson, Lisa. 2000. “Response to Ken Wissoker's Negotiating a Passage Between Disciplinary Borders: A Symposium.” Items and Issues: Social Science Research Council 1 (3–4): 8.
Ballard, Michael J., and Neil J. Mitchell. 1998. “The Good, the Better, and the Best in Political Science.” PS: Political Science and Politics 31 (4): 826–28.
Brady, Henry, and David Collier. Forthcoming. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, MD: Rowman and Littlefield.
Brigham, John. 1996. The Constitution of Interests: Beyond the Politics of Rights. New York: New York University Press.
Brower, Ralph S., Mitchel Y. Abolafia, and Jered B. Carr. 2000. “On Improving Qualitative Methods in Public Administration Research.” Administration and Society 32 (4): 363–97.
Cartwright, Nancy. 1999. The Dappled World: A Study of the Boundaries of Science. Cambridge: Cambridge University Press.
Czarniawska-Joerges, Barbara. 1992. “Budgets as Texts: On Collective Writing in the Public Sector.” Accounting, Management & Information Technology 2 (4): 221–39.
Easton, David, and Corinne S. Schelling, eds. 1991. Divided Knowledge: Across Disciplines, Across Cultures. Newbury Park, CA: Sage.
Erlandson, David A., Edward L. Harris, Barbara L. Skipper, and Steve D. Allen. 1993. Doing Naturalistic Inquiry. Newbury Park, CA: Sage.
Ferguson, Kathy E. 1984. The Feminist Case Against Bureaucracy. Philadelphia: Temple University Press.
Flyvbjerg, Bent. 2001. Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again. Cambridge: Cambridge University Press.
Jervis, Robert. 2001. “A Community of Scholars or a Scholarly Community? Controversies in Disciplines and Public Life.” APSA Presidential Address presented at the Annual Meeting of the Western Political Science Association, Las Vegas.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
Laitin, David D. Forthcoming. “The Perestroikan Challenge to Social Science.” Politics & Society.
Lincoln, Yvonna S., and Egon G. Guba. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage.
National Research Council. 1996. “Appendix Table P-36: Relative Rankings for Research-Doctorate Programs in Political Science.” PS: Political Science and Politics 29 (2): 146–48.
Schmidt, Mary R. 1993. “Grout: Alternative Kinds of Knowledge and Why They Are Ignored.” Public Administration Review 53 (6): 525–30.
Schwartz-Shea, Peregrine. 2001. “Curricular Visions: Doctoral Program Requirements, Offerings, and the Meanings of ‘Political Science.'” Presented at the Annual Meeting of the American Political Science Association, San Francisco. Available at: http://www.poli-sci.utah.edu/SchwartzShea%20Curricular%20Visions.htm.
Schwartz-Shea, Peregrine, and Dvora Yanow. 2002. “‘Reading' ‘Methods' ‘Texts': How Research Methods Texts Construct Political Science.” Political Research Quarterly 55 (2): 457–86.
Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
Shapiro, Ian. 2002. “Problems, Methods, and Theories in the Study of Politics, Or: What's Wrong with Political Science and What to Do about It.” Political Theory 30 (4): 588–611.
Theodoulou, Stella Z., and Rory O'Brien. 1999. Methods for Political Inquiry: The Discipline, Philosophy, and Analysis of Politics. Upper Saddle River, NJ: Prentice-Hall.
US News and World Report. 2001. “America's Best Graduate Schools.” U.S. News Online. http://www.usnews.com/usnews/edu/beyond/gradrank/gbpolisc.htm (June 27, 2000).
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca: Cornell University Press.
Wallerstein, Immanuel. 1999. “The Structures of Knowledge, Or How Many Ways May We Know?” In The End of the World as We Know It: Social Science for the Twenty-First Century, 185–91. Minneapolis: University of Minnesota Press.
Walsh, Mary, and Mark Bahnisch. 2000. “The Politics of Political Theorizing in the New Millennium.” Presented at the Annual Meeting of the American Political Science Association, Washington, DC.
Yanow, Dvora. 1996. How Does a Policy Mean? Interpreting Policy and Organizational Actions. Washington, DC: Georgetown University Press.
Table 1. Doctoral Program Requirements

Table 2. Number of Required Courses

Table 3. Doctoral Program Course Offerings