
Ranking Departments: A Comparison of Alternative Approaches

Published online by Cambridge University Press:  10 July 2007

Natalie Masuoka, University of California, Irvine
Bernard Grofman, University of California, Irvine
Scott L. Feld, Purdue University

THE PROFESSION

© 2007 The American Political Science Association

“In Animal Farm all animals are equal, but some are more equal than others.”

—George Orwell (1946)

There are many different ways to develop rankings of Ph.D.-granting academic departments. Perhaps the most common method is reputational: we simply ask knowledgeable scholars in the discipline to provide their rankings and aggregate these in some fashion. Other ways involve more “objective indicators.” But, of course, departments have multiple attributes, e.g., we might be interested in how good a department is as a place to get a Ph.D., or we might be interested simply in the research record of its faculty, etc. Thus, we might want to use different indicators to measure different aspects of the department.

Other measures include counts of articles or books produced, perhaps weighted in some fashion by the prominence of the journal or publisher. Klingemann (1986, 53, Table 3), for example, provides a ranking of departments by total number of published articles in journals in the SSCI citation base over the period 1978–1980. We prefer to look at citations, since many articles tend to vanish from the collective disciplinary consciousness without a trace. However, we recognize that publication counts can provide an important measure of research activity, and one that will lead citations, especially for departments with relatively junior faculty.

In this article we look at U.S. Ph.D.-producing departments, focusing, on the one hand, on departmental research excellence as judged by the total and mean per capita citation counts of present faculty, and, on the other hand, on departmental success in Ph.D. production as judged by the number and proportion of its Ph.D.s who end up among the most highly cited U.S. political scientists. Citation counts to the work of present departmental faculty are a measure of present departmental visibility, and we may also think of them as measuring department input useful in turning out first-rate scholars; while citation counts (or citation-based ranking measures) to the work produced by a department's Ph.D. graduates are a measure of past departmental output success. Both types of measures can be informative.

We look at the correlation between these and other indicators and the ranking of political science departments offered by U.S. News and World Report. We have examined multivariate regression models that can be used to predict departmental prestige rankings circa 2005. While we considered many different models, our best fitting model is a remarkably simple one with only three independent variables: the number of faculty in a department who are among the 400 most highly cited U.S.-based scholars in the discipline, the department's success in placing its own Ph.D. students at other graduate departments, and its success in producing students who become highly cited scholars in the profession.

This is the third and final article in a series. In the first essay (Masuoka, Grofman, and Feld 2007a), we focused on using citation data to rank individual scholars, creating a list that we (following Klingemann, Grofman, and Campagna 1989) refer to as the “Political Science 400.” In the second essay (Masuoka, Grofman, and Feld 2007b), we focused on exchange patterns within departments, e.g., the number and proportion of a given department's Ph.D.s it places with/receives from each of a set of other departments. In this essay, we incorporate the individual-level citation data presented in the Political Science 400 paper and the departmental Ph.D. production and placement data from the second paper to rank departments using a number of different indicators.

Ranking Departments in Political Science: Reputation, Productivity, and Citations

Departmental rankings have been of sustained interest to political scientists. To be sure, there are various methods for ranking departments, each of which provides a different perspective on the elite departments in the profession. These methods can be broadly classified into two types: subjective and objective. Subjective measures rely on perceptions of reputation, while objective measures largely focus on departments' cumulative scholarly production.

Reputational rankings have the longest-standing tradition in political science. Early studies such as Kenniston (1957), Cartter (1966), and Somit and Tanenhaus (1967) all relied on surveys of either department chairs or APSA members to measure the reputations of departments. Somit and Tanenhaus (1964, 28) posit that “there is no infallible method of objectively quantifying the actual quality of schools. But where one deals with qualitative assessments, the relationship of fact to reality is often less important than the existence of the belief and the behavior that results from its acceptance.” Contemporary studies conducted by both the National Research Council (1995) and U.S. News and World Report (2005) also base their rankings, in whole or in part, on reputation.

Increasingly, scholars are using more objective measures of quality, particularly publication output or citation data. Most objective studies, such as Robey (1979), Morgan and Fitzgerald (1977), McCormick and Bernick (1982), Ballard and Mitchell (1998), and McCormick and Rice (2001), focus on the cumulative article publications of departmental faculty. These studies try to control for quality by limiting their counts to articles published in top journals such as the APSR and in regional journals. Rice, McCormick, and Bergmann (2002) also rank departments by their faculty's cumulative book production and find that the type of publication makes a difference in the rankings. Finally, studies such as Klingemann (1986), Klingemann, Grofman, and Campagna (1989), and Miller, Tien, and Peebler (1996) have used cumulative departmental faculty citation counts as another way to rank departments objectively.

A number of studies also examine the relationship between subjective and objective indicators of prestige. Studies such as Lowry and Silver (1996) and Katz and Eagles (1996) examine how structural features such as departmental size and funding influence reputational rankings. These studies suggest that there are other factors that can influence departments' reputations. However, studies by Jackman and Silver (1996) and Garand and Grady (1999), which examine the relationship between cumulative publications and reputation, find that the two are not highly correlated.

Using Citation Counts to Rank Departments

We show in Table 1 (similar to Table 3 in Klingemann, Grofman, and Campagna 1989) the ranking of the top departments in each of six periods (before 1950, 1950–1959, 1960–1969, 1970–1979, 1980–1989, 1990–1999) based on how many members of the current Political Science 400 they produced during that time period. We included a department in the table if it was among the top 20 departments in any of these time periods.
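
A minimal sketch of how such a tabulation can be produced from individual-level data follows. The DataFrame ps400 and its columns phd_dept and phd_year are hypothetical stand-ins for the Political Science 400 data described in Masuoka, Grofman, and Feld (2007a); this is an illustration of the counting logic, not the authors' code.

```python
# A sketch of the Table 1 tabulation (hypothetical data, not the authors' code).
# `ps400` is assumed to have one row per Political Science 400 member with
# columns "phd_dept" (Ph.D.-granting department) and "phd_year".
import pandas as pd

def table1_producers(ps400: pd.DataFrame, top_n: int = 20) -> pd.DataFrame:
    df = ps400.copy()
    # Bucket Ph.D. years into the six periods used in the article.
    bins = [0, 1949, 1959, 1969, 1979, 1989, 1999]
    labels = ["pre-1950", "1950-59", "1960-69", "1970-79", "1980-89", "1990-99"]
    df["period"] = pd.cut(df["phd_year"], bins=bins, labels=labels)

    # Count PS 400 members produced by each department in each period.
    counts = (df.groupby(["phd_dept", "period"], observed=False)
                .size()
                .unstack(fill_value=0))
    counts["overall"] = counts.sum(axis=1)

    # Keep a department if it is among the top `top_n` producers in any period.
    in_top = (counts.drop(columns="overall")
                    .rank(ascending=False, method="min") <= top_n).any(axis=1)
    return counts.loc[in_top].sort_values("overall", ascending=False)
```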

Departments that Produce the Highest Number of Members of the Political Science 400: Overall and by Decade

When we compare Table 1 with Table 3 in Klingemann, Grofman, and Campagna (1989), which covers the four decades of the 1940s, 1950s, 1960s, and 1970s based on the 1980–1985 Political Science 400 list, we see that the departments at Harvard, Yale, Chicago, Michigan, Berkeley, Princeton, and Columbia continue to be among the top producers overall of the most highly cited political scientists, and that those at Stanford and the University of North Carolina, Chapel Hill have moved up in ranking. Comparing across the seven decades reveals more evidence of change. Generally the departments at the top 10 schools maintain their high status over the seven periods, but we do see a dip for Columbia's. Current gains by the departments at Cal Tech, MIT, Rochester, Washington University-St. Louis, UC San Diego, and Duke are also worth noting since, especially in the last several decades, each has produced a number of scholars who make it to the top of the profession, and thus each would rise markedly in rankings based on the production of recent Ph.D.s who have gone on to distinction.

Because it might be thought more likely that a department that produces a large number of Ph.D.s will also produce a large number of highly cited Ph.D.s, Table 2 (paralleling Table 4 in Klingemann, Grofman, and Campagna 1989) normalizes the counts in Table 1 by dividing through by the total number of Ph.D.s produced by each department over the period 1966–2001 (we also refer the reader to Schmidt and Chingos [2007] in this issue of PS).

For overall Ph.D. production we have yearly data at the aggregate level from 1910 through 2001 (U.S. Department of Education 2005; Gaus 1934; U.S. National Academy of Sciences 1958; 1978; U.S. National Science Foundation 2005). For Ph.D. production at the departmental level, we have data reported by Somit and Tanenhaus (1964; 1967), and that in two early articles in the APSR (Gaus 1934; Munro 1930) for the period 1902–1933, with data for the periods 1948–1958 and 1966–2001 taken from statistics provided by the National Science Foundation, National Academy of Sciences, and the Department of Education's National Center for Education Statistics.
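
The normalization behind Table 2 amounts to dividing each department's count of Political Science 400 members by its total Ph.D. production. A sketch of that calculation is below, assuming hypothetical Series ps400_by_dept and phds_by_dept; the min_phds guard reflects the concern about ratios based on small numbers and is an illustrative choice, not a figure from the article.

```python
# A sketch of the Table 2 normalization (hypothetical inputs, not the authors' code).
# `ps400_by_dept`: Political Science 400 members per Ph.D.-granting department.
# `phds_by_dept`: total Ph.D.s produced by each department, 1966-2001.
import pandas as pd

def table2_normalized(ps400_by_dept: pd.Series, phds_by_dept: pd.Series,
                      min_phds: int = 20) -> pd.Series:
    """Share of a department's Ph.D.s who reach the Political Science 400."""
    # Guard against ratios built on very small denominators (illustrative cutoff).
    eligible = phds_by_dept[phds_by_dept >= min_phds]
    share = ps400_by_dept.reindex(eligible.index, fill_value=0) / eligible
    return share.sort_values(ascending=False)
```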

For simplicity, and to avoid problems with ratios based on small numbers, we limit ourselves to overall rankings. There are a number of significant changes when we consider success in turning out stars of the profession relative to a department's total Ph.D. production. With the exception of Yale's and Stanford's departments, all schools' departments that were at the very top move downward when we normalize the rankings, some significantly so. For example, Columbia's, which ranked seventh in Table 1, drops to 25th in the normalized ranking. It is apparent from Table 2 that some smaller departments, such as those at Cal Tech, Rochester, and Washington University-St. Louis, are better at producing high-quality Ph.D.s relative to their total production of Ph.D.s than are some departments with larger Ph.D. production and more highly regarded Ph.D. programs.

U.S. Departments that Produce the Highest Proportion of Members of the Political Science 400 Relative to their Total Production of Ph.D.s

While Table 2 ranks departments based on a measure of the quality of their Ph.D. graduates, Table 3 ranks departments based on total cumulative citations over the period 1960–2005 to the work of their present (circa 2002) faculty (paralleling Table 2 in Klingemann 1986, 656), and Table 4 ranks departments by the number of their faculty (circa 2002) who are among the Political Science 400. For these tables we have not disaggregated by cohort or decade. But it appears to us that the emeriti who are still listed on mastheads, especially at more prestigious institutions, are among the more famous in the discipline. Thus, which emeriti are counted can affect department ratings.

To ascertain whether including emeriti faculty in the calculation of a department's citation count would have a major impact on departmental rankings, we ran additional analyses in which we excluded the citation counts of emeriti faculty. The total faculty citation counts for many top departments did, of course, decrease, but we found no substantial differences in the rankings of the top departments, although there was some modest movement. This stability most likely has three causes. First, because the top-ranked departments' citation counts are much higher than those of their lower-ranked counterparts, removing emeriti citation counts (even those of highly cited faculty) does not displace top departments from their positions. Second, because a large number of departments list emeriti on their faculty rosters, most departments had their citation counts lowered when emeriti were eliminated. Third, in no department do emeriti make up a substantial proportion of all listed faculty.
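
One way to run this kind of robustness check is to recompute departmental citation totals with and without emeriti and compare the two orderings. The sketch below does this with a Spearman rank correlation, assuming a hypothetical faculty DataFrame with dept, citations, and emeritus columns; it illustrates the logic rather than reproducing the authors' procedure.

```python
# A sketch of the emeriti robustness check (illustrative, not the authors' procedure).
# `faculty` is assumed to have columns "dept", "citations", and a boolean "emeritus".
import pandas as pd
from scipy.stats import spearmanr

def emeriti_sensitivity(faculty: pd.DataFrame) -> float:
    totals_all = faculty.groupby("dept")["citations"].sum()
    totals_active = (faculty[~faculty["emeritus"]]
                     .groupby("dept")["citations"].sum()
                     .reindex(totals_all.index, fill_value=0))
    # Spearman's rho compares the two departmental orderings directly;
    # a value near 1.0 means excluding emeriti barely moves the rankings.
    rho, _ = spearmanr(totals_all, totals_active)
    return rho
```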

Ranking Departments Based on the Total Cumulative Citations over the Period 1960–2005 to the work of their Present (circa 2002) Faculty and by Citations Per Capita

Ranking Departments by the Number and Proportion of their Faculty that (circa 2002) are among the Political Science 400

Comparing the listing of top-ranked departments in these tables to each other and to the departmental rankings based on the production of highly cited Ph.D.s (Table 1), we see that the expected departments (e.g., those at Harvard, Yale, Princeton, etc.) consistently rank in the top 10 regardless of whether we consider the number of present faculty in the Political Science 400, total faculty citations, or the production of Ph.D.s who go on to become members of the Political Science 400. However, perhaps the most striking feature of Table 3, when we look at total cumulative citations, is not the high ranking of the usual suspects as found by Klingemann, Grofman, and Campagna (1989), but the prominence of the departments at UC Berkeley, UCLA, and UC San Diego, as well as the high rankings of departments such as those at Duke, Cornell, and Indiana. While again most of the usual suspects are highly ranked in Table 4, we also see the same remarkable prominence of departments at University of California institutions, now in terms of total faculty who are in the Political Science 400.

Klingemann (1986, 659) called attention to the under-ranking in prestige terms of Southwestern universities with high-citation faculty, especially those in California, which he attributed to prestige lagging “behind the massive shift in population, resources and talent that was moving to the Southwest during the 1970s and 1980s.” As is apparent from Tables 3 and 4, the number of highly cited political scientists located in the West has continued to grow over the last two decades.

For this table, we also call attention to the high rankings of the departments at Ohio State, MIT, the University of Washington, and Duke.

But, of course, ceteris paribus, we might expect large departments to rank higher in total citation counts and in numbers of highly cited faculty than those with fewer political science faculty. To correct for this, we have also provided in Tables 3 and 4 rankings normalized by departmental size, in a way similar to what we did in Table 2 to control for the size of Ph.D. production. When we normalize the citation numbers to account for the size of a department, the rankings in Tables 3 and 4 can shift considerably. For example, once we account for the size of the faculty by looking at per capita citation rates, we see in Table 3 that faculty at smaller departments, such as those at Cornell, UC San Diego, and UC Irvine, all have higher mean citation rates than do their colleagues in departments at larger schools, such as those at UC Berkeley and Princeton. In Table 2 we found that departments with smaller Ph.D. programs, such as Cal Tech's and Rochester's, are proportionally more effective at producing top scholars. In Table 4 we see that, if we control for faculty size, Cal Tech's department may have the greatest proportion of highly cited faculty in political science, and that other smaller departments, like those at MIT, UC San Diego, SUNY Stony Brook, and the New School, also have a very high proportion of highly cited faculty—higher in per capita terms than that of a number of more “famous” departments.
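
The size adjustment used in Tables 3 and 4 can be sketched as follows, again assuming a hypothetical faculty-level DataFrame with columns dept, citations, and in_ps400; total citations and Political Science 400 membership are each divided by the number of listed faculty.

```python
# A sketch of the per capita adjustment behind Tables 3 and 4 (hypothetical data).
# `faculty` is assumed to have columns "dept", "citations", and "in_ps400"
# (True if the person is among the Political Science 400).
import pandas as pd

def per_capita_tables(faculty: pd.DataFrame) -> pd.DataFrame:
    by_dept = faculty.groupby("dept").agg(
        n_faculty=("citations", "size"),
        total_citations=("citations", "sum"),
        n_ps400=("in_ps400", "sum"),
    )
    # Normalize both measures by the number of listed faculty.
    by_dept["citations_per_capita"] = by_dept["total_citations"] / by_dept["n_faculty"]
    by_dept["ps400_share"] = by_dept["n_ps400"] / by_dept["n_faculty"]
    return by_dept.sort_values("citations_per_capita", ascending=False)
```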

Another way to rank departments is according to the subfields where they have concentrations of highly cited scholars. Klingemann's (1986) ranking of the top 20 scholars in each field in the period 1980–1985 named Yale's as the premier department in political theory, Michigan's as nonpareil in American politics, and Harvard's and Columbia's as first in international relations, while Harvard's and Wisconsin's ranked highest in the combined areas of public policy, public administration, and public law. In comparative politics, no department emerged as the clear leader. We have replicated Klingemann's analysis. As of 2002, we find Stanford's to be the top department in American government, with four of the top 20 most-cited scholars. Michigan's, Northwestern's, and Rochester's rank next highest: each has two of the top 20 most-cited scholars. For comparative politics, Harvard's is now overwhelmingly the leader, with six of the top 20 scholars on its faculty. Columbia's, Cornell's, and Yale's are the next top-ranked in this subfield. For international relations, Stanford's is the top department, while in methodology four departments, those at Harvard, Michigan, Minnesota, and Ohio State, tie as the top-ranked. Yale's continues to be the premier department in political theory, with Columbia's and UC Irvine's tied as the next highest ranked, judged in terms of members of the Political Science 200 on their faculty. Finally, for public policy, public administration, and public law, Harvard, Indiana, and Michigan's departments each have two of the top 20 most-cited scholars.

When we examine the subfield distribution for the entire Political Science 400, we get only slightly different results. We find that, for American politics, Stanford's is still the top department, with eight of its American politics faculty in the top 400, but Ohio State's is now second with six faculty in the top 400. In comparative politics, Harvard's is again at the top with 10 of its faculty in the Political Science 400, but UC Berkeley's is now next with eight, and Yale's drops to third with seven. For international relations, Columbia's and Stanford's tie as the top departments. In methodology, the University of North Carolina-Chapel Hill's department joins the previously noted departments at Harvard, Michigan, Minnesota, and Ohio State in the tie for first. In political theory, five departments all have three faculty in the Political Science 400: those at Columbia, Harvard, the University of Texas-Austin, UCLA, and Yale. It is in public policy/public administration/public law that we see the greatest change; Johns Hopkins' is now the premier department with four of its faculty in the top 400.
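
The subfield tabulations reported in the two preceding paragraphs reduce to counting, within each subfield, how many members of the relevant highly cited list sit on each department's current faculty. A sketch follows, assuming a hypothetical ps400 DataFrame with current_dept and subfield columns; the column names are stand-ins, not the authors' data structure.

```python
# A sketch of the subfield tabulation (hypothetical data, not the authors' code).
# `ps400` is assumed to have columns "current_dept" and "subfield" for each member.
import pandas as pd

def subfield_leaders(ps400: pd.DataFrame, top_k: int = 3) -> pd.DataFrame:
    counts = (ps400.groupby(["subfield", "current_dept"])
                   .size()
                   .rename("n_members")
                   .reset_index())
    # Within each subfield, keep the departments with the most listed members.
    counts["rank"] = counts.groupby("subfield")["n_members"].rank(
        ascending=False, method="min")
    return counts[counts["rank"] <= top_k].sort_values(["subfield", "rank"])
```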

Predicting Departmental Prestige, 1960–2005

Somit and Tanenhaus (1963; 1964; 1967) look at four different departmental prestige rankings taken in 1925, 1957, 1963, and 1964. They find that Ph.D. production is a key determinant of prestige in the discipline's early period.

“Historically, the largest producers were also the most highly regarded departments. The lion's share of Ph.D.s traditionally came … from departments which were prestigious as well as sizable” (Somit and Tanenhaus 1964, 31).

When correlating Somit and Tanenhaus' 1963 rankings with total Ph.D.s produced between 1948 and 1958, we find a bivariate correlation of −0.634 (an adjusted r2 of 0.38). If we correlate a contemporary measure of reputation, the 2005 U.S. News and World Report graduate program rankings, with total Ph.D.s produced between 1966 and 2001, we find a bivariate correlation of −0.572 (an adjusted r2 of 0.33). When we use the logged value of the U.S. News and World Report rankings as our dependent variable, we get an r value of −0.611 (with an adjusted r2 of 0.37).

Using data in the form of rankings implicitly posits an equal spacing in perceived reputational differences between departments at adjacent ranks, so that the difference between, say, the 5th- and 6th-ranked departments would be the same as the difference between the 120th- and 121st-ranked departments. Because we anticipate that reputational differences will be easier to discern among the better-known departments, with a kind of reputational lumping effect for the less-well-known departments, taking the log of the ranks as our dependent variable gives us a nonlinear function of an appropriate shape. In this calculation, we have treated all unranked departments as being at rank 95. Clearly, lumping all unranked departments together limits the best predictive fit we could hope to achieve, but if unranked departments are really low-ranked departments, as is almost certainly the case, this seems a more sensible way to treat the data than eliminating a large number of departments from our regressions due to missing values. When we use rankings without logging them, we get essentially the same results, but the explained variance is lower; the same is true when we delete cases with missing ranks rather than treating these departments as at the bottom of the rankings.
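
A sketch of this bivariate exercise is given below, assuming a hypothetical Series usnews_rank (with NaN for unranked departments) and a predictor x such as total Ph.D.s produced; unranked departments are filled in at rank 95 and the rank is logged, as described above. The adjusted r2 shown uses the standard one-predictor adjustment, which the article does not spell out, so treat it as illustrative.

```python
# A sketch of the bivariate checks above (hypothetical inputs, illustrative only).
# `usnews_rank`: 2005 U.S. News graduate ranking, NaN for unranked departments.
# `x`: a predictor indexed by the same departments (e.g., total Ph.D.s produced).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def rank_correlation(usnews_rank: pd.Series, x: pd.Series, unranked_rank: int = 95):
    # Treat unranked departments as tied at rank 95, then log the ranks so that
    # differences among well-known departments count for more than differences
    # in the long tail, as described in the text.
    y = np.log(usnews_rank.fillna(unranked_rank))
    r, p = pearsonr(x.loc[y.index], y)
    # One-predictor adjusted r-squared; the article does not give its exact formula.
    n = len(y)
    adj_r2 = 1 - (1 - r**2) * (n - 1) / (n - 2)
    return r, adj_r2, p
```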

There are other indicators besides Ph.D. production that might predict departmental prestige. Somit and Tanenhaus (1964, 36) note that “a necessary if not a sufficient condition of success is that the aspiring department be a component of a university which is itself prestigious.” Klingemann (1986) looked at the relationship between departmental prestige and 1980 departmental citation counts. Reanalyzing his data, we find a bivariate correlation of −0.712 (an adjusted r2 of 0.494) between the ratings in a 1981 survey of departmental reputational prestige (Rudder 1983) and the total citation counts of departmental faculty.

If we use the 2006 U.S. News and World Report rankings of undergraduate institutions as our measure of university prestige, we find a correlation of only −0.436 (an adjusted r2 of 0.175) between that measure and current (circa 2002) departmental total citation counts.

If we correlate a contemporary measure of reputation, the 2005 U.S. News and World Report graduate program rankings, with citation counts of current departmental faculty circa 2002, we find a bivariate correlation of −0.719 (an adjusted r2 of 0.51). When we use the logged value of the U.S. News and World Report rankings as our dependent variable, we get an r value of −0.846 (with an adjusted r2 of 0.713). Other citation and placement variables also correlate with the (logged) U.S. News and World Report rankings in the expected way in a statistically significant fashion, as shown in Table 5.

Bivariate Correlations with U.S. News and World Report and Departmental Rankings (N = 132)

For predictions of departmental prestige rankings circa 2005 as provided by U.S. News and World Report, we have generated various multivariate regressions with both input variables, such as the total citation counts of departmental faculty, per capita citation counts, the number of faculty in a department who are among the Political Science 400, and the number and proportion of departmental faculty whose Ph.D.s are from top departments (here those at Berkeley, Chicago, Columbia, Harvard, Michigan, Princeton, Stanford, and Yale), and output variables, such as Ph.D. production and success in placing one's Ph.D. students at other graduate departments and at the most elite graduate departments. The most sensible model we arrived at, shown in Table 6, includes three variables: proportion of departmental Ph.D. graduates placed at Ph.D.-granting institutions, number of current faculty in the Political Science 400, and number of Ph.D.s produced who are in the Political Science 400. Since we are looking at rankings, negative coefficients indicate a positive relationship between the independent variable and departmental reputational prestige.

In the second paper of this series (Masuoka, Grofman, and Feld 2007b), in which we analyze the production and placement rates of Ph.D.-granting institutions, we identified a core of eight departments (referred to as the Big 8) that exert a powerful influence on the profession by directly or indirectly shaping the faculty who train the discipline as a whole. These eight schools were found to hire primarily from each other and to train the majority of the faculty members at 32 other top-placing departments. Together, these 40 departments train the majority (78%) of the faculty in Ph.D.-granting departments.

Predicting U.S. News and World Report Rankings

The equation in Table 6 has an adjusted r2 of 0.85, and all variables have the correct sign, with one (number of faculty in the Political Science 400) statistically significant at the .001 level, one (number of Ph.D.s produced who are in the Political Science 400) significant at the .01 level, and one (proportion of departmental Ph.D. graduates placed at other Ph.D.-granting departments) significant at the .02 level.
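
A sketch of the three-variable model reported in Table 6 appears below, assuming a hypothetical depts DataFrame whose column names (usnews_rank, placement_share, faculty_in_ps400, phds_in_ps400) are stand-ins for the variables described in the text; this is an OLS specification consistent with that description, not the authors' actual estimation code.

```python
# A sketch of the Table 6 specification (hypothetical column names, not the
# authors' estimation code). `depts` is assumed to hold one row per department
# with "usnews_rank" (NaN if unranked), "placement_share", "faculty_in_ps400",
# and "phds_in_ps400".
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_prestige_model(depts: pd.DataFrame, unranked_rank: int = 95):
    # Log of the (filled) U.S. News rank as the dependent variable; lower rank
    # numbers mean higher prestige, hence the expected negative coefficients.
    y = np.log(depts["usnews_rank"].fillna(unranked_rank))
    X = sm.add_constant(depts[["placement_share", "faculty_in_ps400", "phds_in_ps400"]])
    model = sm.OLS(y, X).fit()
    return model  # inspect model.rsquared_adj, model.params, model.pvalues

# Example: print(fit_prestige_model(depts).summary())
```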

Discussion

First, when we rank departments in terms of citations to the work of their Ph.D. graduates or in terms of citations to the work of their present faculty, we see that the long established, and mostly East Coast, institutions continue to be very highly ranked in measures derived from our updated citation data—as they were in the 1980–1985 citation data studied by Klingemann, Grofman, and Campagna (1989). However, we also find a remarkable rise to prominence of departments at California institutions such as UC San Diego, and a further rise in the prominence of those at Berkeley, UCLA, and Stanford. Moreover, by some important criteria, two other California departments, those at Cal Tech and UC Irvine, also enter the elite ranks in political science when the data on which rankings are based are normalized with respect to faculty size.

Second, when we look at departmental rankings, the public/private status of their host institutions also seems to play a role. Most of the East Coast schools among the elite institutions are private, while the West Coast schools (with the exception of Cal Tech) are public.

There have been some changes in the rankings of the most prominent departments in given subfields. Although departments like Stanford's continue to be of high prominence in multiple areas, in some subfields schools besides the “usual suspects” of long-established elite institutions have risen to prominence. For example, Ohio State's has come to be one of the major departments in American politics.

Finally, we can explain most of the variance in departmental reputational rankings with only three variables: number of present departmental faculty in the Political Science 400, proportion of past departmental Ph.D.s placed at other U.S. graduate departments, and professional success of departmental Ph.D. graduates as judged by membership in the Political Science 400. Moreover, models combining types of citation and placement data also do well in predicting U.S. News and World Report rankings.

Author Bios

Natalie Masuoka is a Ph.D. candidate at the University of California, Irvine. Her dissertation examines the impact of mixed race identities on racial minority political participation and attitudes.

Bernard Grofman, whose past research has dealt with mathematical models of group decision making, legislative representation, electoral rules, and redistricting, has been at the University of California, Irvine since 1976 and professor of political science at UCI since 1980. He is co-author of four books, published by Cambridge University Press, and co-editor of 16 other books; he has published over 200 research articles and book chapters. Grofman is a past president of the Public Choice Society and in 2001 he became a fellow of the American Academy of Arts and Sciences.

Scott L. Feld, professor of sociology at Purdue University, continues his investigations into systematic causes and consequences of patterns in social networks, including academic placement networks. He is also continuing to study how collective decision making processes are affected by social structures in the forms of group memberships, norms, and social networks.

References

Ballard, Michael, and Neil Mitchell. 1998. “The Good, the Better and the Best in Political Science.” PS: Political Science and Politics 31 (4): 826–8.
Cartter, Allan. 1966. An Assessment of Quality in Graduate Education. Washington, D.C.: American Council on Education.
Garand, James, and Kristy Grady. 1999. “Ranking Political Science Departments: Do Publications Matter?” PS: Political Science and Politics 32 (1): 113–6.
Gaus, John M. 1934. “The Teaching Personnel in American Political Science Departments: A Report of the Sub-Committee on Personnel of the Committee on Policy to the American Political Science Association, 1934.” American Political Science Review 28 (4): 726–65.
Jackman, Robert, and Randolph Silver. 1996. “Rating the Ranking: An Analysis of the National Research Council's Appraisal of Political Science Ph.D. Programs.” PS: Political Science and Politics 29 (2): 155–60.
Katz, Richard, and Munroe Eagles. 1996. “Ranking Political Science Programs: A View from the Lower Half.” PS: Political Science and Politics 29 (2): 149–54.
Kenniston, Hayward. 1959. Graduate Study and Research in the Arts and Sciences at the University of Pennsylvania. Philadelphia: University of Pennsylvania Press.
Klingemann, Hans-Dieter. 1986. “Ranking Graduate Departments in the 1980s: Toward Objective Qualitative Indicators.” PS: Political Science and Politics 19 (3): 651–61.
Klingemann, Hans-Dieter, Bernard Grofman, and Janet Campagna. 1989. “The Political Science 400: Citations by Ph.D. Cohort and by Ph.D.-Granting Institution.” PS: Political Science and Politics 22 (2): 258–70.
Lowry, Robert, and Brian Silver. 1996. “A Rising Tide Lifts All Boats: Political Science Department Reputation and the Reputation of the University.” PS: Political Science and Politics 29 (2): 161–7.
Masuoka, Natalie, Bernard Grofman, and Scott L. Feld. 2007a. “The Political Science 400: A 20-Year Update.” PS: Political Science and Politics 40 (January): 133–45.
Masuoka, Natalie, Bernard Grofman, and Scott L. Feld. 2007b. “The Production and Placement of Political Science Ph.D.s, 1902–2000.” PS: Political Science and Politics 40 (April): 361–70.
McCormick, James, and E. Lee Bernick. 1982. “Graduate Training and Productivity: A Look at Who Publishes.” Journal of Politics 44 (1): 212–27.
McCormick, James, and Tom Rice. 2001. “Graduate Training and Research Productivity in the 1990s: A Look at Who Publishes.” PS: Political Science and Politics 34 (3): 675–80.
Miller, Arthur, Charles Tien, and Andrew Peebler. 1996. “Department Rankings: An Alternative Approach.” PS: Political Science and Politics 29 (4): 704–17.
Morgan, David, and Michael Fitzgerald. 1977. “Recognition and Productivity among American Political Science Departments.” Western Political Quarterly 30 (3): 342–50.
Munro, William. 1930. “Appendix VII: Instruction in Political Science in Colleges and Universities.” American Political Science Review 24 (1): 127–45.
National Research Council. 1995. Research Doctorate Programs in the United States: Continuity and Change. Washington, D.C.: National Academy Press.
Orwell, George. 1946. Animal Farm. New York: Harcourt, Brace and Company.
Rice, Tom, James McCormick, and Benjamin Bergmann. 2002. “Graduate Training, Current Affiliation and Publishing Books in Political Science.” PS: Political Science and Politics 35 (4): 751–5.
Robey, John. 1979. “Political Science Departments: Reputations versus Productivity.” PS: Political Science and Politics 12 (2): 202–9.
Rudder, Catherine. 1983. “The Quality of Graduate Education in Political Science: A Report on the New Rankings.” PS: Political Science and Politics 16 (1): 48–53.
Schmidt, Benjamin M., and Matthew M. Chingos. 2007. “Ranking Doctoral Programs by Placement: A New Method.” PS: Political Science and Politics 40 (July): 523–9.
Somit, Albert, and Joseph Tanenhaus. 1963. “Trends in American Political Science: Some Analytical Notes.” American Political Science Review 57: 933–8.
Somit, Albert, and Joseph Tanenhaus. 1964. American Political Science: A Profile of the Discipline. New York: Atherton Press.
Somit, Albert, and Joseph Tanenhaus. 1967. The Development of American Political Science: From Burgess to Behavioralism. Boston: Allyn and Bacon.
U.S. Department of Education. 2005. Institutional Postsecondary Education Data System Completions Survey. Washington, D.C.: U.S. Department of Education.
U.S. National Academy of Sciences. 1958. Doctorate Production in United States Universities, 1936–1956. Washington, D.C.: National Academy of Sciences.
U.S. National Academy of Sciences. 1978. A Century of Doctorates: Data Analyses of Growth and Change. Washington, D.C.: National Academy of Sciences.
U.S. National Science Foundation. 2005. Survey of Earned Doctorates Records File. Washington, D.C.: National Science Foundation.
U.S. News and World Report. 2005. America's Best Graduate Schools, 2005 Online Edition. Available at: www.usnews.com/usnews/edu/grad/rankings/phdhum/brief/polrank_brief.php.