The 2012 United States presidential contest ushered in a revolution in election forecasting. Serious efforts to forecast American elections have been around for more than 30 years, but the enterprise suddenly changed. Competing news agencies and election prediction websites proliferated to satisfy the public’s appetite for forecasts during the campaign. And in terms of forecasting approaches, a new generation of dynamic modeling emerged.
The elevated profile of election forecasting offers us the opportunity to consider what this means for the credibility, theory, and ultimately the future of election forecasting. Early election prediction models were met with the criticism that such forecasts were simply fun and games, not “real” political science, although these models were based on established election theory, public opinion polling techniques, and econometric estimation (Fair 1978; Lewis-Beck and Rice 1982, 1984; Rosenstone 1983; Sigelman 1979). Since the publication of these seminal works, model modifications have put our established election theories to the test. Through this process, we have learned much. With these advances, and the increased demand for forecasts from campaigns and news consumers, election forecasting finally is gaining the respect that it deserves.
In this symposium, we offer 16 articles that tackle the task of election prediction. These pieces, written by leaders in the fields of election forecasting and commentary, are accessible presentations that examine a particular method or problem. The approaches to forecasting represented here can be grouped into four types: Structuralists, Aggregators, Synthesizers, and Judges. Next, we look at these forecasting types in practice. Then, we explore advances and obstacles in forecasting theory, and end with how that bears on election theory.
APPROACHES
The four forecasting types drawn on here can be distinguished by their uses of theory, data, time, and inference. Structuralists (e.g., Abramowitz, Campbell, Lewis-Beck and Stegmaier, Norpoth) estimate, via standard regression techniques, single-equation explanatory voting models at the national level of analysis. Commonly, these models begin with a core political economy explanation, something like vote = f(presidential popularity, economic growth). Generally, these models offer a single, final preelection forecast. Aggregators (e.g., Berg and Rietz, Blumenthal, Jackman, Traugott) examine vote intention directly (or indirectly) through national opinion data. A leading example, that of Real Clear Politics, summarizes the preferences of likely voters over multiple polls. While these poll results are intended by the polling houses as snapshots of opinion at the moment, they are frequently used by election watchers to aid in election prediction, as Blumenthal discusses. Jackman’s model-based poll aggregation approach exemplifies this innovation. Taking a different slant from the polls themselves, the Iowa Electronic Markets summarize the election predictions of market traders. These Aggregators offer repeated forecasts during the campaign. Both of these approaches—Structuralist or Aggregator—base their inferences on quantitative methods.
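To make the Structuralist logic concrete, the sketch below fits a single-equation political economy model by ordinary least squares and issues one final point forecast. All data, variable choices, and numbers are hypothetical illustrations, not the specification of any model in this symposium.

```python
import numpy as np

# Hypothetical national-level data: one row per election year.
# Columns: incumbent-party vote share (%), June presidential approval (%),
# first-half-year economic growth (%). Values are illustrative, not real.
years    = np.array([1992, 1996, 2000, 2004, 2008])   # labels the rows
vote     = np.array([46.5, 54.7, 50.3, 51.2, 46.3])
approval = np.array([38.0, 55.0, 60.0, 48.0, 30.0])
growth   = np.array([1.1, 2.2, 2.9, 1.9, 0.4])

# Single-equation "political economy" model: vote = b0 + b1*approval + b2*growth.
X = np.column_stack([np.ones_like(approval), approval, growth])
coefs, *_ = np.linalg.lstsq(X, vote, rcond=None)

# One final, preelection point forecast for a new year's (assumed) conditions.
forecast = coefs @ np.array([1.0, 47.0, 1.5])
print(f"Forecast incumbent vote share: {forecast:.1f}%")
```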
Synthesizers combine properties of Structuralists and Aggregators. That is, they begin with an explanation in political economy form, and embed aggregated and updated polling preferences. The data, analyzed either at the national level (e.g., Erikson and Wlezien) or the state level (e.g., Linzer), are subjected to rigorous quantitative modeling. These models bring together election theory and the powers of aggregation and dynamic updating. A similar approach was widely followed in the run-up to the 2012 presidential election, most visibly in the media example of Nate Silver at the New York Times.
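The Synthesizer idea of anchoring poll aggregates to a structural forecast can be illustrated with a deliberately simplified, precision-weighted updating rule. This is only a sketch of the general logic, not Linzer’s or Erikson and Wlezien’s actual models, and every value in it is invented.

```python
import numpy as np

# Illustrative synthesis: a structural ("fundamentals") prior for the
# incumbent two-party vote share, updated with a rolling poll average.
prior_mean, prior_var = 51.0, 2.0**2          # from a (hypothetical) political economy model
polls = np.array([50.1, 51.8, 50.6, 52.0])    # recent poll readings (%), made up
poll_mean = polls.mean()
poll_var = polls.var(ddof=1) / len(polls)     # sampling variance of the poll mean

# Precision-weighted (conjugate normal) update: as polls accumulate,
# the forecast shifts from the structural prior toward the poll average.
w_prior, w_polls = 1 / prior_var, 1 / poll_var
posterior_mean = (w_prior * prior_mean + w_polls * poll_mean) / (w_prior + w_polls)
posterior_sd = (1 / (w_prior + w_polls)) ** 0.5
print(f"Updated forecast: {posterior_mean:.1f}% (sd {posterior_sd:.2f})")
```

Re-running the update as new polls arrive is what gives these models their dynamic, continuously revised character.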
The foregoing forecasting approaches are distilled by thoughtful campaign observers (e.g., Cook and Wasserman, Rothenberg), who effectively act as Judges. This judging does not necessarily remain inside a positivist quantitative framework. These experts go further, weighing the sometimes conflicting claims of the polls, models, and markets, adding their own admittedly qualitative assessment of the horse race and following their own rules of thumb. In this way, they promise added value, much like local weather forecasters who use their knowledge of local conditions and patterns to adjust their forecasts against those of the Numerical Weather Prediction models (Novak et al. 2011).
ELECTION FORECASTING THEORY: ADVANCES AND OBSTACLES
The newest election forecasting models, exemplified by Linzer’s work, look more like theoretically and technically sophisticated physical science forecasting models, such as those used in meteorology (Lewis-Beck and Stegmaier). They are based on a political economy theory of election outcomes, a theory tested against a large number of geographically appropriate observations (i.e., the states), with these data and their predictions updated until the election occurs. In the 2012 presidential contest, such models correctly forecast the Electoral College winner in all but one or two states.
How should these models, and others, be evaluated as forecasting instruments? In the literature, we earlier offered the following evaluation criteria: accuracy, lead, parsimony, and replication (Lewis-Beck 2005). The work of the Aggregators and Synthesizers, with their frequent updating, makes clear that the word “dynamic” should be added to the list of criteria, for several reasons. For one, as Sides remarks, updating the forecast “engages the campaign narrative.” For another, a forecasting instrument works better to the extent that it can be updated on a regular (even daily) basis up to Election Day. This more or less continuous release of forecasts from one overarching model has been labeled by some as nowcasting (Lewis-Beck and Stegmaier).
What about the first four evaluation criteria? In the popular mind, accuracy looms as most important. Updating, combining polls, and using state-level measures are all techniques that have helped improve accuracy. But, as the Campbell article suggests, accuracy alone is not enough. To take the extreme case, while a poll of voters exiting the voting booth might be highly accurate, it can only tell us something we will know in a few hours. The intrinsic attraction of forecasting comes from its ability to see into the future when the future stands far away. Blumenthal, in his article, argues that more focus on the accuracy of early polls is needed. With respect to a specific time horizon, Linzer emphasizes the need to generate early forecasts, perhaps three to four months before Election Day. In this regard, Erikson and Wlezien, and Lewis-Beck and Stegmaier, tout the forecasting ability of early campaign perceptions of national economic conditions.
In the 2012 US presidential election, all the leading approaches to forecasting generally “got it right,” at least in the rough sense that, collectively, they forecast an Obama win. Part of that collective accuracy was due to the rising practice of ensemble forecasting, wherein the forecasts from different models are averaged, as was done in the pre-2012 election forecasting symposium published in PS: Political Science & Politics (Campbell 2012). But ensemble forecasting and other forms of combining (such as poll averaging) mask the problem of differential quality among the models or polls. Rothenberg and Traugott, in their commentaries, raise the particular issue of poll quality, Rothenberg with respect to partisan polls and Traugott with respect to interactive voice response (IVR) polls.
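As a toy illustration of ensemble forecasting, and of how combining can hide differential quality, the snippet below averages several hypothetical model forecasts first with equal weights and then with made-up quality weights. None of the figures correspond to real models.

```python
import numpy as np

# Hypothetical point forecasts of the incumbent two-party vote share
# from five different (imaginary) forecasting models.
model_forecasts = np.array([50.2, 52.4, 48.9, 51.6, 51.1])
ensemble = model_forecasts.mean()

# A simple average treats every model alike; weighting by assumed past
# accuracy makes the differential-quality problem explicit rather than hidden.
weights = np.array([0.30, 0.25, 0.10, 0.20, 0.15])   # illustrative quality weights
weighted_ensemble = np.average(model_forecasts, weights=weights)
print(f"Unweighted: {ensemble:.1f}%, quality-weighted: {weighted_ensemble:.1f}%")
```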
The idea of combining either models or polls raises the other evaluation issues—parsimony and replication (i.e., transparency). Take parsimony first. The meaning of a parsimonious model becomes opaque when the predictions of many models or polls are averaged, especially if the unit of analysis is the state. When the unit of analysis is the nation, as was once routine, the parsimony question had an easier answer. For one, these earlier models were based on such a small sample that parsimony was a practical necessity. One encouraging technique, which may allow more clarity and parsimony at both the state and national levels of analysis, is the uniform swing idea, as applied by Jackman.
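A minimal sketch of the uniform swing idea follows: apply one national shift to each state’s previous result and see which states change hands. It is illustrative only, with invented figures, and is not Jackman’s implementation.

```python
# Uniform swing: shift each state's previous two-party vote by one national
# swing and flag which states change hands. All figures are made up.
prev_state_vote = {"OH": 51.5, "FL": 51.0, "NC": 49.7, "VA": 52.6}  # Dem %, last election
prev_national, current_national = 53.7, 51.0   # national two-party shares (%)
swing = current_national - prev_national       # e.g., -2.7 points

for state, prev in prev_state_vote.items():
    projected = prev + swing
    winner = "Dem" if projected > 50 else "Rep"
    print(f"{state}: {projected:.1f}% -> {winner}")
```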
With the problem of replication, the issue of opacity becomes even greater. In particular, it is impossible for an interested investigator to replicate the results of a proprietary (i.e., nonpublic) poll or model. This lack of transparency undercuts a canon of scientific research. As Linzer remarks, statistical models are based on assumptions whose validity can only be evaluated if the model and its operations are made known.
Besides these difficulties, other issues relate to replication, in particular the availability of data. Accuracy may heavily rest on the availability of a sufficient number of reliable polls at the state or national level. But, as Blumenthal observes, the number of available state polls decreased from 2008 to 2012, and many forecasters fear that the number might further decrease as polling aggregation increases (for it is a much less costly forecasting strategy). Even if polls remain plentiful, the problem of their representativeness as voter samples persists, according to many of the articles in this symposium. In particular, Blumenthal asks why different polls may converge on the “right prediction.” Is aggregate voter opinion more stable, or are the polling houses adjusting their final forecasts toward central values?
ELECTION THEORY: LESSONS FROM PRESIDENTIAL ELECTION FORECASTING
Sometimes, election forecasting can appear to be a limited enterprise. For example, as Abramowitz notes, if interested citizens simply predicted that each state in 2012 would vote for its party choice in 2008, they would have been correct for 48 out of 50 states. In other words, no fancy equations, surveys, or models were needed to pick Obama as the presidential winner. But election forecasting is not always so easy for many reasons, as Campbell discusses. For one, we may be interested in point forecasts of the popular vote margin (in the state or the nation). For another, presidential elections recently have become very close, making them harder to forecast. Therefore, in the long run, theory becomes more important. Indeed, Sides argues that the forecasting exercise itself tests election theory.
It seems valuable, then, to ask what forecasting has taught us, as political scientists, about election theory. What have we learned about the behavior of American voters in presidential elections? Here we list five propositions:
I. Electoral cycles exist. As Norpoth shows, the incumbent party will generally only hold the White House for two, maybe three terms. Further, first-term incumbent parties are most advantaged, as Campbell and Norpoth observe. After that, the costs of ruling increase dramatically.
II. Campaigns influence the electoral outcome. This influence comes in obvious and less-obvious ways. In particular, it is conditioned by how candidates use economic information (strategically or not) to win votes, as Vavreck demonstrates.
III. The economy matters a great deal in the voter’s electoral calculus. Further, with respect to national economic performance, trends matter more than absolutes (Vavreck). Also, economic effects manifest themselves with a time lag (Erikson and Wlezien, Lewis-Beck and Stegmaier). Finally, economic perceptions count and can count even more than the economic facts (Lewis-Beck and Stegmaier, Vavreck).
IV. Voters are retrospective, and myopic. As Mayer points out, voters base their incumbent assessments largely on past performance, and they form that assessment roughly from events of the last year.
V. Voter opinion cannot be easily swayed. The forecasts tend to show considerable inertia in candidate preference (day after day, month after month), contrary to the expectation of many journalists, as Mayer observes. Moreover, according to Dickinson, the media tend to exaggerate the impact of candidate personality and campaign tactics.
While these propositions are not incontrovertible, they appear to rest on a solid empirical base developed from repeated ex ante forecasting of United States presidential elections by different research teams since 1980.
CONCLUSION
US presidential election forecasting has firmly established itself as a scientific enterprise capable of sophisticated modeling that provides accurate, long-range forecasts. The accuracy level, while high, is not perfect and never can be. Error will always remain, and some contests will be forecast incorrectly. However, this error may be reduced by careful attention to the more qualitative elements in the race, elements that go beyond the usual quantitative strictures. Finally, considerable accuracy can generally be achieved at some temporal distance from Election Day. A trade-off exists between accuracy and lead. At some point the gains in accuracy may not offset the costs in lead. Can sufficient accuracy be obtained weeks, even months, before voting day? This question—that of the optimal lead—stands as an important problem to be solved in this burgeoning field.
ACKNOWLEDGMENTS
We would like to express our gratitude to Bill Jacoby and the ICPSR Summer School at the University of Michigan for hosting the roundtable “Presidential Election Forecasting: Frontiers and Controversies,” where a group of the contributors shared and discussed ideas on the future of election forecasting. We also appreciate Drew Linzer’s thoughtful comments on this introduction.
SYMPOSIUM CONTRIBUTORS
Alan I. Abramowitz is the Alben W. Barkley Professor of Political Science at Emory University. He has authored or coauthored six books, dozens of contributions to edited volumes, and more than 50 articles in political science journals dealing with political parties, elections, and voting behavior in the United States. He is also one of the nation’s leading election forecasters. Abramowitz’s most recent book, The Polarized Public: Why American Government Is So Dysfunctional, examines the causes and consequences of growing partisan polarization among political leaders and ordinary Americans. He can be reached at polsaa@emory.edu.
Joyce E. Berg is a professor in the Department of Accounting, Henry B. Tippie College of Business, University of Iowa. She has been working with the Iowa Electronic Markets project since 1992 and is currently the director. Her recent research has appeared in Neuropsychologia, Quantitative Economics, Games and Economic Behavior, and Management Science. She can be reached at Joyce-Berg@uiowa.edu.
Mark Blumenthal is the senior polling editor of the Huffington Post and the founding editor of the site formerly known as Pollster.com, now HuffPost Pollster. He has been writing about polls and their methodology since launching the MysteryPollster blog in 2004. Blumenthal also worked in the political polling business for more than 20 years, conducting surveys on behalf of Democratic candidates and market research for major corporations. He can be reached at mark@huffingtonpost.com.
James E. Campbell is a UB Distinguished Professor of Political Science at the University at Buffalo, SUNY. He is the author of three books and more than 80 journal articles and book chapters. He previously served as Chair of the Political Forecasting Group, President of Pi Sigma Alpha, an APSA Congressional Fellow, and an NSF program director. He has edited forecasting symposia in each of the last five presidential elections. He can be reached at jcampbel@buffalo.edu.
Charles E. Cook, Jr. has been editor and publisher of the Cook Political Report since its founding in 1984. He has worked as a columnist for Roll Call (1986–1988) and for National Journal (1998–present). He has been a political analyst for NBC News since 2002. Prior to 2002, Charlie had been a consultant and/or analyst for CNN and CBS News. He can be reached at ccook@cookpolitical.com.
Matthew J. Dickinson is professor of political science at Middlebury College. His blog on presidential power can be found at http://blogs.middlebury.edu/presidentialpower. He is author of Bitter Harvest: FDR, Presidential Power, and the Growth of the Presidential Branch (1999), the coeditor of Guardian of the Presidency: The Legacy of Richard E. Neustadt, and has published numerous articles on the presidency, Congress, and the executive branch. His current book manuscript, The President and the White House Staff: People, Positions, and Processes, 1945–2012, examines the growth of presidential staff in the post–World War II era. He can be reached at dickinso@middlebury.edu.
Robert S. Erikson is professor of political science at Columbia University. His research on American politics and elections has been published in a wide range of scholarly journals, and he is coauthor of The Timeline of Presidential Elections (Chicago), The Macro Polity (Cambridge), Statehouse Democracy (Cambridge), and American Public Opinion (Pearson). He is a former editor of the American Journal of Political Science and Political Analysis. He can be reached at rse14@columbia.edu.
Simon Jackman is a professor in the department of political science, Stanford University. He has worked on state-level, poll-averaging models since the 2000 US presidential election. In 2012 his poll-averaging models were used by the Pollster section of HuffingtonPost.com. He is a former president of the Society for Political Methodology, the author of Bayesian Analysis for the Social Sciences, and one of the principal investigators of the American National Election Study. He can be contacted at jackman@stanford.edu.
Michael S. Lewis-Beck is F. Wendell Miller Distinguished Professor of Political Science at the University of Iowa. His interests are comparative elections, election forecasting, political economy, and quantitative methodology. Professor Lewis-Beck has authored or coauthored more than 225 articles and books, including Forecasting Elections, Economics and Elections, and Applied Regression. He has served as editor of the American Journal of Political Science and of the Sage QASS series (the green monographs) in quantitative methods. Currently he is associate editor of International Journal of Forecasting and data editor of French Politics. In spring 2013, Professor Lewis-Beck was visiting scholar, Centennial Center, American Political Science Association, Washington, DC. He can be reached at michael-lewis-beck@uiowa.edu.
Drew A. Linzer, in 2012, launched the forecasting website votamatic.org, which offered state-by-state poll tracking and predictions of the US presidential election. His research has appeared in the Journal of the American Statistical Association, Political Analysis, American Political Science Review, Journal of Politics, World Politics, and the Journal of Statistical Software. From 2008 to 2013, Linzer was assistant professor of political science at Emory University. He can be reached at drew@votamatic.org.
William G. Mayer is a professor of political science at Northeastern University. He is the author of the first published forecasting model for presidential nominations and many other articles on public opinion, voting and elections, media and politics, and the presidential nomination process. He can be reached at w.mayer@neu.edu.
Helmut Norpoth is a professor of political science at Stony Brook University. He is coauthor of The American Voter Revisited and has published widely on topics of electoral behavior. His current research focuses on public opinion in wartime. He can be reached at helmut.norpoth@stonybrook.edu.
Thomas A. Rietz is a professor in the department of finance, Henry B. Tippie College of Business, University of Iowa. He has been working with the Iowa Electronic Markets project since 1993 and is currently a member of the steering committee. His recent research has appeared in Quantitative Economics, Games and Economic Behavior, Management Science, and the Proceedings of the National Academy of Sciences. He can be reached at Thomas-Rietz@uiowa.edu.
Stuart Rothenberg is editor and publisher of the Rothenberg Political Report, a nonpartisan newsletter that reports on and handicaps US House, Senate, and gubernatorial campaigns and elections. A former political analyst for CNN, CBS News, and the News Hour on PBS, he is a columnist for Roll Call. He holds an undergraduate degree from Colby College and a PhD in political science from the University of Connecticut. He can be reached at stu.rothenberg@gmail.com.
John Sides is an associate professor in the department of political science at George Washington University. His work focuses on political behavior in American and comparative politics. He is the author, with Lynn Vavreck, of The Gamble: Choice and Chance in the 2012 Election. He can be reached at jsides@gwu.edu.
Mary Stegmaier is teaching assistant professor in the Truman School of Public Affairs at the University of Missouri. Her recent research on voting behavior, elections, and political representation in the United States and abroad has appeared in Electoral Studies, Political Science Research and Methods, Public Choice, The Journal of Elections, Public Opinion, and Parties, and Parliamentary Affairs. She can be reached at stegmaierm@missouri.edu.
Michael W. Traugott is professor of communication studies and political science and a senior research scientist in the Center for Political Studies at the Institute for Social Research at the University of Michigan. He is an editor of the International Journal of Public Opinion Research, the Poll Review Section of Public Opinion Quarterly, and the invited editor of the Public Opinion Quarterly special issue on the 2012 election in the United States. He has been the president of the American Association for Public Opinion Research (AAPOR), the World Association for Public Opinion Research (WAPOR), and the Midwest Association for Public Opinion Research (MAPOR). In 2010 he received the AAPOR Award for Distinguished Lifetime Achievement. He is currently consulting with the Gallup Organization on a review of their 2012 preelection polling methodology. He can be reached at mtrau@umich.edu.
Lynn Vavreck is associate professor of political science and communication studies at the University of California, Los Angeles. She has published three books on presidential campaigns, including The Message Matters: The Economy and Presidential Campaigns and The Gamble: Choice and Chance in the 2012 Presidential Election, portions of which she and John Sides wrote and released in real time during the 2012 election. She is the originator of the Cooperative Campaign Analysis Project and cofounder of the Model Politics blog. She can be reached at lvavreck@ucla.edu.
David Wasserman is House editor of the Cook Political Report, where he is responsible for handicapping and analyzing US House races. He also serves as an associate editor of National Journal magazine and a contributor to the Almanac of American Politics 2014, and has served as an analyst for the NBC News Election Night Decision Desk in 2012, 2010, and 2008. He can be reached at dwasserman@cookpolitical.com.
Christopher Wlezien is Hogg Professor of Government at the University of Texas at Austin. His research on American and comparative politics has appeared in numerous journals, and he is coauthor of Degrees of Democracy (Cambridge) and The Timeline of Presidential Elections (Chicago) and coeditor of a number of other books, including Who Gets Represented? (Russell Sage). He was founding coeditor of The Journal of Elections, Public Opinion and Parties and currently is associate editor of Public Opinion Quarterly. He can be reached at Wlezien@austin.utexas.edu.