
Vote Expectations Versus Vote Intentions: Rival Forecasting Strategies

Published online by Cambridge University Press:  19 August 2019

Andreas E. Murr
Affiliation:
University of Warwick, UK
Mary Stegmaier*
Affiliation:
University of Missouri, USA
Michael S. Lewis-Beck
Affiliation:
University of Iowa, USA
*Corresponding author. E-mail: stegmaierm@missouri.edu

Abstract

Are ordinary citizens better at predicting election results than conventional voter intention polls? The authors address this question by comparing eight forecasting models for British general elections: one based on voters' expectations of who will win and seven based on the parties voters themselves intend to vote for (including ‘uniform national swing’ and ‘cube rule’ models). The data come from ComRes and Gallup polls as well as the Essex Continuous Monitoring Surveys, 1950–2017, yielding 449 months with both expectation and intention polls. The large sample size permits comparisons of the models' prediction accuracy not just in the months prior to the election, but in the years leading up to it. Vote expectation models outperform vote intention models in predicting both the winning party and parties' seat shares.

Article
Copyright © Cambridge University Press 2019

The 2015 and 2017 British general elections were closely watched, heavily handicapped and poorly forecast. Regardless of their stripes, election prognosticators failed. Different approaches were followed – experts, markets, models, polls – but none was successful. The most common strategy was to rely on the polls, particularly the current vote intention data. However, as in 1992, these numbers were woefully inaccurate in 2015 and 2017. The essential difficulty, as it had been in 1992, was correctly predicting the level of support for the two main parties. As Fisher and Lewis-Beck (2016, 229) observed in the conclusion of their critical analysis: ‘polling data on 2015 vote intentions, which ought to come close to any election result, no matter how strange, led to serious forecasting error for the Conservatives and Labour’.

A major forecasting issue for subsequent contests, then, involves reducing these polling errors. Almost all the forecasting methods used rely on accurately estimating the vote share, because they follow a two-step process: they first estimate the party vote share, and then transform that estimate into a seat share; the party with the highest seat share is the winner. This procedure has two potential sources of error – one from the vote intention estimate, and the other from the transformation of that estimate into seats. In a recent compendium of ex ante forecasting papers written by a dozen academic teams investigating the 2015 election, only one forecast seats directly, thereby producing relatively more accurate seat share estimates for the Conservative and Labour parties (Fisher and Lewis-Beck 2016, Table 1; Murr 2016).

The method employed by Murr (2016) to forecast British seat shares utilized polling data, but based on vote expectations rather than vote intentions. This approach, which has been labelled ‘citizen forecasting’, dispenses with an intention item (for example, ‘If there were a general election tomorrow, which party would you vote for?’) and replaces it with an expectation item (‘Who do you think will win the next general election?’). The citizen forecasting approach was first used in Britain in the 2010 general election (Lewis-Beck and Stegmaier 2011). Later, Murr (2011) applied it effectively, ex post, at the constituency level for the 2010 contest and, in another article, for prior contests (Murr 2016). Here, we test the hypothesis that vote expectations generally offer a better election forecasting instrument than vote intentions.

Why should vote expectations predict better than vote intentions? Murr (2017) and Leiter et al. (2018) argue that a large part of the answer lies in the fact that vote expectations incorporate information from citizens' social networks. The expectation item ‘who will win?’ gives people a chance to report what they have heard from family, friends, and ‘in the pub or on the bus’ (King, Wybrow and Gallup 2001, 1f). Using a German survey, Leiter et al. (2018) find that characteristics of citizens' social networks (size, political composition and frequency of discussion) are among the most important variables when forecasting election results. It is logical that vote expectation questions should predict election results more accurately, since such items capture information about both the respondent and his or her social network, whereas vote intentions concern the respondent alone.

Below, we introduce the large monthly dataset we have assembled on vote expectations and vote intentions in Great Britain, covering the general election cycles from 1950 to 2017. We present models for forecasting from these data, including those based on the ‘uniform national swing’ (for example, Fisher et al. 2011; Ford et al. 2016; Hanretty, Lauderdale and Vivyan 2016) and the ‘cube rule’ and its modifications (for example, Whiteley 2005; Whiteley et al. 2011; Whiteley et al. 2016). Next, these estimation results are evaluated. Expectation models consistently predict which party will win and the parties' seat shares; they clearly outperform intention models. We then pursue the implications, especially as they apply to prediction accuracy as the distance from the election increases. Measured annually, the expectations model's optimal lead time falls anywhere within the two years before the election; measured quarterly, it is the quarter before the election.

Previous Research

Lewis-Beck and Skalaban (1989) and Lewis-Beck and Tien (1999) tested the simple hypothesis that voters in American presidential elections do better than chance when asked to forecast who would be elected president. Utilizing responses to the American National Election Studies (ANES) item ‘Who do you think will be elected President in November?’, they found that across eleven elections, voters accurately forecast the presidential winner 71 per cent of the time, an estimate statistically significant at p < 0.001 (Lewis-Beck and Tien 1999, 176).

These efforts were extended to the British case, first by Lewis-Beck and Stegmaier (2011), who took on what turned out to be the task of forecasting the ‘hung parliament’ of the 2010 general election. Murr (2011) then continued exploring citizen forecasting in Britain at the constituency level. He showed that vote expectations for the winning party, as recorded in British Election Study constituency subsamples, were accurate – even though the subsamples were disaggregated, small and seemingly unrepresentative. Murr (2015) demonstrated the same citizen forecasting ability within state subsamples of American voters in the ANES.

Lewis-Beck and Tien (1999) were the first to compare the accuracy of citizen forecasting with that of vote intentions, using data from US presidential elections. Despite the much longer lead time of citizen forecasts, they found that the method achieved a similar level of accuracy to vote intentions. Later, Graefe (2014) expanded the comparison of forecasting approaches for US presidential elections to citizen forecasting, vote intentions, prediction markets, expert surveys and quantitative models. He found that citizen forecasts were as good as quantitative models at forecasting the winner, and better than vote intentions, prediction markets and expert surveys. He also found that citizen forecasting had a lower mean absolute error when forecasting the winner's vote share. In other words, vote expectations were among the best – if not the best – approaches to forecasting both the winner and the two-party vote share in US presidential elections.

Here we examine whether vote expectations are also better than vote intentions at forecasting the election winners and seat shares of multiple parties in British parliamentary elections. With its large number of parties and its changing seats–votes ratio over time, the British case presents a more difficult test of the accuracy of vote expectations than the United States does.

In addition, we compare expectations and intentions not just in the 100 days before the election, but also in the four years prior. The greater the lead time of an accurate forecast, the more impressive it is (Lewis-Beck Reference Lewis-Beck2005). Forecasts made a day before the election risk being trivial; those made years in advance are daring. We hope to uncover accurate forecasts made systematically well before the election, certainly months before it.

Methodology

Below we describe eight standard forecasting models of a party's seat share: seven based on vote intentions and one on vote expectations. The intention models fall into two groups, depending on how they translate intentions into seat share forecasts: either by assuming a uniform national swing (UNS) or by relying on the ‘cube law’ and its modifications.

The UNS models assume that the change in a party's vote share from one election to the next (‘swing’) is the same in every constituency (‘uniform’). We identified five forecasting models based on this assumption:

  1. NAI: Intentions in a naïve UNS model (BBC, The Guardian). This model takes a current vote intention poll as the national vote share forecast, derives the implied UNS to calculate a constituency vote share forecast, then forecasts that the party with the largest constituency vote share forecast will win the seat, and, finally, aggregates the forecasts for each party across constituencies.

  2. NON: Intentions in a non-naïve UNS model (Fisher et al. 2011; Ford et al. 2016; Wlezien et al. 2013). This model proceeds as NAI except that it uses a regression model to forecast a party's national vote share, and then translates the implied constituency vote share into a probability of winning the constituency (Curtice and Firth 2008). The regression model includes the party's current vote intention poll.

  3. GOV: Intentions and government status in a non-naïve UNS model (Fisher 2015, 2016). This model proceeds as NON except that the regression model also includes, alongside the party's current vote intention poll, a dummy variable indicating whether the party is in government.

  4. LAG: Intentions and lagged vote share in a non-naïve UNS model (Fisher 2015, 2016). This model proceeds as GOV except that the regression model replaces government status with lagged vote share.

  5. CHA: Intentions and change in vote share in a non-naïve UNS model (Fisher 2015, 2016; Hanretty, Lauderdale and Vivyan 2016). This model proceeds as LAG except that it regresses the change in a party's national vote share on the change in its vote intention poll relative to the previous national vote share.

    The UNS models translate votes into seats using the previous constituency election results. Hence, these models can only be used when the constituency boundaries remain constant. The next forecasting models can always be used, as they forecast seats from votes directly. The (modified) ‘cube law’ forecasting model assumes a stable relationship between the vote and seat share over time:

  6. LOG: Intentions, lagged seat share and party split in a log-linear model (Whiteley 2005; Whiteley et al. 2011; Whiteley et al. 2016). This model regresses the logged seat shares on logged previous seat shares, the logged voting intentions for two parties, and a dummy variable indicating the split in the party system in 1983 and 1987.

    We propose one forecasting model similar to the above vote intention models, but using vote expectations instead:

  7. EXP: Expectations and lagged seat share in a linear model (Lewis-Beck and Stegmaier 2011). This model regresses the seat share on the previous seat share and the vote expectations for two parties.Footnote 1

    Our expectation model differs from the intention models used by others in more ways than just replacing intentions with expectations. Hence, to ensure that any gains in accuracy between expectations and intentions result from the different survey question, we also fit the forecasting model with intentions instead of expectations:

  8. LIN: Intentions and lagged seat share in a linear model. This model is the same as EXP except that it replaces vote expectations with vote intentions.
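The seat-translation step shared by the five UNS models above can be made concrete with a short sketch. This is an illustration only, not the authors' code: the party labels and constituency figures are hypothetical, and vote shares are expressed as proportions.

```python
# Illustrative sketch of the naive UNS (NAI) seat translation.
# All inputs are hypothetical; shares are proportions that need not sum to 1
# because minor parties are omitted for brevity.

def uns_seat_forecast(poll, previous_national, previous_constituency):
    """Forecast seat counts under a uniform national swing.

    poll: current national vote-intention shares, e.g. {'Con': 0.40, ...}
    previous_national: national vote shares at the last election
    previous_constituency: list of dicts of constituency-level vote shares
    """
    # Swing = change in each party's national vote share since the last
    # election, assumed identical ('uniform') in every constituency.
    swing = {p: poll[p] - previous_national[p] for p in poll}
    seats = {p: 0 for p in poll}
    for result in previous_constituency:
        # Apply the national swing to the constituency's previous result.
        forecast = {p: result[p] + swing[p] for p in poll}
        # The party with the largest forecast vote share wins the seat.
        winner = max(forecast, key=forecast.get)
        seats[winner] += 1
    return seats
```

Because the translation leans on the previous constituency results, a real implementation would also have to cope with boundary changes, which is why the UNS models above require constant constituency boundaries.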

Detailed variable definitions and data sources are provided in Appendix 1. The eight forecasting models are presented in detail in Appendix 2. The initial sample period for estimation was 1950 to 1983. The in-sample estimation procedure was ordinary least squares (estimates presented in Appendix 3). We used the estimated models to generate forecasts for the out-of-sample 1987 election by plugging in values of the predictors. Then the 1987 election was added to the estimation sample, the coefficients were re-estimated, and a new set of forecasts was generated for 1992. We proceeded in this manner – adding an election, re-estimating coefficients and generating out-of-sample forecasts – for each subsequent election, up to and including the 2017 election.
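The expanding-window procedure just described can be sketched as follows. This is a minimal illustration, not the authors' estimation code: the predictor matrix stands in for whatever regressors a given model uses (for example, lagged seat share and vote expectations in EXP), and any data fed to it here would be placeholders.

```python
import numpy as np

# Expanding-window out-of-sample forecasting: fit OLS on the elections
# observed so far, forecast the next one, add it to the sample, refit,
# and repeat. X has one row of predictors per election; y holds the
# outcome (e.g. a party's seat share).

def expanding_window_forecasts(X, y, first_train_size):
    forecasts = []
    for t in range(first_train_size, len(y)):
        # Add an intercept column and fit OLS on all elections before t.
        Xt = np.column_stack([np.ones(t), X[:t]])
        beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
        # Out-of-sample forecast for election t from the fitted coefficients.
        forecasts.append(beta[0] + X[t] @ beta[1:])
    return np.array(forecasts)
```

Because the coefficients at each step use only elections already observed, every forecast in the returned array is genuinely out of sample, mirroring the 1987-onwards procedure above.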

We used two measures to judge the models' forecasts. First, we compared the forecast accuracy via correct prediction of winner (CPW): we calculated what proportion of forecasts correctly identified the party with the most seats. Secondly, we compared the forecast accuracy via the mean absolute error (MAE) in seat shares.
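The two accuracy measures can be stated compactly in code; the party labels and seat shares in the example below are hypothetical.

```python
# CPW: the proportion of forecasts that correctly pick the party winning
# the most seats. MAE: the mean absolute error in seat shares across
# forecasts and parties. Each element of `forecasts`/`actuals` is a dict
# mapping party to seat share for one forecast occasion.

def cpw(forecasts, actuals):
    correct = sum(
        max(f, key=f.get) == max(a, key=a.get)
        for f, a in zip(forecasts, actuals)
    )
    return correct / len(forecasts)

def mae(forecasts, actuals):
    errors = [
        abs(f[p] - a[p])
        for f, a in zip(forecasts, actuals)
        for p in a
    ]
    return sum(errors) / len(errors)
```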

Results

Table 1 presents the CPW and the MAE for the three forecasting models for all elections, and for the eight forecasting models for the elections with constant constituency boundaries. The first column contains the overall performance of each model, computed by averaging across all months. The remaining eight columns show the measures for each election, computed by averaging across all months in the relevant election.Footnote 2

Table 1. Out-of-sample forecasting accuracy by election year

Looking first at the CPW, we see that for all elections the proportion ranges from 50 to 80 per cent overall and from 0 to 100 per cent across election years. Overall, the EXP forecast is best, followed by the LIN forecast. The LOG forecast is last overall. The EXP forecast performs worse than the LIN forecast in one out of eight elections, the same in four, and better in three.

For elections with constant constituency boundaries, the proportion of CPW ranges from 41 to 100 per cent overall and from 0 to 100 per cent across election years. The EXP forecast is best overall, followed by the LAG forecast. The worst are the LOG and NAI models. The EXP performs the same as the LAG in five of the six elections, and better in one.

For all elections, the MAE ranges from 3.9 to 5.3 per cent overall and from 0.8 to 10.8 per cent across election years. Overall, the EXP forecast is best, followed by the LIN forecast. The LOG is last overall. The EXP is less accurate than the LIN in two out of eight elections and more accurate in the remaining six.

For elections with constant constituency boundaries, the MAE ranges from 2.6 to 7.9 per cent overall and from 0.8 to 10.8 across election years. The EXP forecast is best overall, followed by the GOV forecast. The worst forecasts are by the LAG and NAI models. The EXP is more accurate than the GOV in all five elections.Footnote 3

Lead Time: Is there an Optimal Forecasting Distance?

There is always tension between the distance from the election, on the one hand, and accuracy, on the other hand. Conventional wisdom suggests that the closer the forecast to the election, the greater the accuracy. Table 2 shows the two accuracy measures for the forecasting models for the four years before the election, computed by averaging the relevant months across all elections. The year before the election is broken down into quarters.

Table 2. Out-of-sample forecasting accuracy in the quarters and years before the election

Note: abbreviations as in Table 1.

Table 2 shows that, regardless of equation type or outcome measure, the forecast generally becomes more accurate as the years until the election decrease. Still, if the forecast comes immediately before the election, it may be regarded as trivial. Lead time affects the quality of the forecasting. As Lewis-Beck (2005, 151) observed, ‘giving the forecast a full horizon, say six months to a year, allows for an impressive performance’. Accepting that charge, we examine the results one year before the election, quarter by quarter, and find that the accuracy trend ceases to be monotonically downward. For the decisive dependent variable – CPW – two models receive perfect scores: EXP in Q1 and LIN in Q3.

With both EXP and LIN equally accurate, is one preferable? Nadeau, Lewis-Beck and Bélanger (2009) note that most UK forecasters have used a two- or three-month lead. More recently, the median lead time for the twelve forecasting teams covering the 2015 general election was 22 days (Fisher and Lewis-Beck 2016). These field results, falling in the last quarter before the election, lend support to the expectations measure (EXP) from Q1 over the intentions measure (LIN) from Q3. Further, the argument for LIN has a weak spot, given that its forecasting power oddly deteriorates as the election approaches: by Q1 it registers only 83 per cent. This perverse lead–accuracy trade-off may make it difficult for some forecasters to decide on their modelling strategy. If they wish to maximize their chances of being correct at last call, they should stick with the expectations model (and Q1). However, if they want more lead time, they might turn to the intentions model (and Q3).

Conclusion

Election forecasting is a lively enterprise in Western democracies, and Great Britain is no exception (Stegmaier and Norpoth 2017). Different approaches – markets, models and polls – were used in the run-up to the 2015 general election. The failure to predict the big Conservative victory dealt a severe blow to the polling industry, which relied on vote intention results. Should this approach be forsaken, or is there a polling alternative? We argue that the relatively neglected vote expectation polls are the answer, since they capture additional information about the citizen's social network (Leiter et al. 2018; Murr 2017).

We analyze over 400 monthly expectation–intention matches from UK elections from 1950 to 2017, and find that citizen forecasting outperforms traditional vote intention forecasts. When applied to the 48 months leading up to the 2015 poll, vote intentions – in uniform national swing and cube rule models – called the election for the Conservatives only about half of the time, while vote expectation models always did so. Moreover, vote expectation models produced more accurate seat share forecasts over that time period.

Supplementary material

Data replication sets are available in Harvard Dataverse at https://doi.org/10.7910/DVN/9JLCSY and online appendices at https://doi.org/10.1017/S0007123419000061.

Acknowledgements

We would like to thank conference participants and discussants, especially Steve Fisher, for useful comments. We would also like to thank Ericka Rascón Ramírez for helpful comments. We are grateful to Harold Clarke, University of Texas at Dallas, for sharing the Essex Continuous Monitoring Survey data of April 2004 to October 2015, and to Steve Fisher, University of Oxford, for sharing constituency election data for 1997. Thank you to the anonymous reviewers for their valuable suggestions. A previous version of this article was presented at the launch event of the Political Methodology Group of the Political Studies Association at the British Academy, 25 November 2015, at the Midwest Political Science Association Conference, April 2016, and at the Catholic University of Uruguay, 11 July 2016.

Footnotes

1 Lewis-Beck and Stegmaier (2011) regress the winning party's seat share on the winning party's vote expectation; we generalize their model to all parties.

2 Tables 8 and 9 in Appendix 4 present the CPW and whether or not the winner has an overall majority by election year and by time until election. The results are similar.

3 Figures 1–3 in Appendix 5 supplement these results by graphing the three accuracy measures for every month of every election.

References

Curtice, J and Firth, D (2008) Exit polling in a cold climate: the BBC–ITV experience in Britain in 2005. Journal of the Royal Statistical Society: Series A (Statistics in Society) 171(3), 509–539.
Fisher, SD (2015) Predictable and unpredictable changes in party support: a method for long-range daily election forecasting from opinion polls. Journal of Elections, Public Opinion and Parties 25(2), 137–158.
Fisher, SD (2016) Piecing it all together and forecasting who governs: the 2015 British general election. Electoral Studies 41(1), 234–238.
Fisher, SD et al. (2011) From polls to votes to seats: forecasting the 2010 British general election. Electoral Studies 30(2), 250–257.
Fisher, SD and Lewis-Beck, MS (2016) Forecasting the 2015 British general election: the 1992 debacle all over again? Electoral Studies 41(1), 225–229.
Ford, R et al. (2016) From polls to votes to seats: forecasting the 2015 British general election. Electoral Studies 41(1), 244–249.
Graefe, A (2014) Accuracy of vote expectation surveys in forecasting elections. Public Opinion Quarterly 78(S1), 204–232.
Hanretty, C, Lauderdale, B and Vivyan, N (2016) Combining national and constituency polling for forecasting. Electoral Studies 41(1), 239–243.
King, A, Wybrow, RJ and Gallup, A (2001) British Political Opinion, 1937–2000: The Gallup Polls. London: Politico's Publishing.
Leiter, D et al. (2018) Social networks and citizen election forecasting: the more friends the better. International Journal of Forecasting 34(2), 235–248.
Lewis-Beck, MS (2005) Election forecasting: principles and practice. British Journal of Politics and International Relations 7(2), 145–164.
Lewis-Beck, MS and Skalaban, A (1989) Citizen forecasting: can voters see into the future? British Journal of Political Science 19(1), 146–153.
Lewis-Beck, MS and Stegmaier, M (2011) Citizen forecasting: can UK voters see the future? Electoral Studies 30(2), 264–268.
Lewis-Beck, MS and Tien, C (1999) Voters as forecasters: a micromodel of election prediction. International Journal of Forecasting 15(2), 175–184.
Murr, AE (2011) ‘Wisdom of crowds’? A decentralised election forecasting model that uses citizens' local expectations. Electoral Studies 30(4), 771–783.
Murr, AE (2015) The wisdom of crowds: applying Condorcet's jury theorem to forecasting US presidential elections. International Journal of Forecasting 31(3), 916–929.
Murr, AE (2016) The wisdom of crowds: what do citizens forecast for the 2015 British general election? Electoral Studies 41(1), 283–288.
Murr, AE (2017) Wisdom of crowds. In Arzheimer, K, Evans, J and Lewis-Beck, M (eds), The Sage Handbook of Electoral Behaviour. London: Sage, pp. 835–860.
Murr, AE, Stegmaier, M and Lewis-Beck, MS (2019) Replication data for: Vote expectations versus vote intentions: rival forecasting strategies. Harvard Dataverse, V1. Available from https://doi.org/10.7910/DVN/9JLCSY.
Nadeau, R, Lewis-Beck, MS and Bélanger, E (2009) Election forecasting in the United Kingdom: a two-step model. Journal of Elections, Public Opinion and Parties 19(3), 333–358.
Stegmaier, M and Norpoth, H (2017) Election forecasting. In Valelly, R (ed.), Oxford Bibliographies in Political Science. Oxford: Oxford University Press. Available from http://www.oxfordbibliographies.com/view/document/obo-9780199756223/obo-9780199756223-0023.xml.
Whiteley, PF (2005) Forecasting seats from votes in British general elections. British Journal of Politics and International Relations 7(2), 165–173.
Whiteley, P et al. (2016) Forecasting the 2015 British general election: the seats–votes model. Electoral Studies 41(1), 269–273.
Whiteley, PF et al. (2011) Aggregate level forecasting of the 2010 general election in Britain: the seats–votes model. Electoral Studies 30(2), 278–283.
Wlezien, C et al. (2013) Polls and the vote in Britain. Political Studies 61(S1), 66–91.