
Response Bias in Survey Measures of Voter Behavior: Implications for Measurement and Inference

Published online by Cambridge University Press:  25 March 2019

Claire Adida*, University of California, San Diego, CA, USA
Jessica Gottlieb, Texas A&M University, College Station, TX, USA
Eric Kramon, George Washington University, Washington, DC, USA
Gwyneth McClendon, New York University, New York, NY, USA
*Corresponding author. Email: cadida@ucsd.edu

Abstract

This short report exploits a unique opportunity to investigate the implications of response bias in survey questions about voter turnout and vote choice in new democracies. We analyze data from a field experiment in Benin, where we gathered official election results and panel survey data representative at the village level, allowing us to directly compare average outcomes across both measurement instruments in a large number of units. We show that survey respondents consistently overreport turning out to vote and voting for the incumbent, and that this bias is large and is worse in contexts where question sensitivity is higher. The bias also has important implications for the inferences we draw about an experimental treatment, because it is correlated with treatment assignment. Although the results using the survey data suggest that the treatment had the hypothesized impact, they are also consistent with social desirability bias. By contrast, the administrative data lead to the conclusion that the treatment had no effect.

Type
Short Report
Copyright
© The Experimental Research Section of the American Political Science Association 2019 

We exploit a unique opportunity to identify response bias in survey questions about voter turnout and vote choice, two outcomes of fundamental interest to political scientists that are often measured using surveys (see, e.g., Barton, Castillo, and Petrie 2014; Broockman, Kalla, and Sekhon 2017; De La O and Rodden 2008; Greene 2011; Mvukiyehe and Samii 2017; Nathan 2016).Footnote 1 An extensive literature studies response bias in survey measures of voter turnout in advanced democracies (see, e.g., Belli, Moore, and VanHoewyk 2006; Belli et al. 1999; Burden 2000; Clausen 1968; Greenwald et al. 1987; Holbrook and Krosnick 2010a, 2010b; Karp and Brockington 2005; Silver, Anderson, and Abramson 1986; Zeglovits and Kritzinger 2013). However, we lack information about the extent of this problem in new democracies, or about survey measures of vote choice.

In a field experiment conducted in Benin (Adida et al. 2017), we measured voter turnout and vote choice through both survey data and official village-level administrative data.Footnote 2 The main results of our field experiment are based on administrative data precisely because of the bias introduced by the survey data, which we demonstrate below.Footnote 3 Because our surveys are representative at the village level, we can compare average outcomes across both measurement instruments in a large number of units (N = 237). This allows us to demonstrate the size and significance of the discrepancy between estimates of vote choice and voter turnout from these different instruments: survey respondents consistently overreport turning out to vote and voting for the incumbent. This bias is large, with implications for the interpretation of experimental results.

We implemented a randomized field experiment around the 2015 legislative elections in Benin (Appendix D in Supplemental Material), assigning villages at random to receive information about the performance of their incumbent legislator (Appendix B in Supplemental Material). Our administrative data consist of official polling-station-level results from these elections. Because our experiment intervened at the village level rather than the polling-station level, we aggregate these data (Appendix G in Supplemental Material) to produce village-level measures of incumbent vote share and voter turnout.
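To make the aggregation step concrete, here is a minimal sketch with hypothetical column names (`village_id`, `registered`, `votes_cast`, `incumbent_votes`); the actual variables and procedure are documented in Appendix G. One natural approach, shown here, is to sum raw counts before dividing, so that larger polling stations receive proportionally more weight.

```python
import pandas as pd

# Hypothetical polling-station-level results; column names are illustrative,
# not those of the replication data.
stations = pd.DataFrame({
    "village_id":      [1, 1, 2, 2, 2],
    "registered":      [410, 380, 500, 450, 395],
    "votes_cast":      [290, 260, 340, 310, 270],
    "incumbent_votes": [120, 100, 200, 180, 150],
})

# Sum raw counts within each village, then compute village-level rates.
villages = stations.groupby("village_id")[
    ["registered", "votes_cast", "incumbent_votes"]
].sum()
villages["turnout"] = villages["votes_cast"] / villages["registered"]
villages["incumbent_share"] = villages["incumbent_votes"] / villages["votes_cast"]
print(villages[["turnout", "incumbent_share"]])
```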

We collected panel survey data on turnout and vote choice. Several weeks before the election, we administered a baseline survey (Appendix E in Supplemental Material) in a random sample of households in each study village (N = 3,419). In treatment villages, we provided performance information to 40–60 people from separate households, or 12–15% of households. The endline survey was conducted by phone in the days following the election and prior to the official announcement of results.

We deliberately constructed our vote choice question in a yes/no format to protect respondent privacy and minimize social pressure (Table 1). Because the survey was conducted over the phone, bystanders could hear only the respondent's one-word answer, so no one but the respondent could infer its meaning.Footnote 4

Table 1 Vote Choice Survey Question

To measure voter turnout, we asked respondents whether they had voted in the legislative elections several days earlier, prefacing the question with the observation that some people were not able to vote, a face-saving element shown to reduce turnout overreporting (Zeglovits and Kritzinger 2013).Footnote 5 We validated this measure with two follow-up questions whose answers actual voters would be likely to know: which hand and which finger were stamped after voting to prevent fraud.
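A sketch of how such a validation check might be coded, with hypothetical variable names and assumed correct answers (the actual hand and finger used in Benin's 2015 election are not specified here):

```python
import pandas as pd

# Assumed correct answers, for illustration only; the true values depend on
# Benin's 2015 inking procedure.
CORRECT_HAND, CORRECT_FINGER = "left", "little"

endline = pd.DataFrame({
    "reported_vote": [True, True, True, False],
    "hand":          ["left", "right", "left", None],
    "finger":        ["little", "little", "thumb", None],
})

# Count a turnout report as validated only when both follow-ups are correct.
endline["validated"] = (
    endline["reported_vote"]
    & endline["hand"].eq(CORRECT_HAND)
    & endline["finger"].eq(CORRECT_FINGER)
)
print(endline)
```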

Figures 1 and 2 plot the difference in means between the two measures of incumbent vote share and turnout; the gaps are substantively large and statistically significant. For vote choice, average survey responses are 15 percentage points (half a standard deviation) higher than the official data, an upward bias of 45%. For turnout, average survey responses are 20 percentage points (almost two standard deviations) higher than the official data, an upward bias of 29%.

Figure 1 Official versus Survey Data: Incumbent Vote Share.

Figure 2 Official versus Survey Data: Voter Turnout.
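Because each village contributes both a survey mean and an official rate, the gap behind Figures 1 and 2 can be tested with a paired comparison across the 237 villages. A sketch with simulated data standing in for the real measures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated village-level data standing in for the N = 237 study villages.
official = rng.uniform(0.5, 0.9, size=237)
survey = np.clip(official + 0.20 + rng.normal(0, 0.05, 237), 0, 1)

# Paired t-test: does the survey measure systematically exceed the official one?
gap = survey - official
t, p = stats.ttest_rel(survey, official)
print(f"mean gap = {gap.mean():.3f}, t = {t:.1f}, p = {p:.2g}")
```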

Are these differences driven by sample composition rather than response bias? If our representative survey captures individuals who did not register to vote, we would expect a downward bias in our turnout measure, making our estimate conservative. As for reported vote choice, we have no strong expectations about whether respondents who did not register would be systematically more or less likely to overreport voting for the incumbent. Note also that the analyses above imply greater overreporting on the vote choice question than on the turnout question: even if every survey participant who misreported turning out to vote also reported voting for the incumbent, this could not account for all the response bias in the vote choice question.

In Figures 3 and 4, we show that response bias in the vote choice question is more severe in more competitive and in rural localities – consistent with the expectation that bias increases with the sensitivity of the question. In competitive areas, the incumbent may be especially motivated to use repressive tactics, and rural voters may be especially worried about social sanctions if their vote choice is discovered.

Figure 3 Official versus Survey Data: Incumbent Vote Share, by Competitiveness of 2011 Race.

Figure 4 Official versus Survey Data: Incumbent Vote Share, by Urban.
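The subgroup comparisons in Figures 3 and 4 amount to splitting the village-level bias (survey minus official incumbent share) by competitiveness and by urban status. A sketch with simulated data and illustrative variable names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated village-level data; `competitive` and `urban` are illustrative
# stand-ins for the 2011-race margin and locality-type codings.
df = pd.DataFrame({
    "bias": rng.normal(0.15, 0.10, 237),  # survey minus official incumbent share
    "competitive": rng.integers(0, 2, 237).astype(bool),
    "urban": rng.integers(0, 2, 237).astype(bool),
})

# Average response bias within each subgroup, with standard errors.
print(df.groupby("competitive")["bias"].agg(["mean", "sem"]))
print(df.groupby("urban")["bias"].agg(["mean", "sem"]))
```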

We conclude by demonstrating the implications of this measurement bias for analyzing treatment effects in our field experiment. We prespecified that treatment would increase (decrease) the vote share of incumbents in areas where the incumbent had performed well (poorly) in office (Appendix H in Supplemental Material).

To test these hypotheses, we estimate treatment effects on incumbent vote share in good (Figure 5) and bad (Figure 6) performance areas (Appendix I in Supplemental Material). The results differ strikingly by data source. The survey data imply that the treatment had the expected effect: providing positive (negative) information about the incumbent increased (decreased) vote share. By contrast, the analysis of official results yields precisely estimated null effects. In other words, the treatment itself induced measurement bias, a serious form of measurement error.

Notes: Preregistered regression estimates with block fixed effects and 95% confidence intervals.

Figure 5 Treatment Effects on Incumbent Vote Share: Good News.

Notes: Preregistered regression estimates with block fixed effects and 95% confidence intervals.

Figure 6 Treatment Effects on Incumbent Vote Share: Bad News.
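A sketch of the block-fixed-effects specification behind Figures 5 and 6, run on simulated data in which, mirroring our finding, the survey measure carries a treatment-correlated bias while the official measure does not. Variable names are illustrative; the preregistered models are in Appendix I.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 237

# Simulated villages: randomization blocks, treatment assignment, and the
# two outcome measures.
df = pd.DataFrame({
    "block": rng.integers(0, 30, n),
    "treat": rng.integers(0, 2, n),
})
df["official_share"] = 0.33 + rng.normal(0, 0.10, n)            # true effect: zero
df["survey_share"] = df["official_share"] + 0.15 * df["treat"]  # treatment-correlated bias

# Regress each outcome on treatment with block fixed effects.
for outcome in ["survey_share", "official_share"]:
    fit = smf.ols(f"{outcome} ~ treat + C(block)", data=df).fit()
    print(f"{outcome}: effect = {fit.params['treat']:.3f} (SE {fit.bse['treat']:.3f})")
```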

Is this disparity attributable to differences in treatment intensity? Treatment was typically administered to a larger share of survey respondents than of registered voters in a given village. Yet, in a separate analysis, we find effects of a variant of this treatment even when it was administered to the same proportion of a village's population (Adida et al. 2017). Additionally, if treatment intensity were a moderator, we would expect – but do not find – heterogeneous treatment effects by village size.Footnote 6
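One way to probe the intensity story (see footnote 6) is to interact treatment with village size. A self-contained sketch on simulated data, with `log_pop` as an illustrative proxy:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 237

# Simulated data: if treatment intensity mattered, the treatment effect
# should vary with village population.
df = pd.DataFrame({
    "block": rng.integers(0, 30, n),
    "treat": rng.integers(0, 2, n),
    "log_pop": np.log(rng.integers(200, 2000, n)),
})
df["survey_share"] = 0.45 + 0.15 * df["treat"] + rng.normal(0, 0.10, n)

# A null interaction coefficient is evidence against the intensity story.
fit = smf.ols("survey_share ~ treat * log_pop + C(block)", data=df).fit()
print(f"interaction = {fit.params['treat:log_pop']:.3f}, "
      f"p = {fit.pvalues['treat:log_pop']:.2g}")
```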

The conclusions we draw from our study thus depend on the data we use. The results using the survey data are consistent with our preregistered hypotheses. However, they are also consistent with social desirability bias, whereby participants overreport support for strong incumbents and underreport support for weak ones. By contrast, the administrative data lead to the conclusion that the treatment had no average effect. Response bias in survey measures of vote choice and turnout in new democracies can be large and can lead to Type I errors.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2019.9.

Author ORCIDs

Claire Adida 0000-0002-3493-5539

Footnotes

Support for this research was provided by the Evidence in Governance and Politics Metaketa I. The data, code, and any additional materials required to replicate all analyses in this article (Adida et al. 2018) are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/CXWIPM. This study is part of the larger Metaketa initiative to accumulate knowledge about the relationship between information and accountability across country contexts. We thank Amanda Pinkston for sharing 2011 legislative election data and Ana Quiroz and Charles Hintz for excellent research assistance. This research was conducted in collaboration with the Centre de Promotion de la Démocratie et du Développement (CEPRODE), and we thank Adam Chabi Bouko for leading the implementation effort. Our project received ethics approval from the authors’ home institutions. We also obtained permission to conduct the study from the President of the National Assembly of Benin. In each study village, permission to conduct research was obtained from the chief and consent was obtained from each surveyed participant in the study. The authors declare no conflict of interest.

1 See Appendix A in Supplemental Material for a more complete review of these studies.

2 The registered pre-analysis plan for the Metaketa project can be found at: http://egap.org/registration/736. The registered pre-analysis plan for this particular study can be found at: http://egap.org/registration/735

3 This bias, though uncovered in the midst of analyzing our field experiment, informed our decision to rely on administrative rather than survey data when the two conflicted; demonstrating the bias lay outside the scope of that paper and is the focus of this one.

4 Studies often rely on remote (phone or text) questioning to collect endline measures. Focusing the question wording on the incumbent alone could have inflated incumbent support. However, if this were the only source of bias, we would see overreporting of support in both the good and bad information conditions. Instead, we see results more consistent with social desirability bias.

5 We did not, however, include face-saving response items, e.g., saying no while giving a valid excuse, which Morin-Chassé (2018) has found to work better than face-saving preambles alone.

6 We sought to recruit the same number of individuals into treatment in each treatment village, regardless of village population size, so village size acts as a reasonable proxy for treatment intensity across villages.

References

Adida, Claire, Gottlieb, Jessica, Kramon, Eric, and McClendon, Gwyneth. 2017. Breaking the Clientelistic Voting Equilibrium: The Joint Importance of Salience and Coordination. AidData Working Paper 48.
Adida, Claire L., Gottlieb, Jessica, Kramon, Eric, and McClendon, Gwyneth. 2018. Replication Data for: Response Bias in Survey Measures of Voter Behavior: Implications for Measurement and Inference. Harvard Dataverse, V3. doi: 10.7910/DVN/CXWIPM.
Barton, Jared, Castillo, Marco, and Petrie, Ragan. 2014. What Persuades Voters? A Field Experiment on Political Campaigning. The Economic Journal 124(574): F293–F326.
Belli, Robert F., Traugott, Michael W., Young, Margaret, and McGonagle, Katherine A. 1999. Reducing Vote Overreporting in Surveys: Social Desirability, Memory Failure, and Source Monitoring. The Public Opinion Quarterly 63(1): 90–108.
Belli, Robert F., Moore, Sean E., and VanHoewyk, John. 2006. An Experimental Comparison of Question Forms Used to Reduce Vote Overreporting. Electoral Studies 25(4): 751–9.
Broockman, David E., Kalla, Joshua L., and Sekhon, Jasjeet S. 2017. The Design of Field Experiments with Survey Outcomes: A Framework for Selecting More Efficient, Robust, and Ethical Designs. Political Analysis 25(4): 435–64.
Burden, Barry C. 2000. Voter Turnout and the National Election Studies. Political Analysis 8(4): 389–98.
Clausen, Aage R. 1968. Response Validity: Vote Report. The Public Opinion Quarterly 32(4): 588–606.
De La O, Ana L. and Rodden, Jonathan A. 2008. Does Religion Distract the Poor? Income and Issue Voting Around the World. Comparative Political Studies 41(4–5): 437–76.
Greene, Kenneth F. 2011. Campaign Persuasion and Nascent Partisanship in Mexico's New Democracy. American Journal of Political Science 55(2): 398–416.
Greenwald, Anthony G., Carnot, Catherine G., Beach, Rebecca, and Young, Barbara. 1987. Increasing Voting Behavior by Asking People If They Expect to Vote. Journal of Applied Psychology 72(2): 315–8.
Holbrook, Allyson L. and Krosnick, Jon A. 2010a. Measuring Voter Turnout by Using the Randomized Response Technique: Evidence Calling into Question the Method's Validity. Public Opinion Quarterly 74(2): 328–43.
Holbrook, Allyson L. and Krosnick, Jon A. 2010b. Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique. Public Opinion Quarterly 74(1): 37–67.
Karp, Jeffrey A. and Brockington, David. 2005. Social Desirability and Response Validity: A Comparative Analysis of Overreporting Voter Turnout in Five Countries. The Journal of Politics 67(3): 825–40.
Morin-Chassé, Alexandre. 2018. How to Survey About Electoral Turnout? Additional Evidence. Journal of Experimental Political Science 5: 1–4.
Mvukiyehe, Eric and Samii, Cyrus. 2017. Promoting Democracy in Fragile States: Field Experimental Evidence from Liberia. World Development 95: 254–67.
Nathan, Noah L. 2016. Local Ethnic Geography, Expectations of Favoritism, and Voting in Urban Ghana. Comparative Political Studies 49(14): 1896–929.
Silver, Brian D., Anderson, Barbara A., and Abramson, Paul R. 1986. Who Overreports Voting? American Political Science Review 80(2): 613–24.
Zeglovits, Eva and Kritzinger, Sylvia. 2013. New Attempts to Reduce Overreporting of Voter Turnout and Their Effects. International Journal of Public Opinion Research 26(2): 224–34.