
Persistent Bias Among Local Election Officials

Published online by Cambridge University Press:  27 August 2019

D. Alex Hughes
Affiliation:
School of Information, University of California, Berkeley, CA, USA
Micah Gell-Redman
Affiliation:
Department of International Affairs and Department of Health Policy & Management, University of Georgia, Athens, GA, USA
Charles Crabtree
Affiliation:
Department of Political Science, University of Michigan, Ann Arbor, MI, USA
Natarajan Krishnaswami
Affiliation:
School of Information, University of California, Berkeley, CA, USA
Diana Rodenberger
Affiliation:
School of Information, University of California, Berkeley, CA, USA
Guillermo Monge
Affiliation:
School of Information, University of California, Berkeley, CA, USA

Abstract

Results of an audit study conducted during the 2016 election cycle demonstrate that bias toward Latinos observed during the 2012 election has persisted. In addition to replicating previous results, we show that Arab/Muslim Americans face an even greater barrier to communicating with local election officials, but we find no evidence of bias toward blacks. An innovation of our design allows us to measure whether e-mails were opened by recipients, which we argue provides a direct test of implicit discrimination. We find evidence of implicit bias toward Arab/Muslim senders only.

Type
Research Article
Copyright
© The Experimental Research Section of the American Political Science Association 2019

Racial bias that limits access to the ballot threatens basic principles of democratic equality. One potential source of bias that has received little attention is the street-level bureaucrats who administer elections in the United States (Lipsky, Reference Lipsky1980). An audit study conducted during the 2012 US election cycle showed these local election officials responded at significantly lower rates to inquiries from voters with putatively Latino, as opposed to white, surnames (White, Nathan and Faller, Reference White, Nathan and Faller2015). In this paper, we report the results of a similar audit study performed during the 2016 election cycle. We find that the previously observed bias against Latinos is persistent. We also extend the previous study by testing the effects of two racial primes other than Latino. Voters with Arab/Muslim names received responses at significantly lower rates (11% points) than whites, while voters with black names received responses at rates indistinguishable from those of whites.

The two primary motivations for this study are to determine whether the previous finding of bias toward Latinos stands up to replication and to examine whether this bias extends to blacks and Arab/Muslim Americans. In spite of the ample evidence of racial disparities in political participation (Abrajano and Alvarez, Reference Abrajano and Alvarez2010; García-Bedolla and Michelson, Reference García-Bedolla and Michelson2012; Hajnal and Lee, Reference Hajnal and Lee2011; Hajnal and Abrajano, Reference Hajnal and Abrajano2015) and in everyday life (Bertrand and Mullainathan, Reference Bertrand and Mullainathan2004), relatively little empirical work demonstrates the role of race in limiting access to the ballot in contemporary America (McNulty, Dowling and Ariotti, Reference McNulty, Dowling and Ariotti2009), and some claims in this area have aroused skepticism (Grimmer et al., Reference Grimmer, Hersh, Meredith, Mummolo and Nall2018; Hajnal, Lajevardi and Nielson, Reference Hajnal, Lajevardi and Nielson2017). The pervasive discrimination that blacks face in various arenas of American politics (Butler, Reference Butler2014) suggests that this group could be at risk of bias in interacting with local election officials. While there is also ample evidence of discrimination toward Arab and Muslim Americans (Gaddis and Ghoshal, Reference Gaddis and Ghoshal2015), this group has received comparatively less attention from scholars (Jamal and Naber, Reference Jamal and Naber2007; Panagopoulos, Reference Panagopoulos2006). In an era of political rhetoric increasingly characterized by appeals to group identity, it is particularly important to understand how racially motivated bias impacts the day-to-day mechanics of elections for a range of racial/ethnic groups.

To seek evidence of bias, we focus on the thousands of local-level administrators charged with conducting elections in the United States. These bureaucrats are generally capable of exercising discretion in carrying out their job duties, which include responding to inquiries about the mechanics of voting and eligibility to participate in elections. Our core contention is that in exercising such discretion, street-level bureaucrats may be consciously or unconsciously influenced by the characteristics (e.g., race or partisanship) of individuals seeking public services (Lipsky, Reference Lipsky1980; White, Nathan and Faller, Reference White, Nathan and Faller2015).

EXPERIMENT DESIGN

To determine the extent to which previously documented bias is persistent and extends to other racial groups, we conducted an e-mail audit study of local election officials (Pager, Reference Pager2003).Footnote 1 Our intended sample comprises all such officials with publicly available e-mail addresses and the analytic sample includes 6,439 local election officials from 44 states (Figure A1 in Supplementary material).

The experimental stimulus consists of a single e-mail sent to each local election official. All e-mails follow the same structure, greeting the official by name, referencing voter identification laws, and asking about the requirements to vote in the state corresponding to the official. Our design closely parallels White, Nathan and Faller (Reference White, Nathan and Faller2015), but differs in that we send only messages that mention voter ID laws. Additionally, to minimize possible spillover issues, we created 27 variants of this request (See Sections A4 and A6 in Supplementary material).

Our experimental treatment is the putative identity of the e-mail sender. In line with convention, we expose officials to four distinct group identities by manipulating senders’ names (Bertrand and Mullainathan, Reference Bertrand and Mullainathan2004; Bertrand and Duflo, Reference Bertrand, Duflo, Vinayak Banerjee and Duflo2017; Butler and Homola, Reference Butler and Homola2017). Because the identities signaled in our treatments have elements that could be described as racial, ethnic, or religious, we refer to them generically as group identity treatments. To mitigate possible name effects, each group identity condition is signaled by 100 unique names. We check that the chosen names reliably prime the intended ethnicity by conducting a manipulation check on Amazon’s Mechanical Turk service in which workers read sets of names and ascribe probabilities that a name belongs to a particular racial or ethnic group.Footnote 2 In total, we sent 4,900 unique experimental conditions that combine variants of the contact language with treatment identities.

Treatment Assignment and Implementation

We blocked treatment assignment on logged population density, two-party vote share in the 2012 presidential election, percentage African American, percentage Latino, percentage of households with incomes below 150% of the federal poverty level, and a dummy variable indicating whether a county was previously covered by Section 5 of the Voting Rights Act. Further details are provided in Section A8 in Supplementary material. Within each block, we assigned local election officials a racial condition and message version at random.
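The within-block assignment step can be illustrated with a short sketch. This is a minimal illustration, not the study's actual code: the unit identifiers, block labels, and seed below are hypothetical, and real blocks would be formed from the covariates listed above.

```python
import random
from collections import defaultdict

def assign_within_blocks(units, blocks, conditions, seed=2016):
    """Assign each unit a condition, balancing conditions within each block.

    units: list of unit ids; blocks: parallel list of block labels.
    Returns a dict mapping unit id -> condition.
    """
    rng = random.Random(seed)
    by_block = defaultdict(list)
    for u, b in zip(units, blocks):
        by_block[b].append(u)
    assignment = {}
    for b, members in by_block.items():
        # Repeat the condition list to cover the block, then shuffle
        reps = -(-len(members) // len(conditions))  # ceiling division
        labels = (conditions * reps)[:len(members)]
        rng.shuffle(labels)
        for u, c in zip(members, labels):
            assignment[u] = c
    return assignment

# Hypothetical example: 8 officials in 2 blocks, 4 identity conditions
conditions = ["white", "latino", "black", "arab"]
assignment = assign_within_blocks(list(range(8)), [0] * 4 + [1] * 4, conditions)
```

Because the condition list is repeated to cover each block before shuffling, every block receives a near-equal share of each identity condition, which is what blocking buys over simple randomization.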

We sent 6,235 e-mails the morning of October 31, 2016, one e-mail to each election official that was a part of the study.Footnote 3 E-mails were sent from a purpose-built domain, ez-webmail.com. Sending addresses took the form of the senders’ first initial, last name, and a two-digit string between 20 and 40. To mitigate the possibility that election officials would be suspicious of our contact, we structured the e-mail headers so that inboxes displayed the full name of the purported voter (see Figure A1 in Supplementary material). The variety in our treatments was intended to reduce the likelihood that different offices would receive e-mails from identical senders. In 29 of the 43 states in our analytic sample, every official received a contact from a distinct name.

One key innovation in this experiment permits the identification of whether e-mails were received and opened by election officials. We include a 1 × 1 pixel image with a unique link – commonly referred to as a tracking pixel – in the e-mail body, so that upon opening the e-mail, most e-mail clients loaded the image from our server and provided a positive record that the e-mail had been opened by a particular official. This measurement permits inference about differential open-rates, a test of implicit bias we examine in Section 2.1.
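The open-rate measurement described above can be sketched as follows. This is a hedged illustration of the general tracking-pixel technique, not the authors' implementation: the host URL, function names, and recipient identifier are all hypothetical.

```python
import uuid

TRACKING_HOST = "https://example-tracker.invalid"  # hypothetical server

def build_email_body(text_html, recipient_id, tokens):
    """Append a 1x1 tracking pixel with a unique per-recipient URL.

    When the recipient's mail client loads remote images, the request
    for this URL records that the message was opened.
    """
    token = uuid.uuid4().hex
    tokens[token] = recipient_id  # server-side lookup: token -> official
    pixel = (f'<img src="{TRACKING_HOST}/px/{token}.gif" '
             'width="1" height="1" alt="" />')
    return text_html + pixel, token

tokens = {}
body, token = build_email_body(
    "<p>What do I need to vote?</p>", "official-042", tokens)
```

On the server side, a request for `/px/<token>.gif` would be logged and matched back to the official through the `tokens` mapping, yielding a positive record of an open; clients that block remote images produce no record, which is why open rates are a lower bound.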

An open question in correspondence studies concerns whether the observed effects are merely an artifact of differential treatment of the stimulus by the internet and e-mail infrastructure, that is, spam filters. Through pilot testing we are able to comment on this question. Before taking steps to develop a positive server reputation, no messages reached any test inboxes. However, by carefully managing our digital authentication and consulting with individuals at a digital marketing company, in pilot testing we were able to place every message, from every attempted sender, into test inboxes (see Section A2 in Supplementary material).

The choice to contact election officials eight days before the election is designed to make our study reflective of the real constraints on individuals seeking and providing information about voting requirements. To minimize the impact of our intervention on election officials’ time, the specific request contained in the e-mail is one that would require little effort to fulfill. Using data gathered via our mailing system, we estimate that the median time to compose and send a response to our e-mail is 3 minutes, 6 seconds. We contend that any costs borne by public officials as a result of our intervention are counterbalanced by the benefits of uncovering persistent bias in electronic communications between constituents and local election officials.

Our preregistered analysis uses a single-outcome measure that is coded 1 if an election official replied to our e-mail prior to election day, and 0 otherwise. We do not count auto-replies, away messages, or bounces as valid replies. We further report an exploratory analysis of a novel outcome measure made possible through our engineering: whether a local election official opened the message.

RESULTS

Overall, 57.8% of the e-mails we sent received at least one reply from local election officials. While lower than the 67.7% response rate previously obtained from a similar sample (White, Nathan and Faller, Reference White, Nathan and Faller2015), this rate compares favorably with experiments on elected officials in the United States, suggesting that our requests were taken at face value (Butler and Broockman, Reference Butler and Broockman2011).

Election officials respond at considerably lower rates when queries come from minority as opposed to white senders (difference in means, $\Delta \mu = - 4.70$ % points, Wilcoxon rank-sum $P < 2 \times {10^{ - 16}}$ ). However, as we report in Table 1, responsiveness to minority senders is not uniformly lower. Nonparametric tests using white senders as the baseline find that a Latino name is sufficient to suppress the likelihood of a response by nearly 3% points ( $\Delta \mu = - 2.97$ , $P = 0.07$ ). Strikingly, an Arab/Muslim name lowers the likelihood of a response by more than 11% points ( $\Delta \mu = - 11.3$ , $P < 1 \times {10^{ - 10}}$ ). In contrast, black senders receive responses at a rate indistinguishable from white senders ( $\Delta \mu = 0.11$ , $P = 0.90$ ). Figure 1(a) plots the intent-to-treat (ITT) causal effects of our treatments. Regression estimates with robust standard errors are reported in columns 1 and 2 of Table A6 in Supplementary material and produce similar results.
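With a binary reply outcome, the difference-in-means comparisons above can be computed directly as a difference in proportions with a normal-approximation standard error. The counts below are illustrative only, not the study's data.

```python
import math

def diff_in_proportions(success_a, n_a, success_b, n_b):
    """Difference in response rates (group b minus group a) with a
    normal-approximation standard error and z statistic."""
    p_a, p_b = success_a / n_a, success_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return p_b - p_a, se, (p_b - p_a) / se

# Illustrative counts only -- NOT the study's data:
# 980/1550 white senders answered vs 820/1550 minority senders
delta, se, z = diff_in_proportions(980, 1550, 820, 1550)
```

The paper itself reports nonparametric (rank-sum) tests and robust regression estimates; this sketch shows only the simplest large-sample version of the same comparison.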

Table 1 Response Rates by Experimental Condition

Notes: The minority column includes all data from the Latino, Black, and Arab columns. Response rates and standard errors are reported in percentage terms.

Figure 1 Points represent the ITT, the estimated difference in response rates to e-mails from the named identity, compared to the white response rate baseline. Thick bars report ITT ± SE, thin bars report ITT ± 1.96 × SE. All estimates are difference in means, except the weighted average which estimates a precision weighted difference (Gerber and Green Reference Gerber and Green2012) utilizing 2012 (White et al., Reference White, Nathan and Faller2015) and 2016 Latino evidence.

Figure 1(b) plots a precision weighted meta-analysis estimate (Gerber and Green, Reference Gerber and Green2012, p. 361) that combines the results of our intervention with those previously reported (White, Nathan and Faller, Reference White, Nathan and Faller2015). These data, gathered in independent audits conducted over two election cycles, show that Latinos receive replies from local election officials at a rate 4.4% points lower than whites ( $\Delta \mu = -4.4$ , precision weighted $SE = 1.18$ , $P < 0.0001$ ).
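Precision weighting of this kind pools each study's estimate with weight inversely proportional to its sampling variance. A minimal sketch, using illustrative inputs rather than the exact study values:

```python
import math

def precision_weighted(estimates, ses):
    """Fixed-effects pooling: weight each estimate by its inverse
    sampling variance; pooled SE is sqrt(1 / sum of weights)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Illustrative inputs: a 2012-style and a 2016-style Latino estimate
# in percentage points (NOT the exact published values)
pooled, pooled_se = precision_weighted([-5.2, -3.0], [1.6, 1.7])
```

The pooled estimate always lies between the inputs, and the pooled standard error is smaller than either input's, which is why combining the two election cycles sharpens the Latino result.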

While the persistence of the treatment of Latino senders in the 2012 and 2016 elections is remarkable, perhaps more striking is the finding that Arab/Muslim names suffer a penalty twice as large as the one produced by a Latino stimulus. One potential concern is that the observed effect could be driven by the implausibility of the treatment, since many parts of the country do not have any appreciable population of Arab-Americans. To examine this possibility, we investigate whether treatment effects are smaller in the jurisdictions where Arab-Americans are more numerous. If treatment effects are driven by implausibility, then they should be smaller in places where the presence of citizens with Arab names is more plausible. We do not find clear evidence that the proportion of Arab Americans moderates the treatment effect (Table A13, Model 3; Table A14; Table A15 in Supplementary material). Our most credible estimates find a 10.6% point bias against Arab senders in counties with no Arab population ( $\Delta \mu = - 10.6, SE = 2.5, P < 0.001$ ), but only a 2.6% point improvement in the highest Arab population quartile of counties ( $\delta \Delta \mu = + 2.6, SE = 4.4, P = 0.55$ ), though the distribution of Arab American settlement limits the strength of this robustness check.Footnote 4

Evidence of Implicit Discrimination

Local election officials who receive our intervention demonstrate bias insofar as they respond differentially based only on the signal of group identity delivered through our treatments. This observed response behavior is part of a chain of actions: the official must open, read, and then respond to the e-mail. Standard analyses of audit experiments, which report an indicator of response or non-response as the dependent variable, focus only on the final result of this compound process. Innovations of our design allow us to consider the outcome at a prior step, the decision by the official to open the received e-mail, conditional on the treatment delivered.

To respond to our experimental stimulus, an election official must identify our request from among the large number of other requests, categorize it mentally, and then open it. We argue that opening an e-mail is a high-volume, low-attention task of the type scholars have associated with implicit, rather than explicit, bias (Bertrand, Chugh and Mullainathan, Reference Bertrand, Chugh and Mullainathan2005, p. 96; Devine, Reference Devine1989). The pattern of e-mail opens suggests that, indeed, elections officials may be unintentionally or automatically screening requests from Arab/Muslim senders. There is no difference in open rates between white and Latino names ( $\Delta \mu = - 0.74$ , $SE = 1.7$ , $P = 0.68$ ) or white and black names ( $\Delta \mu = - 0.24$ , $SE = 1.7$ , $P = 0.90$ ). However, there is a pronounced gap for senders with Arab/Muslim names, whose e-mails are opened at a rate 6.8% points lower than those of white senders ( $\Delta \mu = - 6.8$ , $SE = 1.8$ , $P = 0.00013$ ).

Awareness of Experiment

During the analysis phase of this project, we learned that another entity was pursuing a similar line of research using the same sending domain as White, Nathan and Faller (Reference White, Nathan and Faller2015). As a result, some public officials became concerned that an audit study might be underway. News reports claim that these concerns prompted the National Association of Secretaries of State (NASS) to alert its state branches, who in turn had the opportunity to alert individual officials. In sum, some of our experimental subjects may have become aware of the presence of interventions.

Subjects’ awareness of the intervention poses a general threat to audit studies, either by compromising independence between units, or by violating the exclusion restriction if minority names are more likely to raise suspicion than white names. Because subjects’ awareness might prevent identification of causal effects, researchers should mitigate this risk using many identities and a well-tuned sending architecture whenever feasible. When there is any observable information about the possibility of discovery, researchers can use this information to evaluate whether apparent differences are likely the result of discovery.

Analysis of the timing of responses in this experiment does not suggest that discovery is leading to the observed results. First, as we present in Figure 2, the systematic pattern of unresponsiveness to minority names appears rapidly and well before the reported NASS broadcast. Second, as we report in Tables A11 and A12 in Supplementary material, models that censor response data at the time of the NASS broadcast, and models that exclude states that witnessed interference between units both produce estimates very similar to our main results.
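The censoring robustness check can be mimicked by recoding the outcome to count only replies that arrived before the broadcast. The sketch below is illustrative: the broadcast timestamp and example reply times are hypothetical, not the study's data.

```python
from datetime import datetime

# Hypothetical broadcast time -- an assumption for illustration only
NASS_BROADCAST = datetime(2016, 11, 3, 12, 0)

def censored_outcome(reply_times, broadcast=NASS_BROADCAST):
    """Code the response outcome as 1 only if a reply arrived before
    the broadcast, mirroring the censored-data robustness models.

    reply_times: list of datetimes, with None for no reply.
    """
    return [int(t is not None and t < broadcast) for t in reply_times]

replies = [
    datetime(2016, 10, 31, 14, 5),  # replied before the broadcast -> 1
    None,                           # never replied -> 0
    datetime(2016, 11, 4, 9, 0),    # replied after the broadcast -> 0
]
outcome = censored_outcome(replies)
```

Re-estimating the treatment effects on this censored outcome removes any responses that could have been influenced by officials' awareness of the study.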

Figure 2 Rapidly slowing rates of response. The vertical axis plots the cumulative number of responses, split by group identity of sender; the horizontal axis plots time since sending. Election Day and National Association of Secretaries of State (NASS) e-mails are noted with vertical dashed lines. Responses follow a clear diurnal rhythm, and patterns of bias appear rapidly.

CONCLUSION

Previous experimental evidence showed local election officials were less responsive to inquiries from Latinos, raising concerns about bias in the electoral process. Using a similar experimental design, we demonstrate the firm basis for these concerns by replicating the initial finding. We also extend the results by testing for bias against other groups.

Our intervention showed Arab/Muslim Americans to be markedly disadvantaged in their interactions with local election officials. This finding is particularly salient, given that it is not simply an artifact of Arab/Muslims being a relatively less numerous part of the electorate. We encountered no evidence of bias from local election officials toward African Americans, making ours at least the third recent study to produce a similarly unexpected null finding (Einstein and Glick, Reference Einstein and Glick2017; Gell-Redman et al., Reference Gell-Redman, Visalvanich, Crabtree and Fariss2018). Rather than evidence of a lack of bias against African Americans, these null findings may be an artifact of the correspondence study method, in which name alone, rather than other cues such as appearance, is used to signal identity.

Through this design, we also engage a challenge inherent to all audit studies: the risk that subjects become aware of the experiment. The relatively low technical sophistication required to conduct some forms of audit studies, paired with the potentially large sample sizes possible through e-mail-based audits, makes these designs an attractive way to identify discriminatory behavior. However, in an increasingly crowded field, researchers must face the possibility that experimental subjects become aware of the study, thereby damaging the inference. We determined that sending 4,900 distinct treatments on a custom-built server provided the best balance of a low possibility of discovery with the ability to identify a novel open rate outcome measure, and we would encourage future researchers to make a similar assessment.

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2019.23

Footnotes

*

The data, code, and compute environment required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/8E1IIM (Hughes et al., 2019). The authors are aware of no conflicts of interest regarding this research.

1 We received Human Subjects approval from the University of California, Berkeley and University of Michigan Human Subjects Committees. Both committees waived the requirement of informed consent. Additional implementation details are made available in the Supplementary material. The study design and pre-analysis plan were registered at Evidence in Governance and Politics (Hughes et al., Reference Hughes, Gell-Redman and Crabtree2016).

2 Supplementary material section A7 describes the procedure for choosing names, and section A17 provides the complete list of names.

3 We also sent two waves of pilot e-mails: 54 on October 26, 2016, and 146 on October 28, 2016. For details, see Supplementary material section A12.

4 In the highest Arab quartile, the mean Arab population is 1%.

REFERENCES

Abrajano, Marisa A. and Alvarez, Michael M. 2010. New Faces, New Voices: The Hispanic Electorate in America. Princeton, NJ: Princeton University Press.
Bertrand, Marianne, Chugh, Dolly and Mullainathan, Sendhil. 2005. Implicit Discrimination. American Economic Review 95(2): 94–98.
Bertrand, Marianne and Duflo, Esther. 2017. Field Experiments on Discrimination. In Handbook of Field Experiments, eds. Vinayak Banerjee, Abhijit and Duflo, Esther. Vol. 1. Amsterdam, Netherlands: North-Holland, 309–393. http://www.sciencedirect.com/science/article/pii/S2214658X1630006X
Bertrand, Marianne and Mullainathan, Sendhil. 2004. Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review 94(4): 991–1013.
Butler, Daniel M. 2014. Representing the Advantaged: How Politicians Reinforce Inequality. Cambridge: Cambridge University Press.
Butler, Daniel M. and Broockman, David E. 2011. Do Politicians Racially Discriminate Against Constituents? A Field Experiment on State Legislators. American Journal of Political Science 55(3): 463–477.
Butler, Daniel M. and Homola, Jonathan. 2017. An Empirical Justification for the Use of Racially Distinctive Names to Signal Race in Experiments. Political Analysis 25(1): 122–130.
Devine, Patricia G. 1989. Stereotypes and Prejudice: Their Automatic and Controlled Components. Journal of Personality and Social Psychology 56(1): 5–18.
Einstein, Katherine Levine and Glick, David M. 2017. Does Race Affect Access to Government Services? An Experiment Exploring Street-Level Bureaucrats and Access to Public Housing. American Journal of Political Science 61(1): 100–116.
Gaddis, S. Michael and Ghoshal, Raj. 2015. Arab American Housing Discrimination, Ethnic Competition, and the Contact Hypothesis. The Annals of the American Academy of Political and Social Science 660(1): 282–299.
García-Bedolla, Lisa and Michelson, Melissa R. 2012. Mobilizing Inclusion: Transforming the Electorate Through Get-Out-the-Vote Campaigns. New Haven, CT: Yale University Press.
Gell-Redman, Micah, Visalvanich, Neil, Crabtree, Charles and Fariss, Christopher. 2018. It’s All About Race: How State Legislators Respond to Immigrant Constituents. Political Research Quarterly 71(3): 517–531.
Gerber, Alan S. and Green, Donald P. 2012. Field Experiments: Design, Analysis, and Interpretation. New York, NY: W. W. Norton.
Grimmer, Justin, Hersh, Eitan, Meredith, Marc, Mummolo, Jonathan and Nall, Clayton. 2018. Obstacles to Estimating Voter ID Laws’ Effect on Turnout. Journal of Politics 80(3): 1045–1051.
Hajnal, Zoltan and Abrajano, Marisa. 2015. White Backlash: Immigration, Race, and American Politics. Princeton, NJ: Princeton University Press.
Hajnal, Zoltan, Lajevardi, Nazita and Nielson, Lindsay. 2017. Voter Identification Laws and the Suppression of Minority Votes. The Journal of Politics 79(2): 363–379.
Hajnal, Zoltan and Lee, Taeku. 2011. Why Americans Don’t Join the Party: Race, Immigration, and the Failure (of Political Parties) to Engage the Electorate. Princeton, NJ: Princeton University Press.
Hughes, D. Alex, Gell-Redman, Micah and Crabtree, Charles. 2016. Who Gets to Vote? Evidence in Governance and Politics, EGAP ID: 20161001AA. http://egap.org/registration/2183
Hughes, D. Alex, Gell-Redman, Micah, Crabtree, Charles, Krishnaswami, Natarajan, Monge, Guillermo and Rodenberger, Diana. 2019. Replication Data for: Persistent Bias Among Local Election Officials. Harvard Dataverse. doi: 10.7910/DVN/8E1IIM.
Jamal, Amaney and Naber, Nadine. 2007. Race and Arab Americans Before and After 9/11: From Invisible Citizens to Visible Subjects. Syracuse, NY: Syracuse University Press.
Lipsky, Michael. 1980. Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York, NY: Russell Sage.
McNulty, John E., Dowling, Conor M. and Ariotti, Margaret H. 2009. Driving Saints to Sin: How Increasing the Difficulty of Voting Dissuades Even the Most Motivated Voters. Political Analysis 17(4): 435–455.
Pager, Devah. 2003. The Mark of a Criminal Record. American Journal of Sociology 108(5): 937–975.
Panagopoulos, Costas. 2006. The Polls-Trends: Arab and Muslim Americans and Islam in the Aftermath of 9/11. Public Opinion Quarterly 70(4): 608–624.
White, Ariel R., Nathan, Noah L. and Faller, Julie K. 2015. What Do I Need to Vote? Bureaucratic Discretion and Discrimination by Local Election Officials. American Political Science Review 109(1): 129–142.
