
Voter Mobilization, Experimentation, and Translational Social Science

Published online by Cambridge University Press: 31 August 2016


Abstract

Field experiments on voter mobilization enable researchers to test theoretical propositions while at the same time addressing practical questions that confront campaigns. This confluence of interests has led to increasing collaboration between researchers and campaign organizations, which in turn has produced a rapid accumulation of experiments on voting. This new evidence base makes possible translational works such as Get Out the Vote: How to Increase Voter Turnout that synthesize the burgeoning research literature and convey its conclusions to campaign practitioners. However, as political groups develop their own in-house capacity to conduct experiments whose results remain proprietary and may be reported selectively, the accumulation of an unbiased, public knowledge base is threatened. We discuss these challenges and the ways in which research that focuses on practical concerns may nonetheless speak to enduring theoretical questions.

Type: Praxis

Copyright © American Political Science Association 2016

The Dawn of Experimentation on Voting

The study of voter turnout has long been animated by a blend of civic concern, political competition, and theoretical curiosity. During the 1920s, magazine articles routinely expressed concern about “the vanishing voter,” the sharp decline in turnout that dated back to the 1890s.[1] Harold Gosnell began his classic 1927 book Getting Out the Vote by summarizing the mobilization tactics of his day and what was known about their effectiveness. Gosnell noted that during the lead-up to the 1924 election, the political parties along with civic groups ranging from the League of Women Voters to the Boy Scouts sought to mobilize voters via door-to-door canvassing but also via “the pulpit, the daily press, the theater, and the lecture platform.”[2] Gosnell wondered whether these efforts in fact increased turnout, especially among “habitual non-voters” and newly enfranchised women. Reasoning that his own surveys[3] had demonstrated a powerful correlation between education and turnout, Gosnell conjectured that get-out-the-vote efforts might perform an educative function that laid the groundwork for the formation of voting habits.

Gosnell’s next move was remarkable for its methodological and theoretical prescience. Prior to the 1924 presidential election and the 1925 municipal election, Gosnell and his research team systematically enumerated the inhabitants of Chicago neighborhoods and grouped them by street block. He assigned street blocks to treatment and control conditions. Eligible voters residing in treatment locations were sent mailings encouraging them to register and vote. The precise method of assignment is not described in detail, although Gosnell later refers to “the method of random sampling”;[4] if random sampling refers to random assignment, Gosnell’s study would qualify as one of the earliest social science experiments, predating R.A. Fisher’s advocacy of randomized experiments in his landmark books of 1925 and 1926.[5] Regardless of whether Gosnell in fact used random assignment to allocate street blocks, his study is remarkable for its early use of controlled interventions in a naturalistic setting. The study is also remarkable on substantive grounds. Gosnell crafted competing interventions that differed in theoretically telling ways. A “factual” mailing presented voters with instructions about the upcoming deadline for registering and voting, while an “emotional” mailing presented voters with a cartoon that scolded those who failed to vote. In particular, his cartoon likened the “slacker who doesn’t vote” to the “slacker who won’t defend his country in time of war”[6]—an early example of what would later come to be known as social pressure mail.[7]
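
In modern terms, Gosnell’s design is a cluster-randomized experiment: entire street blocks, rather than individual voters, are allocated to treatment. A minimal Python sketch of this kind of assignment appears below; the roster, block sizes, and names are entirely hypothetical.

```python
import random

# Hypothetical roster: each street block is a cluster of registered voters.
street_blocks = {
    f"block_{i:03d}": [f"voter_{i:03d}_{j}" for j in range(40)]
    for i in range(200)
}

random.seed(1924)  # fixed seed so the assignment is reproducible

# Assign entire blocks, not individuals, to treatment or control.
block_ids = sorted(street_blocks)
random.shuffle(block_ids)
treatment_blocks = set(block_ids[: len(block_ids) // 2])

# Voters on treated blocks receive the get-out-the-vote mailing;
# voters on control blocks receive nothing.
mailing_list = [
    voter for block in treatment_blocks for voter in street_blocks[block]
]
print(f"{len(mailing_list)} voters assigned to receive mail")
```

Randomizing at the block level sacrifices some statistical precision relative to individual-level assignment, but it matches how mail and canvassing operations are actually delivered on the ground.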

The Eclipse of Experimentation

Although Gosnell closes his book by noting that “it is possible by the method of random sampling to measure the success of any device designed to interest people in elections,”[8] his path-breaking research method was seldom used in the decades that followed. Gosnell moved away from experiments and turned his attention to institutional factors that predict aggregate turnout rates and later to the subject of urban politics.[9] Between 1927 and 1998, other researchers seldom conducted voting experiments outside of college campuses. In 1935 George Hartmann orchestrated a controlled experiment in Allentown, Pennsylvania, in which he distributed 10,000 leaflets bearing either “rational” or “emotional” appeals for the Socialist party to selected voting wards, using official returns to assess the effectiveness of these messages.[10] The first voting studies to explicitly use random assignment were Eldersveld’s experimental investigations of voter mobilization in the Ann Arbor elections of 1953 and 1954.[11] Assigning small numbers of registered voters to receive phone calls, mail, or personal contact prior to Election Day, these experiments examined the effects of different types of appeals, both separately and in combination with one another. Adams and Smith studied get-out-the-vote (GOTV) phone calls in a special election held in Washington, D.C., and Miller, Bositis, and Baer studied mail, phone calls, and canvassing during a party primary in Carbondale, Illinois.[12] More than a half-century after Gosnell, this experimental literature remained small and obscure—the most widely cited study according to ISI Web of Science was Eldersveld’s 1956 American Political Science Review article, which attracted a total of 25 citations prior to 1998. No field experiments on voting (or any other subject) were published in any political science journal during the 1990s. Books on voter turnout from this era scarcely mentioned experimental research or called for more of it.

Why were so few experimental studies of voting conducted in real-world settings during this period? One contributing factor was the discipline’s shift in methodological focus brought about by the advent of survey research in the 1940s. The empirical studies of voting that captured the discipline’s imagination were Voting[13] and The American Voter,[14] which placed the act of voting in sociological context and explored the psychology of engagement with politics. This line of non-experimental survey analysis dominated the study of voting until well into the 1990s, as scholars measured the participatory orientations that linked voting to other forms of political action,[15] described peer influences on voters’ political outlook,[16] and assessed the ways in which turnout is shaped by education or registration laws.[17] The effects of voter-mobilization campaigns, too, were studied using surveys in which respondents reported whether they were called or canvassed during the course of election campaigns.[18]

Survey research overshadowed not just field experiments but all fieldwork on campaign tactics, whether experimental or not. Relatively few studies after Eldersveld’s reported on the results of collaborations between scholars and campaign practitioners designed to evaluate campaign tactics such as door-to-door canvassing. A small flurry of fieldwork appeared in the early 1970s, with researchers working closely with campaigns to measure which outreach efforts were directed at which voters.[19] This type of close-to-the-ground research, however, subsided thereafter, apart from the occasional case study of a registration drive or union organizing effort.

The dearth of field research during the period between 1927 and 1998 did not reflect a lack of interest in the topic of voter turnout. On the contrary, interest intensified as scholars puzzled over the paradox of voter turnout that grew out of the theoretical arguments of Downs and Olson.[20] This paradox may be summarized as follows: millions of people vote in national elections despite the fact that a given voter has an infinitesimal chance of casting the decisive vote in a large electorate; a rational voter therefore has no incentive to pay the costs of voting even when the stakes of the election are high. Downs suggested that the paradox could be overcome by positing that voters seek to uphold democracy through their participation, but as Barry pointed out, upholding democracy is itself a collective action dilemma insofar as no one has an incentive to bear the costs of voting when the effect on democracy is negligible.[21] A large literature developed stressing the costs of voting and the “resources”—both tangible and intellectual—that voters need in order to bear these costs. One strand of this literature focused on the transaction costs that made it difficult or time-consuming to register and vote. It was argued that voter turnout would increase if one could register on Election Day,[22] vote by mail,[23] or vote on a national holiday.[24] Other proposals included information campaigns to inform people when and where to vote, and civic education programs to increase interest in public affairs and understanding of the political process.[25] Another strand of this literature stressed the implications of socioeconomic variation in turnout rates. Just as Gosnell noted the strong relationship between educational attainment and voter turnout, many contemporary scholars pointed out that this correlation augments the political representation of the affluent.[26] As Lijphart declared in his presidential address to the American Political Science Association, “low voter turnout means unequal and socioeconomically biased turnout.”[27]

Reconnecting Campaign Craft and Experimental Science

Although political scientists were actively and often passionately engaged in the study of voter turnout, their research had little connection to or effect on the ways in which political campaigns mobilized voters. Looking back at the “How To” books of the 1990s and early 2000s, it is striking how rarely their authors drew on political science research when making recommendations about tactics designed to mobilize or persuade voters. The second edition of Shaw’s The Campaign Manager makes no mention of scholarly research in its chapters on getting out the vote, direct mail, and canvassing.[28] Grey’s How to Win a Local Election urges candidates to seek the wisdom of “an old hand” but neglects to recommend the wisdom of academic studies.[29] The most scholarly of the how-to volumes is arguably Shea and Burton’s Campaign Craft, but even its “Returning to the Grassroots” chapter relies primarily on anecdotes.[30] The first practical book on campaign craft to display a penchant for evidence-based reasoning is Hal Malchow’s The New Political Targeting, which offered methods for quantitative assessment of persuasion and mobilization tactics.[31]

What changed between the 1990s and the present? Across the social sciences, scholars showed new interest in experiments conducted in real-world settings. Development economists launched dozens of experiments to assess whether poverty could be reduced through interventions such as small-scale loans, agricultural technologies, nutritional supplements, educational reforms, anti-corruption measures, and women’s empowerment. Criminologists conducted experiments to assess the effects of policing on crime rates or punishment on recidivism. Sociologists used experiments to assess job discrimination against minority applicants or to evaluate the effects of anti-drug programs in schools. Education researchers studied randomly varying class sizes or lotteries that allowed children to attend private schools. And statisticians elucidated the way that statistical methods could be used to address the special complications that arise in the course of conducting field experiments, such as the failure to treat everyone in the assigned treatment group. Amid this intellectual ferment, political scientists launched an unprecedented number of field experiments designed to assess the effectiveness of voter-mobilization tactics ranging from door-to-door canvassing to television ads.

This tectonic shift in the social sciences, sometimes known as the “credibility revolution,” reflected a growing unease with the quality of evidence used to establish causal claims.[32] As Bositis and Steinel had predicted years earlier, election researchers were gradually coming to appreciate the limits of what could be learned through non-experimental research.[33] Non-experimental studies of voter mobilization were especially vulnerable to methodological critique. The most influential studies (e.g., Rosenstone and Hansen 1993) used American National Election Study data to assess whether exposure to campaign activity such as canvassing or phone calls increased turnout. Respondents who answered affirmatively to the question “Did anyone from one of the political parties call you up or come around and talk to you about the campaign?” were coded as contacted, and voter turnout was regressed on self-reported contact and an array of covariates that described respondents’ demographic attributes and political orientations. This approach is subject to three main concerns. First, if campaigns target likely voters when allocating their communication resources, the apparent correlation between campaign contact and turnout may be spurious. Regression could produce evidence of an apparent relationship even if the true effect of campaign contact were nil. Second, the survey questions used to measure campaign contact are vague. The researcher does not know in what ways the voter was contacted, how often, by whom, or how close to Election Day. Taken literally, the question asks the respondent to focus solely on partisan phone or face-to-face contact leading to a conversation about the campaign, which omits all campaign contact through mail, all contact by non-partisan groups or issue groups, contact about political issues other than the campaign, and possibly all manner of contact about turnout or the upcoming election that did not lead to a discussion of campaign issues. Finally, it is unclear whether respondents accurately recall and report the campaign contact that they experienced. Indeed, surveys that have measured the relationship between actual and reported campaign contact suggest that respondents’ answers may be unreliable.[34]
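
The first of these concerns can be illustrated with a short simulation. In the hedged Python sketch below, all parameters are invented and campaign contact has no true effect on turnout, yet a naive comparison of contacted and uncontacted respondents recovers a large positive “effect” simply because contact is directed at likely voters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent propensity to vote (driven in reality by age, past turnout, etc.).
propensity = rng.uniform(0, 1, n)

# Campaigns target likely voters: contact probability rises with propensity.
contacted = rng.random(n) < 0.1 + 0.6 * propensity

# The true effect of contact on turnout is exactly zero in this simulation.
turnout = rng.random(n) < propensity

# A naive comparison nonetheless shows a large turnout gap.
gap = turnout[contacted].mean() - turnout[~contacted].mean()
print(f"Apparent 'effect' of contact: {gap:.3f}")
```

Controlling for observed covariates shrinks this bias only to the extent that the covariates fully capture the campaign’s targeting rule, which is rarely knowable from survey data alone.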

The Revival of Field Experiments on Voter Mobilization

Voter mobilization is especially conducive to field experimentation. As Gosnell recognized, public records provide a reliable and inexpensive measure of whether a person voted in a given election. High-speed computing has made possible vast experiments that dwarf Gosnell’s studies, which involved thousands of voters and were extraordinarily large for their time. For example, Bond et al. examined voting records for 6.3 million Facebook users to assess whether randomly assigned inducements to vote, especially announcements about which of one’s friends had voted, increased turnout in the 2010 election.[35] The growth and development of digitized voter lists has further reduced the costs of measurement.[36] Meanwhile, computational advances have provided researchers with increasingly sophisticated tools for conducting random assignments in ways that automatically balance subjects’ background attributes.[37]
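
As one illustration of such tools, the sketch below pairs subjects on background covariates before randomizing within pairs, so that treatment and control groups are balanced on those covariates by construction. It is a deliberately simplified stand-in for the matching algorithms implemented in packages such as blockTools (Moore and Schnakenberg 2015); the covariates and data are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical covariates: age and number of past elections voted in.
age = rng.normal(50, 15, n)
past_votes = rng.integers(0, 5, n).astype(float)
covariates = np.column_stack([age, past_votes])

# Sort subjects on a standardized covariate index and pair neighbors.
# This is a crude stand-in for the optimal matching in blockTools.
z = (covariates - covariates.mean(axis=0)) / covariates.std(axis=0)
order = np.argsort(z.sum(axis=1))

treated = np.zeros(n, dtype=bool)
for i in range(0, n, 2):
    pair = order[i : i + 2]
    treated[rng.choice(pair)] = True  # coin flip within each matched pair

# Balance check: group means should be nearly identical by construction.
print("treated means:", covariates[treated].mean(axis=0))
print("control means:", covariates[~treated].mean(axis=0))
```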

Figure 1 traces the trajectory of field experiments on voting over time. Using Google Scholar, we searched for all titles that included “experiment” and “voting” or “voter” or “turnout” but excluded titles that contained the words “quasi,” “natural,” “thought,” or “laboratory.” The set of titles meeting these criteria was then hand-coded to restrict the time series to journal articles. Although this procedure clearly understates the total number of field experiments, it provides a rough sense of the over-time trajectory of this line of research.

Figure 1 Number of journal articles using field experiments to study voting, 1925–2015

Source: Google Scholar, May 2016. Refer to the text for more information about the search parameters.
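
The title filter described above is simple enough to express directly in code. A minimal sketch in Python follows; the sample titles are invented, and Google Scholar’s actual matching is more involved than plain substring tests.

```python
# Keyword lists taken from the search criteria described in the text.
REQUIRED = ("experiment",)
TOPIC = ("voting", "voter", "turnout")
EXCLUDED = ("quasi", "natural", "thought", "laboratory")

def keep_title(title: str) -> bool:
    """Return True if a title matches the inclusion/exclusion rules."""
    t = title.lower()
    return (
        all(word in t for word in REQUIRED)
        and any(word in t for word in TOPIC)
        and not any(word in t for word in EXCLUDED)
    )

print(keep_title("A Field Experiment on Voter Turnout"))  # True
print(keep_title("A Laboratory Experiment on Voting"))    # False
```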

The pattern depicted in Figure 1 is reminiscent of other graphs that have charted the explosive growth of experiments over time in political science.[38] The number of field experiments on voting surges after the 1990s, and the annual rate of publication has climbed steadily.

The details of this experimental literature—substantive, methodological, and normative—have been summarized elsewhere. The three editions of Get Out the Vote: How to Increase Voter Turnout describe the progression of studies and their findings.[39] Methodological discussions span topics such as heterogeneous treatment effects,[40] optimal design under noncompliance,[41] variance estimation,[42] and spillovers.[43] Normative discussions about who is mobilized to vote and the distributive consequences of campaigns’ focus on high-propensity voters may be found in García Bedolla and Michelson’s book Mobilizing Inclusion (2012).[44] Since these are well-trodden topics, our focus for the remainder of this essay will be a bit different: to call attention to the growing collaboration between scholars and practitioners and how this collaboration affects the research agenda, how studies are conducted, and how knowledge is accumulated.

Collaboration with Practitioners and Translational Social Science

Field experiments often grow unexpectedly from close collaboration between scholars and practitioners who seek to make more efficient use of resources. The precursor to our own foray into field experiments was Alan Gerber’s six months embedded with a campaign consulting firm in Washington, D.C., during which he got an inside look at its research methods and decision-making processes. His experience illustrates the benefits that political scientists can gain from close interaction with practitioners. The consulting firm was very active in the summer and fall of 1996, providing research and advice to about a dozen Senate and House campaigns. A few features of the campaigns and the consulting group’s research activities were especially striking. First, although the national and local media focused on television broadcast advertising, a sizeable portion of the campaign activity in the consultant’s congressional campaigns was delivered at the household level. For instance, nearly all of the campaigns sent out multi-piece mail programs. These mailings targeted particular demographic groups that were selected based on voter file information. Second, the results of the firm’s formal research, including polling and focus groups, were routinely incorporated into the design, messaging, and targeting of campaign communications, but there were only minimal attempts to measure whether the campaign efforts were effective. Campaigns used tracking polls to gauge whether an advertising campaign was followed by a boost in the polls, but there was no attempt to randomize the location, targeting, or timing of these interventions. Further, Gerber observed no systematic post-election effort to measure the effects of campaign activity on turnout or vote share.

Spending an extended period of time in a campaign consulting environment made it easier to implement subsequent experimental research. Daily exposure to campaign professionals taught Gerber how consultants thought and talked about campaigns, which later made it easier to communicate with political operatives about field experimental research projects. And the acquaintances Gerber made at the consulting firm with professionals who manage campaigns, design direct mail, and operate phone banks prompted him to solicit their input and to contract with their firms when it came time to design our own nonpartisan get-out-the-vote campaign in New Haven in 1998.

Our first experiment attempted to assess whether voter turnout increases when voters are encouraged to vote via door-to-door canvassing, phone calls from a commercial phone bank, or up to three pieces of direct mail. These tactics were tested by themselves and in various combinations. Because the experimental campaign was funded and implemented by a tax-exempt 501(c)(3) organization, all of the appeals to vote were nonpartisan in character; those in the treatment group were randomly presented with a message that stressed civic duty, the closeness of the election, or the notion that politicians do more for neighborhoods whose residents vote. Because we designed and implemented the GOTV campaign ourselves, with only nominal collaboration with the local League of Women Voters, we had more autonomy than would have been the case had we collaborated with a political campaign or interest group whose own resources and staff funded and implemented the effort.

However, the nagging question that arises whenever academics craft and implement their own interventions is whether the estimated effects that emerge from these “artificial” interventions differ from what one would ordinarily expect from naturally-occurring campaigns. Our study drew criticism from campaign consultants who contended that our use of nonpartisan messages undercut the effectiveness of voter mobilization. Real voter mobilization, it was argued, motivates voters to support a given cause or candidate.[45] This critique raises the empirical question of whether nonpartisan messages are in fact less effective than advocacy messages, a hypothesis that prompted subsequent experimentation.[46] Another concern is that our results may have been driven by the idiosyncrasies of New Haven or 1998 or both. This concern is of special importance when interventions such as door-knocking or phone calls fail to reach their targets. Were the “compliers” in our study unusually responsive or unresponsive to these interventions? Would average effects among compliers have been different in other settings, where compliance rates differed? This question set in motion several replication studies in other settings, including some that made herculean efforts to contact a high proportion of the intended targets.[47]
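
The logic behind this concern can be made concrete with the standard adjustment for noncompliance, in which the intent-to-treat effect is divided by the contact rate to estimate the average effect among compliers. The sketch below uses made-up numbers purely for illustration.

```python
# Minimal sketch of the standard complier adjustment under one-sided
# noncompliance. All numbers below are hypothetical.

turnout_treat = 0.47    # turnout among those *assigned* to be canvassed
turnout_control = 0.44  # turnout among the control group
contact_rate = 0.30     # share of the treatment group actually reached

itt = turnout_treat - turnout_control  # intent-to-treat effect
cace = itt / contact_rate              # complier average causal effect

print(f"ITT = {itt:.3f}, CACE = {cace:.3f}")
# A 3-point ITT with a 30% contact rate implies a 10-point effect among
# compliers, which is why compliance rates matter so much when
# extrapolating results to settings where more (or fewer) targets are reached.
```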

More broadly, one may ask whether campaigns, naturally-occurring or artificial, vary in their effectiveness and whether this variation can be traced systematically to features of the campaign, such as its messages, messengers, or targeting. Experiments that speak to this larger research agenda have related the apparent effectiveness of voter-mobilization tactics to the ways in which campaign workers are trained,[48] the manner in which the firms that contract for campaign work recruit and compensate their workers,[49] and the messengers that campaigns deploy.[50]

Given the many dimensions along which voter mobilization campaigns vary, the investigation of why some mobilization efforts are more effective than others has generated a sprawling experimental enterprise that spans multiple election cycles and multiple continents. By the time the 2015 edition of Get Out the Vote: How to Increase Voter Turnout went to press, we counted a total of 51 distinct experiments on canvassing, 85 on direct mail, 47 on live phone calls, and still more on e-mail, automated calls, text messaging, social media, mass media, and other assorted tactics.

As field experience and experimental studies accumulated, it became possible for academic researchers to write from the perspective of an evaluation researcher tasked with both assessing the practical challenges of implementing an intervention and measuring its apparent impact on voter turnout. The task, however, was bigger than writing evaluations of specific campaigns: it was to write about the feasibility and effectiveness of each mobilization tactic from the vantage point of someone who has conducted or read many such evaluations. Get Out The Vote differs in three important ways from most how-to books on campaign management. First, advice is derived from the findings of a specific set of research studies as opposed to an amorphous set of anecdotes. Second, these research studies all meet a high methodological standard insofar as they are random assignment studies conducted in naturalistic settings. Where multiple studies evaluate the effectiveness of a given tactic, meta-analysis is used to summarize the research literature in a systematic fashion. Third, the relevant research findings are summarized not only in terms of what they imply for campaign practice but also in terms of the uncertainty associated with them. Using one to three “stars” to denote how reliable the body of evidence is for a particular claim, the chapter summaries convey in nontechnical terms to a lay audience what a confidence interval conveys to a scientific audience.
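
For readers unfamiliar with meta-analysis, the sketch below shows the core of the inverse-variance pooling on which such summaries typically rest. The study estimates are invented, and real meta-analyses (including those underlying Get Out the Vote) involve further refinements, such as random-effects models that allow true effects to vary across studies.

```python
import math

# Hypothetical (effect estimate, standard error) pairs from several
# experiments evaluating the same tactic, in percentage points.
studies = [(2.1, 0.9), (0.8, 0.6), (3.0, 1.4), (1.2, 0.7)]

# Fixed-effect inverse-variance pooling: precise studies get more weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} pp (95% CI {lo:.2f} to {hi:.2f})")
```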

Although Get Out the Vote’s conclusions are derived from meta-analyses of the large and growing collection of experiments on voter mobilization, we continually remind readers that the extant research literature has important limitations and blind spots. The experimental literature comprises studies that proved feasible given the many impediments to collaboration between scholars and their non-academic partners. Our experience suggests that it is often difficult to align scholarly aims and the incentives of those who work for candidates, interest groups, labor unions, political parties, social media platforms, nonprofit groups, or government agencies. Research collaborations may fizzle for any of the following reasons. The campaign may be unwilling to vary the treatments or withhold treatment altogether, especially when the political stakes are high (e.g., presidential elections). Those conducting the campaign may fear that the experiment will return unflattering results and may feel the venture is too risky professionally even if scholars promise them anonymity or an embargo period. The campaign may lack (or believe that it lacks) sufficient capacity to implement the treatments according to the experimental design. Collaborators may be uninterested in the research question, perhaps because they feel they already know the answer. Events or personnel changes may upset the working relationship between the campaign and outside researchers.[51] The net effect of these constraints is a research literature that, with a few notable exceptions, tends to focus on relatively inexpensive tactics deployed in low- and medium-salience elections. One might argue that these are precisely the kinds of tactics and settings that are typical of the vast majority of U.S. elections; nevertheless, it is important to be clear about these features of the research literature when drawing generalizations.

Another potential stumbling block is selective disclosure of experimental results. On the academic side, there is ample evidence that splashy statistical results are more likely to find their way into print. In their meta-analysis of canvassing, phone, and mail studies, Green, McGrath, and Aronow report that unpublished papers tended to find smaller effects on turnout than published papers.[52] To some extent, this problem can be overcome by making special efforts to round up unpublished reports or presentation slides. Pre-registration of experiments in searchable public registries helps keep the file-drawer problem in check, but usage of such registries by scholars conducting GOTV experiments remains spotty.

As time goes on, the selective reporting problem grows more nettlesome as interest groups increasingly conduct their experiments “in-house” and release the results of these (unregistered) studies selectively. We next discuss the special challenges that this poses to those who seek to synthesize the experimental research literature.

The Challenge of Proprietary Research

In recent years, ideological groups have gradually developed their own research capacity. On the left, the Analyst Institute, which describes itself on its website as “a clearinghouse for evidence-based best practices in progressive voter contact,” conducts randomized experiments on topics such as voter mobilization and persuasion. Its counterpart on the right, the Center for Strategic Initiatives, is a forum for randomized trials that evaluate the effectiveness of tactics designed to bolster the election of conservative candidates.[53] In addition to these groups, some presidential campaigns and party committees have their own internal research groups,[54] as do corporations such as Google and Facebook. In other words, experimentation on the effectiveness of political tactics is no longer an activity in which university researchers play the lead role. Although the extent of proprietary research activities remains unknown, the number and magnitude of experiments conducted outside of universities seem to be growing and may eventually surpass those of university-based research.

It is not hard to understand the allure of in-house experimentation from the standpoint of the sponsoring organization. The lack of formal ties to academic institutions or public funding means that experiments can be conducted outside the purview of institutional review boards. Once the study is launched, the organization is able to keep a lid on findings that might disclose tactical secrets or, more darkly, cast doubt on the effectiveness of its campaign activities. Indeed, the organization can pull the plug on studies midway if they seem to be generating unappealing results. Proprietary research is not subject to the same norms of data-sharing that apply to published academic studies, which, together with the lack of pre-analysis plans, means that findings can be reported selectively.[55]

Proprietary studies, on the other hand, have some attractive features for academic researchers. The scale of some organizations’ interventions presents opportunities to greatly extend the boundaries of what academic researchers could feasibly undertake. The aforementioned Facebook experiment involved an extraordinary level of sustained exposure to persuasive messages that is not available at any price to those outside Facebook. The same is true for direct-mail campaigns targeting tens of millions of voters or mass-media campaigns covering vast swaths of the country. Sometimes researchers working internally in these campaigns are given permission to present the results publicly; for example, Vincent Pons oversaw a nationwide canvassing experiment conducted by the Socialist Party in France prior to the 2012 presidential elections, and the result is an unusually rich contribution to the research literature.[56] The problem is the studies that we do not see—for example, we do not know whether a follow-up Facebook study was conducted and, if so, how it came out. Organizations such as Evidence in Governance and Politics urge academic researchers to disclose whether they have their collaborating organization’s permission to publish results regardless of how they come out, but no similar framework exists for research conducted in-house in nonacademic settings.

The problem of selective reporting represents one of the leading challenges for those who aspire to do translational social science in this domain. In Get Out the Vote, we explain to readers that as scholars who regularly interact with researchers working inside campaign organizations on both sides of the aisle, we are privy to results that remain outside the public domain. This creates a dilemma. We do not want to summarize public-domain research in a way that we know to be contradicted by proprietary results; at the same time, we are not at liberty to disclose proprietary results, and we cannot be sure that we have seen all of the relevant proprietary results. Our admittedly inadequate solution is to base our recommendations on meta-analyses of public studies and those proprietary studies that became public without any apparent filtering process by the sponsoring organization. In cases where the results from our meta-analysis are challenged by a proprietary study outside the public domain of which we are aware, we check to see whether our meta-analysis would be materially affected by the inclusion of the proprietary results. Fortunately, the results have to date proven robust, but that may change in years ahead as proprietary research comes to comprise a larger share of the experimental literature. More thinking needs to go into the question of how to assess the potential for bias arising from selective reporting of proprietary studies; perhaps researchers in this domain need to develop a checklist of questions to ask of proprietary studies, akin to those used by researchers assembling meta-analyses in other disciplines.[57]
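
Mechanically, the robustness check described here amounts to re-running the meta-analysis with the additional study included and asking whether the pooled estimate moves materially. A hedged sketch, reusing the pooling logic from the earlier example with invented numbers:

```python
import math

def pooled_estimate(studies):
    """Fixed-effect inverse-variance pooling of (estimate, SE) pairs."""
    weights = [1 / se**2 for _, se in studies]
    est = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

public = [(2.1, 0.9), (0.8, 0.6), (3.0, 1.4), (1.2, 0.7)]  # hypothetical
proprietary = (0.2, 1.0)                                    # hypothetical

base, base_se = pooled_estimate(public)
alt, alt_se = pooled_estimate(public + [proprietary])

# Would including the unpublished result materially change the conclusion?
print(f"Public studies only:     {base:.2f} (SE {base_se:.2f})")
print(f"With proprietary study:  {alt:.2f} (SE {alt_se:.2f})")
```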

Translational Social Science Can Be Theoretically Informative

Although the experimental literature on voter turnout and persuasion comprises hundreds of studies, the research agenda remains vibrant and theoretically engaging. One reason for the continued growth and development of this literature is the internationalization of the research program. Canvassing studies have been conducted in settings as different as China, Pakistan, and Sweden. This wide range of institutional and socioeconomic conditions presents a difficult hurdle for narrow psychological theories about why mobilization works. For example, a growing body of experimental evidence from various European countries suggests that door-to-door mobilization tactics fare poorly in places such as Denmark, France, or Spain, which have no political tradition of door-to-door canvassing.[58] Where canvassing is a longstanding feature of electoral campaigns, as in England, it appears to be substantially more effective.[59] This pattern runs counter to what one would expect if canvassing works because it reminds voters about an upcoming election or imparts a sense that the election is meaningful. If we are to understand why canvassing works, we need a better understanding of why it fails to work under certain conditions.

Experimental research in this domain has also become increasingly focused on research questions of theoretical significance. Whereas the early studies sought to map out the broad contours of what interventions seem to work best, more recent work has zeroed in on social psychological hypotheses about which messages work best and for whom. A prominent set of hypotheses underscores the role of prescriptive social norms, or widely-shared views about how one ought to behave. One such norm is participating in elections, which is widely regarded as a civic duty. One way to overcome the paradox of voter turnout is to posit that voters derive intrinsic satisfaction from upholding this norm. Another hypothesis is that voters also respond to extrinsic social incentives, such as the threat of disapproval from others should their failure to vote become known. Several experiments have gauged the extent to which voter turnout rises when voters are presented with information suggesting that voter turnout is publicly observable and that their compliance with the norm of voting is being monitored.[60] Political campaigns have adapted these tactics for their own voter-mobilization efforts, sometimes softening the messages to prevent voter anger.[61] One of the more interesting research questions is whether voters gradually become inured to receiving social pressure mailings and cease to respond to them. This hypothesis suggests another reason that the experimental literature remains theoretically engaging—the causal parameters of interest may change over time.

Many experiments are designed to detect theoretically interesting phenomena. One example is the transmission of effects from one person to another. It has long been conjectured that voting patterns among people living in the same environment are correlated because they influence each other’s decision to vote, but it is only recently that interpersonal influence has been demonstrated experimentally in field settings. Evidently, canvassing[62] and text messaging[63] have mobilizing effects that indirectly affect others in the household. Another theoretically intriguing phenomenon is the over-time persistence of voters’ propensity to vote. To what extent does a random inducement to vote in one election increase turnout in subsequent elections? Although one cannot rule out the possibility that memories of the initial intervention have enduring effects on turnout, it seems clear that inducements to vote often elevate turnout in elections held months or even years later.[64] In addition to being theoretically intriguing, interpersonal transmission and persistence of effects have enormous practical implications, as they have the potential to dramatically alter a campaign’s benefit-cost calculation.

Finally, it should be stressed that the experimental literature may be theoretically informative even when the individual experiments that comprise it are not guided by any particular theory or puzzle. For example, scholars have long noted the correlation between voter turnout and the closeness of elections. Congressional districts with more competitive elections tend to attract more voters to the polls.[65] One hypothesis is that close elections attract more campaign spending, which in turn generates campaign communication, which in turn stimulates turnout.[66] By this argument, it is not get-out-the-vote activity so much as the overall heat of the race that informs voters about the candidates and generates the impression that the stakes are high. However, the body of experimental evidence runs counter to this hypothesis insofar as it demonstrates that showering voters with campaign communication (e.g., large numbers of mailers advocating support for a legislative candidate) does nothing to raise turnout.[67] The experimental literature is noteworthy for puncturing many such claims: the notion that people would be substantially more likely to vote if they were paid to do so[68] or if they were provided with easy-to-read voter guides.[69] By clarifying whether these hypothesized effects are important enough to be theoretically meaningful, these experiments also shed light on whether they are likely to be useful in practice.

Concluding Thoughts

The recurrent theme of Get Out the Vote and the research literature on which it is based is that rigorous experimental evaluation generates stubborn facts that rein in theories and provide a much-needed sense of proportion to practitioners who seek to allocate scarce campaign resources. The advent of experimentation in the domain of voter mobilization or persuasion is arguably like the development of the microscope or telescope, which generated rapid scientific advances with important practical consequences. With better ways of measuring the effectiveness of campaign tactics, scholars can differentiate tactics that work from those that do not and gradually develop an empirically-grounded set of propositions about the conditions under which campaign communication is effective.

Many experimental evaluations of campaign interventions produce precisely estimated effects that are close to zero. One interesting practical implication is that resources are routinely wasted on ineffective campaign tactics. Why campaigns persist in deploying their resources inefficiently remains an unresolved puzzle. On the surface, it would seem that campaign consultants operate in a highly competitive environment that should encourage the adoption of effective tactics; apparently, this process is slowed by other factors, such as the way that campaign consultants are compensated, the information available to those who hire them, or their resistance to scrutiny by outside evaluators. In the future, as the financial stakes of political campaign activity climb, we may see the emergence of a new niche of professionals tasked with real-time monitoring and evaluation of political campaigns and their subcontractors.

Another practical implication is the gradual development of realistic expectations about what campaigns can and cannot achieve. As Sides and Vavreck argue in their book The Gamble: Choice and Chance in the 2012 Presidential Election, the advantage that accrues to one candidate on account of its superior mobilization efforts typically amounts to just a few percentage points.[70] This could be the margin of victory in a close election but may have little bearing on the outcome of most elections, which are uncompetitive. Superior campaign tactics might still make a difference for a large portfolio of uncompetitive elections, but often the effects are subtle and probably cannot be detected reliably without a carefully controlled evaluation. The experimental literature on campaign effects suggests that electorally-relevant effects are often elusive in part because most campaigns rely on high-volume interventions, such as direct mail or automated phone calls, which tend to have limited effects. Larger effects can be obtained from high-quality personal interactions at voters’ doorsteps or via volunteer phone banks, but quality is difficult to maintain as the scale of the campaign expands from a precinct to an entire state.

Here is where the science of voter mobilization brushes shoulders with broader topics, such as how to inspire the enthusiasm and sustained activism of large numbers of campaign workers. In time, the focus of experimental campaign research may go beyond the study of individual voters to the study of entire campaigns. A political party, labor union, or interest group may one day randomize which kinds of candidates it deploys across an array of legislative races, what kinds of campaign messages they emphasize, and the terms under which consultants are retained. This research frontier currently lies outside the realm of what is considered feasible. One role that scholars play in the process of innovation is to show that such research can be conducted and to lay out the costs and benefits of acquiring new knowledge.

Footnotes

1. Schlesinger and Eriksson 1924.

2. Gosnell 1927, 2.

3. Merriam and Gosnell 1924.

4. Gosnell 1927, 104, 110.

5. Forsetlund, Chalmers, and Bjørndal 2007.

6. Gosnell 1927, 27.

7. Gerber, Green, and Larimer 2008.

8. Gosnell 1927, 110.

12. Adams and Smith 1980; Miller, Bositis, and Baer 1981.

13. Berelson, Lazarsfeld, and McPhee 1954.

16. Huckfeldt and Sprague 1992.

17. Wolfinger and Rosenstone 1980.

18. Prominent examples include Kramer 1970, Patterson and Caldeira 1983, Rosenstone and Hansen 1993, and Wielhouwer and Lockerbie 1994.

22. Rosenstone and Wolfinger 1978.

26. Leighley and Nagler 2013; Schlozman, Verba, and Brady 2012.

27. Lijphart 1997, 2.

29. Grey 1999, ch. 6.

30. Shea and Burton 2001.

31. Malchow 2003. More recent how-to books, such as The Political Campaign Desk Reference (McNamara 2012) or the fifth edition of Shaw’s The Campaign Manager (2014), continue to offer advice without references, but some chapters show signs of familiarity with the experimental literature.

32. Angrist and Pischke 2010.

33. Bositis and Steinel 1987.

36. Hersh 2015. Individual records of turnout are available in a small number of countries outside the United States, such as the United Kingdom. Nevertheless, recent years have seen dramatic growth in the number of experiments conducted outside the American context. For a review of this literature, see Green and York forthcoming.

37. Moore and Schnakenberg 2015.

42. Aronow and Middleton 2013.

44. García Bedolla and Michelson 2012. See also Enos, Fowler, and Vavreck 2014.

45. Grenzke and Watts 2005.

47. Green, Gerber, and Nickerson 2003; Michelson 2003.

49. Mann and Klofstad 2015.

50. Sinclair, McConnell, and Michelson 2013.

51. In addition, experiments may fall apart due to the constraints under which academic researchers operate. Scholars are often reluctant to work on narrow evaluations that seem unlikely to eventuate in publication. That might explain why randomized experiments have, to our knowledge, never assessed the relative effectiveness of phone banks using automated phone dialers versus hand-dialed phone numbers.

52. Green, McGrath, and Aronow 2013, 31.

55. Humphreys, de la Sierra, and van der Windt 2013.

59. John and Brannan 2008.

61. Mann 2010; Panagopoulos 2013b. See Green and Gerber 2015 for a discussion of randomized trials testing other social-psychological theories—for example, those pertaining to implementation intentions and self-prophesy.

65. Cox and Munger 1989.

66. Another hypothesis is that voters are more interested in close elections. Yet another hypothesis is that people turn out to vote in close elections in the hopes of casting the pivotal vote, but that hypothesis receives no support from experiments that specifically call attention to the prospect of a close election—even in the wake of a tied election. See Enos and Fowler 2014.

67. Cubbison 2015; Gerber, Green, and Green 2003.

69. García Bedolla and Michelson 2009.

70. Sides and Vavreck 2014, 239.

References

Adams, William C. and Smith, Dennis J.. 1980. “Effects of Telephone Canvassing on Turnout and Preferences: A Field Experiment.” Public Opinion Quarterly 44(3): 389–95.CrossRefGoogle Scholar
Angrist, Joshua D. and Pischke, Jörn-Steffen. 2010. “The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics.” Journal of Economic Perspectives 24(2): 330.CrossRefGoogle Scholar
Aronow, Peter M. 2012. “A General Method for Detecting Interference between Units in Randomized Experiments.” Sociological Methods and Research 41(1): 316.CrossRefGoogle Scholar
Aronow, Peter M. and Middleton, Joel A.. 2013. “A Class of Unbiased Estimators of the Average Treatment Effect in Randomized Experiments.” Journal of Causal Inference 1(1): 135–54.CrossRefGoogle Scholar
Barry, Brian M. 1978. “Circumstances of Justice and Future Generations.” In Obligations to Future Generations, ed. Sikora, Richard I. and Barry, Brian M.. Philadelphia, PA: Temple University.Google Scholar
Berelson, Bernard R., Lazarsfeld, Paul F., and McPhee, William N.. 1954. Voting: A Study of Opinion Formation in a Presidential Campaign. Chicago, IL: University of Chicago Press.Google Scholar
Bhatti, Yosef, Olav Dahlgaard, Jens, Hedegaard Hansen, Jonas, and Hansen, Kasper M.. 2014. “The (Lack of) Effect of Door-to-Door Canavassing in Denmark.” Working Paper. University of Copenhagen.Google Scholar
Bhatti, Yosef, Olav Dahlgaard, Jens, Hedegaard Hansen, Jonas, and Hansen, Kasper M.. 2016. “How Voter Mobilization from Short Text Messages Travel within Households and Families: Evidence from two Nationwide Field Experiments.” Working Paper. Danish Institute for Local and Regional Government Research and the University of Copenhagen.CrossRefGoogle Scholar
Blydenburg, John C. 1971. “The Closed Rule and the Paradox of Voting.” Journal of Politics 33(1): 5771.CrossRefGoogle Scholar
Bond, Robert M., Fariss, Christopher J., Jones, Jason J., Kramer, Adam D. I., Marlow, Cameron, Settle, Jaime, and Fowler, James H.. 2012. “A 61-Million Person Experiment in Social Influence and Political Mobilization.” Nature 489: 295–98.CrossRefGoogle Scholar
Bositis, David A. and Steinel, Douglas. 1987. “A Synoptic History and Typology of Experimental Research in Political Science.” Political Behavior 9(3): 263–84.CrossRefGoogle Scholar
Campbell, Angus, Converse, Philip E., Miller, Warren E., and Stokes, Donald E.. 1960. The American Voter. Chicago, IL: University of Chicago Press.Google Scholar
Cox, Gary W. and Munger, Michael C.. 1989. “Closeness, Expenditures, and Turnout in the 1982 U.S. House Elections.” American Political Science Review 83(1): 217–31.CrossRefGoogle Scholar
Cubbison, William. 2015. “The Marginal Effects of Direct Mail on Vote Choice.” Presented at the 2015 Annual Meeting of the American Political Science Association, San Francisco, CA, September 3–6.Google Scholar
Cutts, David, Fieldhouse, Edward, and John, Peter. 2009. “Is Voting Habit Forming? The Longitudinal Impact of a GOTV Campaign in the UK.” Journal of Elections, Public Opinion and Parties 19(3): 251–63.CrossRefGoogle Scholar
Davenport, Tiffany C., Gerber, Alan S., Green, Donald P., Larimer, Christopher W., Mann, Christopher B., and Panagopoulos, Costas. 2010. “The Enduring Effects of Social Pressure: Tracking Campaign Experiments over a Series of Elections.” Political Behavior 32(3): 423–30.CrossRefGoogle Scholar
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper.Google Scholar
Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur, eds. 2011. Cambridge Handbook of Experimental Political Science. Cambridge: Cambridge University Press.CrossRefGoogle Scholar
Eldersveld, Samuel J. 1956. “Experimental Propaganda Techniques and Voting Behavior.” American Political Science Review 50(1): 154–65.CrossRefGoogle Scholar
Eldersveld, Samuel J. and Dodge, Richard W.. 1954. “Personal Contact or Mail Propaganda? An Experiment in Voting and Attitude Change.” In Public Opinion and Propaganda, ed. Katz, Daniel, Cartwright, Dorwin, Eldersveld, Samuel, and Lee, Alfred M.. New York: Dryden.Google Scholar
Enos, Ryan D. and Fowler, Anthony. 2014. “Pivotality and Turnout: Evidence from a Field Experiment in the Aftermath of a Tied Election.” Political Science Research and Methods 2(2): 309–19.CrossRefGoogle Scholar
Enos, Ryan D., Fowler, Anthony, and Vavreck, Lynn. 2014. “Increasing Inequality: The Effect of GOTV Mobilization on the Composition of the Electorate.” Journal of Politics 76(1): 273–88.CrossRefGoogle Scholar
Fitzgerald, Mary. 2005. “Greater Convenience but Not Greater Turnout: The Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research 33(6): 842–67.CrossRefGoogle Scholar
Forsetlund, Louise, Chalmers, Iain, and Bjørndal, Arild. 2007. “When Was Random Allocation First Used to Generate Comparisons in Experiments to Assess the Effects of Social Interventions?” Economics of Innovation and New Technology 16(5): 371384.CrossRefGoogle Scholar
García Bedolla, Lisa and Michelson, Melissa R. 2009. “What Do Voters Need to Know? Testing the Role of Cognitive Information in Asian American Voter Mobilization.” American Politics Research 37(2): 254–74.CrossRefGoogle Scholar
García Bedolla, Lisa and Michelson, Melissa R. 2012. Mobilizing Inclusion: Transforming the Electorate through Get-Out-the-Vote Campaigns. New Haven, CT: Yale University Press.CrossRefGoogle Scholar
Gerber, Alan S., Green, Donald P., and Green, Matthew N.. 2003. “The Effects of Partisan Direct Mail on Voter Turnout.” Electoral Studies 22(4): 563–79.CrossRefGoogle Scholar
Gerber, Alan S., Green, Donald P., Kaplan, Edward H., and Kern, Holger L.. 2010. “Baseline, Placebo, and Treatment: Efficient Estimation for Three-Group Experiments.” Political Analysis 18(3): 297315.CrossRefGoogle Scholar
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W.. 2008. “Social Pressure and Voter Turnout: Evidence from a Large-scale Field Experiment.” American Political Science Review 102(1): 3348.CrossRefGoogle Scholar
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W.. 2010. “An Experiment Testing the Relative Effectiveness of Encouraging Voter Participation by Inducing Feelings of Pride and Shame.” Political Behavior 32(3): 409–22.CrossRefGoogle Scholar
Gerber, Alan S., Green, Donald P., and Shachar, Ron. 2003. “Voting May Be Habit Forming: Evidence from a Randomized Field Experiment.” American Journal of Political Science 47(3): 540–50.CrossRefGoogle Scholar
Gertzog, Irwin N. 1970. “The Electoral Consequences of a Local Party Organization’s Registration Campaign: The San Diego Experiment.” Polity 3(2): 247–64.CrossRefGoogle Scholar
Gosnell, Harold F. 1927. Getting Out the Vote: An Experiment in the Stimulation of Voting. Chicago, IL: University of Chicago Press.Google Scholar
Gosnell, Harold F. 1930. Why Europe Votes. Chicago, IL: University of Chicago Press.Google Scholar
Gosnell, Harold F. 1935. Negro Politicians: The Rise of Negro Politics and Chicago. Chicago, IL: University of Chicago Press.CrossRefGoogle Scholar
Gosnell, Harold F. 1937. Machine Politics: The Chicago Model. Chicago, IL: University of Chicago Press.Google Scholar
Green, Donald P. and Gerber, Alan S.. 2004. Get Out the Vote: How to Increase Voter Turnout. 1st ed. Washington, DC: Brookings Institution Press.Google Scholar
Green, Donald P. and Gerber, Alan S.. 2008. Get Out the Vote: How to Increase Voter Turnout. 2nd ed. Washington, DC: Brookings Institution Press.Google Scholar
Green, Donald P. and Gerber, Alan S.. 2015. Get Out the Vote: How to Increase Voter Turnout. 3rd ed. Washington, DC: Brookings Institution Press.Google Scholar
Green, Donald P., Gerber, Alan S., and Nickerson, David W.. 2003. “Getting Out the Vote in Local Elections: Results from Six Door-to-Door Canvassing Experiments.” Journal of Politics 65(4): 1083–96.CrossRefGoogle Scholar
Green, Donald P. and Kern, Holger L.. 2012. “Modeling Heterogeneous Treatment Effects in Survey Experiments with Bayesian Additive Regression Trees.” Public Opinion Quarterly 76(3): 491511.CrossRefGoogle Scholar
Green, Donald P., McGrath, Mary C., and Aronow, Peter M.. 2013. “Field Experiments and the Study of Voter Turnout.” Journal of Elections, Public Opinion and Parties 23(1): 2748.CrossRefGoogle Scholar
Green, Donald P. and York, Erin A.. Forthcoming. “Field Experiments in Political Behavior.” Routledge Handbook of Public Opinion and Voting Behaviour (Fisher, Justin et al., eds.). New York: Routledge.Google Scholar
Grenzke, Janet and Watts, Mark. 2005. “Hold the Phones: Taking Issue with a Get-Out-the-Vote Strategy.” Campaigns & Elections 25 (December/January): 8183.Google Scholar
Grey, Lawrence. 1999. How to Win a Local Election: A Complete Step-by-Step Guide. Lanham, MD: M. Evans.Google Scholar
Hartmann, George W. 1936. “A Field Experiment on the Comparative Effectiveness of ‘Emotional’ and ‘Rational’ Political Leaflets in Determining Election Results.” Journal of Abnormal and Social Psychology 31(1): 99114.CrossRefGoogle Scholar
Hersh, Eitan D. 2015. Hacking the Electorate: How Campaigns Perceive Voters. Cambridge: Cambridge University Press.CrossRefGoogle Scholar
Huckfeldt, Robert and Sprague, John. 1992. “Political Parties and Electoral Mobilization: Political Structure, Social Structure, and the Party Canvass.” American Political Science Review 86(1): 7086.CrossRefGoogle Scholar
Humphreys, Macartan, de la Sierra, Raul Sanchez, and van der Windt, Peter. 2013. “Fishing.” Political Analysis 21(1): 120.CrossRefGoogle Scholar
Imai, Kosuke and Strauss, Aaron. 2011. "Estimation of Heterogeneous Treatment Effects from Randomized Experiments, with Application to the Optimal Planning of the Get-Out-The-Vote Campaign." Political Analysis 19(1): 1–19.
Imai, Kosuke and Ratkovic, Marc. 2013. "Estimating Treatment Effect Heterogeneity in Randomized Program Evaluation." Annals of Applied Statistics 7(1): 443–70.
Issenberg, Sasha. 2012. The Victory Lab: The Secret Science of Winning Campaigns. New York: Crown Publishers.
Issenberg, Sasha. 2015. "Inside the GOP's Effort to Close the Campaign-Science Gap with Democrats." Bloomberg. Available at http://www.bloomberg.com/politics/features/2015-07-08/inside-the-gop-s-effort-to-close-the-campaign-science-gap-with-democrats.
John, Peter and Brannan, Tessa. 2008. "How Different Are Telephoning and Canvassing? Results from a 'Get Out the Vote' Field Experiment in the British 2005 General Election." British Journal of Political Science 38: 565–74.
Kramer, Gerald H. 1970. "The Effects of Precinct-Level Canvassing on Voter Behavior." Public Opinion Quarterly 34(4): 560–72.
Leighley, Jan E. and Nagler, Jonathan. 2013. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. Princeton, NJ: Princeton University Press.
Lijphart, Arend. 1997. "Unequal Participation: Democracy's Unresolved Dilemma." American Political Science Review 91(1): 1–14.
Lupfer, Michael and Price, David E. 1972. "On the Merits of Face-to-Face Campaigning." Social Science Quarterly 53(3): 534–43.
Macedo, Stephen, Alex-Assensoh, Yvette, Berry, Jeffrey M., Brintnall, Michael, Campbell, David E., Fraga, Luis Ricardo, Fung, Archon, Karpowitz, Christopher F., Levi, Margaret, Levinson, Meira, Lipsitz, Keena, Niemi, Richard G., Putnam, Robert D., Rahn, Wendy M., Reich, Rob, Rodgers, Robert R., Swanstrom, Todd, and Walsh, Katherine Cramer. 2005. Democracy at Risk: How Political Choices Undermine Citizen Participation, and What We Can Do about It. Washington, DC: Brookings Institution Press.
Malchow, Hal. 2003. The New Political Targeting. Washington, DC: Campaigns and Elections.
Mann, Christopher B. 2010. "Is There a Backlash to Social Pressure? A Large-Scale Field Experiment on Voter Mobilization." Political Behavior 32(3): 387–407.
Mann, Christopher B. and Klofstad, Casey A. 2015. "The Role of Call Quality in Voter Mobilization: Implications for Electoral Outcomes and Experimental Design." Political Behavior 37(1): 135–54.
McNamara, Michael. 2012. The Political Campaign Desk Reference: A Guide for Campaign Managers, Professionals and Candidates Running for Office. 2nd ed. Denver, CO: Outskirts Press.
Merriam, Charles and Gosnell, Harold F. 1924. Non-Voting: Causes and Methods of Control. Chicago, IL: University of Chicago Press.
Michelson, Melissa R. 2003. "Getting Out the Latino Vote: How Door-to-Door Canvassing Influences Voter Turnout in Rural Central California." Political Behavior 25(3): 247–63.
Michelson, Melissa R. 2014. "Memory and Voter Mobilization." Polity 46: 591–610.
Milbrath, Lester W. 1965. Political Participation. Chicago, IL: Rand McNally.
Miller, Roy E., Bositis, David A., and Baer, Denise L. 1981. "Stimulating Voter Turnout in a Primary: Field Experiment with a Precinct Committeeman." International Political Science Review 2(4): 445–60.
Moore, Ryan T. and Schnakenberg, Keith. 2015. "blockTools: Blocking, Assignment, and Diagnosing Interference in Randomized Experiments." R package version 0.6-2.
Nickerson, David W. 2005. "Scalable Protocols Offer Efficient Design for Field Experiments." Political Analysis 13: 1–20.
Nickerson, David W. 2007. "Quality Is Job One: Volunteer and Professional Phone Calls." American Journal of Political Science 51(2): 269–82.
Nickerson, David W. 2008. "Is Voting Contagious? Evidence from Two Field Experiments." American Political Science Review 102(1): 49–57.
Olson, Mancur. 1965. The Logic of Collective Action. Cambridge, MA: Harvard University Press.
Panagopoulos, Costas. 2009. "Partisan and Nonpartisan Message Content and Voter Mobilization: Field Experimental Evidence." Political Research Quarterly 62(1): 70–77.
Panagopoulos, Costas. 2010. "Affect, Social Pressure, and Prosocial Motivation: Experimental Evidence of the Mobilizing Effects of Pride, Shame, and Publicizing Voting Behavior." Political Behavior 32(3): 369–86.
Panagopoulos, Costas. 2013a. "Extrinsic Rewards, Intrinsic Motivation, and Voting." Journal of Politics 75(1): 266–80.
Panagopoulos, Costas. 2013b. "Positive Social Pressure and Prosocial Motivation: Evidence from a Large-Scale Field Experiment on Voter Mobilization." Political Psychology 34(2): 265–75.
Patterson, Samuel C. and Caldeira, Gregory A. 1983. "Getting Out the Vote: Participation in Gubernatorial Elections." American Political Science Review 77(3): 675–89.
Pons, Vincent. 2016. "Will a Five-Minute Discussion Change Your Mind? A Countrywide Experiment on Vote Choice in France." Harvard Business School Working Paper.
Ramiro, Luis, Morales, Laura, and Jimenez-Buedo, Maria. 2012. "The Effects of Party Mobilization on Electoral Results: An Experimental Study of the 2011 Spanish Local Elections." Presented at the 2012 Annual Meeting of the International Political Science Association, Madrid, July 8–12. Available at http://paperroom.ipsa.org/papers/view/15950.
Reback, G.L. 1971. "The Effects of Precinct Level Voter Contact Activities on Voter Behavior." Experimental Study of Politics 1: 65–97.
Rosenstone, Steven J. and Hansen, John Mark. 1993. Mobilization, Participation, and Democracy in America. New York: Macmillan.
Rosenstone, Steven J. and Wolfinger, Raymond E. 1978. "The Effect of Registration Laws on Voter Turnout." American Political Science Review 72(1): 22–45.
Schlesinger, Arthur M. and Eriksson, Erik McKinley. 1924. "The Vanishing Voter." The New Republic, October 15.
Schlozman, Kay Lehman, Verba, Sidney, and Brady, Henry E. 2012. The Unheavenly Chorus: Unequal Political Voice and the Broken Promise of American Democracy. Princeton, NJ: Princeton University Press.
Shaw, Catherine. 2000. The Campaign Manager: Running and Winning Local Elections. 2nd ed. Boulder, CO: Westview Press.
Shaw, Catherine. 2014. The Campaign Manager: Running and Winning Local Elections. 5th ed. Boulder, CO: Westview Press.
Shea, Daniel M. and Burton, Michael John. 2001. Campaign Craft: The Strategies, Tactics, and Art of Political Campaign Management. Westport, CT: Praeger Publishers.
Sides, John and Vavreck, Lynn. 2014. The Gamble: Chance and Choice in the 2012 Presidential Election. Princeton, NJ: Princeton University Press.
Sinclair, Betsy, McConnell, Margaret, and Green, Donald P. 2012. "Detecting Spillover Effects: Design and Analysis of Multilevel Experiments." American Journal of Political Science 56(4): 1055–69.
Sinclair, Betsy, McConnell, Margaret, and Green, Donald P. 2013. "Local Canvassing: The Efficacy of Grassroots Voter Mobilization." Political Communication 30(1): 42–57.
Wattenberg, Martin P. 1998. "Should Election Day Be a Holiday?" The Atlantic Monthly 282(4): 42–46.
Wielhouwer, Peter W. and Lockerbie, Brad. 1994. "Party Contact and Political Participation, 1952–1990." American Journal of Political Science 38(1): 211–29.
Wolfinger, Raymond E. and Rosenstone, Steven J. 1980. Who Votes? New Haven, CT: Yale University Press.
Figure 1. Number of journal articles using field experiments to study voting, 1925–2015. Source: Google Scholar, May 2016. Refer to the text for more information about the search parameters.