
Does Compulsory Voting Lead to More Informed and Engaged Citizens? An Experimental Test

Published online by Cambridge University Press:  09 October 2008

Peter John Loewen*
Affiliation:
Université de Montréal
Henry Milner*
Affiliation:
Université de Montréal
Bruce M. Hicks*
Affiliation:
Université de Montréal
*
Peter Loewen, Département de science politique, Université de Montréal, CP 6128, Succ. Centre-ville, Montreal, Canada, H3C 3J7, peter.john.loewen@umontreal.ca
Henry Milner, Département de science politique, Université de Montréal, CP 6128, Succ. Centre-ville, Montreal, Canada, H3C 3J7, henry.milner@umontreal.ca
Bruce M. Hicks, Département de science politique, Université de Montréal, CP 6128, Succ. Centre-ville, Montreal, Canada, H3C 3J7, bruce.hicks@umontreal.ca

Abstract

Abstract. Does compulsory voting lead to more knowledgeable and engaged citizens? We report the results from a recent experiment measuring such “second-order effects” in a compulsory voting environment. We conducted the experiment during the 2007 Quebec provincial election among 121 students at a Montreal CEGEP. To receive payment, all the students were required to complete two surveys; half were also required to vote. By comparing knowledge and engagement measures between the two groups, we can measure the second-order effects of compulsory voting. We find little or no such effects.

Résumé. Le vote obligatoire augmente-t-il le niveau d'information et l'engagement politique des citoyens? Nous présentons les résultats d'une expérience mesurant de tels « effets secondaires » dans un environnement caractérisé par le vote obligatoire. Nous avons mené une expérience auprès de 121 étudiants d'un cégep montréalais lors de l'élection québécoise de 2007. Afin de recevoir une somme d'argent, les étudiants n'avaient qu'à compléter deux questionnaires; une moitié des participants devait en plus voter le jour de l'élection. En comparant le niveau d'information et l'engagement entre les deux groupes, nous pouvons mesurer les effets secondaires du vote obligatoire. Notre expérience révèle que le vote obligatoire a peu ou pas d'effet sur les connaissances et la participation.

Type
Research Article
Copyright
Copyright © Canadian Political Science Association 2008

By compelling people to vote we are likely to arouse in them an intelligent interest and to give them a political knowledge that they do not at present possess.

Arend Lijphart (1997), quoting an early Australian advocate of compulsory voting.

1. Introduction

In his well-known presidential address to the American Political Science Association, Arend Lijphart (1997) called for compulsory voting as a solution to unequal electoral participation in the United States. In doing so, he restated the main arguments of the advocates of compulsory voting. Most importantly, compulsory voting would increase turnout in elections. Second, compulsory voting would lead to a more politically knowledgeable and engaged electorate.Footnote 1

There can be no quibble with Lijphart's first assertion, which we regard as a first-order effect of compulsory voting. The cross-sectional (Jackman, 1987; Blais and Carty, 1990; Blais and Dobrzynska, 1998; Franklin, 1996, 2004) and quasi-experimental (Hirczy, 1994) evidence for this claim is clear. Compulsory voting increases turnout in national elections on average by some 10 to 15 percentage points—and even more in regional and local elections. However, the evidence for the second-order effects of compulsory voting is much less clear, at least partly because of the difficulty of making causal claims about cross-national differences in more subjective variables like political knowledge and engagement.

We argue that an experimental approach is an appropriate way in which to address this gap in our knowledge. To this end, we conducted an experiment in the winter of 2007 in the midst of the Quebec provincial election. Our experiment required that one group of (first-time) voters complete two surveys to receive a monetary reward, while another group was also required to vote in the provincial election, that is, they faced a financial penalty if they chose to abstain from voting. We take between-group differences in knowledge, news consumption, and political discussion as measures of the second-order effects of compulsory voting. To anticipate our findings, we find little evidence of the second-order effects of compulsory voting.

In section 2, we briefly review existing knowledge on the second-order effects of compulsory voting. In doing so, we draw a connection between the lack of current evidence and the value of an experimental approach, an approach which is gaining currency in political science (see Druckman et al., 2006). In section 3, we first operationalize Lijphart's second-order claim in the form of three hypotheses and then describe our experimental design and procedure. In section 4, we present our results. We then conclude.

2. Existing Knowledge and the Case for Experimentation

We lack a body of systematic empirical knowledge about the second-order effects of compulsory voting. For example, Bilodeau and Blais (2005) could uncover no empirical studies to support Lijphart's claim. To fill the gap, they attempted to substantiate his claim in three ways. They first examined whether citizens in Western European countries with compulsory voting report that they discuss politics more than those in non-compulsory countries. Second, they examined the behaviour of immigrants to New Zealand from compulsory-voting Australia. Third, they examined the behaviour of immigrants to Australia from compulsory-voting countries. In each case they sought differences in reported levels of political discussion, interest in politics and attitudes toward voting, but were unable to find evidence of second-order effects due to compulsory voting.

A recent analysis of Belgian survey data by Engelen and Hooghe (2007) could not find evidence of knowledge effects from compulsory voting. They used the hypothetical question, "What if voting were not compulsory?" to isolate those who vote to avoid sanction. They find evidence that those who vote to avoid sanction are less knowledgeable about and engaged in politics, suggesting that while compulsory voting is effective at bringing the otherwise less engaged to the polls, it is not necessarily effective at increasing their knowledge levels. Another recent study, using data from the Polish Election Survey, applied the same method in reverse, asking non-voters what they would do if voting were compulsory (Czesnik, 2007). Not surprisingly, those who reported that they would vote to avoid sanction were the least interested and knowledgeable. As with the Belgian study, this merely demonstrates that compulsory voting brings the otherwise less knowledgeable voters to the polls. Finally, Ballinger (2007) looked at the British and Australian evidence, concluding that Australian respondents are no better informed about political systems than British respondents.

While all of these studies are informative, they illustrate two methodological obstacles to testing the second-order effects of compulsory voting. First, in contrast to an objective measure like turnout, there is a major problem with cross-national comparability in survey questions tapping political knowledge. It is very difficult to establish that two national scales are measuring the same type and amount of political knowledge (King et al., 2004). Moreover, even if our scales are measuring exactly the same quantities, we cannot be certain that each country requires the same amount of knowledge for effective democratic citizenship. Second, even if one can come up with directly comparable survey measures of knowledge, the analyst will still be confronted with a problem of unobserved heterogeneity. It is entirely plausible that countries which adopt compulsory voting are also those which have a more engaged citizenry than those countries which do not require compulsory voting. Hence, we cannot assume that any observed differences are a function of compulsory voting and not some unobserved variable(s) in the populations (Gerber et al., 2004; Shapiro et al., 2004).

In the absence of a change in electoral law within a country allowing for a before-and-after quasi-experiment, there is no unambiguous empirical basis for determining the second-order effects of compulsory voting. What is needed, therefore, is a method which decouples the presence of compulsory voting from pre-existing levels of citizen engagement and knowledge. One such method is an experiment which randomly assigns some voters to a treatment which resembles one context (such as compulsory voting), while assigning others to a control condition. This is an analytical strategy in keeping with the experimental turn in political science (Druckman et al., 2006; McDermott, 2002; Lupia, 2002; Druckman and Lupia, forthcoming). We now describe and report results from one such experiment.

3. Hypotheses and Experimental Design

3.1 Hypotheses

Following Lijphart (1997: 10), compulsory voting could lead to a more informed and engaged electorate. We operationalize these second-order effects in the form of three hypotheses:

  • H1: Those who face a financial incentive against abstention should learn more about politics than those who do not face a similar incentive.

  • H2: Those who face a financial incentive against abstention should discuss politics more frequently than those who do not face a similar incentive.

  • H3: Those who face a financial incentive against abstention should follow the news more frequently than those who do not face a similar incentive.

To each of these three hypotheses we add this common extension: the second-order effects should be greatest among those who would not otherwise go to the polls.

To test these hypotheses, we conducted an experiment among eligible-to-vote students at a Montréal CEGEP during the March 2007 provincial election. The logic of our experimental design is quite simple. We recruited a group of students to participate in a study about “youth attitudes,” consisting of two surveys administered approximately one month apart, at either end of a provincial election campaign. All students who completed these surveys were eligible to receive $25 (CDN).Footnote 2

However, to receive this money a randomly selected subset of the students was also required to vote in the provincial election.Footnote 3 Accordingly, we were left with two groups, one of which faced a financial disincentive if they chose not to vote, the other of which faced no such disincentive. By comparing differences between these two groups in political knowledge, media news consumption and reported discussion about politics, we are able to draw inferences about the effects of compulsory voting-like incentives on voters, especially first-time voters. We note that those in our treatment condition faced a financial incentive to vote, which is not theoretically identical to the prospect of losing money through a fine (Kahneman and Tversky, 1979; Kahneman, 2003; Cohen and Blum, 2002). However, we feel confident that, for the purposes of our experiment, this sufficiently approximates compulsion. Moreover, it is the closest we could come within reasonable ethical limitations.

The office of the Director General of Elections (DGE) in Quebec is responsible for the administration of elections in the province, including the registration of voters and the administration of polling stations. Its co-operation made it possible to verify voting by our subjects.Footnote 4 The survey was conducted at Vanier College, a Montreal English-language CEGEP with over 5000 students from a variety of socio-economic groups and ethnic backgrounds, the majority of whom are in pre-university programs.

3.2 Subject Recruitment and Survey Administration

Recruitment occurred in over 60 Vanier College classes, specifically targeting students in pre-university social science and commerce general education courses (such as those with minimal admission requirements). The targeted classes were those most likely to contain students who would be at least 18 years of age on election day, the voting age in Quebec (as in the rest of Canada). Interested students were asked to fill out a registration form. This form contained ten unrelated questions, one of which asked whether the student expected to vote in the upcoming Quebec election.Footnote 5

Our subject recruitment occurred in two waves. First, once the election was formally announced, 205 students who had filled out the forms and who were eligible to vote were invited by email or telephone to complete the questionnaire in a classroom at the college on a given date and time. Our initial sample included all students who indicated on the recruitment form that they did not expect to vote. The balance of participants was drawn randomly from those who indicated they intended to vote. Half of the 205 were randomly assigned to two treatment rooms and the other half to two control rooms.Footnote 6 In total, 55 students showed up as instructed. All subjects were given instructions, a research consent form and a questionnaire, the only difference being that subjects who attended the two treatment rooms were informed of the future obligation to vote. Subjects in the control rooms were not informed that any subjects were being asked to vote. The subjects were not told that the survey was associated in any way with the election, only that there would be a second questionnaire in approximately one month's time.

To expand our sample, we then emailed or telephoned those who did not respond to the first invitation, along with 255 of the remaining students who had filled out the forms (and stated that it was likely they would vote). We offered the option of either completing the attached survey by email or completing it in a secretary's office on the college campus at a time of their convenience (within a five-day window). Once again, assignment to treatment was randomly determined (for details concerning the randomization process, see appendix A). At the end of the first round we had 82 subjects in the control condition and 101 in the treatment condition. Overall, 52 per cent of subjects completed the first survey online, while the remainder filled out a paper copy. Results of this first survey displayed no significant differences between our control and treatment groups in political knowledge, political discussion, or media usage. Moreover, we could find no significant bivariate differences in demographics. We take this as evidence of proper randomization and balance (see appendix A for more details).

The second round of the survey was administered in the five days prior to the election. All subjects from the first round were emailed the second survey and asked to complete it online or on paper at the same secretary's office within the five-day window. The email text differed for those in the treatment and control groups only in regard to the obligation to vote. The deadline to complete the second questionnaire coincided with the close of polls on election day (March 26, 2007). One hundred and forty-three students completed the second questionnaire (all but six completed it electronically).

In order to verify that they had voted, all subjects had to complete and sign, in person, a research consent form giving the college permission to provide the DGE with their name and address. Excluding those who failed to fill out consent forms, as well as those whom we could not officially confirm had voted, we had 55 subjects in the control group and 66 in the treatment group at the end of the study.Footnote 7

3.3 Survey and Dependent Variables

Those subjects who chose to participate in the experiment (either on paper or online) were all given the same survey. The first survey asked them a number of questions about media usage, political discussion, and attitudes toward politics and political involvement, followed by 11 political knowledge questions.

As the overall purpose of the experiment is to determine whether those who have a financial incentive to vote (or a financial disincentive to not vote) engage more in and learn more about politics, we carefully selected a variety of knowledge questions. These ranged from questions about the positions of the parties on the issues (for example, on raising university tuition), to relevant political facts (for example, which party was in power when the election was called), to knowledge about the election (such as the date and eligibility to vote). In sum, we included a variety of knowledge questions which should distinguish those with a rudimentary knowledge of politics generally and of current Quebec politics specifically. We did much the same with the second questionnaire. However, we added several political knowledge questions, bringing the total to 20. Nine repeated the previous questions verbatim, two repeated them in altered form, and nine new questions were added, almost all of which were closely linked to developments in the campaign (these questions are reproduced in appendix B). We are confident that the full battery of questions provides an appropriate instrument for uncovering any significant knowledge differences between our two groups relevant to electoral participation in this time and place. A variable called "knowledge" measures respondents' political knowledge as the percentage of questions correctly answered.

We measure political discussion using four questions. The first two questions ask respondents how often they follow what is occurring in “government and public affairs” and how closely they have followed the Quebec election. Each question allows four response categories ranging from “never” to “most of the time.” The next two questions query how often respondents discuss current events with friends and family, with response categories ranging from “never” to “very often.” We scale each of these responses from 0 to 1, and then create a variable, “discussion,” which averages these scores. Accordingly, a subject with a high score follows current events and discusses them with friends and family. A subject with a lower score engages in less such discussion.

Our final dependent variable is “media usage.” We queried subjects on how many days a week they read the newspaper, watch the national news on TV, listen to news on the radio, or read news on the Internet. Our final variable measures the average number of days a week an individual consumes all of these media. Accordingly, a subject with the maximum possible score (7) would consume all of these media every day, whereas a subject with the lowest possible score (0) would consume none of these media on any day of the week.
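Taken together, the three dependent variables reduce to simple averaging rules. A minimal sketch for a single hypothetical subject follows; the item values and groupings are illustrative, not the authors' codebook:

```python
# Raw responses for one hypothetical subject.
knowledge_items = [1, 0, 1, 1]    # knowledge questions scored 0/1 (20 items in round two)
discussion_items = [3, 2, 3, 1]   # four discussion/attention items on a 0-3 scale
media_days = [2, 3, 0, 7]         # days/week: newspaper, TV news, radio news, Internet news

# "knowledge": percentage of questions answered correctly.
knowledge = 100 * sum(knowledge_items) / len(knowledge_items)

# "discussion": rescale each item to the 0-1 interval, then average.
discussion = sum(x / 3 for x in discussion_items) / len(discussion_items)

# "media usage": average number of days per week across the four media.
media = sum(media_days) / len(media_days)
```

For this subject the scores are 75 per cent on knowledge, 0.75 on discussion, and 3 days per week on media usage.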

3.4 Sample Profile

Table 1 presents a profile of our final subjects and their scores on relevant variables. Our subject pool certainly reflects what we would expect from a convenience sample at an English CEGEP. Our subjects are young and principally anglophone. While they are likely more interested in politics than their peers who declined to participate in the survey, they can hardly be described as politically sophisticated. In the first round of the survey, subjects answered less than one in three knowledge questions correctly (28.4 per cent). In the second survey, the percentage of correctly answered questions rose to just 43.1 per cent, and this despite the majority of the questions being repetitions of first-round questions. Similarly, our subjects cannot be easily described as "news junkies" or political conversationalists. Indeed, subjects report consuming news on the radio, TV, Internet and newspaper less than two days per week. The average subject would only report discussing news with family and friends somewhere between rarely and sometimes. Finally, when we examine the other political activities of our subjects, we do not find strong evidence of political engagement. Just one in twenty subjects has ever written to a newspaper or contacted a television or radio program regarding a political issue. Only half of subjects report ever having signed a written or email petition.

Table 1. Sample profile

We do not have general population statistics with which to compare these scores. However, we have good reason to believe that our sample does not substantially over-represent political sophisticates. Indeed, the growth in knowledge between the first and second rounds of the survey suggests that subjects were capable of learning more over the course of an election. And subjects could certainly have increased their media consumption and political discussion if so inclined. In sum, this is a reasonable sample on which to test the proposition that compulsory voting encourages greater political engagement, as growth in these measures over the course of the campaign was possible.

4. Results

We find little support in our data for the above hypotheses.Footnote 8 Table 2 presents differences in knowledge, discussion, and media usage according to treatment. The cells under "control" and "treatment" present a mean and a standard deviation for each group. The final row provides the results of a t-test of mean differences between those who received the treatment and those who did not.

Table 2. Effects of compulsory voting treatment on political knowledge, political discussion, and media usage (mean differences)

As can be seen, the overall difference in knowledge scores in the second round between groups under treatment and control conditions is not significant. On average, both groups appear to be able to answer approximately four of ten political knowledge questions correctly.
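The comparison in Table 2 is a standard two-sample t-test of mean differences. A minimal pure-Python sketch of the pooled-variance version follows; the scores are simulated under the null of no group difference and are not the study's data:

```python
import math
import random

random.seed(0)
# Simulated second-round knowledge scores (per cent correct) for groups of
# the study's sizes, drawn from the same distribution (no true effect).
control = [random.gauss(43, 15) for _ in range(55)]
treatment = [random.gauss(43, 15) for _ in range(66)]

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(treatment, control)
```

With 119 degrees of freedom, a |t| below roughly 1.98 fails to reject the null at the 95 per cent level, which is the pattern reported for knowledge and discussion.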

We next consider the possibility that the treatment students did try to learn more about politics but were unable to do so. We find no evidence that they increased their general engagement with politics through discussion, which could have signalled greater effort at learning. Rather, by the end of the campaign those in control and treatment both appeared to engage in conversation with friends and family somewhere between the “rarely” and “sometimes” response categories.

When it comes to media usage, however, there is some indication that subjects in the treatment condition consumed more news by the end of the campaign than those in the control condition. Media usage does seem to increase with treatment, though not at a 95 per cent significance level. Those in the control condition reported consuming all forms of news an average of 2.05 days out of seven, while those in the treatment condition reported an average of 2.43 days out of seven. It is hard to know how much significance to attribute to this, as we do not know at which point greater media consumption begins to bestow knowledge benefits, or at which point it signals a more engaged electorate. In itself, it is consonant with the claims of compulsory voting advocates, but it leaves a new puzzle in that it does not manifest itself in any measurable increase in knowledge.

Aside from our media usage finding, we have not found support for the hypotheses that financially compelling individuals to vote causes them to become more politically attentive and knowledgeable citizens. It is possible that this is because our treatment was simply not strong enough. Indeed, in the case of some subjects, our monetary incentive was not enough to compel them to vote. This reasonably leads to the question of whether we can expect to find second-order effects where no first-order effects are present. A fairer test of Lijphart's hypotheses would be to exclude those in the treatment condition who did not vote and to look for effects particularly among those who did not intend to vote at the outset of the study but were assigned the treatment and voted. We test this proposition in Table 3. We limit our analysis to those in the control condition who completed both surveys and those in the treatment condition who completed both surveys and voted.Footnote 9 Our approach is to use an OLS regression with the following form:


$$
Y(\text{Knowledge}) = a + \beta_1\,\text{Treatment} + \beta_2\,\text{ExpVote} + \beta_3\,(\text{ExpVote} \times \text{Treatment}) + \beta_4\,\text{Allo} + \beta_5\,\text{French} + \beta_6\,\text{Female} + \varepsilon
$$

If we wish to isolate the effect of treatment (Treatment = 1) on those who did not expect to vote (ExpVote = 0), we are left with the following equation:


$$
Y(\text{Knowledge}) = a + \beta_1\,\text{Treatment} + \beta_4\,\text{Allo} + \beta_5\,\text{French} + \beta_6\,\text{Female} + \varepsilon
$$

Accordingly, the specific effect of compulsory voting on knowledge acquisition (or on levels of discussion or media usage) among non-voters is captured by the coefficient on treatment.Footnote 10
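The full specification can be sketched as an ordinary least-squares fit. The code below simulates data with no true treatment effect; the variable names mirror the equation, and all values are illustrative rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 121

# Simulated 0/1 dummies mirroring the regressors in the model.
treat = rng.integers(0, 2, n)     # Treatment
expvote = rng.integers(0, 2, n)   # ExpVote: expected to vote at the outset
allo = rng.integers(0, 2, n)      # Allo: allophone
french = rng.integers(0, 2, n)    # French: francophone
female = rng.integers(0, 2, n)    # Female

# Simulated knowledge outcome: no true treatment effect.
y = 40 + 5 * expvote + rng.normal(0, 10, n)

# Design matrix for
#   Y = a + b1*Treatment + b2*ExpVote + b3*(ExpVote x Treatment)
#       + b4*Allo + b5*French + b6*Female + e
X = np.column_stack([np.ones(n), treat, expvote, expvote * treat,
                     allo, french, female])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# With ExpVote = 0, the ExpVote and interaction columns are zero, so beta[1]
# (the coefficient on Treatment) is the effect among initial non-voters.
```

The interaction term lets the treatment effect differ between intending voters and intending non-voters, which is what the common extension to H1 through H3 requires.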

Table 3. Effect of treatment on knowledge, news consumption and discussion of politics for voters and non-voters (OLS regression)

* = p < .10

** = p < .05

We should note that we do not include several other variables which we know are related to political knowledge and engagement (see Fournier, 2002). Because we are using a randomized experiment, we can assume that these factors are equally present in both our control and treatment conditions. Including them should theoretically not change the estimated effects of the compulsory voting treatment. Accordingly, we exclude them and settle on a simpler model.

As Table 3 demonstrates, while we find a treatment effect on news consumption for those who intended to vote in the first place, we can find no effect of the treatment for those who would otherwise be non-voters. We are unable to reject the null hypothesis that compulsory voting does not increase the news consumption of non-voters. Moreover, on both our knowledge variable and discussion variable, we cannot find a significant effect of treatment either among those who intended to vote or those who did not. In sum, our data do not give us any good basis for rejecting the null hypotheses; to the extent that our experiment reproduces a compulsory voting environment, we do not find that compulsory voting boosts political knowledge or discussion about politics. All that is left is a small effect on media usage among those who originally intended to vote.

5. Conclusion

There is little question about the first-order effects of compulsory voting. Countries which have compulsory voting exhibit significantly higher levels of voter turnout. This alone may be enough to recommend its implementation. Its second-order effects, however, are much less certain. We have attempted to test one mechanism by which second-order effects may be generated, namely financial compulsion.

If a relationship between compulsory voting and greater political engagement exists, it likely does so for a number of reasons beyond mere financial compulsion. Political parties in compulsory voting environments may expend more effort educating voters; countries with compulsory voting may possess or develop a political culture which encourages greater engagement in politics; or compulsory voting may compel the media to place a greater effort on educating voters. There are, in other words, many plausible mechanisms by which compulsory voting may be associated with increased political engagement. However, as we have argued, we cannot easily adjudicate between these by cross-sectional research alone. An experimental approach can fill some of this gap.

We have used such an approach to answer a very specific question: Do the financial incentives of a compulsory voting environment increase citizen knowledge, discussion, and media consumption? Our results suggest that the prospect of forgoing money, though a sufficient motivator for getting an uninformed voter to the polls, cannot be assumed to be a sufficient motivator for getting him or her to learn more about politics. Our results thus place the ball back in the court of the advocates of compulsory voting, especially those who suggest that individuals will seek out more information so as to make correct decisions when compelled to vote. This is hardly the end of the story. But advocates of compulsory voting will need to provide a more compelling, empirically based micro-story about how it makes for better—or at least more informed—citizens.

Acknowledgments

We would like to thank the office of the Directeur général des élections du Québec, without whose co-operation and many hours of effort the project could not have been carried out. We thank Vanier College, a CEGEP in Montréal, for its help and support, as well as for letting the survey be conducted on its premises and with its students. We also thank Frances Boylston for her assistance and Nora Boyadjin for her competent handling of the written questionnaires, consent forms, and distribution of the money. Various members of the faculty and staff of Vanier College and the Canada Research Chair in Electoral Studies at the Université de Montréal provided much assistance and advice. We thank, finally, Jamie Druckman, Arthur Spirling, Nick Gallus and this journal's reviewers for helpful comments. All remaining errors are our own.

This study was financed by grants from the Secrétariat à la réforme des institutions démocratiques, Ministère du Conseil exécutif, Gouvernement du Québec, and the Institute for Research on Public Policy. Loewen acknowledges support from SSHRC in the form of a CGS doctoral grant.

Appendix A: Treatment Assignment Procedure

The randomization of participants proceeded in three steps. First, we identified all subjects (119) who indicated on the initial recruitment form that they did not expect to vote or were unsure. Using a random number generator, we assigned each of these subjects a number and then ranked them according to this number. The top half were assigned to the treatment condition and the bottom half to the control condition. Second, we then assigned a random number to all potential participants who indicated they were likely to vote. We selected the top 86 of these participants. The top half of the selected group was assigned to the treatment condition and the bottom half was assigned to the control condition. Third, to expand our sample using an online survey we invited the remaining 255 eligible participants to take part in the study. We assigned subjects to treatment and control prior to contact using the method of random number assignment and then ranking described above. However, in this instance 70 per cent were assigned to treatment and the remaining 30 per cent were assigned to control.
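Step one of this procedure (random numbers, ranking, and a top-half/bottom-half split) can be sketched as follows; the subject identifiers are hypothetical:

```python
import random

random.seed(42)

# 119 hypothetical registrants who did not expect to vote or were unsure.
subjects = ["s%03d" % i for i in range(119)]

# Assign each subject a random number and rank by it.
keyed = [(random.random(), s) for s in subjects]
keyed.sort()
ranked = [s for _, s in keyed]

# Top half to treatment, bottom half to control.
half = len(ranked) // 2
treatment = ranked[:half]
control = ranked[half:]
```

Because assignment depends only on the random draw, each subject's condition is independent of any pre-existing characteristic, which is what licenses the balance claims below. Steps two and three reuse the same ranking logic with different pools and split proportions.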

We have checked our randomization procedure across several key variables and found only one significant difference between conditions, which we discuss below; this suggests that our randomization worked. In each case, we tested balance using a χ2 test of the relationship between treatment and the variable in question. Our treatment was balanced according to gender (χ2 = 0.82, p < .37), with female participants making up 73 per cent of the treatment group and 67 per cent of the control group. Internet usage was also insignificantly related to treatment assignment (χ2 = 5.84, p < .44). Most importantly, there was no difference in the average knowledge scores on the first wave of the survey between the two groups (χ2 = 7.06, p < .63). The same is true of political discussion and media news consumption.
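Each balance check is a Pearson χ2 test of independence between treatment assignment and the variable in question. A minimal sketch for a 2×2 table follows, using illustrative counts that roughly match the reported gender split (73 per cent female in treatment, 67 per cent in control), not the study's exact cell counts:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: treatment, control; columns: female, male (hypothetical counts).
stat = chi2_2x2(48, 18, 37, 18)
# A statistic this small sits well below 3.84, the critical value at
# p = .05 with 1 degree of freedom, so the split counts as balanced.
```

The same function applies to any of the dichotomized balance variables; multi-category variables require the general r×c form of the test.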

We did encounter one possible problem in our randomization. Considering all those we invited to participate, those assigned to the control group chose to participate in larger numbers (66 per cent) than those assigned to the treatment group (54 per cent). This is a significant difference (χ2 = 6.50, p < .03) and raises the possibility of a difference between those who were assigned to the treatment condition and chose to participate and those in the control condition who chose to participate. Because the treatment condition requires more effort than the control condition (that is, voting), those who chose to participate under the treatment regime may be more motivated in general, and this general motivation may also make them more likely to seek out political information. If the groups are unbalanced in this way, any growth in political knowledge among the treatment group could be attributed to their general level of motivation rather than to the incentive to learn imposed by mandatory voting. Nevertheless, other factors led us to lay aside this concern. In our first round of invitations, which invited potential participants to a room but did not tell them the details of the experiment, we had 31 participants in the control condition and 22 in the treatment condition. No participant who showed up declined to fill out the survey. Despite random assignment, about 50 per cent more participants showed up in the control condition than in the treatment condition; but as this is due to chance, there is no unobserved effect among our first set of participants. When we consider subjects from both rounds of invitations, the possible motivation effect disappears and the difference between the two groups' likelihood of participating in the experiment is no longer significant.

Taken together, these tests suggest that our randomization procedure did not lead to any unobserved differences between the groups that could also be expected to affect knowledge acquisition or political engagement.

Appendix B: Variables

“First-Round Knowledge Score” is the percentage of the following questions answered correctly. Response categories are given in parentheses with the correct answer in bold.

  • Between the Parti Québécois and the Liberals, which would you say is further to the right (i.e., more conservative) than the other? (Parti Québécois, Liberals)

  • In this country, what is the maximum number of years between elections allowed by law? (3, 4, 5, 6, DK)*

  • Which of the following best describes who is entitled to vote in Quebec elections? (Resident of Quebec, taxpayer in Quebec, landed immigrant in Quebec, Canadian citizen living in Quebec, DK)

  • Which party was in power in Quebec when the Quebec election was called? (Parti Québécois, Liberals, Parti conservateur, ADQ, DK)*

  • When the election was called, which party had the second largest number of seats in the Assemblée Nationale? (Parti Québécois, Liberals, Parti conservateur, ADQ, DK)*

  • Which party leader has raised questions about Quebec's approach to “reasonable accommodation” of minorities? (André Boisclair, Mario Dumont, Gilles Duceppe, Stéphane Dion, DK)*

  • The date of the Quebec election is the (15 March, 26 March, 15 April, 26 April, DK).*

  • Which party wants to maintain the freeze on university tuition fees? (Parti Québécois, Liberals, ADQ, DK)*

  • Which party leader advocates paying mothers who stay home with their children? (André Boisclair, Mario Dumont, Françoise David, Jean Charest, DK)*

  • The Charest government has proposed selling off part of a provincial park. In which region have they proposed this? (Mont Tremblant, Orford, St. Maurice, Charlevoix, DK)*

  • Which party leader is taking credit for Quebec having made progress on eliminating the fiscal imbalance with Ottawa? (André Boisclair, Mario Dumont, Françoise David, Jean Charest, DK)*

“Second-Round Knowledge Score” is the percentage correctly answered of the following questions plus the first-round questions marked with an asterisk. Response categories are given in parentheses with the correct answer in bold.

  • The leader of the Quebec Liberal Party is (please write in, Jean Charest).

  • The leader of the Parti Québécois is (please write in, André Boisclair).

  • The leader of the ADQ (Action démocratique) is (please write in, Mario Dumont).

  • Of the three main parties, which is the most federalist? (Parti Québécois, Liberals, ADQ, DK)

  • During the campaign, an important moment came with decisions announced by Jim Flaherty on March 19th. What is his position? (Federal Finance Minister, Quebec Finance Minister, Premier of Ontario, Premier of Alberta)

  • How many party leaders participated in the March 13th debate? (one, two, three, four, five, DK)

  • Which party leader appeared confused at one point about whether Quebec was divisible or indivisible? (André Boisclair, Mario Dumont, Françoise David, Jean Charest, DK)

  • Which party leader was criticized at one point for using the term “slanted eyes”? (André Boisclair, Mario Dumont, Françoise David, Jean Charest, DK)

  • The polls show how many parties have the support of at least one-quarter of the voters? (one, two, three, four, DK)

“First- and Second-Round Political Discussion” are calculated as the average response to three questions in the first round and four in the second round. The response category indicating the least frequent behaviour is set to 0 and the most frequent to 1. The second-round questions were as follows, with those also asked in the first round marked by an asterisk:

  • Some people seem to follow what's going on in government and public affairs most of the time. Others aren't that interested. Do you follow what's going on in government and public affairs most of the time, some of the time, rarely or never?

  • Some people seem to follow what's going on in the Quebec election campaign most of the time. Others aren't that interested. Have you been following what's going on in the Quebec election campaign most of the time, some of the time, rarely or never?*

  • How often do you talk about current events or things you have heard about in the news with your FAMILY—very often, sometimes, rarely or never?*

  • How often do you talk about current events or things you have heard about in the news with your FRIENDS—very often, sometimes, rarely or never?*

“First- and Second-Round Media Usage” are calculated as the average response to the following four questions. The questions appeared in both the first and second surveys and were preceded by the following preamble: “Here are some ways that people get news and information. Over the last 7 days, please estimate on how many days you have done each of the following. (Please circle the number of days.)”

  • Read a newspaper. (0–7).

  • Watch the news on TV. (0–7).

  • Listen to the news on the radio. (0–7).

  • Read news on the Internet. (0–7).
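The knowledge scores and the discussion and media indices above are all simple rescaled averages. The sketch below illustrates the arithmetic; the variable names and example responses are ours, not the survey's actual coding.

```python
def knowledge_score(answers, answer_key):
    """Percentage of questions in the key answered correctly."""
    correct = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    return 100 * correct / len(answer_key)

def scaled_index(responses, n_categories):
    """Rescale ordinal responses (0 = least frequent, n_categories - 1 =
    most frequent) to the [0, 1] interval and average across items."""
    return sum(r / (n_categories - 1) for r in responses) / len(responses)

answer_key = {"liberal_leader": "Jean Charest", "pq_leader": "André Boisclair"}
score = knowledge_score(
    {"liberal_leader": "Jean Charest", "pq_leader": "DK"}, answer_key
)  # one of two correct -> 50.0

# Four discussion items on a four-point frequency scale (never ... very often).
discussion = scaled_index([3, 1, 2, 2], n_categories=4)

# Media usage: days per week (0-7), rescaled by dividing by 7.
media = scaled_index([5, 2, 0, 7], n_categories=8)
```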

“Political Activities” are determined by four questions in the first-round survey, all preceded by the preamble “Here is a quick list of things that some people have done to express their views. For each one, please indicate whether you have ever done it or not.”

  • Contacted a newspaper or magazine to express your opinion on an issue.

  • Called in to a radio or television talk show to express your opinion on a political issue, even if you did not get on the air.

  • Taken part in a protest, march or demonstration.

  • Signed an e-mail or a written petition about a social or political issue.

Footnotes

1 Lijphart also claimed that compulsory voting could reduce the incentive for attack ads and reduce the influence of money in politics. We do not test these claims.

2 This compares with compulsory voting fines of $20 (AUS) in Australia and far exceeds fines in countries such as Argentina (approximately $3.25 to 6.50 (CDN)) or some Swiss cantons (approximately $3 (CDN)).

3 That all participants be entitled to payment was a condition set by the Director General of Elections, so that in a formal sense no one was being paid to vote.

4 We should like to note that this required no small effort on the part of the DGE.

5 This included questions such as “Do you play sports on campus?” “Do you own a cellphone?” and “Do you plan to go on to university?”

6 Vanier College has two closely situated campuses. To ensure maximum ease of participation, students were given a choice of coming to a room on either campus, and the time coincided with the weekly universal break, which occurs in the middle of the day, when no classes are supposed to be scheduled.

7 The attrition rate between the first- and second-round surveys was slightly lower among those in the treatment condition than among those in the control condition (32.9 per cent and 34.7 per cent, respectively). We have excluded those whom the DGE could not find on a voters list, so as to verify whether they had voted.

8 As we found no significant differences between treatment and control conditions in our first-round scores, we limit the analysis to second-round scores. We have done similar analysis using differences between first- and second-round scores as dependent variables. Substantive results do not change.

9 This regression does not include those in the treatment group whom we have identified as non-voting, but it does include non-voters in the control group. The reason for the exclusion is that we want to isolate effects among those for whom the experiment worked (that is, those who voted) and then compare them to what our “electorate” would look like without compulsory voting (that is, one which includes both voters and non-voters).

10 The treatment effect for those who intended to vote and did vote is captured by the addition of the treatment coefficient and the treatment * expvoter interaction coefficient. Finally, the effect of expecting to vote in the first place is captured by “expected to vote.”

References

Ballinger, Chris. 2007. “Compulsory Voting: Palliative Care for Democracy in the UK.” Paper presented at the ECPR Joint Sessions workshop, “Compulsory Voting: Principles and Practice,” Helsinki.
Bilodeau, Antoine and Blais, André. 2005. “Le vote obligatoire a-t-il un effet de socialisation politique?” Paper presented at the Colloque international sur le vote obligatoire, Institut d'études politiques de Lille.
Blais, André and Carty, R. Kenneth. 1990. “Does Proportional Representation Foster Voter Turnout?” European Journal of Political Research 18(2): 167–81.
Blais, André and Dobrzynska, Agnieszka. 1998. “Turnout in Electoral Democracies.” European Journal of Political Research 33(2): 239–61.
Cohen, Jonathan D. and Blum, Kenneth I. 2002. “Reward and Decision.” Neuron 36: 193–98.
Czesnik, Mikolaj. 2007. “Is Compulsory Voting a Remedy? Evidence from the 2001 Polish Parliamentary Elections.” Paper presented at the ECPR Joint Sessions workshop, “Compulsory Voting: Principles and Practice,” Helsinki.
Druckman, James N. and Lupia, Arthur. Forthcoming. “Mind, Will, and Choice: Lessons from Experiments in Contextual Variation.” In The Oxford Handbook of Contextual Political Analysis, ed. Tilly, Charles and Goodin, Robert E. Oxford: Oxford University Press.
Druckman, James N., Green, Donald P., Kuklinski, James H. and Lupia, Arthur. 2006. “The Growth and Development of Experimental Research in Political Science.” American Political Science Review 100(4): 627–37.
Engelen, Bart and Hooghe, Marc. 2007. “Compulsory Voting and Its Effects on Political Participation, Interest and Efficacy.” Paper presented at the ECPR Joint Sessions workshop, “Compulsory Voting: Principles and Practice,” Helsinki.
Fournier, Patrick. 2002. “The Uninformed Canadian Voter.” In Citizen Politics: Research and Theory in Canadian Political Behaviour, ed. Everitt, Joanna and O'Neill, Brenda. Toronto: Oxford University Press.
Franklin, Mark N. 1996. “Electoral Participation.” In Comparing Democracies: Elections and Voting in Global Perspective, ed. LeDuc, Laurence, Niemi, Richard G. and Norris, Pippa. Thousand Oaks, CA: Sage.
Franklin, Mark N. 2004. Voter Turnout and the Dynamics of Electoral Competition in Established Democracies since 1945. Cambridge: Cambridge University Press.
Gerber, Alan S., Green, Donald P. and Kaplan, Edward H. 2004. “The Illusion of Learning from Observational Research.” In Problems and Methods in the Study of Politics, ed. Shapiro, Ian, Smith, Rogers and Massoud, Tarek. Cambridge: Cambridge University Press.
Hirczy, Wolfgang. 1994. “The Impact of Mandatory Voting Laws on Turnout: A Quasi-Experimental Approach.” Electoral Studies 13: 64–76.
Jackman, Robert W. 1987. “Political Institutions and Voter Turnout in the Industrial Democracies.” American Political Science Review 81(2): 405–24.
Kahneman, Daniel. 2003. “A Psychological Perspective on Economics.” The American Economic Review 93: 162–68.
Kahneman, Daniel and Tversky, Amos. 1979. “Prospect Theory: An Analysis of Decision under Risk.” Econometrica 47: 263–91.
King, Gary, Murray, Christopher J.L., Salomon, Joshua A. and Tandon, Ajay. 2004. “Enhancing the Validity and Cross-Cultural Comparability of Measurement in Survey Research.” American Political Science Review 98(1): 191–207.
Lijphart, Arend. 1997. “Unequal Participation: Democracy's Unresolved Dilemma.” American Political Science Review 91(1): 1–14.
Lupia, Arthur. 2002. “New Ideas in Experimental Political Science.” Political Analysis 10: 319–24.
McDermott, Rose. 2002. “Experimental Methodology in Political Science.” Political Analysis 10(4): 325–42.
Shapiro, Ian, Smith, Rogers and Massoud, Tarek, eds. 2004. Problems and Methods in the Study of Politics. Cambridge: Cambridge University Press.
Table 1. Sample profile

Table 2. Effects of compulsory voting treatment on political knowledge, political discussion, and media usage (mean differences)

Table 3. Effect of treatment on knowledge, news consumption and discussion of politics for voters and non-voters (OLS regression)