
Exit Polling in Canada: An Experiment

Published online by Cambridge University Press:  18 December 2006

Steven D. Brown
Affiliation:
Wilfrid Laurier University
David Docherty
Affiliation:
Wilfrid Laurier University
Ailsa Henderson
Affiliation:
Wilfrid Laurier University
Barry Kay
Affiliation:
Wilfrid Laurier University
Kimberly Ellis-Hale
Affiliation:
Wilfrid Laurier University

Abstract

Although exit polling has not been used to study Canadian elections before, such polls have methodological features that make them a potentially useful complement to data collected through more conventional designs. This paper reports on an experiment with exit polling in one constituency in the 2003 Ontario provincial election. Using student volunteers, a research team at Wilfrid Laurier University conducted an exit poll in the bellwether constituency of Kitchener Centre to assess the feasibility of mounting this kind of study on a broader scale. The experiment was successful in a number of respects. It produced a sample of 653 voters that broadly reflected the partisan character of the constituency and can hence be used to shed light on patterns of vote-switching and voter motivations in that constituency. It also yielded insights about best practices in mounting an exit poll in the Ontario context, as well as about the potential for using wireless communication devices to transmit respondent data from the field. The researchers conclude that exit polling on a limited basis (selected constituencies) is feasible, but that the costs and logistics associated with this methodology make a province-wide or country-wide study unsupportable at present.

Résumé. Bien que les sondages “sortie des urnes” n'aient pas été utilisés jusqu'ici dans l'étude des élections au Canada, de tels sondages possèdent certaines caractéristiques qui en font un complément potentiellement très utile des méthodes plus traditionnelles de cueillette des données. Cet article rend compte d'un sondage “sortie des urnes” expérimental effectué dans une circonscription lors de l'élection provinciale de 2003 en Ontario. Utilisant des bénévoles étudiants, une équipe de recherche de l'Université Wilfrid Laurier a conduit un sondage “sortie des urnes” à Kitchener Centre, une circonscription indicatrice de tendance, afin de déterminer la faisabilité de ce type d'étude au niveau fédéral. L'expérience a réussi à plusieurs égards. Elle a fourni un échantillon de 653 électeurs qui reflétaient en gros le caractère partisan de la circonscription, ce qui a rendu possible l'étude des motivations des électeurs et des revirements de vote dans la région. L'expérience a aussi fourni des renseignements sur les pratiques exemplaires concernant l'utilisation des sondages “sortie des urnes” au niveau provincial, ainsi que sur la possibilité d'employer des techniques de communication sans fil pour transmettre les données recueillies des répondants. Les chercheurs ont conclu que les sondages “sortie des urnes” sont réalisables dans un cadre restreint, dans certaines circonscriptions sélectionnées, mais que les coûts et la logistique nécessités par cette méthodologie la rendent actuellement impraticable pour une étude à l'échelle provinciale ou nationale.

Research Article
© 2006 Cambridge University Press

Introduction

Few would contend that elections have been neglected as objects of study by Canada's polling and academic communities. After all, no fewer than 55 national polls were published over the 56 days of the 2006 federal election campaign, and pre-election polls have raised their profile in campaign media coverage with each successive Canadian election over the past two decades (Andersen, 2000; Brown and Kay, 1997/98). Similarly, national surveys of voters have been conducted by multi-university research teams during or following 13 of the past 14 Canadian elections. Given this longstanding interest and expertise, then, it might surprise outside observers to learn that Canada has virtually no history of “exit polling” as part of its election coverage and analysis.

“Exit polls” take their name from their timing in the election campaign: they are surveys of randomly selected voters interviewed as they leave their respective polling locations. An exit poll was first tried in 1967 by CBS News as an experiment to enhance the network's US election coverage. Since the early 1980s, these polls have become a fixture of US election night coverage, a regular feature in most other Western political systems and, more recently, an independent check on electoral integrity in fragile democracies such as Venezuela, Mexico, Georgia and Ukraine.1

For a brief overview of the development of exit polling and the conventional methodologies employed, see Hofrichter (1999).

Given their ubiquity elsewhere, why has exit polling not been introduced in the Canadian context? While a combination of factors is relevant here, the economic consideration is clearly paramount. With the need to staff hundreds of geographically dispersed polling locations at the same time, exit polls are by nature labour-intensive and pose formidable logistical challenges for organizers. Indeed, in the US, the crushing expense of mounting exit polls in presidential election years has forced the highly competitive mass media organizations to pool their resources for collecting election day data.2

For the 2004 US presidential election, the National Election Pool (NEP), a consortium of six television networks, conducted exit polls in about 1,480 precincts throughout the country, employing a staff of more than 5,000. For a description of the exit poll methodology employed in this election, see NEP (2004).

Not surprisingly, then, in a country like Canada, with its regional diversity, its relatively sparse population and its much smaller national media, the economics of mounting a poll of this kind have so far presented an insurmountable obstacle.3

While “exit polls,” as the term is conventionally used, have not been introduced into Canada, polling organizations experimented with election day telephone and online surveys in the 2004 and 2006 Canadian federal elections. In 2004, for example, COMPAS Research conducted a 1,000-respondent survey of Canadians that was used by CanWest Global as part of its election night programming. In 2006, CanWest Global used an Ipsos-Reid Internet panel of 35,000 reported voters to supplement its election night coverage.

An argument can be made that the venture warrants renewed attention. While exit polls are attractive to the mass media because they provide the basis for instantaneous election day analysis for their evening coverage, immediacy is neither their only nor even their most valuable feature. For those studying electoral behaviour, exit polls have three advantages over pre- and post-election research designs: they effectively screen nonvoters out of the sample,4

It should be acknowledged that they screen out “early” and “absentee” voters as well, with uncertain implications. In the US, the proportion of voters exercising their franchise through “early” and “absentee” channels has been estimated at 15 per cent of all voters. The problem is further complicated by the fact that “early” and “absentee” voters frequently do not reflect the partisan split on election day (Konner, 2003; Langer, 2004). As a consequence, in 2004, the National Election Pool supplemented its exit polling with prior telephone surveys in 13 states with high proportions of such voters. Although no systematic study has been conducted, fragmentary evidence suggests that “advance poll” voters in Canada make up a much smaller proportion of the voting population than their US counterparts and (with the possible exception of the 2004 federal election) usually mirror the voting day partisan division. In Kitchener Centre in the previous 1999 provincial election, for example, the proportion of the vote cast at the “advance poll” was about 3.8 per cent of the total.

they facilitate access to traditionally hard-to-contact populations, and they tap the decision calculus of voters virtually at the time of the decision itself. Because of these features, exit polls have served as a very useful complement to data collected through more conventional designs (Butler, 1996). Working from this premise, a research team at Wilfrid Laurier University experimented with an exit poll during the 2003 Ontario provincial election to assess the feasibility of mounting such a venture on a larger scale. This paper describes that experiment and assesses its success.

Description of the Experiment

Given limited resources, the research team settled on a survey design that was based on student volunteer assistance and was limited to one constituency.5

Several months prior to the 2003 Ontario provincial election, the research team approached a number of potential media partners with a proposal to run an exit poll in five or six representative constituencies on election day. This exercise would serve as a pilot study for a more ambitious project to be undertaken for the national election expected in the spring of 2004. However, the estimated costs made each of these projects commercially unviable for the media.

The constituency of Kitchener Centre was selected for the exercise. In part, it was selected because it is geographically compact and close to the campus of Wilfrid Laurier University (within five kilometres). It was also selected because it is an excellent bellwether constituency for both federal and provincial elections—indeed, as a bellwether, it ranks second in accuracy among all of Ontario's 103 constituencies.6

Ontario is the only province in which federal and provincial electoral boundaries are identical. In the recent past, only the constituency of Stoney Creek near Hamilton has had a better bellwether record than Kitchener Centre.

The team adopted a two-stage sampling design. The primary sampling unit for the study was the “polling location.” A polling location is a physical structure—usually a school, church or community centre—housing a cluster of between two and seven polls from the surrounding area. Sixteen of the 176 polling locations in the constituency were initially selected, purposively rather than randomly. The main criteria for selection were (1) partisan representativeness, in that each party's vote at the location in the previous provincial election had varied by no more than 5 per cent from the overall constituency average; and (2) size of registered electorate, in that more populous stations were preferred over smaller and more remote locations. The number of registered voters at these 16 locations ranged from 1,100 to 2,520 in the previous 1999 election. Ten of the 16 were designated as primary selections; the other six were designated as alternates, to be used if a primary selection proved inappropriate for some reason. Within each polling location, respondents were selected using systematic random sampling: interviewers were instructed to select every seventh voter emerging from the polling location, or every fourth voter following a refusal.
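
To make the within-location selection rule concrete, the following minimal sketch simulates an interviewer applying it to a stream of exiting voters. It is an illustration only: the voter stream, the co-operation rate and all names are invented, not taken from the study.

    # Sketch of the selection rule: approach every 7th exiting voter,
    # or the 4th voter following a refusal. Voters and their willingness
    # to respond are simulated; all parameters are illustrative.
    import random

    SAMPLE_INTERVAL = 7   # every 7th exiting voter
    REFUSAL_INTERVAL = 4  # every 4th voter after a refusal

    def run_shift(n_voters, coop_rate, rng):
        """Simulate one interviewer's shift; return (approached, completed)."""
        approached = completed = 0
        countdown = SAMPLE_INTERVAL
        for _ in range(n_voters):
            countdown -= 1
            if countdown > 0:
                continue          # let this voter pass
            approached += 1
            if rng.random() < coop_rate:   # voter agrees to the interview
                completed += 1
                countdown = SAMPLE_INTERVAL
            else:                          # refusal shortens the interval
                countdown = REFUSAL_INTERVAL
        return approached, completed

    print(run_shift(2000, 0.65, random.Random(1)))
    # roughly 340 approached and 220 completed at a 65 per cent rate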

Each polling location was assigned a polling captain and a pair of two-person interview teams to split the 11-hour polling period into two shifts. This meant that 50 students were deployed (five per location), with the academic researchers playing a supervisory role. None of the staff was financially compensated, but many of the participating students wrote a report on the experience for course credit.

The questionnaire was intentionally brief (13 questions occupying one side of one page), and timed to be completed within a two-minute period. Handheld wireless communication devices were provided to each of the ten teams, so that results could be transmitted regularly to a central database housed on campus.7

The BlackBerry wireless handheld devices were supplied by the Waterloo-based firm Research in Motion. Use of the devices allowed for constant wireless e-mail communication between the field and the university.

For the most part, the exit poll conducted on election day (October 2, 2003) unfolded according to this design. The student interview teams were deployed to their assigned stations in a timely fashion, and rapid communication of the interview data from the field by e-mail allowed us to build a data file through the day.8

Initially, the research team intended to use the BlackBerry handheld devices directly in the interview process, with the interviewer reading the questions from the screen and keying in the responses. However, pre-tests revealed that this procedure was too distracting for both interviewer and respondent, too prone to error under field conditions, and likely to raise issues of voter trust in the methodology. As a more practical alternative, interviewers administered a one-page hard copy questionnaire and transmitted the pre-coded responses as an e-mail message back to a campus computer.
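
Although the paper does not specify the message format, a minimal sketch of this field-to-campus step might look like the following. The record layout (a station code, a shift code and thirteen single-digit answer codes) is entirely hypothetical.

    # Hypothetical packing of one interview's pre-coded answers into a
    # short text line suitable for an e-mail body, plus the matching
    # parser for the campus side. The actual 2003 format is not documented.

    def encode_record(station, shift, answers):
        """Pack one interview, e.g. -> 'S03|A|5731202118342'."""
        assert len(answers) == 13 and all(0 <= a <= 9 for a in answers)
        return f"S{station:02d}|{shift}|" + "".join(str(a) for a in answers)

    def decode_record(line):
        station, shift, codes = line.strip().split("|")
        return int(station[1:]), shift, [int(c) for c in codes]

    msg = encode_record(3, "A", [5, 7, 3, 1, 2, 0, 2, 1, 1, 8, 3, 4, 2])
    print(msg)                 # S03|A|5731202118342
    print(decode_record(msg))  # (3, 'A', [5, 7, 3, ...])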

After the polls closed at 8 p.m., the results were made available online and to the media.

Apart from unseasonably cool weather and some morning precipitation, the biggest obstacle on election day was the resistance put up by Elections Ontario, the province's agency responsible for the administration of elections. In research based on US exit polls, “interviewing position,” among all election day factors, has been found to have the strongest effect on response rates (Merkle and Edelman, 2002). It affects both “miss” rates and refusal rates. Accordingly, the research team contacted Elections Ontario almost three months before the election to alert officials of its research intentions and to provide reassurance that interviewers would not interfere with voters in the exercise of their franchise.

The agency's response on election day was to maximize the obstacles the field teams faced in attempting to contact exiting voters. We had instructed our interviewers to stay outside the building and to approach voters as they left for the adjoining parking lot or street. Elections Ontario based its concern with this procedure on provisions of the Election Act, in particular section 42 concerning voter privacy. Elections Ontario took the position that contact with electors anywhere on the property of the polling place—whether those electors were entering or departing—constituted voter interference and therefore was not permitted.

As a consequence of persistent objections by returning officers and, in some cases, threats of legal action, many of our teams were forced to take up positions on sidewalks well away from building exits and well away from parking lots. The tenacity of Elections Ontario officials in enforcing this regimen varied from one polling location to the next, and the geographic layout of particular polling locations gave some teams more reliable access to voters than it did to others. In two cases, the zealousness of election officials forced the team to substitute alternate polling locations for the sites originally selected.

Clearly, the stance adopted by Elections Ontario undermined our sampling procedure because interviewers were frequently denied systematic access to the potential population, leading to missed selections. Our design called for a sampling interval of every 7th voter or about 14 per cent of the voting population at these polling locations. In fact, our actual sampling interval averaged about every 10th voter or 10 per cent of the voting population at these locations. In all, 653 interviews were completed over the day.

The Results

Participation in the Survey

As noted, exit polls have a number of advantages over other more conventional survey designs. However, these design strengths are of little value if voters refuse to cooperate. As they left their respective polling locations, how did voters in this Ontario constituency respond to our interviewers' requests?

About two of every three voters (64.8 per cent) who were approached by our interviewers agreed to complete the survey; however, the effective participation rate was somewhat lower at 57.2 per cent, because some respondents who agreed to the interview declined to reveal their vote direction. This result is comparable to participation rates in most US exit polls (Mitofsky, 1991; Mitofsky and Edelman, 1993; but see also Bishop and Fisher, 1995; Busch and Lieske, 1985). In 2004, for example, the comparable overall completion rate for the US exit poll was 59.8 per cent9

The completion rate reported by Edison Media Research and Mitofsky International was 53.2 per cent, but the base for this percentage includes “missed” respondents—that is, respondents who should have been contacted, but who, for a variety of reasons, were not. The base for the Kitchener Centre response rate is limited to respondents who were actually invited to participate. It should be noted that US response rates have been falling steadily from a high of 61 per cent in 1992 to 51 per cent in the 2000 presidential election (Konner, 2003).

(Edison Media Research and Mitofsky International, 2005). On the one hand, then, our results are promising because they suggest that Canadians—or at least the voters of Kitchener Centre—are as receptive to this methodology as voters in the US, where a successful track record has been established. On the other hand, our participation rate pales in comparison with some European experiences. In Germany, for example, response rates have regularly topped 70 per cent (Hofrichter, 1999) in recent elections, and Curtice and Payne (1995) suggest that the British rates are even higher than that.
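
The two rates can be reconciled with a back-of-envelope calculation. The sketch below assumes, as the text indicates, that both percentages use the number of voters actually approached as their base; the derived counts are approximate reconstructions, not the study's raw tallies.

    # Reconstructing approximate counts from the reported rates.
    completed = 653                        # interviews completed over the day
    approached = round(completed / 0.648)  # implies ~1,008 voters approached
    revealed = round(approached * 0.572)   # ~577 also revealed their vote

    print(f"approached ~{approached}; "
          f"agreed {completed / approached:.1%}; "
          f"effective {revealed / approached:.1%}")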

What do we know about those who refused? To assist in analyzing participation rates, interviewers were asked to code three visible attributes of all refusals: their apparent age group (under 30, 30–64, over 65), their gender, and whether they were a member of a visible minority.10

The “visible minority” variable recorded so little variation in this largely white constituency that it has not been included in the subsequent analysis.

From the interviewer cover sheets, we were also able to determine the gender of the interviewer and the time of vote.

Table 1 compares the refusal rates for different demographic groupings and for different interview situations. The table shows that age and time of vote are both significantly related to participation, but that gender of respondent and gender of interviewer are not. The age differences are the most dramatic of these, with co-operation rates falling steadily from about 80 per cent for the youngest cohort to 40 per cent among those over 65. An age-refusal relationship has been observed by others in exit poll situations (Merkle and Edelman, 2002; Edison Media Research and Mitofsky International, 2005), but the strength of the relationship here is quite striking. Examination of the differences suggests that the relationship was stronger because the youngest age group in our constituency was considerably more co-operative (at 81 per cent) than US respondents of the same age (whose response rates average between 55 and 58 per cent), while the oldest cohort (at 40 per cent) was somewhat less co-operative than its US counterpart (with average response rates between 43 and 46 per cent). Was this a function of the greater rapport that our young interviewers could establish with those closer to their own age? Merkle and Edelman (2002) tested such a hypothesis but found no support for it among young voters. Interestingly, they did find age-of-interviewer effects for the oldest cohort, in that older voters were significantly more co-operative when approached by a middle-aged interviewer. Given the lack of age variation among our interviewers, we are not in a position to test this further.

Table 1. Participation Rates for Age, Gender, Time of Day, and Interviewer Gender
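
For readers who want to reproduce the kind of significance test behind Table 1, the sketch below runs a chi-square test of participation against age group. The cell counts are hypothetical, scaled only to mimic the approximate co-operation rates reported in the text (about 80 per cent falling to 40 per cent, with the middle cohort assumed here to sit near 60 per cent); the actual counts appear in Table 1.

    # Chi-square test of participation by apparent age group; counts are
    # hypothetical, chosen to match the reported co-operation rates.
    from scipy.stats import chi2_contingency

    #          agreed  refused
    table = [[160,  40],    # under 30 (~80 per cent)
             [360, 240],    # 30-64    (assumed ~60 per cent)
             [ 80, 120]]    # over 65  (~40 per cent)

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")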

There is also a significant increase in co-operation after 4:00 p.m. While there might be reason to suspect that this “time of vote” relationship is spurious—after all, most senior citizens voted before 4:00 p.m. and seniors were generally less co-operative with our interviewers—Table 1 indicates that even when age of respondent is controlled, these time differences persist, at least for the mid-age group. The participation rate for the oldest age cohort may actually have decreased after 4:00 p.m., although not enough to attain statistical significance.

What might account for the general increase in co-operation as the day wore on? One possibility is that our student interviewers simply gained experience and technique through the day. However, this seems unlikely: interviewers worked two separate shifts, so any experience effect would reset at the changeover, and a step function breaking at the 3:00–4:00 p.m. mark describes the data more effectively (see Figure 1).

Figure 1. Participation rates across the day (9:00 a.m. to 8:00 p.m.)

Another possibility is that interviewers encountered a somewhat different type of voter in the post-4:00 p.m. period. For example, researchers in US exit polls have found that employment status distinguishes voters in different time periods over the day (Busch and Lieske, 1985; Fuchs and Becker, 1968; Klorman, 1976). Perhaps not surprisingly, those who are most time constrained—that is, those who have regular employment outside the home—tend to vote after work. Employment might affect participation in two ways. First, it may be that those who are more involved outside the home also feel more comfortable interacting with the likes of our young student interviewers. Such a “comfort factor”—what others have called the “social isolation” hypothesis (Groves and Couper, 1998; Merkle and Edelman, 2002)—might help to explain both the greater reticence of older voters to participate in our survey and the higher participation rate among late afternoon and early evening voters. Second, those who are employed but able to slip out to vote during the day may be less inclined to co-operate simply because they must rush back to work.11

We are indebted to one of the anonymous reviewers for this latter suggestion.

Accuracy of the Poll

One of the strongest selling points for conducting an exit poll lies with its capacity to generate an accurate profile of the behaviour and perspectives of those determining the election outcome. Compared to pre- and post-election surveys, there are fewer factors that can intervene to distort a picture of the electorate's decision and decision-making rationale. Because the accuracy of such polls is a central justification for their considerable cost, it is important to assess the success of our experiment by this measure.

We do not have population values for most variables among the voting public of Kitchener Centre, but we do have such parameters for party support levels in both the 2003 and the previous 1999 provincial elections. How well did our sample estimate these support levels? Table 2 compares the sample and population distributions for the two elections. The table shows that, when asked to recall their vote from the previous 1999 election, our sample performs quite well at estimating actual support levels in that earlier contest; however, it does not fare as well at estimating the 2003 election result in Kitchener Centre. Specifically, our sample overestimates the 2003 Liberal vote by about 5 percentage points, and underestimates the Progressive Conservative and “other” party votes by about 2 points each. Because departures this large are unlikely to be the result of chance variation, we must consider other possible explanatory factors. There are several candidates here.
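
A chi-square goodness-of-fit test is the natural way to check whether departures of this size could be chance variation. The sketch below illustrates the calculation; the vote shares are placeholders that merely follow the direction of the deviations described above (Liberal about 5 points high, Progressive Conservative and “other” each about 2 points low), since Table 2 itself is not reproduced here.

    # Goodness-of-fit of exit-poll vote shares against the official result.
    # All shares below are placeholders, not the actual Table 2 figures.
    from scipy.stats import chisquare

    n = 577  # respondents revealing a vote direction (approximate)
    official = {"Liberal": 0.46, "PC": 0.34, "NDP": 0.15, "Other": 0.05}
    sample   = {"Liberal": 0.51, "PC": 0.32, "NDP": 0.14, "Other": 0.03}

    observed = [sample[p] * n for p in official]
    expected = [official[p] * n for p in official]
    stat, p_value = chisquare(observed, expected)
    print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")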

Table 2. Comparison of Exit Poll Results with Actual Constituency Results

The first two of these concern possible sampling problems: one, that the polling locations we selected may not adequately represent the constituency; or two, that differential completion rates across the ten polling locations created an unrepresentative sample of our polling location population. We examined our sample with a view to assessing the importance of these factors. First, we established that our ten polling locations are a very representative subsample of the constituency as a whole. Estimates based on this subsample do not deviate from the constituency results by more than a percentage point for any of the four party groups, and what small deviations there are run in the wrong direction to account for our over-report. If our sample of polling locations accurately reflects the larger constituency, could it be that our sample of voters overweights some polling locations and underweights others? That is, were distortions introduced because some interview teams had higher completion rates than others, and those teams happened to be in Liberal strongholds? To examine this possibility, we compared the proportion of respondents that we should have drawn from each polling location with the proportion that we did in fact draw. As we might anticipate, there are modest deviations here, some as large as .04, or 4 percentage points. However, reweighting our sample to correct for these disproportions does not correct for the distortions in our original sample—indeed, it actually accentuates them by about 1 per cent for most parties.

If our sample of polling locations is representative, and our sample of respondents is not disproportionately drawn from those locations, then we must suspect that the problem lies with the selection process in the field. An obvious candidate here is the age differential in refusal rates noted above. If older voters are more likely to have been Progressive Conservative party supporters in recent elections (and they are), and older voters are also more resistant to switching in this “change” election (a plausible hypothesis), then the under-representation of older voters in our sample might explain the under-representation of PC votes and the over-representation of Liberal ones. However, our test of the underlying assumptions here yielded only a modest age-vote relationship, one due largely to the decidedly pro-Liberal leaning of the youngest cohort. There were no substantial differences in voting direction or in the propensity to shift between the “30–64” and “over 65” cohorts. Not surprisingly, then, reweighting the sample to correct for the age maldistribution had almost no impact on the vote distribution of our sample. Similarly, reweighting to make our sample representative on the “time of vote” variable had no impact on our estimates.
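
For completeness, reweighting of the kind described here is a simple post-stratification adjustment: each respondent receives the ratio of the population share to the sample share of his or her stratum. The sketch below uses invented shares and records purely for illustration.

    # Post-stratification by age group; all shares and records are invented.
    sample_share = {"under30": 0.22, "30-64": 0.62, "over65": 0.16}
    pop_share    = {"under30": 0.15, "30-64": 0.60, "over65": 0.25}
    weight = {g: pop_share[g] / sample_share[g] for g in pop_share}

    def weighted_share(respondents, party):
        """respondents: list of (age_group, vote) pairs."""
        total = sum(weight[g] for g, _ in respondents)
        favoured = sum(weight[g] for g, v in respondents if v == party)
        return favoured / total

    demo = [("under30", "Liberal"), ("over65", "PC"), ("30-64", "Liberal"),
            ("over65", "PC"), ("30-64", "NDP")]
    print(f"weighted Liberal share: {weighted_share(demo, 'Liberal'):.0%}")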

Given our lack of success after reweighting, and given our data limitations, we are left only with conjectures about the source of the distortion in our sample. Two possibilities suggest themselves. Either voters misreported their vote to our interviewers or Liberal voters were simply more willing to reveal their preference to us. Although hard evidence is difficult to come by, both have been suspected in cases of recent US and British exit poll distortions (Traugott and Price, 1992; Curtice and Payne, 1995). Most recently, organizers of the 2004 US presidential exit poll have concluded that, for reasons unknown, Democratic party supporters were generally more willing to be interviewed than Republican voters, leading to an over-estimate of Kerry support nationally of 6.5 per cent (Edison Media Research and Mitofsky International, 2005).

This would seem to be the most plausible explanation in the Kitchener Centre case. The 2003 Ontario provincial election was not a close contest: from the outset, both polls and pundits predicted the defeat of the incumbent Progressive Conservative government by the Liberals. It may be that this prevailing climate of opinion put some PC voters, who knew they had just backed a losing candidate, in a less co-operative mood. Clearly, there is a need for more investigation of the impact of “political” factors on the willingness of voters to participate in exit polls.

Time of Day and the Vote

Although the exit poll provided us with a unique opportunity to forecast the results of the 2003 election in Kitchener Centre, this dividend was of short-term benefit. The long-term benefits of the survey lie in our ability to analyze the behaviour of Ontario voters in a provincial election. One of the questions that exit polls are uniquely suited to answer concerns the time of day at which different groups of people vote. What little comparative research there is on this question suggests that there are indeed systematic differences in the electorates that vote at different times of the day (Busch and Lieske, 1985; Fuchs and Becker, 1968; Mendelsohn and Crespi, 1970; but see also Klorman, 1976). For the most part, the differences centre on employment status and variables correlated with it. Table 3 tests this idea by partitioning the sample into three time frames (9 a.m.–noon, noon–4 p.m. and 4–8 p.m.) and displaying their socio-political profiles.

Table 3. Comparison of Voters' Socio-Political Profiles for Three Time Periods of the Day
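
The partition itself is mechanical: each completed interview is assigned to a time frame by its timestamp. A minimal sketch, with the bin boundaries taken from the text:

    # Assign an interview to one of the three Table 3 time frames.
    from datetime import time

    def time_frame(t):
        if t < time(12, 0):
            return "9 a.m.-noon"
        elif t < time(16, 0):
            return "noon-4 p.m."
        else:
            return "4-8 p.m."

    print(time_frame(time(10, 30)))   # 9 a.m.-noon
    print(time_frame(time(17, 45)))   # 4-8 p.m.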

Table 3 suggests that distinct voter profiles are associated with each of the three time periods, but that the greatest contrast is between the “evening” cohort and the two “daytime” cohorts. The distinctions are drawn most sharply on occupation and age, and to a lesser extent on gender. As the literature leads us to expect, the daytime cohorts over-represent those whose time commitments tend to be most flexible—the retired, homemakers and the unemployed. Probably as a consequence, the daytime profiles also over-represent those over 65 and, to a lesser extent, women. Conversely, the evening cohort over-represents those who are likely to have daytime job commitments. Interestingly, the differences between cohorts do not extend reliably to their voting profiles; while there is a hint of Liberal over-representation and PC under-representation in the evening cohort, the pattern is not sufficiently distinct to attain statistical significance.

Conclusion

While the exit poll represents a potentially useful tool for those studying voting behaviour, the technique has not until now been deployed in a Canadian electoral context. Our experiment with the methodology in one Ontario constituency yields a number of observations and insights that may be of use to others considering the technique.

First, we found that election day polls of this kind are feasible and not exorbitantly expensive when the population is confined to a modest geographic unit (like a constituency). While we employed a volunteer staff of about 60 for Kitchener Centre, that number could easily be halved if the staff were compensated for their day's work and the teams streamlined. By the same token, our experiment suggests that a broader exit poll project encompassing the province or the country would pose unacceptable logistical and economic challenges for researchers. For those contemplating a broader project, a selection of constituency studies across the province or country would seem to be the only feasible option.

Second, it appears that the electorate is not unduly resistant to the methodology. To be sure, over a third of those approached by our interviewers declined to participate, and the refusal rate was much higher for those over 65. However, these response rates are not radically different from those experienced in the US. Moreover, our interviewing staff was composed entirely of young people, the interviewer age group that US studies have found to have the weakest response rates (Edison Media Research and Mitofsky International, 2005). This suggests that response rates could be improved if a more age-diverse field staff were employed.

Third, the data collected through this exit poll were useful in that they generated a sample of respondents who seem to represent the partisan character of the constituency and whose responses can hence be used to shed light on patterns of vote-switching and voter motivations for this constituency in this election (Brown, Docherty et al., 2004). To be cautious, we limited the interview to about two minutes so as not to jeopardize response rates, but our experience suggests that an interview of twice that length (collecting twice the information) would be feasible.

Fourth, our procedure was to administer the questionnaire orally to respondents, record their responses on hard copy and transmit the pre-coded responses from the field using handheld wireless devices. In general, the system worked well, and we would make only one modest change in a subsequent experiment. Drawing on US experience, we would allow respondents to fill out the questionnaire themselves and deposit the completed questionnaire in a “ballot box” on site. Research in the US suggests that this added step tends to enhance the willingness of voters to participate in the poll (Bishop and Fisher, 1995; Levy, 1983).

Finally, the most vexing problem we confronted in Kitchener Centre was restricted access to voters due to the stiff opposition of Elections Ontario staff. While the sample of respondents we interviewed through the day closely resembled the partisan parameters of the electorate, the varying conditions interviewers encountered at different polling stations clearly contributed to nonsampling error in ways that are difficult to assess. This problem needs to be addressed in future studies, ideally through negotiations with election authorities.

Acknowledgments

An earlier draft of this paper was presented at the annual meeting of the Canadian Political Science Association, Winnipeg, Manitoba, June 2004. We would like to thank the reviewers at those meetings and the anonymous Journal reviewers for their very helpful comments on the respective drafts of the article.

References

Andersen, R. 2000. “Reporting public opinion polls: The media and the 1997 Canadian election.” International Journal of Public Opinion Research 12, 285–98.
Bishop, G.F. and B.S. Fisher. 1995. “‘Secret ballots’ and self-reports in an exit-poll experiment.” Public Opinion Quarterly 59, 568–88.
Brown, S. and B. Kay. 1997/98. “Discerning the public mind.” National History 1:3, 233–45.
Brown, S., D. Docherty, K. Ellis-Hale, A. Henderson and B. Kay. 2004. “You're leaving? Great! Exit polling in the 2003 Ontario General Election.” Paper presented at the annual meeting of the Canadian Political Science Association, Winnipeg, Manitoba, June 2004.
Busch, R.J. and J.A. Lieske. 1985. “Does time of voting affect exit poll results?” Public Opinion Quarterly 49, 94–104.
Butler, D. 1996. “Polls and elections.” In Comparing Democracies: Elections and Voting in Global Perspective, eds. Lawrence LeDuc, Richard G. Niemi and Pippa Norris. Thousand Oaks: Sage.
Curtice, J. and C. Payne. 1995. “Forecasting the 1992 election: The BBC experience.” In Political Communication: The General Election Campaign of 1992, eds. Ivor Crewe and Brian Gosschalk. Cambridge: Cambridge University Press.
Edison Media Research and Mitofsky International. 2005. Evaluation of Edison/Mitofsky Election System 2004. Report prepared for the National Election Pool.
Fuchs, D. and J. Becker. 1968. “A brief report on the time of day when people vote.” Public Opinion Quarterly 32, 437–40.
Groves, R.M. and M.P. Couper. 1998. Nonresponse in Household Interview Surveys. New York: Wiley.
Hofrichter, J. 1999. “Exit polls and election campaigns.” In The Handbook of Political Marketing, ed. Bruce Newman. Thousand Oaks: Sage.
Klorman, R. 1976. “Chronopolitics: What time do people vote?” Public Opinion Quarterly 40, 182–93.
Konner, J. 2003. “The case for caution: This system is dangerously flawed.” Public Opinion Quarterly 67, 5–18.
Langer, G. 2004. “Campaign ends with one-point race: One point in voter preference separates candidates as election looms.” ABC Original Campaign Report (November 1, 2004), http://abcnews.go.com/Politics/story?id=215911.
Levy, M.R. 1983. “The methodology and performance of election day polls.” Public Opinion Quarterly 47, 54–67.
Mendelsohn, H. and I. Crespi. 1970. Polls, Television and the New Politics. Scranton, PA: Chandler.
Merkle, D.M. and M. Edelman. 2002. “Nonresponse in exit polls: A comprehensive analysis.” In Survey Nonresponse, ed. Robert M. Groves. New York: Wiley.
Mitofsky, W.J. 1991. “A short history of exit polls.” In Polling and Presidential Elections, eds. Paul J. Lavrakas and Jack K. Holley. Newbury Park, CA: Sage.
Mitofsky, W.J. and M. Edelman. 1993. “A review of the 1992 VRS exit polls.” Paper presented at the annual meeting of the American Association for Public Opinion Research, May 20–23, St. Charles, IL.
National Election Pool. 2004. http://www.exit-poll.net/ (accessed July 10, 2006).
Nevitte, N., A. Blais, E. Gidengil and R. Nadeau. 2000. Unsteady State: The 1997 Canadian Federal Election. Toronto: Oxford University Press.
Province of Ontario. 1990. Election Finances Act, R.S.O. 1990.
Province of Ontario. Legislative Assembly of Ontario. 2004.
Traugott, M.W. and V. Price. 1992. “A review: Exit polls in the 1989 Virginia gubernatorial race: Where did they go wrong?” Public Opinion Quarterly 56, 245–53.