
The Failure to Examine Failures in Democratic Innovation

Published online by Cambridge University Press:  12 June 2017

Paolo Spada, University of Coimbra (Portugal)
Matt Ryan, University of Southampton (United Kingdom)

Symposium: Civic Engagement and Civic Technology
Copyright © American Political Science Association 2017 

INTRODUCTION

Deliberation scholars have changed the landscape of democratic theory irreversibly, providing us with a coherent set of principles that undergird contemporary approaches to democracy-making (e.g., Bohman 1998; Gutmann and Thompson 1996). Deliberation has become a normative project owned and invested in by practitioners of democracy worldwide (see Bherer, Gauthier, and Simmard 2016). The relevance of deliberative democracy is clear, as it seeks to respond to a set of disquieting contemporary phenomena that increasingly manifest in declining social capital and trust, and increasing deference to extremist populist appeals.

Beyond the deliberative turn in democratic theory, an empirical turn in deliberative democracy is also now well underway. Buoyed by the active praxis of early adopters (Dienel and Renn 1995; Fishkin 1991), the empirical study of democratic innovations has spread in step with the diffusion of these innovations themselves. A distinct subfield of political research has convened with the express goal of explaining the nature and impact of democratic innovations—institutions “designed specifically to increase and deepen citizen participation in the political decision-making process” (Smith 2009, 1).

An exploratory analysis of a sample of seven journals in political science (see footnote 1) shows that since 2006 around 9% (200/2,272) of the articles published in these journals held deliberation as a primary focus. There is no clear upward or downward trend in the number of published articles on deliberation (see figure 1), which is indicative of the maturation and stabilization of the subfield within the discipline. The majority of these studies (151/200 = 76%) have an empirical focus and analyze a democratic innovation or a laboratory experiment designed to explore the inner workings of deliberation (see table 1). The ratio of empirical to strictly theoretical work (the latter incorporating normative political philosophy and conceptual modelling) is fairly stable over the 10 years we reviewed (see figure 2).

Figure 1 Trends in Published Articles in the Top 5 Journals in Political Science + PAS + JPD

Note: We conducted the mapping in July 2016, data from AJPS, PA, ARPS, APSR, Governance, PAS and JPD.

Table 1 Aggregate Articles on Deliberation in the Last 10 Years (top 5 journals + PAS + JPD)

Figure 2 Empirical Analysis on Deliberation

Note: We conducted the mapping in July 2016, data from AJPS, PA, ARPS, APSR, Governance, PAS and JPD.

However, analyzing this large body of empirical studies in further detail reveals a striking result. The vast majority of articles focus on best practices (64%). Only 18% of empirical articles explore the varying quality of implementation of democratic innovations, and just seven studies (4%) investigate deliberative initiatives that, according to the author(s) themselves, are failures.

For example, the vast majority of case studies of deliberative polls, citizens’ assemblies, and participatory budgeting belong to the first category. Articles in this first category often highlight some minor issues and problems, but overall portray the innovation as a best practice or a step towards a best practice. Articles in the second category instead explore more significant problems emerging in cases of democratic innovation. Typical members of the second category are randomized controlled trials that analyze how certain commonly used designs generate negative outcomes (e.g., Karpowitz, Mendelberg, and Shaker 2012; Spada and Vreeland 2013), articles that explore the conditions that moderate the impact of democratic innovations (Kosack and Fung 2014; Wang and Dai 2013), and articles that discuss the degradation of innovations (Baiocchi and Ganuza 2014). Lastly, we found only seven empirical articles that explored cases that the authors themselves categorize as failures or as extremely problematic (Hughes 2016; Kamenova and Goodman 2015; Griffin et al. 2015; Ravazzi and Pomatto 2014; Smith 2013; Lopes Alves and Allegretti 2012; Gaynor 2011). For a discussion of the coding protocol, see the online appendix.
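To make the arithmetic behind these shares explicit, the same tally can be reproduced from any hand-coded list of articles. The minimal sketch below assumes a hypothetical spreadsheet whose file name and column labels are ours, purely for illustration; it is not the authors’ actual coding protocol, which is described in the online appendix.

    import pandas as pd

    # Hypothetical layout: one row per deliberation-focused article published
    # 2006-2016, with a "category" column coded as "theoretical",
    # "best_practice", "implementation", or "failure".
    articles = pd.read_csv("deliberation_articles_2006_2016.csv")

    # Share of articles that are empirical rather than strictly theoretical.
    empirical = articles[articles["category"] != "theoretical"]
    print(f"Empirical share: {len(empirical) / len(articles):.0%}")

    # Breakdown of the empirical subset into the three categories discussed above.
    print(empirical["category"].value_counts(normalize=True).round(2))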

While the number of articles analyzing the full variety of these new democratic institutions, including failures, appears to increase slowly over time (figure 3), it is important to keep in mind that the vast majority of these articles have been published in the Journal of Public Deliberation and in Politics and Society. There is not a single article analyzing a failure in any of the top five journals in political science (second column of table 1).

Figure 3 A Breakdown of Empirical Articles on Deliberation

Note: We conducted the mapping in July 2016, data from AJPS, PA, ARPS, APSR, Governance, PAS and JPD.

Does this mean that the vast majority of cases of democratic institution-building are resounding successes? We do not think so. Various specialized monographs have highlighted how many instances of deliberation fail to promote democratic goods (Fuji-Johnson 2015; Hendriks 2012; Smith 2009), how many provide a façade of democracy or are cooptation programs with largely undemocratic aims (Wampler 2007), and how some do not even survive long enough to generate detectable impacts. Why then has this been elided by the literature? One theoretical reason may be that, unfortunately, the sub-discipline lacks a clear grasp of what might count as failure, which could be used to systematically explore the success rate of democratic innovations (something we return to below). For example, if we consider survival over time as the most basic characteristic of a successful democratic innovation, then the data generated by the Brazilian Participatory Budgeting Census (Spada 2012), available on Participedia.net, offer a grim outlook on the survival rate of this family of democratic innovations. On average only half of these processes survive four years of implementation (see table 2). The other half are discontinued.


Table 2 The Survival of Participatory Budgeting (PB) among Brazilian Cities with more than 50,000 Inhabitants

Note: The time periods reflect the four-year municipal government term in Brazil. The cities considered are those with a population larger than 50,000 in 1992, excluding Brasília. Four cities became independent in 1992.

Source: Authors’ calculation based on the Participatory Budgeting Census
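The survival figure behind table 2 rests on a simple calculation: for each city-term in which PB was active, check whether it was still active in the following four-year term. A minimal sketch of that logic follows; the file name and column layout are our assumptions for illustration and do not reflect the actual structure of the census data.

    import pandas as pd

    # Hypothetical layout: one row per city per four-year municipal term,
    # with a boolean "pb_active" flag marking whether PB ran in that term.
    census = pd.read_csv("brazilian_pb_census.csv")  # columns: city, term_start, pb_active

    # For each city, look up whether PB was still active in the next term.
    census = census.sort_values(["city", "term_start"])
    census["pb_next_term"] = census.groupby("city")["pb_active"].shift(-1)

    # Restrict to city-terms that ran PB and for which a following term is observed.
    adopters = census[census["pb_active"]].dropna(subset=["pb_next_term"])
    survival = adopters["pb_next_term"].astype(float).mean()
    print(f"Share of PB processes surviving into the next term: {survival:.0%}")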

We argue that this lack of representativeness in the real-world cases of deliberation that command the attention of political scientists is currently a major barrier to understanding democratic improvements. Without a comparison of success and failure, our models for successful outcomes will be chronically overdetermined, which ultimately reduces their chances of adoption in practice.

Why do we see this pattern of “failure neglect” in top journals? This article explores some explanations for the disconnect between disciplinary focus and real-world outcomes and offers recommendations for the design of empirical studies that can provide better feedback to conceptual and normative debates. We begin by discussing some of the pitfalls of research and analysis in an emerging field. We then discuss perverse incentives that affect the relationship between gatekeepers and researchers. We turn to the familiar problem of publication bias as it pertains to the subfield, before considering some causes for optimism and ways forward.

ROADWORKS NEEDED AT THE EMPIRICAL TURN

The achievements of empirical studies of deliberation in practice are significant. Research has helped us to understand preference change in deliberation among randomly selected groups (Fishkin 2009; Nabatchi et al. 2012), deliberation’s effect on efficacy and political participation (Gastil, Dees, and Weiser 2002), and the effect of facilitators on group discussion (Fung 2006), to give just a few important examples. Though not fully comprehensive across all existing journals, the trend uncovered above challenges both the overall validity of the empirical turn as it stands and how such work can and should influence the normative project of democratic deepening. We wish to make some suggestions in aid of both exploring and mitigating the causes of this trend.

Sampling and Interpretation Bias

The sub-discipline we address is quite fortunate in that it is characterized by regular engagement between political philosophers, political scientists, and practitioners on merit. Many of the movers and shakers in the discipline can justifiably claim expertise across these categories. This is no mean achievement. However, despite the positive relationship that has been built between philosophy and systematic data collection in the sub-discipline, we argue that more needs to be done to refine and integrate the lessons of practice and theory. This is not a new refrain (see, for example, Mutz 2008 and Thompson 2008), but it is one whose nature we wish to update. There is a healthy and necessary tension between normative and empirical work in the discipline (Sabl 2015). We argue that the advancement of any such sub-discipline is hampered when this tension is either too strong or lost altogether, and that both problems arise in the context of work on deliberative institutions.

First, we contend that some of the pattern witnessed above can be explained by a lacuna of work on conceptualizing so-called democratic innovations themselves in order to establish standards for invoking empirical cases of the class. The meaning of almost every major concept in the social sciences can be contested. While a fixation on definitional issues can be poisonous to the advancement of a discipline, it is hard to overstate the difficulty that defining a class by the quality of being innovative creates for the scientific process of comparison. A normative project of democratic deepening presupposes that improvements on the status quo are needed. The gestation of the research area within normative circles has had a lasting influence on the naming of the sub-discipline itself and on expectations of what is studied empirically. Studying innovation implies a commitment to understanding the process of experimentation oriented to identifying and implementing improvements. But what counts as innovation for the purposes of systematic comparison can be confusing, because a solutionist approach automatically connotes an idiosyncratic improvement on whatever has gone before.

This scenario presents clear dangers, as sampling of exceptional cases is incentivized. An entire class is selected on the dependent variable (or a positively skewed almost-constant, as it were). A correction towards the mean is made difficult because each case can be presented as both unique and a member of the class—where what defines the class is the quality of being unique in some way. Such an account is stylized in the sense that no serious scholar will make the explicit claim that a case is both unique and the same in terms of characteristics relevant to the study at hand. And of course if a sub-discipline did not depart from what went before there would be no need for the sub-discipline. But it is the lack of lucid conceptual analysis (which requires both theoretical and empirical contributions) that allows this selection bias to manifest in a dearth of studies of failures. Failures will simply not be as salient where a class is defined by solutionist claims (see footnote 2).

There are very few occasions that we are aware of where empirical scholars have been clearly able to map out, or even borrow from theory, a set of necessary conditions that distinguish a non-case of a democratic innovation under investigation from a case with a negative outcome on a chosen dependent variable. If the first cases brought to attention are all those with exceptional outcomes, we might expect that in reality the typical case cannot be so, yet we cannot be sure what the typical case looks like and how it differs from our exceptional ones. We would expect, much as in Galton’s (1886) original study, that the child’s height would regress from that of the exceptional parent (say, the British Columbia Citizens’ Assembly archetype) towards the mean (something less deliberative or well-planned, perhaps). The problem here is that we do not have a good sense of a population and we do not know what the mean might look like. If we took a sample of participatory processes over time we would probably expect that the results in Porto Alegre were atypical. It is not that variation does not exist; it is that the standards for communicating and interpreting that variation do not. Current conceptions of deliberative or democratic innovations are either too exclusive of real-life consultations that are regularly replicated by a plethora of governments and agencies, or too inclusive of non-deliberative interventions whose effects depart from standard outcomes—a criticism that has been levelled at the burgeoning deliberative systems literature (Owen and Smith 2015). Where the tension between normative and empirical work is weakened, conceptual clarity will suffer.

This leads us to the second, related problem, which manifests not when the tension between normative and empirical analysis is too weak, but when it is too strong. What should researchers do when presented with evidence that falsifies a hypothesis? Where normative commitments are strongly held, it is all too easy to categorize failures post hoc as instances of non-deliberation or as unintended consequences of otherwise successful processes.


We should be wary of temptations towards concept-shifting (i.e., “oh well, that wasn’t really deliberation then”) as a response to evidence of negative effects on democratic outcomes generated by democratic innovations. If deliberative scholars are overzealous in their normative commitments, evidence that reduces the odds that mechanisms designed to improve democratic deliberation are successful in certain contexts will be ignored or downplayed, and the project of democratic deepening will suffer. Where such a situation persists, a vicious circle is created whereby negative portrayals of deliberative democracy in certain contexts are not seen as part of the field of democratic innovation (Hibbing and Theiss-Morse 2002; Shapiro 2003), and the sub-field becomes no more than a self-referencing echo chamber. Following Mutz (2008), we reiterate calls that contributors to debates on democratic institution-building should make clear distinctions between what they expect deliberative democracy to deliver and what democratic goods they expect specific instances of deliberation to deliver. In other words, deliberative democrats should reflexively consider the scope of their arguments in light of the evidence. That is, we need a better idea of how deliberative democracy is likely to achieve democratic goods in different contexts, rather than spending time trying to interpret what might make a context “deliberative” or not. It is only once this work is done that we can begin the crucial empirically informed normative work of deciding what kind of deliberation should be prioritized and when.

Pressure at the Gate—Supply Failure

Another factor that contributes to the scarcity of interest in failures and fragilities of democratic innovations is the increasing set of constraints on relationships between researchers and gatekeepers, which generates a low supply of these types of studies.

The competitive nature of the emerging democratic innovation ‘industry’ and the scarcity of long-term funds imply that firms that specialize in the facilitation and support of democratic innovations crave academic validation, that is, research that cheerleads their efforts and products. This system is born not of excessive greed or narcissism on the part of the for-profits or non-profits that make up the sector, but of a survival imperative. Perverse incentives are generated by market forces, political competition, and austerity measures. The problem is not simply to compete with other providers of similar services, but to survive a crowded marketplace for political reform, beholden to ideological competition. Often democratic innovations are adopted in the midst of fierce political competition, and thus any critique, no matter how small and abstract, might be used to justify abandoning the project.

Inevitably, the opportunity spaces for research are limited or mollified when small organizations are tasked with collecting process data that can be used to review their own performance. The critique of evaluation in the industry is not new (Lee 2015), but it intensifies as the public participation professional (PPP) space becomes more crowded and competition among small organizations for survival becomes more pronounced in times of austerity. Academics are often involved in these evaluations, but their skill-set can be used as much to avoid serious scrutiny as to provide it. Requests to sign confidentiality agreements (see an example in appendix 2) are becoming common for academics who join the research boards of organizations implementing democratic innovations.

Thus, when invited to evaluate democratic innovations, academics are forced to inhabit a difficult space: they have to generate knowledge that is rigorous but that, at the same time, cannot be used to damage the reputation of the innovation itself. In many cases the innovation is promoted by well-meaning organizations that open their doors to research. The incentive for the academic is to stop asking the difficult questions (or at least to stop answering them publicly for now) and to concentrate on ancillary debates in which significant positive impact can be shown. Most evaluations of deliberative institutions, for example, show increases in learning and in the internal efficacy of participants—something that is quite important—but very few enter into system-level questions of real policy impact and long-term empowerment.

In order to overcome these problems, academics have developed and implemented innovations themselves. The developer/researcher is a role that has become well understood in the discipline and that funding bodies have supported. However, internalizing development does not automatically shield research from market or political forces in the long run. When initial exploratory grants run dry, the innovation developed by an academic has to self-fund on the basis of its own merits and not on the basis of the research output. Developer/researchers have often been criticized for not being forthcoming with their data or for not allowing external impact evaluation of their innovations (Lupia 2004). As a community we are left sleepwalking into a scenario where researchers who have fought long and hard to develop good reputations are criticized because of information constraints not entirely of their own making.

Thus, the bias towards generating positive evidence is rooted in a mix of moral and personal incentives. Developers, activists, and academics themselves are for the most part committed to the normative project of democratic deepening and are interested in the proliferation of these programs so that they can generate more studies and obtain more data.

Publication Bias—Demand Failure

The debates above may of course be exacerbated by a familiar bias towards publishing ‘findings’ (Gerber, Green, and Nickerson 2001) and innovation studies that report desirable consequences (Rogers 1983; Sveiby et al. 2009). In 1983, Rogers, reviewing the literature on the diffusion of policy innovation, found that only 0.2% of studies investigated unintended negative consequences. This evidence bears out what we suggested about disciplinary framing above. If you don’t find an ‘innovation,’ you don’t have much to shout about.

Our exploratory analysis is the first tailored to the specificity of the subfield of democratic innovations, and it highlights that the top five journals in political science do not contain a single empirical paper analyzing a failure of a democratic innovation in the past ten years. We find such papers in Politics and Society (49th in the latest Thomson Reuters ranking) and in the Journal of Public Deliberation (unranked). Are studies of failures not conducted? Or are journals not publishing them because they are less likely to receive attention?

All new fields have to prove their worth in terms of demand. But beyond that there are also very specific characteristics of the democratic innovation sub-field that might be exacerbating this problem. A failure to reach out across sub-disciplinary boundaries has likely resulted in a lack of excitement among editors of generalist journals about what is at stake in replications of democratic innovations, which tend to be small-scale with little immediate policy impact (Goodin and Dryzek 2006). To most social scientists it is not surprising that these experiments fail, and therefore the prospects for learning from expected failures may seem limited (unless they are deemed exceptional). The field has established itself as a normative critique but not yet an empirically grounded one, even as we are now witnessing hundreds of thousands of new deliberative institutions bubbling up around the world.

CONCLUSION: SIGNALS OF HOPE?

By almost any measure the study of deliberative and participatory democratic innovations is established as a sub-discipline of political science. There is more funding available for, and awarded to, both researchers and practitioners. This has led to an increase in the number of cases in existence as well as in the amount of data accessible to scholars, the establishment of networks among researchers and practitioners (e.g., participedia.net), and increasing academic output in the form of books, symposia, and articles in top journals. However, a very simple review of the top five journals in political science has uncovered significant biases in the information flows that circulate within the sub-discipline. In this article we have begun to tease out some explanations for this state of affairs. While this situation might simply be a feature of the novelty of the field, we think at least three emerging approaches give an indication of what can be done to hasten positive change.

First, the Participedia Project has recently refocused its efforts on establishing an international network of research centers (26) and other academics (more than 50) working together to create a global map of democratic innovations. This is a long-term project, but it is specifically designed to overcome selection bias by sourcing case studies from participants themselves and mapping their key characteristics. Participedia is just one example of an emerging family of national or multinational mapping projects that strive to chart both successes and failures (e.g., ‘Latinno’, ‘Cherry-Picking’, and the ‘Brazilian Participatory Budgeting Census’; see footnote 3).

Second, a new breed of academic/developer is emerging. These new figures can obtain funding that is bound not to the promotion of specific innovations, but to the exploration of a variety of different innovations under different local conditions. For example, the Democracy Matters project implemented two different designs of citizens’ assembly in two UK municipalities at the same time to explore the interaction between local conditions, design, and outcomes (see footnote 4). On a larger scale, Archon Fung and Stephen Kosack, together with other colleagues, are implementing randomized controlled trials in Indonesia and Tanzania to explore, again, how different local conditions affect the impact of democratic innovations (see footnote 5). Both these projects are designed to generate different impacts, and respond to the call for specific theory-testing and reflexive theory-building.

Third, the newly established Empatia project has developed an interesting embargo approach that might help navigate the tension between practitioners and academics. Empatia is a European Research Council funded project that is implementing a new integrated platform for multi-channel engagement in four European cities (Lisbon, PT; Wuppertal, DE; Milan, IT; and Říčany, CZ). Empatia is piloting an embargo system that gives politicians and implementers a few months to prepare a communication strategy to manage the potential negative impact that the release of information might generate. This compromise allows Empatia to include a very rigorous impact evaluation component in its researcher-gatekeeper compact, while at the same time protecting its adopters. At the heart of this approach is the belief that democratic innovations always have challenges to overcome in real-world contexts, and that there may be no perfect solutions. What is needed is a robust management system for such problems that does not cover up failures, but at the same time is not suicidal in the midst of political competition.

Overall we take these projects as welcome signals of hope, but the system of bias-generation is complex, and thus we do not expect immediate change. In particular, we think it is important that journals themselves start experimenting with new solutions specifically designed to overcome all forms of publication bias. One such experiment, conducted in 2015 by Comparative Political Studies, has offered an interesting set of results about the impact of results-free review on publications (Findley et al. 2016). Of course, these reforms will need to remain open to a plurality of methods and organizing perspectives.

One of the most important next steps for the field is to start theorizing what constitutes a failure. We do not require a definitive answer; the answer may depend, for example, on where one stands with regard to debates in democratic theory about the independence of procedures and outcomes in establishing the democratic character of institutions (Barber 1984; Estlund 1997; Ingham 2013; Landemore 2013); but we do require a more coherent framework of answers. Our coding here was limited to considering failure only where it was interpreted as such by the authors themselves. Even without a consensus around the concept of failure for now, we suggest contributors to the field should move more systematic analyses of problems, survival, and unintended negative consequences into the spotlight.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S1049096517000579.

Footnotes

1. We surveyed the top five journals in political science according to the 2016 Thomson Reuters ranking (AJPS, PA, ARPS, APSR, Governance) and also included the Journal of Public Deliberation, a non-Thomson Reuters ranked journal dedicated to empirical studies on deliberation, as well as Politics and Society (PAS), a journal outside the top five that is attentive to the topic of deliberation. We included PAS and JPD as a validity test to provide some assurance that what we were observing was not an artefact of publishing practices associated with top-ranked journals. The data provided in this article were coded by one of the authors, with all the limitations that a non-blind coding protocol implies. A full mining of journals within and beyond the broadly defined field of political science was beyond the scope of this article. The data should be treated in accordance with this limited scope.

2. The concept of solutionism cannot be traced to a particular author; it appears in many disciplines and has a variety of flavours. For a discussion of the origin of this pejorative term in philosophy, architecture, pedagogy, and civic technology, see Morozov’s popular book To Save Everything, Click Here (Morozov 2013).

3. For a description of Latinno see http://www.latinno.net/en/; for a description of the Cherry Picking project see https://cherrypickingproject.wordpress.com/; for a description of the Brazilian PB Census see http://participedia.net/en/content/brazilian-participatory-budgeting-census.

4. For a description of the project see: http://citizensassembly.co.uk/

6. The word could be simply contained in a footnote or even the title of a cited article.

7. The abstract of the article discusses deliberation or democratic innovations.

8. The abstract of the article specifically mentions a case study or a lab experiment or empirical data on deliberation

REFERENCES

Alves, Mariana Lopes and Allegretti, Giovanni. 2012. “(In)stability, a Key Element to Understand Participatory Budgeting: Discussing Portuguese Cases.” Journal of Public Deliberation 8 (2): Article 3.
Barber, Benjamin. 1984. Strong Democracy: Participatory Politics for a New Age. Berkeley: University of California Press.
Baiocchi, Gianpaolo and Ganuza, Ernesto. 2014. “Participatory Budgeting as if Emancipation Mattered.” Politics & Society 42 (1): 29–50.
Bherer, Laurence, Gauthier, Mario, and Simmard, Simon, eds. 2016. The Professionalization of Public Participation. London: Routledge.
Bohman, James. 1998. “The Coming of Age of Deliberative Democracy.” The Journal of Political Philosophy 6 (4): 400–25.
Dienel, Peter C. and Renn, Ortwin. 1995. “Planning Cells: A Gate to ‘Fractal’ Mediation.” In Fairness and Competence in Citizen Participation, eds. Renn, Ortwin, Webler, Thomas, and Wiedemann, Peter, 117–40. Dordrecht, The Netherlands: Kluwer Publishers.
Estlund, David. 1997. “Beyond Fairness and Deliberation: The Epistemic Dimension of Democratic Authority.” In Deliberative Democracy: Essays on Reason and Politics, eds. Bohman, James and Rehg, William, 173–204. Cambridge, MA: MIT Press.
Findley, Michael G., Jensen, Nathan M., Malesky, Edmund J., and Pepinsky, Thomas B. 2016. “Can Results-Free Review Reduce Publication Bias? The Results and Implications of a Pilot Study.” Comparative Political Studies 49 (13): 1667–703.
Fishkin, James S. 1991. Democracy and Deliberation: New Directions for Democratic Reform. New Haven, CT: Yale University Press.
Fishkin, James S. 2009. When the People Speak: Deliberative Democracy & Public Consultation. Oxford: Oxford University Press.
Fuji-Johnson, Genevieve. 2015. Democratic Illusion: Deliberative Democracy in Canadian Public Policy. Toronto: University of Toronto Press.
Fung, Archon. 2006. Empowered Participation: Reinventing Urban Democracy. Princeton, NJ: Princeton University Press.
Galton, Francis. 1886. “Regression Towards Mediocrity in Hereditary Stature.” The Journal of the Anthropological Institute of Great Britain and Ireland 15: 246–63.
Gastil, John E., Dees, Pierre, and Weiser, Philip J. 2002. “Civic Awakening in the Jury Room: A Test of the Connection between Jury Deliberation and Political Participation.” Journal of Politics 64 (2): 585–95.
Gaynor, Niamh. 2011. “Associations, Deliberation and Democracy: The Case of Ireland’s Social Partnership.” Politics & Society 39 (4): 497–520.
Gerber, Alan S., Green, Donald P., and Nickerson, David W. 2001. “Testing for Publication Bias in Political Science.” Political Analysis 9 (4): 385–92.
Goodin, Robert E. and Dryzek, John. 2006. “Deliberative Impacts: The Macro-political Uptake of Mini-publics.” Politics & Society 34 (2): 219–44.
Gutmann, Amy and Thompson, Dennis. 1996. Democracy and Disagreement. Cambridge, MA: The Belknap Press of Harvard University Press.
Hendriks, Carolyn M. 2012. The Politics of Public Deliberation: Citizen Engagement and Interest Advocacy. Springer.
Hibbing, John R. and Theiss-Morse, Elizabeth. 2002. Stealth Democracy: Americans’ Beliefs About How Government Should Work. Cambridge, UK: Cambridge University Press.
Hughes, Jessica M. F. 2016. “Constructing a United Disability Community: The National Council on Disability’s Discourse of Unity in the Deliberative System around Disability Rights.” Journal of Public Deliberation 12 (1): Article 8.
Ingham, Sean. 2013. “Disagreement and Epistemic Arguments for Democracy.” Politics, Philosophy & Economics 12 (2): 136–55.
Kamenova, Kalina and Goodman, Nicole. 2015. “Public Engagement with Internet Voting in Edmonton: Design, Outcomes, and Challenges to Deliberative Models.” Journal of Public Deliberation 11 (2): Article 4.
Karpowitz, Christopher F., Mendelberg, Tali, and Shaker, Lee. 2012. “Gender Inequality in Deliberative Participation.” American Political Science Review 106 (3): 533–47.
Kosack, Stephen and Fung, Archon. 2014. “Does Transparency Improve Governance?” Annual Review of Political Science 17: 65–87.
Landemore, Hélène. 2013. Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many. Princeton, NJ: Princeton University Press.
Lee, Caroline W. 2015. Do-it-Yourself Democracy: The Rise of the Public Engagement Industry. Oxford: Oxford University Press.
Lupia, Arthur. 2004. “The Wrong Tack: Who’s to Say that People Make Better Decisions in Groups than They Do on Their Own?” Legal Affairs 3: 43–45.
Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
Mutz, Diana C. 2008. “Is Deliberative Democracy a Falsifiable Theory?” Annual Review of Political Science 11: 521–38.
Nabatchi, Tina, Gastil, John, Weiksner, G. Michael, and Leighninger, Matt. 2012. Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement. Oxford: Oxford University Press.
Nylen, William R. 2003. “An Enduring Legacy: Popular Participation in the Aftermath of the Participatory Budgets of João Monlevade and Betim.” In Radicals in Power: The Workers’ Party (PT) and Experiments in Urban Democracy in Brazil, ed. Baiocchi, Gianpaolo. London: Zed Books.
Owen, David and Smith, Graham. 2015. “Survey Article: Deliberation, Democracy and the Systemic Turn.” Journal of Political Philosophy 23 (2): 213–34.
Ravazzi, Stefania and Pomatto, Gianfranco. 2014. “Flexibility, Argumentation and Confrontation. How Deliberative Minipublics Can Affect Policies on Controversial Issues.” Journal of Public Deliberation 10 (2): Article 10.
Rogers, Everett. 1983. Diffusion of Innovations, 3rd edition. New York: The Free Press.
Ryfe, David M. 2005. “Does Deliberative Democracy Work?” Annual Review of Political Science 8: 49–71.
Sabl, Andrew. 2015. “The Two Cultures of Democratic Theory: Responsiveness, Democratic Quality, and the Empirical-Normative Divide.” Perspectives on Politics 13 (2): 345–65.
Shapiro, Ian. 2003. The State of Democratic Theory. Princeton, NJ: Princeton University Press.
Smith, Graham. 2009. Democratic Innovations: Designing Institutions for Citizen Participation. Cambridge: Cambridge University Press.
Smith, William. 2013. “Anticipating Transnational Publics: On the Use of Mini-Publics in Transnational Governance.” Politics & Society 41 (3): 461–84.
Spada, Paolo. 2012. “Participatory Budgeting Census: 1989–2012.” Available at http://participedia.net/en/content/brazilian-participatory-budgeting-census.
Spada, Paolo and Vreeland, James R. 2013. “Who Moderates the Moderators? The Effect of Non-neutral Moderators in Deliberative Decision Making.” Journal of Public Deliberation 9 (2): Article 3.
Sveiby, Karl-Erik, Gripenberg, Pernilla, Segercrantz, Beata, Eriksson, Andreas, and Aminoff, Alexander. 2009. “Unintended and Undesirable Consequences of Innovation.” Paper presented at the XX ISPIM Conference, The Future of Innovation, Vienna. http://www.sveiby.com/articles/UnintendedconsequencesISPIMfinal.pdf (accessed July 2016).
Thompson, Dennis. 2008. “Deliberative Democratic Theory and Empirical Political Science.” Annual Review of Political Science 11: 497–520.
Wampler, Brian. 2007. Participatory Budgeting in Brazil: Contestation, Cooperation and Accountability. University Park, PA: Penn State University Press.
Wang, Zhengxu and Dai, Weina. 2013. “Women’s Participation in Rural China’s Self-Governance: Institutional, Socioeconomic, and Cultural Factors in a Jiangsu County.” Governance 26 (1): 91–118.
