1. INTRODUCTION
How can a group of individuals aggregate their individual judgments (beliefs, opinions) on some logically connected propositions into collective judgments on these propositions? In particular, how can a group do this under conditions of pluralism, i.e., when individuals disagree on the propositions in question? This problem, known as judgment aggregation, is discussed in a growing literature in philosophy, economics and political science and generalizes earlier problems of social choice, notably preference aggregation in the Condorcet–Arrow tradition. The problem arises in many different decision-making bodies, ranging from legislative committees and multi-member courts to expert advisory panels and monetary policy committees of central banks.
Judgment aggregation is often illustrated by a paradox: the discursive (or doctrinal) paradox (Kornhauser and Sager 1986; Pettit 2001; Brennan 2001). To illustrate, suppose a university committee responsible for a tenure decision has to make collective judgments on three propositions:
a: The candidate is good at teaching.
b: The candidate is good at research.
c: The candidate deserves tenure.
According to the university's rules, c (the “conclusion”) is true if and only if a and b (the “premises”) are both true, formally c↔(a∧b) (the “connection rule”). Suppose the committee has three members with judgments as shown in Table 1.

Table 1. The committee members' judgments

              a      b      c      c↔(a∧b)
Individual 1  True   True   True   True
Individual 2  True   False  False  True
Individual 3  False  True   False  True
Majority      True   True   False  True
If the committee takes a majority vote on each proposition, then a and b are each accepted and yet c is rejected (each by two thirds), despite the (unanimous) acceptance of c↔(a∧b). The discursive paradox shows that judgment aggregation by propositionwise majority voting may lead to inconsistent collective judgments, just as Condorcet's paradox shows that preference aggregation by pairwise majority voting may lead to intransitive collective preferences.
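To make the inconsistency concrete, here is a minimal Python sketch (our own illustration, not part of the original analysis; the dict encoding and function names are assumed) that computes the propositionwise majority judgments for the Table 1 profile:

```python
# Judgment sets encoded as dicts: True = proposition accepted,
# False = its negation accepted. The Table 1 profile:
profile = [
    {"a": True,  "b": True,  "c": True},   # individual 1
    {"a": True,  "b": False, "c": False},  # individual 2
    {"a": False, "b": True,  "c": False},  # individual 3
]

def majority(profile, p):
    """Propositionwise majority vote on proposition p."""
    return sum(j[p] for j in profile) > len(profile) / 2

collective = {p: majority(profile, p) for p in ("a", "b", "c")}
print(collective)  # {'a': True, 'b': True, 'c': False}

# All individuals accept the connection rule c <-> (a and b), yet the
# collective judgments violate it:
assert collective["c"] != (collective["a"] and collective["b"])
```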
In response to the discursive paradox, two aggregation rules have been proposed to avoid such inconsistencies (e.g., Pettit 2001; Chapman 1998, 2002; Bovens and Rabinowicz 2006). Under premise-based voting, majority votes are taken on a and b (the premises), but not on c (the conclusion), and the collective judgment on c is derived using the connection rule c↔(a∧b): in Table 1, a, b and c are all accepted. Premise-based voting captures the deliberative democratic idea that collective decisions on outcomes should be made on the basis of collectively decided reasons. Here reasoning is “collectivized”, as Pettit (2001) describes it. Under conclusion-based voting, a majority vote is taken only on c, and no collective judgments are made on a or b: in Table 1, c is rejected and the other propositions are left undecided. Conclusion-based voting captures the minimal liberal idea that collective decisions should be made only on (practical) outcomes and that the reasons behind such decisions should remain private. Here collective decisions are “incompletely theorized”, in Sunstein's (1994) terms. (For a comparison between minimal liberal and comprehensive deliberative approaches to decision making, see List 2006.)
Abstracting from the discursive dilemma, List and Pettit (2002, 2004) have formalized judgment aggregation and proved that no judgment aggregation rule ensuring consistency can satisfy some conditions inspired by Arrow's conditions on preference aggregation. This impossibility result has been strengthened and extended by Pauly and van Hees (2006; see also van Hees 2007), Dietrich (2006), Gärdenfors (2006) and Dietrich and List (2007a, 2007b). Drawing on the model of “property spaces”, Nehring and Puppe (2002, 2005) have offered the first characterizations of agendas of propositions for which impossibility results hold (for a subsequent contribution, see Dokow and Holzman 2005). Possibility results have been obtained by List (2003, 2004), Pigozzi (2006) and Osherson and Vardi (forthcoming). Dietrich (2007) has developed an extension of the judgment aggregation model to richer logical languages for expressing propositions, which we use in this paper. Related bodies of literature include those on abstract aggregation theory (Wilson 1975) and on belief merging in computer science (Konieczny and Pino-Perez 2002).
But one important question has received little attention in the literature on judgment aggregation: Which aggregation rules are manipulable by strategic voting and which are strategy-proof? The answer is not obvious, as strategy-proofness in the familiar sense in economics is a preference-theoretic concept and preferences are not primitives of judgment aggregation models. Yet the question matters for the design and implementation of an aggregation rule in a collective decision-making body such as in the examples above. Ideally, we would like to find aggregation rules that lead individuals to reveal their judgments truthfully. Indeed, if an aggregation rule captures the normatively desirable functional relation between individual and collective judgments, then truthful revelation of these individual judgments (which are typically private information) is crucial for the (direct) implementation of that functional relation.
In this paper, we address this question. We first introduce a simple condition of non-manipulability and characterize the class of non-manipulable judgment aggregation rules. We then show that, under certain motivational assumptions about individuals, our condition is equivalent to a game-theoretic strategy-proofness condition similar to the one introduced by Gibbard (1973) and Satterthwaite (1975) for preference aggregation. Our characterization of non-manipulable aggregation rules then yields a characterization of strategy-proof aggregation rules. The relevant motivational assumptions hold if agents want the group to make collective judgments that match their own individual judgments (e.g., want the group to make judgments that match what they consider the truth). In many other cases, such as that of “reason-oriented” individuals (as defined in Section 5), non-manipulability and strategy-proofness may come significantly apart.
By introducing both a non-game-theoretic condition of non-manipulability and a game-theoretic condition of strategy-proofness, we are able to distinguish between opportunities for manipulation (which depend only on the aggregation rule in question) and incentives for manipulation (which depend also on the motivations of the decision-makers).
We prove that, for a general class of aggregation problems including the tenure example above, there exists no non-manipulable judgment aggregation rule satisfying universal domain and some other mild conditions, an impossibility result similar to the Gibbard–Satterthwaite theorem on preference aggregation. Subsequently, we identify various ways to avoid the impossibility result. We also show that our default conditions of non-manipulability and strategy-proofness fall into general families of conditions and discuss other conditions in these families. In the case of strategy-proofness, these conditions correspond to different motivational assumptions about the decision makers. In the tenure example, conclusion-based voting is strategy-proof in a strong sense, but produces no collective judgments on the premises. Premise-based voting satisfies only the weaker condition of strategy-proofness for “reason-oriented” individuals. Surprisingly, although premise- and conclusion-based voting are regarded in the literature as two diametrically opposed aggregation rules, they are strategically equivalent if individuals are “outcome-oriented”, generating identical judgments in equilibrium. Our results not only introduce game-theoretic considerations into the theory of judgment aggregation, but they are also relevant to debates on democratic theory as premise-based voting has been advocated, and conclusion-based voting rejected, by proponents of deliberative democracy (Pettit 2001).
There is, of course, a related literature on manipulability and strategy-proofness in preference aggregation, following Gibbard's and Satterthwaite's classic contributions (e.g., Taylor 2002, 2005; Saporiti and Thomé 2005). An important branch of this literature, from which several corollaries for judgment aggregation can be derived, has considered preference aggregation over options that are vectors of binary properties (Barberà et al. 1993, 1997; Nehring and Puppe 2002). A parallel to judgment aggregation can be drawn by identifying propositions with properties; a disanalogy lies in the structure of the informational input to the aggregation rule. While judgment aggregation rules collect a single judgment set from each individual (expressed in a possibly rich logical language), preference aggregation rules collect an entire preference ordering over vectors of properties. Whether or not an individual's most-preferred vector of properties (in preference aggregation) can be identified with her judgment set (in judgment aggregation) depends precisely on the motivational assumptions we make about this individual.
Another important related literature is that on the paradox of multiple elections (Brams et al. 1997, 1998; Kelly 1989). Here a group also aggregates individual votes on multiple propositions, and the winning combination can be one that no voter individually endorses. However, given the different focus of that work, the propositions in question are not explicitly modelled as logically interconnected as in our present model of judgment aggregation. The formal proofs of all the results reported in the main text are given in the Appendix.
2. THE BASIC MODEL
We consider a group of individuals N = {1, 2, …, n}, where n ≥ 2. The group has to make collective judgments on logically connected propositions.
2.1 Representing propositions in formal logic
Propositions are represented in a logical language, defined by two components:
• a non-empty set L of formal expressions representing propositions; the language has a negation symbol ¬ (“not”), where for each proposition p in L, its negation ¬ p is also contained in L.
• an entailment relation ⊧, where, for each set of propositions A ⊆ L and each proposition p ∈ L, A ⊧ p is read as “A logically entails p”.
We call a set of propositions A ⊆ L inconsistent if A ⊧ p and A ⊧ ¬p for some p ∈ L, and consistent otherwise. We require the logical language to have certain minimal properties (Dietrich 2007; Dietrich and List 2007a).
The most familiar logical language is (classical) propositional logic, containing a given set of atomic propositions a, b, c, …, such as the propositions about the candidate's teaching, research and tenure in the example above, and compound propositions built with the logical connectives ¬ (“not”), ∧ (“and”), ∨ (“or”), → (“if-then”), ↔ (“if and only if”), such as the connection rule c↔(a∧b) in the tenure example. Examples of valid logical entailments in propositional logic are {a, a→b} ⊧ b (“modus ponens”) and {a→b, ¬b} ⊧ ¬a (“modus tollens”), whereas the entailment {a∨b} ⊧ a is not valid. Examples of consistent sets are {a, a∨b} and {¬a, ¬b, a→b}, and examples of inconsistent ones are {a, ¬a}, {a, a→b, ¬b} and {a, b, c↔(a∧b), ¬c}.
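These notions are easy to check mechanically for small examples. The following Python sketch (our illustration; the encoding of formulas as Boolean functions is an assumption, not the paper's formal apparatus) tests consistency by brute-force truth-table search:

```python
# Brute-force consistency check over the atomic propositions a, b, c.
from itertools import product

def consistent(formulas, atoms=("a", "b", "c")):
    """True iff some truth assignment satisfies all formulas."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(f(env) for f in formulas):
            return True
    return False

a = lambda v: v["a"]
a_implies_b = lambda v: (not v["a"]) or v["b"]
not_b = lambda v: not v["b"]

print(consistent([a, lambda v: v["a"] or v["b"]]))  # True:  {a, a ∨ b}
print(consistent([a, a_implies_b, not_b]))          # False: {a, a → b, ¬b}
```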
We use classical propositional logic in our examples, but our results also hold for other, more expressive logical languages such as the following:
• predicate logic, which includes relation symbols and the quantifiers “there exists . . .” and “for all . . .”;
• modal logic, which includes the operators “it's necessary that . . .” and “it's possible that . . .”;
• deontic logic, which includes the operators “it's permissible that . . .” and “it's obligatory that . . .”;
• conditional logic, which allows the expression of counterfactual or subjunctive conditionals.
Many different propositions that might be considered by a multi-member decision-making body (ranging from legislative committees to expert panels) can be formally represented in an appropriate such language. Crucially, a logical language allows us to capture the fact that, in many decision problems, different propositions, such as the reasons for a particular tenure outcome and the resulting outcome itself, are mutually interconnected.
2.2 The agenda
The agenda is the set of propositions on which judgments are to be made; it is a non-empty subset X ⊆ L, where X is a union of proposition-negation pairs {p, ¬p} (with p not a negated proposition). For simplicity, we assume that double negations cancel each other out, i.e., ¬¬p stands for p.
Two important examples are conjunctive and disjunctive agendas in propositional logic. A conjunctive agenda is X = {a_1, …, a_k, c, c↔(a_1∧⋯∧a_k)}+neg, where a_1, …, a_k are premises (k ≥ 1), c is a conclusion, and c↔(a_1∧⋯∧a_k) is the connection rule. We write Y+neg as an abbreviation for {p, ¬p : p ∈ Y}. To define a disjunctive agenda, we replace c↔(a_1∧⋯∧a_k) with c↔(a_1∨⋯∨a_k). Conjunctive and disjunctive agendas arise in decision problems in which some outcome (c) is to be decided on the basis of some reasons (a_1, …, a_k). In the tenure example above, we have a conjunctive agenda with k = 2.
Other examples are agendas involving conditionals (in propositional or conditional logic) such as X={a, b, a→b}+neg. Here proposition a might state some political goal, proposition a → b might state what the pursuit of a requires, and proposition b might state the consequence to be drawn. Alternatively, proposition a might be an empirical premise, a → b a causal hypothesis, and b the resulting prediction.
Finally, we can also represent standard preference aggregation problems within our model. Here we use a predicate logic with a set of constants K representing options (|K| ≥ 3) and a two-place predicate R representing preferences, where, for any x, y ∈ K, the proposition xRy is interpreted as “x is preferable to y”. Now the preference agenda is the set X = {xRy : x, y ∈ K}+neg (Dietrich and List 2007a).
The nature of a judgment aggregation problem depends on what propositions are contained in the agenda and how they are interconnected. Our main characterization theorem holds for any agenda of propositions. Our main impossibility theorem holds for a large class of agendas, defined below. We also discuss applications to the important cases of conjunctive and disjunctive agendas.
2.3 Individual and collective judgments
Each individual i's judgment set is a subset A_i ⊆ X, where p ∈ A_i means that individual i accepts proposition p. As the agenda typically contains both atomic propositions and compound ones, our definition of a judgment set captures the fact that an individual makes judgments both on free-standing atomic propositions and on their interconnections; and different individuals may disagree with each other on both kinds of propositions.
A judgment set A_i is consistent if it is a consistent set of propositions as defined for the logic; A_i is complete if it contains a member of each proposition-negation pair {p, ¬p} ⊆ X. A profile (of individual judgment sets) is an n-tuple (A_1, …, A_n).
A (judgment) aggregation rule is a function F that assigns to each admissible profile (A_1, …, A_n) a collective judgment set F(A_1, …, A_n) = A ⊆ X, where p ∈ A means that the group accepts proposition p. The set of admissible profiles is called the domain of F, denoted Domain(F). Several results below require the following.
Universal Domain. Domain(F) is the set of all possible profiles of consistent and complete individual judgment sets.
2.4 Examples of aggregation rules
We give four important examples of aggregation rules satisfying universal domain, as just introduced. The first two rules are defined for any agenda, the last two only for conjunctive (or disjunctive) agendas (the present definitions are simplified, but a generalization is possible).
Propositionwise majority voting. For each (A_1, …, A_n) ∈ Domain(F), F(A_1, …, A_n) is the set of all propositions p ∈ X such that more individuals i have p ∈ A_i than p ∉ A_i.
Dictatorship of individual i. For each (A_1, …, A_n) ∈ Domain(F), F(A_1, …, A_n) = A_i.
Premise-based voting. For each (A_1, …, A_n) ∈ Domain(F), F(A_1, …, A_n) is the set containing
• any premise a_j if and only if more i have a_j ∈ A_i than a_j ∉ A_i,
• the connection rule c↔(a_1∧⋯∧a_k),
• the conclusion c if and only if a_j ∈ F(A_1, …, A_n) for all premises a_j,
• any negated proposition ¬p if and only if p ∉ F(A_1, …, A_n).
Here votes are taken only on each premise, and the conclusion is decided by using an exogenously given connection rule.
Conclusion-based voting. For each (A_1, …, A_n) ∈ Domain(F), F(A_1, …, A_n) is the set containing
• only the conclusion c if more i have c ∈ A_i than c ∉ A_i,
• only the negation of the conclusion ¬c otherwise.
Here a vote is taken only on the conclusion, and no collective judgments are made on other propositions.
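For the conjunctive tenure agenda, the last two rules can be sketched in a few lines of Python (an illustration under our assumed dict encoding; a generalization to arbitrary k and to disjunctive agendas is straightforward):

```python
# Premise- and conclusion-based voting for premises a, b and conclusion c.
def maj(profile, p):
    return sum(j[p] for j in profile) > len(profile) / 2

def premise_based(profile):
    a, b = maj(profile, "a"), maj(profile, "b")
    return {"a": a, "b": b, "c": a and b}  # c derived via c <-> (a ∧ b)

def conclusion_based(profile):
    return {"c": maj(profile, "c")}  # no judgments on the premises

profile = [
    {"a": True,  "b": True,  "c": True},
    {"a": True,  "b": False, "c": False},
    {"a": False, "b": True,  "c": False},
]
print(premise_based(profile))     # {'a': True, 'b': True, 'c': True}
print(conclusion_based(profile))  # {'c': False}
```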
Dictatorships and premise-based voting always generate consistent and complete collective judgments; propositionwise majority voting sometimes generates inconsistent ones (recall Table 1), and conclusion-based voting always generates incomplete ones (no judgments on the premises).
In debates on the discursive paradox and democratic theory, several arguments have been offered for the superiority of premise-based voting over conclusion-based voting. One such argument draws on a deliberative conception of democracy, which emphasizes that collective decisions on conclusions should follow from collectively decided premises (Pettit 2001; Chapman 2002). A second argument draws on the Condorcet jury theorem. If all the propositions are factually true or false and each individual has a probability greater than 1/2 of judging each premise correctly, then, under certain probabilistic independence assumptions, premise-based voting has a higher probability of producing a correct collective judgment on the conclusion than conclusion-based voting (Grofman 1985; Bovens and Rabinowicz 2006; List 2005, 2006). Here we show that, with regard to strategic manipulability, premise-based voting performs worse than conclusion-based voting.
3. NON-MANIPULABILITY
When can an aggregation rule be manipulated by strategic voting? We first introduce a new condition of non-manipulability, which is not yet game-theoretic. Below we prove that, under certain motivational assumptions about the individuals, our non-manipulability condition is equivalent to a game-theoretic strategy-proofness condition. We also note that non-manipulability and strategy-proofness may sometimes come apart.
3.1 An example
To give a simple example, we use the language of incentives to manipulate, although our subsequent formal analysis focuses on underlying opportunities for manipulation; we return to incentives formally in Section 4. Recall the profile in Table 1. Suppose, for the moment, that the three committee members each care only about reaching a collective judgment on the conclusion (c) that agrees with their own individual judgments on the conclusion, and that they do not care about the collective judgments on the premises. What matters to them is the final tenure decision, not the underlying reasons; they are “outcome-oriented”, as defined precisely later.
Suppose first the committee uses conclusion-based voting; a vote is taken only on c. Then, clearly, no committee member has an incentive to express an untruthful judgment on c. Individual 1, who wants the committee to accept c, has no incentive to vote against c. Individuals 2 and 3, who want the committee to reject c, have no incentive to vote in favour of c.
But suppose now the committee uses premise-based voting; votes are taken on a and b. What are the members' incentives? Individual 1, who wants the committee to accept c, has no incentive to vote against a or b. But at least one of individuals 2 or 3 has an incentive to vote untruthfully. Specifically, if individuals 1 and 2 vote truthfully, then individual 3 has an incentive to vote untruthfully; and if individuals 1 and 3 vote truthfully, then individual 2 has such an incentive.
To illustrate, assume that individual 2 votes truthfully for a and against b. Then the committee accepts a, regardless of individual 3's vote. So, if individual 3 votes truthfully for b, then the committee accepts b and hence c. But if she votes untruthfully against b, then the committee rejects b and hence c. As individual 3 wants the committee to reject c, she has an incentive to vote untruthfully on b. (In summary, if individual judgments are as in Table 1, voting untruthfully against both a and b weakly dominates voting truthfully for individuals 2 and 3.) Ferejohn (2003) has made this observation informally.
3.2 A non-manipulability condition
To formalize these observations, some definitions are needed. We say that one judgment set, A, agrees with another, A*, on a proposition p ∈ X if either both or none of A and A* contains p; A disagrees with A* on p otherwise. Two profiles are i-variants of each other if they coincide for all individuals except possibly i.
An aggregation rule F is manipulable at the profile (A_1, …, A_n) ∈ Domain(F) by individual i on proposition p ∈ X if A_i disagrees with F(A_1, …, A_n) on p, but A_i agrees with F(A_1, …, A_i*, …, A_n) on p for some i-variant (A_1, …, A_i*, …, A_n) ∈ Domain(F).
For example, at the profile in Table 1, premise-based voting is manipulable by individual 3 on c (by submitting A_3* = {¬a, ¬b, c↔(a∧b), ¬c} instead of A_3 = {¬a, b, c↔(a∧b), ¬c}) and also by individual 2 on c (by submitting A_2* = {¬a, ¬b, c↔(a∧b), ¬c} instead of A_2 = {a, ¬b, c↔(a∧b), ¬c}).
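The opportunity is easy to verify computationally. Reusing the premise_based sketch from Section 2.4 (again our own illustration with assumed names), individual 3's untruthful report flips the collective judgment on c:

```python
def premise_based(profile):  # as in the Section 2.4 sketch
    maj = lambda p: sum(j[p] for j in profile) > len(profile) / 2
    a, b = maj("a"), maj("b")
    return {"a": a, "b": b, "c": a and b}

truthful = [
    {"a": True,  "b": True,  "c": True},
    {"a": True,  "b": False, "c": False},
    {"a": False, "b": True,  "c": False},  # individual 3's true judgments
]
untruthful = truthful[:2] + [{"a": False, "b": False, "c": False}]  # 3 reports ¬b

print(premise_based(truthful)["c"])    # True  -- disagrees with 3's judgment ¬c
print(premise_based(untruthful)["c"])  # False -- now agrees with ¬c
```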
Manipulability thus defined is the existence of an opportunity for some individual(s) to manipulate the collective judgment(s) on some proposition(s) by expressing untruthful individual judgments (perhaps on other propositions). The question of when such opportunities for manipulation translate into incentives for manipulation is a separate question. Whether a rational individual will act on a particular opportunity for manipulation depends on the individual's precise motivation and particularly on how much he or she cares about the various propositions involved in a possible act of manipulation. To illustrate, in our example above, we have assumed that individuals care only about the final tenure decision, implying that they do indeed have incentives to act on their opportunities for manipulation. We discuss this issue in detail when we introduce preferences over judgment sets below.
Our definition of manipulability leads to a corresponding definition of non-manipulability. Let Y ⊆ X.
Non-manipulability on Y. F is not manipulable at any profile by any individual on any proposition in Y. Equivalently, for any individual i, profile (A_1, …, A_n) ∈ Domain(F) and proposition p ∈ Y, if A_i disagrees with F(A_1, …, A_n) on p, then A_i still disagrees with F(A_1, …, A_i*, …, A_n) on p for every i-variant (A_1, …, A_i*, …, A_n) ∈ Domain(F).
This definition specifies a family of non-manipulability conditions, one for each Y ⊆ X. Non-manipulability on Y requires the absence of opportunities for manipulation on the subset Y of the agenda. If Y_1 ⊆ Y_2, then non-manipulability on Y_2 implies non-manipulability on Y_1. If we refer just to “non-manipulability”, without adding “on Y”, then we mean the default case Y = X.
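For small agendas and groups, non-manipulability on Y can be checked exhaustively. The sketch below (our illustration; encoding and names assumed) enumerates all profiles of consistent and complete judgment sets for the k = 2 conjunctive agenda and confirms that conclusion-based voting is non-manipulable on {c}:

```python
# Brute-force non-manipulability check for the conjunctive agenda with
# premises a, b, conclusion c and connection rule r = c <-> (a ∧ b).
from itertools import product

def judgment_sets():
    for a, b, c, r in product([False, True], repeat=4):
        if r == (c == (a and b)):  # consistency with the connection rule
            yield {"a": a, "b": b, "c": c, "r": r}

def non_manipulable_on(F, Y, n=3):
    sets_ = list(judgment_sets())
    for profile in product(sets_, repeat=n):
        out = F(list(profile))
        for i in range(n):
            for A_star in sets_:
                variant = list(profile)
                variant[i] = A_star
                out_star = F(variant)
                for p in Y:
                    disagrees = out[p] != profile[i][p]
                    agrees_after = out_star[p] == profile[i][p]
                    if disagrees and agrees_after:
                        return False  # an opportunity to manipulate on p
    return True

def conclusion_based(profile):
    return {"c": sum(j["c"] for j in profile) > len(profile) / 2}

print(non_manipulable_on(conclusion_based, Y=("c",)))  # True
```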
3.3 A characterization result
When is a judgment aggregation rule non-manipulable? We now characterize the class of non-manipulable aggregation rules in terms of an independence condition and a monotonicity condition. Let Y ⊆ X.
Independence on Y. For any proposition p ∈ Y and profiles (A_1, …, A_n), (A_1*, …, A_n*) ∈ Domain(F), if [for all individuals i, p ∈ A_i if and only if p ∈ A_i*] then [p ∈ F(A_1, …, A_n) if and only if p ∈ F(A_1*, …, A_n*)].
Monotonicity on Y. For any proposition p ∈ Y, individual i and pair of i-variants (A_1, …, A_n), (A_1, …, A_i*, …, A_n) ∈ Domain(F) with p ∉ A_i and p ∈ A_i*, [p ∈ F(A_1, …, A_n) implies p ∈ F(A_1, …, A_i*, …, A_n)].
Weak Monotonicity on Y. For any proposition p ∈ Y, individual i and judgment sets A_1, …, A_i−1, A_i+1, …, A_n, if there exists a pair of i-variants (A_1, …, A_n), (A_1, …, A_i*, …, A_n) ∈ Domain(F) with p ∉ A_i and p ∈ A_i*, then for some such pair [p ∈ F(A_1, …, A_n) implies p ∈ F(A_1, …, A_i*, …, A_n)].
Informally, independence on Y states that the collective judgment on each proposition in Y depends only on individual judgments on that proposition and not on individual judgments on other propositions. Monotonicity (respectively, weak monotonicity) on Y states that an additional individual's support for some proposition in Y never (respectively, not always) reverses the collective acceptance of that proposition (other individuals' judgments remaining fixed).
Again, we have defined families of conditions. If we refer just to “independence” or “(weak) monotonicity”, without adding “on Y”, then we mean the default case Y=X.
THEOREM 1. Let X be any agenda. For each Y⊆X, if F satisfies universal domain, the following conditions are equivalent:
(i) F is non-manipulable on Y;
(ii) F is independent on Y and monotonic on Y;
(iii) F is independent on Y and weakly monotonic on Y.
Without a domain assumption (e.g., for a subdomain of the universal domain), (ii) and (iii) are equivalent, and each implies (i).
No assumption on the consistency or completeness of collective judgments is needed. The result can be seen as a preference-free analogue in judgment aggregation of a classic characterization of strategy-proof preference aggregation rules by Barberà et al. (1993).
In the case of a conjunctive (or disjunctive) agenda, conclusion-based voting is independent and monotonic, hence non-manipulable; premise-based voting is not independent, hence manipulable. But on the set of premises Y = {a_1, …, a_k}+neg, premise-based voting is independent and monotonic (on those premises it is simply equivalent to propositionwise majority voting), and hence it is non-manipulable on Y.
3.4 An impossibility result
Ideally, we want to achieve non-manipulability simpliciter and not just on some subset of the agenda. Conclusion-based voting is non-manipulable in this strong sense, but generates incomplete collective judgments. Are there any non-manipulable aggregation rules that generate consistent and complete collective judgments? We now show that, for a general class of agendas, including the agenda in the tenure example above, all non-manipulable aggregation rules satisfying some mild conditions are dictatorial.
To define this class of agendas, we define the notion of path-connectedness, a variant of the notion of total blockedness introduced by Nehring and Puppe (2002) (originally in the model of “property spaces”). Informally, an agenda of propositions is path-connected if any two propositions in the agenda are logically connected with each other, either directly or indirectly, via a sequence of (conditional) logical entailments.
Formally, proposition p conditionally entails proposition q if {p, ¬q} ∪ Y is inconsistent for some Y ⊆ X consistent with p and with ¬q. An agenda X is path-connected if, for all contingent propositions p, q ∈ X (a proposition is contingent if both it and its negation are consistent), there is a sequence p_1, p_2, …, p_k ∈ X (of length k ≥ 1) with p = p_1 and q = p_k such that p_1 conditionally entails p_2, p_2 conditionally entails p_3, …, p_k−1 conditionally entails p_k. The class of path-connected agendas includes conjunctive and disjunctive agendas (see the Appendix) and the preference agenda (Nehring 2003; Dietrich and List 2007a), which can be used to represent Condorcet–Arrow preference aggregation problems.
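Conditional entailment can also be checked by brute force for small agendas. The following sketch (our illustration; names and the Boolean encoding are assumptions) searches over all Y ⊆ X for the k = 2 conjunctive agenda:

```python
# Conditional entailment p ⊧* q: {p, ¬q} ∪ Y is inconsistent for some
# Y ⊆ X consistent with p and with ¬q. Propositions are (name, polarity)
# pairs over the atoms a, b, c; "r" is the rule c <-> (a ∧ b).
from itertools import combinations, product

ATOMS = ("a", "b", "c")

def evaluate(prop, env):
    name, positive = prop
    val = env["c"] == (env["a"] and env["b"]) if name == "r" else env[name]
    return val if positive else not val

def consistent(props):
    return any(all(evaluate(p, dict(zip(ATOMS, vals))) for p in props)
               for vals in product([False, True], repeat=len(ATOMS)))

AGENDA = [(n, pol) for n in ("a", "b", "c", "r") for pol in (True, False)]

def conditionally_entails(p, q):
    not_q = (q[0], not q[1])
    for size in range(len(AGENDA) + 1):
        for Y in combinations(AGENDA, size):
            Y = list(Y)
            if (consistent(Y + [p]) and consistent(Y + [not_q])
                    and not consistent(Y + [p, not_q])):
                return True
    return False

# Premise a conditionally entails conclusion c (e.g., take Y = {r, b}):
print(conditionally_entails(("a", True), ("c", True)))  # True
```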
Consider the following conditions on an aggregation rule in addition to universal domain.
Collective Rationality. For any profile (A_1, …, A_n) ∈ Domain(F), F(A_1, …, A_n) is consistent and complete.
Responsiveness. For any contingent proposition p ∈ X, there exist two profiles (A_1, …, A_n), (A_1*, …, A_n*) ∈ Domain(F) such that p ∈ F(A_1, …, A_n) and p ∉ F(A_1*, …, A_n*).
THEOREM 2. For a path-connected agenda X (e.g., a conjunctive, disjunctive or preference agenda), an aggregation rule F satisfies universal domain, collective rationality, responsiveness and non-manipulability if and only if F is a dictatorship of some individual.
For the important case of compact logical languages, this result also follows from Theorem 1 above and Nehring and Puppe's (2002) characterization of monotonic and independent aggregation rules for totally blocked agendas. Theorem 2 is the judgment aggregation analogue of the Gibbard–Satterthwaite theorem on preference aggregation, which shows that dictatorships are the only strategy-proof social choice functions that satisfy universal domain, have three or more options in their range and always produce a determinate winner (Gibbard 1973; Satterthwaite 1975). Below we restate Theorem 2 using a game-theoretic strategy-proofness condition.
In the special case of the preference agenda, however, there is an interesting disanalogy between Theorem 2 and the Gibbard–Satterthwaite theorem. As a collectively rational judgment aggregation rule for the preference agenda represents an Arrowian social welfare function, Theorem 2 establishes an impossibility result on the non-manipulability of social welfare functions (generating orderings as in Arrow's framework) as opposed to social choice functions (generating winning options as in the Gibbard–Satterthwaite framework); for a related result, see Bossert and Storcken (1992).
If the agenda is not path-connected, then there may exist non-dictatorial aggregation rules satisfying all of Theorem 2's conditions; examples of such agendas are not only trivial agendas (containing a single proposition-negation pair or several logically independent such pairs), but also agendas involving conditionals, including the simple example X = {a, b, a→b}+neg (Dietrich forthcoming).
By contrast, for atomically closed or atomic agendas, special cases of path-connected agendas with very rich logical connections, an even stronger impossibility result holds, in which Theorem 2's responsiveness condition is significantly weakened.
Weak Responsiveness. The aggregation rule is non-constant. Equivalently, there exist two profiles (A_1, …, A_n), (A_1*, …, A_n*) ∈ Domain(F) such that F(A_1, …, A_n) ≠ F(A_1*, …, A_n*).
THEOREM 3. For an atomically closed or atomic agenda X, an aggregation rule F satisfies universal domain, collective rationality, weak responsiveness and non-manipulability if and only if F is a dictatorship of some individual.
Given Theorem 1 above, this result follows immediately from theorems by Pauly and van Hees (2006) (for atomically closed agendas) and Dietrich (2006) (for atomic ones).
3.5 Avoiding the impossibility result
To find non-manipulable and non-dictatorial aggregation rules, we must relax at least one condition in Theorem 2 or 3. Non-responsive rules are usually unattractive. Permitting inconsistent collective judgments also seems unattractive. But the following may sometimes be defensible.
Incompleteness. For a conjunctive or disjunctive agenda, conclusion-based voting is non-manipulable. It generates incomplete collective judgments and is only weakly responsive; this may be acceptable when no collective judgments on the premises are required. More generally, propositionwise supermajority rules, which require a supermajority of a particular size (or even unanimity) for the acceptance of a proposition, are consistent and non-manipulable (by Theorem 1), again at the expense of violating completeness, as neither member of a pair {p, ¬p} ⊆ X might obtain the required supermajority. For a finite agenda (or compact logical languages), a supermajority rule requiring at least m votes for the acceptance of any proposition guarantees collective consistency if and only if m > n − n/z, where z is the size of the largest minimal inconsistent set Z ⊆ X (Dietrich and List 2007b; List 2004).
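As a worked illustration (ours, not the paper's), the following snippet computes the smallest safe threshold from this bound. In the tenure agenda the largest minimal inconsistent set is {a, b, c↔(a∧b), ¬c}, so z = 4:

```python
# Smallest integer threshold m with m > n - n/z (assumed helper name).
import math

def min_threshold(n, z):
    return math.floor(n - n / z) + 1

print(min_threshold(n=3, z=4))    # 3: for the tenure committee, only
                                  # unanimity guarantees consistency
print(min_threshold(n=100, z=4))  # 76
```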
Domain restriction. By suitably restricting the domain of propositionwise majority voting, this rule becomes consistent; it is also non-manipulable as it is independent and monotonic. This result holds, for example, for the domain of all profiles of consistent and complete individual judgment sets satisfying the structure condition of unidimensional alignment (List 2003). Informally, unidimensional alignment requires that the individuals can be aligned from left to right (under any interpretation of “left” and “right”) such that, for each proposition on the agenda, the individuals accepting the proposition are either exclusively to the left, or exclusively to the right, of those rejecting it. This structure condition captures a shared unidimensional conceptualization of the decision problem by the decision-makers. In debates on deliberative democracy, it is sometimes hypothesized that group deliberation may reduce disagreement so as to bring about such a shared unidimensional conceptualization (Miller 1992; Dryzek and List 2003), sometimes also described as a “meta-consensus” (List 2002a).
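Unidimensional alignment is likewise easy to test for small groups. A brute-force sketch (our illustration; encoding assumed) searches over orderings of the individuals:

```python
# Check unidimensional alignment: for some left-right ordering, each
# proposition's supporters form a contiguous block at one end.
from itertools import permutations

def one_sided(votes):
    """True iff the True votes are a prefix or a suffix of the list."""
    k = sum(votes)
    return votes[:k] == [True] * k or votes[len(votes) - k:] == [True] * k

def unidimensionally_aligned(profile, props):
    for order in permutations(range(len(profile))):
        if all(one_sided([profile[i][p] for i in order]) for p in props):
            return True
    return False

profile = [
    {"a": True,  "b": True,  "c": True},
    {"a": True,  "b": False, "c": False},
    {"a": False, "b": False, "c": False},
]
print(unidimensionally_aligned(profile, ("a", "b", "c")))  # True
# On this profile, propositionwise majority voting returns the median
# individual's judgment set (individual 2's), which is consistent.
```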
4. STRATEGY-PROOFNESS
Non-manipulability is not yet a game-theoretic concept. We now define strategy-proofness, a game-theoretic concept that depends on individual preferences (over judgment sets held by the group). We identify assumptions on individual preferences that render strategy-proofness equivalent to non-manipulability and discuss the plausibility of these assumptions.
4.1 Preference relations over judgment sets
We interpret a judgment aggregation problem as a game with n players (the individuals). The game form is given by the aggregation rule: each individual's possible actions are the different judgment sets the individual can submit to the aggregation rule (which may or may not coincide with the individual's true judgment set); the outcomes are the collective judgment sets generated by the aggregation rule.
To specify the game fully, we assume that each individual, in addition to holding a true judgment set A_i, also has a preference relation ≿_i over all possible outcomes of the game, i.e., over all possible collective judgment sets of the form A ⊆ X. For any two judgment sets A, B ⊆ X, A ≿_i B means that individual i weakly prefers the group to endorse A as the collective judgment set rather than B. We assume that ≿_i is reflexive and transitive, but do not require it to be complete. Individuals need not be able to rank all pairs of judgment sets relative to each other; in principle, our model allows studying a further relaxation of these conditions.
What preferences over collective judgment sets can we expect an individual i to hold when i's judgment set is A_i? The answer is not straightforward, and it may even be difficult to say anything about i's preferences on the basis of A_i alone. To illustrate this, consider first a single proposition p, say, “CO2 emissions lead to global warming”. If individual i judges that p (i.e., p ∈ A_i), it does not necessarily follow that i wants the group to judge that p. Just imagine that i owns an oil company which benefits from low taxes on CO2 emissions, and that taxes are increased if and only if the group judges that p. In general, accepting p and wanting the group to accept p are conceptually distinct (though the literature is often unclear about this distinction). Whether acceptance and desire of group acceptance happen to coincide in a particular case is an empirical question. There are important situations in which the two may indeed be reasonably expected to coincide. An important example is that of epistemically motivated individuals: here each individual prefers group judgments that she considers closer to the truth, where she may consider her own judgments as the truth. A non-epistemically motivated individual prefers judgment sets for reasons other than the truth, for example because she personally benefits from group actions resulting from the collective endorsement of some judgment sets rather than others.
We now give examples of possible assumptions (empirical claims) on how the individuals' preferences are related to their judgment sets. Which of these assumptions is correct depends on the group of individuals and the aggregation problem in question. Different assumptions capture different motivations of the individuals, as illustrated above. Specifically, the assumption of “unrestricted” preferences captures the case where an individual's preferences are not in any systematic way linked to her judgments; the assumption of “top-respecting” preferences and the stronger one of “closeness-respecting” preferences capture situations in which agents would like group judgments to agree with their own judgments. We use a function C that assigns to each possible judgment set A_i a non-empty set C(A_i) of (reflexive and transitive) preference relations that are considered “compatible” with A_i (i.e., possible given A_i). Our examples of preference assumptions can be stated formally as follows (in increasing order of strength).
Unrestricted preferences. For each A_i, C(A_i) is the set of all preference relations ≿ (regardless of A_i).
Top-respecting preferences. For each A_i, C(A_i) is the set of all preference relations ≿ for which A_i is a most preferred judgment set, i.e., C(A_i) = {≿ : A_i ≿ B for all judgment sets B}.
To define “closeness-respecting” preferences, we say that a judgment set B is at least as close to A_i on some Y ⊆ X as another judgment set B* if, for all propositions p ∈ Y, if B* agrees with A_i on p, then B also agrees with A_i on p. For example, {¬a, b, c↔(a∧b), ¬c} is at least as close to {a, b, c↔(a∧b), c} on X as {¬a, ¬b, c↔(a∧b), ¬c}, whereas {¬a, b, c↔(a∧b), ¬c} and {a, ¬b, c↔(a∧b), ¬c} are unranked in terms of relative closeness to {a, b, c↔(a∧b), c} on X. We say that a preference relation ≿ respects closeness to A_i on Y if, for any two judgment sets B and B*, if B is at least as close to A_i as B* on Y, then B ≿ B*.
Closeness-respecting preferences on Y (for some Y ⊆ X). For each A_i, C(A_i) is the set of all preference relations ≿ that respect closeness to A_i on Y, and we write C = C_Y.
In the important case Y = X, we drop the reference “on Y” and speak of closeness-respecting preferences simpliciter. One element of C_X(A_i) is the (complete) preference relation induced by the Hamming distance to A_i, i.e., by the number of propositions in X on which a judgment set disagrees with A_i. Below we analyse the important cases of “reason-oriented” and “outcome-oriented” preferences, where Y is given by particular subsets of X. Generally, if Y_1 ⊆ Y_2, then, for all A_i, C_Y1(A_i) ⊆ C_Y2(A_i).
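The closeness comparison from the example above can be spelled out in code (our sketch; the dict encoding and names are assumptions):

```python
# Hamming distance between judgment sets encoded as dicts; smaller
# distance to the individual's own set A_i means weakly preferred under
# the Hamming-induced element of C_X(A_i).
def hamming(A, B):
    """Number of propositions on which A and B disagree."""
    return sum(A[p] != B[p] for p in A)

A_i    = {"a": True,  "b": True,  "c": True}   # own judgments (r omitted:
B      = {"a": False, "b": True,  "c": False}  # all sets here accept r)
B_star = {"a": False, "b": False, "c": False}

print(hamming(A_i, B), hamming(A_i, B_star))  # 2 3 -- B is closer
```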
4.2 A strategy-proofness condition
Given a specification of the function C, an aggregation rule is strategy-proof for C if, for any profile, any individual and any preference relation compatible with the individual's judgment set (according to C), the individual (weakly) prefers the outcome of expressing her judgment set truthfully to any outcome that would result from misrepresenting her judgment set.
Strategy-proofness for C. For any individual i, profile (A_1, …, A_n) ∈ Domain(F) and preference relation ≿_i ∈ C(A_i), F(A_1, …, A_n) ≿_i F(A_1, …, A_i*, …, A_n) for every i-variant (A_1, …, A_i*, …, A_n) ∈ Domain(F).
If the aggregation rule F satisfies universal domain, then strategy-proofness implies that truthfulness is a weakly dominant strategy for every individual. Our definition of strategy-proofness (generalizing List 2002b, 2004) is similar to Gibbard's (1973) and Satterthwaite's (1975) classical one and related to other definitions of strategy-proofness in the literature on preference aggregation (particularly, for C_X, those by Barberà et al. (1993, 1997) and Nehring and Puppe (2002), employing the notion of generalized single-peaked preferences).
As in the case of non-manipulability above, we have defined a family of strategy-proofness conditions, one for each specification of C. This means that different motivational assumptions about the individuals lead to different strategy-proofness conditions. If individuals have very restrictive preferences over possible judgment sets, then strategy-proofness is easier to achieve than if their preferences are largely unrestricted. Formally, if two functions C_1 and C_2 are such that C_1 ⊆ C_2 (i.e., for each A_i, C_1(A_i) ⊆ C_2(A_i)), then strategy-proofness for C_1 is less demanding than (i.e., implied by) strategy-proofness for C_2. The more preference relations are compatible with each individual judgment set, the more demanding is the corresponding requirement of strategy-proofness.
4.3 The equivalence of strategy-proofness and non-manipulability
What is the logical relation between non-manipulability as defined above and strategy-proofness? We show that, if preferences are closeness-respecting (on some Y ⊆ X), then an equivalence between these two concepts arises. Let X be any agenda.
THEOREM 4. For each Y ⊆ X, F is strategy-proof for C_Y if and only if F is non-manipulable on Y.
In other words, for any subset Y of the agenda X (including the case Y = X), strategy-proofness of an aggregation rule for closeness-respecting preferences on Y is equivalent to non-manipulability on the propositions in Y. In particular, strategy-proofness for closeness-respecting preferences simpliciter is equivalent to non-manipulability simpliciter. This also implies that, for unrestricted or top-respecting preferences, strategy-proofness is more demanding than our default condition of non-manipulability, whereas, for closeness-respecting preferences on some Y ⊆ X, it is less demanding.
Given the equivalence result of Theorem 4, we can now state corollaries of Theorems 1 and 2 above for strategy-proofness:
COROLLARY 1. For each Y ⊆ X, if F satisfies universal domain, the following conditions are equivalent:
(i) F is strategy-proof for C_Y;
(ii) F is independent on Y and monotonic on Y;
(iii) F is independent on Y and weakly monotonic on Y.
Without a domain assumption (e.g., for a subdomain of the universal domain), (ii) and (iii) are equivalent, and each implies (i).
COROLLARY 2. For a path-connected agenda X (e.g., a conjunctive, disjunctive or preference agenda), an aggregation rule F satisfies universal domain, collective rationality, responsiveness and strategy-proofness for C_X if and only if F is a dictatorship of some individual.
Corollary 2 is a judgment aggregation analogue of Nehring and Puppe's (2002) characterization of strategy-proof social choice functions in the model of “property spaces”. The negative part of Corollary 2 (i.e., if an aggregation rule satisfies the conditions, then it is a dictatorship) holds not only for closeness-respecting preferences (C_X) but for any preference specification C at least as broad as C_X, i.e., C_X ⊆ C, as strategy-proofness for C then implies strategy-proofness for C_X. The positive part of Corollary 2 (i.e., if an aggregation rule is a dictatorship, then it satisfies the conditions) holds for any preference specification C allowing only top-respecting preferences, i.e., for any C such that, if ≿ ∈ C(A_i), then A_i ≿ B for all judgment sets B; otherwise a dictatorship, although non-manipulable, is not strategy-proof (to see this point, recall the example of the oil company in Section 4.1).
In summary, if the individuals' preferences over judgment sets are unrestricted, top-respecting or closeness-respecting, we obtain a negative result. Moreover, in analogy with Theorem 3 above, for atomically closed or atomic agendas, we get an impossibility result even if we weaken responsiveness to the requirement of a non-constant aggregation rule.
5. OUTCOME- AND REASON-ORIENTED PREFERENCES
As we have introduced families of strategy-proofness and non-manipulability conditions, it is interesting to consider some less demanding conditions within these families. Demanding strategy-proofness for C = C_X, which is equivalent to non-manipulability simpliciter, precludes all incentives for manipulation whenever individuals have closeness-respecting preferences. But individual preferences may sometimes fall into a more restricted set: they may be closeness-respecting on some subset Y ⊆ X, in which case it is sufficient to require strategy-proofness for C_Y. As an illustration, we now apply these ideas to the case of a conjunctive (analogously, disjunctive) agenda.
5.1 Definition
Let X be a conjunctive (or disjunctive) agenda. Two important cases of closeness-respecting preferences on Y are the following.
Outcome-oriented preferences. C = C_Youtcome, where Youtcome = {c}+neg.
Reason-oriented preferences. C = C_Yreason, where Yreason = {a_1, …, a_k}+neg.
An individual with outcome-oriented preferences cares only about achieving a collective judgment on the conclusion that matches her own judgment, regardless of the premises. Such preferences make sense if only the conclusion but not the premises have consequences the individual cares about. An individual with reason-oriented preferences cares only about achieving collective judgments on the premises that match her own judgments, regardless of the conclusion. Such preferences make sense if the individual gives primary importance to the reasons given in support of outcomes, rather than the outcomes themselves, or if the group's judgments on the premises have important consequences themselves that the individual cares about (such as setting precedents for future decisions). Proponents of a deliberative conception of democracy often argue that the motivational assumption of reason-oriented preferences is appropriate in deliberative settings (for a discussion, see Elster 1986; Goodin 1986). Economists, by contrast, assume that in many settings outcome-oriented preferences are the more accurate motivational assumption. Ultimately, it is an empirical question what preferences are triggered by various settings.
To illustrate, consider premise-based voting and the profile in Table 1. Individual 3's judgment set is A_3 = {¬a, b, ¬c, r}, where r = c↔(a∧b). If all individuals are truthful, the collective judgment set is A = {a, b, c, r}. If individual 3 untruthfully submits A_3* = {¬a, ¬b, ¬c, r} and individuals 1 and 2 are truthful, the collective judgment set is A* = {a, ¬b, ¬c, r}. Now A* is closer to A_3 than A on Youtcome = {c}+neg, whereas A is closer to A_3 than A* on Yreason = {a, b}+neg. So, under outcome-oriented preferences, individual 3 (at least weakly) prefers A* to A, whereas, under reason-oriented preferences, individual 3 (at least weakly) prefers A to A*.
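The two closeness comparisons can be checked directly (our sketch, with the same dict encoding as before; the connection rule r, accepted in all sets here, is omitted):

```python
# Agreement of individual 3's judgments with each outcome, restricted to
# Youtcome = {c} and Yreason = {a, b}.
def agrees_on(A, B, Y):
    return [p for p in Y if A[p] == B[p]]

A3     = {"a": False, "b": True,  "c": False}  # individual 3's true judgments
A      = {"a": True,  "b": True,  "c": True}   # outcome if all are truthful
A_star = {"a": True,  "b": False, "c": False}  # outcome if 3 submits A_3*

print(agrees_on(A3, A, ["c"]), agrees_on(A3, A_star, ["c"]))            # [] ['c']
print(agrees_on(A3, A, ["a", "b"]), agrees_on(A3, A_star, ["a", "b"]))  # ['b'] []
```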
5.2 The strategy-proofness of premise-based voting for reason-oriented preferences
As shown above, conclusion-based voting is strategy-proof for C_X and hence also for C_Yreason and C_Youtcome. Premise-based voting is strategy-proof neither for C_X nor for C_Youtcome, as can easily be seen from our first example of manipulation. But the following holds.
PROPOSITION 1. For a conjunctive or disjunctive agenda X, premise-based voting is strategy-proof for C_Yreason.
This result is interesting from a deliberative democracy perspective. If individuals have reason-oriented preferences in deliberative settings, as sometimes argued by proponents of a deliberative conception of democracy, then premise-based voting is strategy-proof in such settings. But if individuals have outcome-oriented preferences, then the aggregation rule advocated by deliberative democrats is vulnerable to strategic manipulation, posing a challenge to the deliberative democrats' view that truthfulness can easily be achieved under their preferred aggregation rule.
5.3 The strategic equivalence of premise- and conclusion-based voting for outcome-oriented preferences
Surprisingly, if individuals have outcome-oriented preferences, then premise- and conclusion-based voting are strategically equivalent in the following sense. For any profile, there exists, for each of the two rules, a (weakly) dominant-strategy equilibrium leading to the same collective judgment on the conclusion. To state this result formally, some definitions are needed.
Under an aggregation rule F, for individual i with preference relation ≿_i, submitting the judgment set B_i (which may or may not coincide with individual i's true judgment set A_i) is a weakly dominant strategy if, for every profile (B_1, …, B_i, …, B_n) ∈ Domain(F), F(B_1, …, B_i, …, B_n) ≿_i F(B_1, …, B_i*, …, B_n) for every i-variant (B_1, …, B_i*, …, B_n) ∈ Domain(F).
Two aggregation rules F and G with identical domain are strategically equivalent on Y ⊆ X for C if, for every profile (A_1, …, A_n) ∈ Domain(F) = Domain(G) and preference relations ≿_1 ∈ C(A_1), …, ≿_n ∈ C(A_n), there exist profiles (B_1, …, B_n), (C_1, …, C_n) ∈ Domain(F) = Domain(G) such that
(i) for each individual i, submitting B_i is a weakly dominant strategy under rule F and submitting C_i is a weakly dominant strategy under rule G;
(ii) F(B_1, …, B_n) and G(C_1, …, C_n) agree on every proposition p ∈ Y.
THEOREM 5. For a conjunctive or disjunctive agenda X, premise- and conclusion-based voting are strategically equivalent on Youtcome = {c}+neg for C_Youtcome.
Despite the differences between premise- and conclusion-based voting, if individuals have outcome-oriented preferences and act on appropriate weakly dominant strategies, the two rules generate identical collective judgments on the conclusion. This is surprising as premise- and conclusion-based voting are regarded in the literature as two diametrically opposed aggregation rules.
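A sketch of the equilibrium comparison (our illustration; the strategy profile shown is one natural weakly dominant choice for outcome-oriented voters, and all names are assumed): an individual who accepts c submits {a, b, c}, one who rejects c submits {¬a, ¬b, ¬c}; premise-based voting on these reports then agrees with conclusion-based voting on truthful reports:

```python
def maj(profile, p):
    return sum(j[p] for j in profile) > len(profile) / 2

def premise_based_on_c(profile):
    return maj(profile, "a") and maj(profile, "b")  # c via c <-> (a ∧ b)

truthful = [
    {"a": True,  "b": True,  "c": True},
    {"a": True,  "b": False, "c": False},
    {"a": False, "b": True,  "c": False},
]
# Outcome-oriented equilibrium reports under premise-based voting:
strategic = [{"a": j["c"], "b": j["c"], "c": j["c"]} for j in truthful]

print(premise_based_on_c(strategic))  # False
print(maj(truthful, "c"))             # False -- same judgment on c
```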
6. CONCLUDING REMARKS
As judgment aggregation problems arise in many real-world decision-making bodies, it is important to understand which judgment aggregation rules are vulnerable to manipulation and which not. We have introduced a non-manipulability condition for judgment aggregation and characterized the class of non-manipulable judgment aggregation rules. Non-manipulability rules out the existence of opportunities for manipulation by the untruthful expression of individual judgments. We have then defined a game-theoretic strategy-proofness condition and shown that, under some (but not all) motivational assumptions, it is equivalent to non-manipulability, as defined earlier. For these motivational assumptions, our characterization of non-manipulable aggregation rules has allowed us to characterize all strategy-proof aggregation rules. Strategy-proofness rules out the existence of incentives for manipulation. Crucially, if individuals do not generally want the group to make collective judgments that match their own individual judgments, the concepts of non-manipulability and strategy-proofness may come significantly apart.
We have also proved an impossibility result that is the judgment aggregation analogue of the classical Gibbard–Satterthwaite theorem on preference aggregation. For the class of path-connected agendas, including conjunctive, disjunctive and preference agendas, all non-manipulable aggregation rules satisfying some mild conditions are dictatorial. The impossibility result becomes even stronger for agendas with particularly rich logical connections between propositions.
To avoid this impossibility, we have suggested that permitting incomplete collective judgments or domain restrictions are the most promising routes. For example, conclusion-based voting is strategy-proof, but violates completeness. Another way to avoid the impossibility is to relax non-manipulability or strategy-proofness itself. Both conditions fall into more general families of conditions of different strength. Instead of requiring non-manipulability on the entire agenda of propositions, we may require non-manipulability only on some subset of the agenda. Premise-based voting, for example, is non-manipulable on the set of premises, but not non-manipulable simpliciter. Whether such a weaker non-manipulability condition is sufficient in practice depends on how worried we are about possible opportunities for manipulation on propositions outside the subset of the agenda for which non-manipulability holds. Likewise, instead of requiring strategy-proofness for a large class of individual preferences over judgment sets, we may require strategy-proofness only for a restricted class of preferences, for example for “outcome-” or “reason-oriented” preferences. Premise-based voting, for example, is strategy-proof for “reason-oriented” preferences. Whether such a weaker strategy-proofness condition is sufficient in practice depends on the motivations of the decision-makers.
Finally, we have shown that, for “outcome-oriented” preferences, premise- and conclusion-based voting are strategically equivalent. They generate the same collective judgment on the conclusion if individuals act on appropriate weakly dominant strategies.
Our results raise questions about a prominent position in the literature, according to which premise-based voting is superior to conclusion-based voting from a deliberative democracy perspective. We have shown that, with respect to non-manipulability and strategy-proofness, conclusion-based voting outperforms premise-based voting. This result could be generalized beyond conjunctive and disjunctive agendas.
Until now, comparisons between judgment aggregation and preference aggregation have focused mainly on Condorcet's paradox and Arrow's theorem. With this paper, we hope to inspire further research on strategic voting and a game-theoretic perspective in a judgment aggregation context. An important challenge is the development of models of deliberation on interconnected propositions–where individuals not only “feed” their judgments into some aggregation rule, but where they deliberate about the propositions prior to making collective judgments–and the study of the strategic aspects of such deliberation. We leave this challenge for further work.
A. APPENDIX
Proof of Theorem 1. Let Y⊆X. We prove first that (ii) and (iii) are equivalent, then that (ii) implies (i), and then that, given universal domain, (i) implies (ii).
(ii) implies (iii). Trivial as monotonicity on Y implies weak monotonicity on Y.
(iii) implies (ii). Suppose F is independent on Y and weakly monotonic on Y.
To show monotonicity on Y, note that in the requirement defining weak monotonicity on Y one may, by independence on Y, replace “for some such pair” by “for all such pairs”. The modified requirement is equivalent to monotonicity on Y.
(ii) implies (i). Suppose F is independent on Y and monotonic on Y. To show non-manipulability on Y, consider any proposition p ∈ Y, individual i, and profile (A_1, …, A_n) ∈ Domain(F) such that F(A_1, …, A_n) disagrees with A_i on p. Take any i-variant (A_1, …, A_i*, …, A_n) ∈ Domain(F). We have to show that F(A_1, …, A_i*, …, A_n) still disagrees with A_i on p. Assume first that A_i and A_i* agree on p. Then in both profiles (A_1, …, A_n) and (A_1, …, A_i*, …, A_n) exactly the same individuals accept p. Hence, by independence on Y, F(A_1, …, A_i*, …, A_n) agrees with F(A_1, …, A_n) on p, hence disagrees with A_i on p. Now assume A_i* disagrees with A_i on p, i.e., agrees with F(A_1, …, A_n) on p. Then, by monotonicity on Y, F(A_1, …, A_i*, …, A_n) agrees with F(A_1, …, A_n) on p, i.e., disagrees with A_i on p.
(i) implies (ii). Now assume universal domain, and let F be non-manipulable on Y. To show monotonicity on Y, consider any proposition p ∈ Y, individual i, and pair of i-variants (A₁, …, Aₙ), (A₁, …, Aᵢ*, …, Aₙ) ∈ Domain(F) with p ∉ Aᵢ and p ∈ Aᵢ*. If p ∈ F(A₁, …, Aₙ), then Aᵢ disagrees on p with F(A₁, …, Aₙ), hence also with F(A₁, …, Aᵢ*, …, Aₙ) by non-manipulability on Y. So p ∈ F(A₁, …, Aᵢ*, …, Aₙ). To show independence on Y, consider any proposition p ∈ Y and profiles (A₁, …, Aₙ), (A₁*, …, Aₙ*) ∈ Domain(F) such that, for all individuals i, Aᵢ and Aᵢ* agree on p. We have to show that F(A₁, …, Aₙ) and F(A₁*, …, Aₙ*) agree on p. Starting with the profile (A₁, …, Aₙ), we replace first A₁ by A₁*, then A₂ by A₂*, …, then Aₙ by Aₙ*. By universal domain, each replacement leads to a profile still in Domain(F). We now show that each replacement preserves the collective judgment about p. Assume for contradiction that, for individual i, the replacement of Aᵢ by Aᵢ* changes the collective judgment about p. Since Aᵢ and Aᵢ* agree on p but the respective outcomes for Aᵢ and for Aᵢ* disagree on p, either Aᵢ or Aᵢ* (but not both) disagrees with the respective outcome. This is a contradiction, since it allows individual i to manipulate: in the first case by submitting Aᵢ* with genuine judgment set Aᵢ, in the second case by submitting Aᵢ with genuine judgment set Aᵢ*. Since no replacement has changed the collective judgment about p, it follows that F(A₁, …, Aₙ) and F(A₁*, …, Aₙ*) agree on p, which proves independence on Y.
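As a sanity check on Theorem 1, the following sketch (ours, purely illustrative) verifies by brute force, for n = 3 and a two-premise conjunctive agenda, that propositionwise majority voting, being independent and monotonic, is non-manipulable on the whole agenda. Note that the theorem does not require collectively rational outcomes, so the inconsistent majority outcomes of the discursive paradox are no obstacle. Judgment sets are triples of accept/reject values on a₁, a₂, c; the judgment on the connection rule is left implicit.

from itertools import product

PROPS = range(3)  # indices of a1, a2, c; the judgment on r is left implicit
ALL_SETS = list(product((True, False), repeat=3))  # 8 judgment sets, each
# consistent once the judgment on r is chosen to match

def majority(profile):
    n = len(profile)
    return tuple(sum(j[p] for j in profile) > n / 2 for p in PROPS)

def non_manipulable(rule, n=3):
    for profile in product(ALL_SETS, repeat=n):
        out = rule(profile)
        for i, dev in product(range(n), ALL_SETS):
            new_out = rule(profile[:i] + (dev,) + profile[i + 1:])
            for p in PROPS:
                # if i truly disagreed with the outcome on p,
                # a misreport must never produce agreement
                if out[p] != profile[i][p] and new_out[p] == profile[i][p]:
                    return False
    return True

def independent_and_monotonic(rule, n=3):
    for prof1, prof2 in product(product(ALL_SETS, repeat=n), repeat=2):
        o1, o2 = rule(prof1), rule(prof2)
        for p in PROPS:
            acc1 = {i for i in range(n) if prof1[i][p]}
            acc2 = {i for i in range(n) if prof2[i][p]}
            if acc1 == acc2 and o1[p] != o2[p]:
                return False  # violates independence
            # coalition form of monotonicity (equivalent given independence)
            if acc1 <= acc2 and o1[p] and not o2[p]:
                return False
    return True

print(non_manipulable(majority) and independent_and_monotonic(majority))  # True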
For any propositions p, q ∈ X, we write p ⊧* q to mean that p conditionally entails q, i.e., that there exists a subset Y ⊆ X such that {p} ∪ Y and {¬q} ∪ Y are each consistent but {p, ¬q} ∪ Y is inconsistent.
Proof that conjunctive and disjunctive agendas are path-connected. Let X be the conjunctive agenda X = {a₁, ¬a₁, …, aₖ, ¬aₖ, c, ¬c, r, ¬r}, where k ≥ 1 and r is the connection rule c ↔ (a₁∧⋅⋅⋅∧aₖ). (The proof for a disjunctive agenda is analogous.) We have to show that, for any p, q ∈ X, there is a sequence p = p₁, p₂, …, pₘ = q in X (m ≥ 1) such that p₁ ⊧* p₂, p₂ ⊧* p₃, …, pₘ₋₁ ⊧* pₘ. To show this, it is sufficient to prove that
(1) p ⊧* q for any p, q ∈ X of different types,
where a proposition is of type 1 if it is a possibly negated premise (a₁, ¬a₁, …, aₖ, ¬aₖ), of type 2 if it is the possibly negated conclusion (c, ¬c) and of type 3 if it is the possibly negated connection rule (r, ¬r). The reason is (in short) that, if (1) holds, then, for any p, q ∈ X of the same type, taking any s ∈ X of a different type, there is by (1) a path connecting p to s and a path connecting s to q; the concatenation of both paths connects p to q, as desired. As p ⊧* q if and only if ¬q ⊧* ¬p (use both times the same Y), and as negation preserves types, claim (1) is equivalent to
(2) p ⊧* q for any p, q ∈ X such that the type of p is smaller than the type of q.
We show (2) by going through the different cases (where j ∈ {1, …, k}):
From type 2 to type 3: we have c ⊧* r and ¬c ⊧* ¬r (take Y = {a₁, …, aₖ} both times), and c ⊧* ¬r and ¬c ⊧* r (take Y = {¬a₁} both times);
From type 1 to type 2: we have aⱼ ⊧* c and ¬aⱼ ⊧* ¬c (take Y = {r, a₁, …, aⱼ₋₁, aⱼ₊₁, …, aₖ} both times), and aⱼ ⊧* ¬c and ¬aⱼ ⊧* c (take Y = {¬r, a₁, …, aⱼ₋₁, aⱼ₊₁, …, aₖ} both times);
From type 1 to type 3: we have aⱼ ⊧* r and ¬aⱼ ⊧* ¬r (take Y = {c, a₁, …, aⱼ₋₁, aⱼ₊₁, …, aₖ} both times), and aⱼ ⊧* ¬r and ¬aⱼ ⊧* r (take Y = {¬c, a₁, …, aⱼ₋₁, aⱼ₊₁, …, aₖ} both times).
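The witnessing sets above can be checked mechanically. The sketch below (ours, illustrative) instantiates the conjunctive agenda with k = 2 and j = 1 and verifies each claimed conditional entailment by enumerating truth assignments; propositions are encoded as strings, with a leading '-' for negation.

from itertools import product

def holds(p, val):
    return not val[p[1:]] if p.startswith('-') else val[p]

def consistent(props):
    # a set of propositions is consistent iff some truth assignment to
    # a1, a2, c satisfies all of them, with r evaluated as c <-> (a1 and a2)
    for a1, a2, c in product((True, False), repeat=3):
        val = {'a1': a1, 'a2': a2, 'c': c, 'r': c == (a1 and a2)}
        if all(holds(p, val) for p in props):
            return True
    return False

def neg(p):
    return p[1:] if p.startswith('-') else '-' + p

def cond_entails(p, q, Y):
    # p |=* q witnessed by Y: {p} ∪ Y and {¬q} ∪ Y are consistent,
    # while {p, ¬q} ∪ Y is inconsistent
    return (consistent([p] + Y) and consistent([neg(q)] + Y)
            and not consistent([p, neg(q)] + Y))

# the witnesses from the proof, instantiated for k = 2 and j = 1:
assert cond_entails('c', 'r', ['a1', 'a2']) and cond_entails('-c', '-r', ['a1', 'a2'])
assert cond_entails('c', '-r', ['-a1']) and cond_entails('-c', 'r', ['-a1'])
assert cond_entails('a1', 'c', ['r', 'a2']) and cond_entails('-a1', '-c', ['r', 'a2'])
assert cond_entails('a1', '-c', ['-r', 'a2']) and cond_entails('-a1', 'c', ['-r', 'a2'])
assert cond_entails('a1', 'r', ['c', 'a2']) and cond_entails('-a1', '-r', ['c', 'a2'])
assert cond_entails('a1', '-r', ['-c', 'a2']) and cond_entails('-a1', 'r', ['-c', 'a2'])
print("all twelve witnessed conditional entailments hold")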
Proof of Theorem 2. Let X be path-connected. If F is dictatorial, it obviously satisfies universal domain, collective rationality, responsiveness and non-manipulability. Now suppose F has all these properties; then F is also independent and monotonic by Theorem 1. We show that F is dictatorial. If X contains no contingent proposition, F is trivially dictatorial (where each individual is a dictator). From now on, suppose X is not of this degenerate type. For any consistent set Z ⊆ X, let A_Z be some consistent and complete judgment set such that Z ⊆ A_Z (which exists by L1–L3).
Claim 1. F satisfies the unanimity principle: for any p ∈ X and any (A₁, …, Aₙ) ∈ Domain(F), if p ∈ Aᵢ for each i, then p ∈ F(A₁, …, Aₙ).
Consider any p ∈ X and (A₁, …, Aₙ) ∈ Domain(F) such that p ∈ Aᵢ for every i. Since the sets Aᵢ are consistent, p is consistent. If ¬p is inconsistent (i.e., p is a tautology), p ∈ F(A₁, …, Aₙ) by collective rationality. Now suppose ¬p is consistent. As each of p, ¬p is consistent, p is contingent. So, by responsiveness, there exists a profile (B₁, …, Bₙ) ∈ Domain(F) such that p ∈ F(B₁, …, Bₙ). In (B₁, …, Bₙ) we now replace one by one each judgment set Bᵢ by Aᵢ, until we obtain the profile (A₁, …, Aₙ). Each replacement preserves the collective acceptance of p, either by monotonicity (if p ∉ Bᵢ) or by independence (if p ∈ Bᵢ). So p ∈ F(A₁, …, Aₙ), as desired.
Claim 2. F is systematic: there exists a set W of ("winning") coalitions C ⊆ N such that, for every (A₁, …, Aₙ) ∈ Domain(F), F(A₁, …, Aₙ) = {p ∈ X : {i : p ∈ Aᵢ} ∈ W}.
For each contingent p ∈ X, let W_p be the set of all subsets C ⊆ N such that p ∈ F(A₁, …, Aₙ) for some (hence, by independence, any) (A₁, …, Aₙ) ∈ Domain(F) with {i : p ∈ Aᵢ} = C. Consider any contingent p, q ∈ X. We prove that W_p = W_q. Suppose C ∈ W_p, and let us show that C ∈ W_q; this proves the inclusion W_p ⊆ W_q, and the converse inclusion can be shown analogously. As X is path-connected, there are p = p₁, p₂, …, pₖ = q ∈ X with p₁ ⊧* p₂, p₂ ⊧* p₃, …, pₖ₋₁ ⊧* pₖ. We show by induction that C ∈ W_{pⱼ} for all j = 1, 2, …, k. If j = 1, then C ∈ W_{p₁}, as p₁ = p. Now let 1 ≤ j < k and assume C ∈ W_{pⱼ}. By pⱼ ⊧* pⱼ₊₁, there is a set Y ⊆ X such that {pⱼ} ∪ Y and {¬pⱼ₊₁} ∪ Y are each consistent but {pⱼ, ¬pⱼ₊₁} ∪ Y is inconsistent. It follows that each of {pⱼ, pⱼ₊₁} ∪ Y and {¬pⱼ, ¬pⱼ₊₁} ∪ Y is consistent (using L3 in conjunction with L1, L2). So we may define a profile (A₁, …, Aₙ) ∈ Domain(F) (using universal domain) by
Aᵢ := A_{{pⱼ, pⱼ₊₁} ∪ Y} for all i ∈ C, and Aᵢ := A_{{¬pⱼ, ¬pⱼ₊₁} ∪ Y} for all i ∉ C.
Since Y ⊆ Aᵢ for all i, Y ⊆ F(A₁, …, Aₙ) by claim 1. Since {i : pⱼ ∈ Aᵢ} = C ∈ W_{pⱼ}, we have pⱼ ∈ F(A₁, …, Aₙ). So {pⱼ} ∪ Y ⊆ F(A₁, …, Aₙ). Hence, since {pⱼ, ¬pⱼ₊₁} ∪ Y is inconsistent, ¬pⱼ₊₁ ∉ F(A₁, …, Aₙ), whence pⱼ₊₁ ∈ F(A₁, …, Aₙ). So, as {i : pⱼ₊₁ ∈ Aᵢ} = C, we have C ∈ W_{pⱼ₊₁}, as desired.
As W_p is the same set for each contingent p ∈ X, let W be this set. To complete the proof of the claim, it is sufficient to show that, for every (A₁, …, Aₙ) ∈ Domain(F) and every p ∈ X, p ∈ F(A₁, …, Aₙ) if and only if {i : p ∈ Aᵢ} ∈ W. If p is contingent, this holds by definition of W; if p is a tautology, it holds because p ∈ F(A₁, …, Aₙ) (by collective rationality), {i : p ∈ Aᵢ} = N (by universal domain) and N ∈ W (by claim 1); analogously, if p is a contradiction, it holds because p ∉ F(A₁, …, Aₙ), {i : p ∈ Aᵢ} = ∅ and ∅ ∉ W.
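For illustration (our sketch, not part of the proof), the family W_p can be computed explicitly for propositionwise majority voting with n = 3 on the two-premise conjunctive agenda: the rule is independent, and W_p turns out to be the same for every proposition, namely the coalitions with at least two members, even though majority voting fails collective rationality on this agenda.

from itertools import product

N = range(3)
PROPS = range(3)  # indices of a1, a2, c; the judgment on r is left implicit
ALL_SETS = list(product((True, False), repeat=3))

def majority(profile):
    return tuple(sum(j[p] for j in profile) >= 2 for p in PROPS)

# W_p: coalitions C such that p is collectively accepted whenever exactly the
# individuals in C accept p (for an independent rule, any witnessing profile will do)
winning = {p: {frozenset(i for i in N if profile[i][p])
               for profile in product(ALL_SETS, repeat=3)
               if majority(profile)[p]}
           for p in PROPS}
assert winning[0] == winning[1] == winning[2]   # one set W for all propositions
print(sorted(sorted(C) for C in winning[0]))    # [[0, 1], [0, 1, 2], [0, 2], [1, 2]]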
Claim 3. (1) N ∈ W; (2) for every coalition C ⊆ N, C ∈ W if and only if N∖C ∉ W; (3) for all coalitions C, C* ⊆ N, if C ∈ W and C ⊆ C*, then C* ∈ W.
Part (1) follows from claim 1. Regarding parts (2) and (3), note that, for any C ⊆ N, there exist a p ∈ X and an (A₁, …, Aₙ) ∈ Domain(F) with {i : p ∈ Aᵢ} = C; this holds because X contains a contingent proposition p. Part (2) holds because, for any (A₁, …, Aₙ) ∈ Domain(F), each of the sets A₁, …, Aₙ, F(A₁, …, Aₙ) contains exactly one member of each pair p, ¬p ∈ X, by universal domain and collective rationality. Part (3) follows from a repeated application of monotonicity and universal domain.
Claim 4. There exists an inconsistent set Y ⊆ X with pairwise disjoint subsets Z₁, Z₂, Z₃ such that (Y∖Zⱼ) ∪ Zⱼ¬ is consistent for each j ∈ {1, 2, 3}. Here, Z¬ := {¬p : p ∈ Z} for any Z ⊆ X.
By assumption, there exists a contingent p ∈ X; then ¬p is also contingent. So, by path-connectedness, there exist p = p₁, p₂, …, pₖ = ¬p ∈ X and Y₁*, Y₂*, …, Yₖ₋₁* ⊆ X such that, for each j ∈ {1, …, k−1}, {pⱼ} ∪ Yⱼ* and {¬pⱼ₊₁} ∪ Yⱼ* are each consistent, while
(3) {pⱼ, ¬pⱼ₊₁} ∪ Yⱼ* is inconsistent.
From (3) and the consistency of {pⱼ} ∪ Yⱼ* and {¬pⱼ₊₁} ∪ Yⱼ*, it follows (using L3 in conjunction with L1, L2) that
(4) each of {pⱼ, pⱼ₊₁} ∪ Yⱼ* and {¬pⱼ, ¬pⱼ₊₁} ∪ Yⱼ* is consistent.
We first show that there exists a t ∈ {1, …, k−1} such that {pₜ, ¬pₜ₊₁} is consistent. Assume for contradiction that each of {p₁, ¬p₂}, …, {pₖ₋₁, ¬pₖ} is inconsistent. Then (using L2) each of {p₁, ¬p₂}, {p₁, p₂, ¬p₃}, …, {p₁, …, pₖ₋₁, ¬pₖ} is inconsistent. As {p₁} = {p} is consistent, either {p₁, p₂} or {p₁, ¬p₂} is consistent (by L2 and L3); hence, as {p₁, ¬p₂} is inconsistent, {p₁, p₂} is consistent. So either {p₁, p₂, p₃} or {p₁, p₂, ¬p₃} is consistent (again by L2 and L3); hence, as {p₁, p₂, ¬p₃} is inconsistent, {p₁, p₂, p₃} is consistent. Continuing this argument, it follows after k−1 steps that {p₁, …, pₖ} is consistent. Hence {p₁, pₖ} is consistent (by L2), i.e., {p, ¬p} is consistent, a contradiction (by L1).
We have shown that there is a t ∈ {1, …, k−1} such that {pₜ, ¬pₜ₊₁} is consistent, whence Yₜ* ≠ ∅ by (3). Define Y := {pₜ, ¬pₜ₊₁} ∪ Yₜ*, Z₁ := {pₜ}, and Z₂ := {¬pₜ₊₁}. Since {pₜ, ¬pₜ₊₁} is consistent, {pₜ, ¬pₜ₊₁} ∪ B is consistent for some set B that contains q or ¬q (but not both) for each q ∈ Yₜ* (by L3 together with L1, L2). Note that there exists a Z₃ ⊆ Yₜ* with B = (Yₜ*∖Z₃) ∪ Z₃¬. This proves the claim, since:
– Y = {pₜ, ¬pₜ₊₁} ∪ Yₜ* is inconsistent by (3),
– Z₁, Z₂, Z₃ are pairwise disjoint subsets of Y,
– (Y∖Z₁) ∪ Z₁¬ = (Y∖{pₜ}) ∪ {¬pₜ} = {¬pₜ, ¬pₜ₊₁} ∪ Yₜ* is consistent by (4),
– (Y∖Z₂) ∪ Z₂¬ = (Y∖{¬pₜ₊₁}) ∪ {pₜ₊₁} = {pₜ, pₜ₊₁} ∪ Yₜ* is consistent by (4),
– (Y∖Z₃) ∪ Z₃¬ = {pₜ, ¬pₜ₊₁} ∪ (Yₜ*∖Z₃) ∪ Z₃¬ = {pₜ, ¬pₜ₊₁} ∪ B is consistent.
Claim 5. For any coalitions C, C* ⊆ N, if C, C* ∈ W then C ∩ C* ∈ W.
Consider any C, C* ∈ W, and assume for contradiction that C₁ := C ∩ C* ∉ W. Put C₂ := C*∖C and C₃ := N∖C*. Let Y, Z₁, Z₂, Z₃ be as in claim 4. Noting that C₁, C₂, C₃ form a partition of N, we define the profile (A₁, …, Aₙ) by
Aᵢ := A_{(Y∖Zⱼ)∪Zⱼ¬} for each j ∈ {1, 2, 3} and each i ∈ Cⱼ
(each (Y∖Zⱼ) ∪ Zⱼ¬ is consistent by claim 4, so this profile lies in Domain(F) by universal domain).
By C₁ ∉ W and N∖C₁ = C₂ ∪ C₃, we have C₂ ∪ C₃ ∈ W by claim 3, and so Z₁ ⊆ F(A₁, …, Aₙ). By C ∈ W and C ⊆ C₁ ∪ C₃, we have C₁ ∪ C₃ ∈ W by claim 3, and so Z₂ ⊆ F(A₁, …, Aₙ). Further, Z₃ ⊆ F(A₁, …, Aₙ), as C₁ ∪ C₂ = C* ∈ W. Finally, Y∖(Z₁ ∪ Z₂ ∪ Z₃) ⊆ F(A₁, …, Aₙ), as N ∈ W by claim 3. In summary, we have Y ⊆ F(A₁, …, Aₙ), violating consistency.
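To see the construction at work, consider propositionwise majority voting with n = 3 on the two-premise conjunctive agenda: C = {1, 2} and C* = {2, 3} are winning, but C ∩ C* = {2} is not, and instantiating claim 4 with Y = {a₁, a₂, ¬c, r}, Z₁ = {a₁}, Z₂ = {a₂}, Z₃ = {¬c} (our instantiation, chosen for illustration) yields precisely the discursive paradox, as the following sketch confirms.

def consistent(j):
    # all three voters accept the connection rule r, so consistency reduces
    # to the judgment on c matching c <-> (a1 and a2)
    return j['c'] == (j['a1'] and j['a2'])

# claim 5's profile: voter 2 is in C1, voter 3 in C2, voter 1 in C3
profile = {
    1: {'a1': True,  'a2': True,  'c': True},    # (Y \ Z3) ∪ Z3¬
    2: {'a1': False, 'a2': True,  'c': False},   # (Y \ Z1) ∪ Z1¬
    3: {'a1': True,  'a2': False, 'c': False},   # (Y \ Z2) ∪ Z2¬
}
assert all(consistent(j) for j in profile.values())

outcome = {p: sum(profile[i][p] for i in profile) >= 2 for p in ('a1', 'a2', 'c')}
print(outcome)                   # {'a1': True, 'a2': True, 'c': False}, i.e. Y
assert not consistent(outcome)   # the majority outcome is the inconsistent set Y

Since the outcome violates consistency, majority voting cannot satisfy all conditions of Theorem 2 on this agenda; equivalently, its winning coalitions are not closed under intersection.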
Claim 6. There is a dictator.
Consider the intersection C̄ of all winning coalitions C ∈ W. By repeated application of claim 5 (W is finite), C̄ ∈ W. So C̄ ≠ ∅, as ∅ ∉ W by claim 3. Hence there is a j ∈ C̄. To show that j is a dictator, consider any (A₁, …, Aₙ) ∈ Domain(F) and p ∈ X, and let us prove that p ∈ F(A₁, …, Aₙ) if and only if p ∈ Aⱼ. If p ∈ F(A₁, …, Aₙ), then C := {i : p ∈ Aᵢ} ∈ W, whence j ∈ C (as j belongs to every winning coalition), i.e., p ∈ Aⱼ. Conversely, if p ∉ F(A₁, …, Aₙ), then ¬p ∈ F(A₁, …, Aₙ); so, by an argument analogous to the previous one, ¬p ∈ Aⱼ, whence p ∉ Aⱼ.
Proof of Theorem 4. Let Y⊆X.
(i) First, assume F is strategy-proof for C_Y. To show non-manipulability on Y, consider any proposition p ∈ Y, individual i, and profile (A₁, …, Aₙ) ∈ Domain(F) such that F(A₁, …, Aₙ) disagrees with Aᵢ on p. Let (A₁, …, Aᵢ*, …, Aₙ) ∈ Domain(F) be any i-variant. We have to show that F(A₁, …, Aᵢ*, …, Aₙ) still disagrees with Aᵢ on p. Define a preference relation ≿ᵢ over judgment sets by: B ≿ᵢ B* if and only if Aᵢ agrees on p with B but not with B*, or with both B and B*, or with neither B nor B*. (≿ᵢ is interpreted as individual i's preference relation in case i cares only about p.) It follows immediately that ≿ᵢ is reflexive and transitive and respects closeness to Aᵢ on Y, i.e., is a member of C_Y(Aᵢ). So, by strategy-proofness for C_Y, F(A₁, …, Aₙ) ≿ᵢ F(A₁, …, Aᵢ*, …, Aₙ). Since Aᵢ disagrees with F(A₁, …, Aₙ) on p, the definition of ≿ᵢ implies that Aᵢ still disagrees with F(A₁, …, Aᵢ*, …, Aₙ) on p.
(ii) Now assume that F is non-manipulable on Y. To show strategy-proofness for C_Y, consider any individual i, profile (A₁, …, Aₙ) ∈ Domain(F), and preference relation ≿ᵢ ∈ C_Y(Aᵢ), and let (A₁, …, Aᵢ*, …, Aₙ) ∈ Domain(F) be any i-variant. We have to prove that F(A₁, …, Aₙ) ≿ᵢ F(A₁, …, Aᵢ*, …, Aₙ). By non-manipulability on Y, for every proposition p ∈ Y, if Aᵢ disagrees with F(A₁, …, Aₙ) on p, then also with F(A₁, …, Aᵢ*, …, Aₙ); in other words, if Aᵢ agrees with F(A₁, …, Aᵢ*, …, Aₙ) on p, then also with F(A₁, …, Aₙ). So F(A₁, …, Aₙ) is at least as close to Aᵢ on Y as F(A₁, …, Aᵢ*, …, Aₙ). Hence F(A₁, …, Aₙ) ≿ᵢ F(A₁, …, Aᵢ*, …, Aₙ), as ≿ᵢ ∈ C_Y(Aᵢ).
Proof of Proposition 1. We prove this result directly, although it can also be derived from Corollary 1. Let F be premise-based voting. To show that F is strategy-proof for C_{Y_reason}, consider any individual i, profile (A₁, …, Aₙ) ∈ Domain(F), i-variant (A₁, …, Aᵢ*, …, Aₙ) ∈ Domain(F), and preference relation ≿ᵢ ∈ C_{Y_reason}(Aᵢ). The definition of premise-based voting implies that F(A₁, …, Aₙ) is at least as close to Aᵢ on Y_reason as F(A₁, …, Aᵢ*, …, Aₙ). So, as ≿ᵢ ∈ C_{Y_reason}(Aᵢ), we have F(A₁, …, Aₙ) ≿ᵢ F(A₁, …, Aᵢ*, …, Aₙ).
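The following brute-force sketch (ours, illustrative; two premises, n = 3, voters who accept the connection rule) confirms this closeness property in a small special case: under premise-based voting, no unilateral misreport ever brings the outcome closer to a voter's judgment set on the premises.

from itertools import product

SETS = [(a1, a2, a1 and a2) for a1, a2 in product((True, False), repeat=2)]

def premise_based(profile):
    a1 = sum(j[0] for j in profile) >= 2
    a2 = sum(j[1] for j in profile) >= 2
    return (a1, a2, a1 and a2)

def at_least_as_close(b, b_star, a, Y):
    # b is at least as close to a on Y as b_star: wherever a agrees with
    # b_star, a also agrees with b
    return all(b[p] == a[p] for p in Y if b_star[p] == a[p])

for profile in product(SETS, repeat=3):
    truthful_out = premise_based(profile)
    for i, dev in product(range(3), SETS):
        dev_out = premise_based(profile[:i] + (dev,) + profile[i + 1:])
        assert at_least_as_close(truthful_out, dev_out, profile[i], (0, 1))
print("truth-telling weakly dominates for reason-oriented preferences")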
Proof of Theorem 5. Consider the conjunctive agenda (the proof is analogous for disjunctive agendas). Let F and G be premise- and conclusion-based voting, respectively. Take any profile (A₁, …, Aₙ) ∈ Domain(F) = Domain(G) and any preference relations ≿₁ ∈ C_{Y_outcome}(A₁), …, ≿ₙ ∈ C_{Y_outcome}(Aₙ). Define (B₁, …, Bₙ) by
Bᵢ := {a₁, …, aₖ, c, r} if c ∈ Aᵢ, and Bᵢ := {¬a₁, …, ¬aₖ, ¬c, r} if ¬c ∈ Aᵢ.
It can easily be seen that, for each i and any pair of i-variants (D₁, …, Bᵢ, …, Dₙ), (D₁, …, Bᵢ*, …, Dₙ) ∈ Domain(F), F(D₁, …, Bᵢ, …, Dₙ) is at least as close to Aᵢ on Y_outcome (= {c, ¬c}) as F(D₁, …, Bᵢ*, …, Dₙ); so F(D₁, …, Bᵢ, …, Dₙ) ≿ᵢ F(D₁, …, Bᵢ*, …, Dₙ), as ≿ᵢ ∈ C_{Y_outcome}(Aᵢ). Hence, submitting Bᵢ is a weakly dominant strategy for each i under F. Second, let (C₁, …, Cₙ) be (A₁, …, Aₙ) (the truthful profile). Then, for each i, submitting Cᵢ is a weakly dominant strategy under G, as G is strategy-proof. Finally, it can easily be seen that F(B₁, …, Bₙ) and G(C₁, …, Cₙ) = G(A₁, …, Aₙ) agree on each proposition in Y_outcome = {c, ¬c}.
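A brute-force counterpart of this final agreement claim (our sketch, illustrative; two premises, n = 3, truthful judgment sets that accept the connection rule) checks all profiles:

from itertools import product

SETS = [(a1, a2, a1 and a2) for a1, a2 in product((True, False), repeat=2)]

def premise_based_on_c(profile):
    a1 = sum(j[0] for j in profile) >= 2
    a2 = sum(j[1] for j in profile) >= 2
    return a1 and a2                          # derived judgment on c

def conclusion_based_on_c(profile):
    return sum(j[2] for j in profile) >= 2    # direct majority vote on c

def dominant_report(j):
    # the strategy B_i: push every premise toward the desired conclusion
    return (True, True, True) if j[2] else (False, False, False)

for profile in product(SETS, repeat=3):
    strategic = tuple(dominant_report(j) for j in profile)
    assert premise_based_on_c(strategic) == conclusion_based_on_c(profile)
print("strategic premise-based voting agrees with truthful conclusion-based voting on c")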