Induction and analogy have long been considered indispensable items in the uncertain reasoner’s toolbox, and yet their formal relation to probability has never been less than puzzling. One of the first mathematically well-informed attempts at grappling with the problem can be found in the penultimate chapter of Laplace’s Essai philosophique sur les probabilités. There, a key contributor to the construction of the theories of mathematical probability and statistics argues that analogy and induction, along with a ‘happy tact’, provide the principal means for ‘approaching certainty’ when the probabilities involved are ‘impossible to submit to calculus’. Laplace then hastens to warn the reader against the subtleties of reasoning by induction and the difficulties of pinning down the right ‘similarity’ between causes and effects which is required for the sound application of analogical reasoning. Two centuries on, reasoning about the kind of uncertainty which resists clear-cut probabilistic representations remains, theoretically, pretty much uncharted territory. Analogies and Theories: Formal Models of Reasoning is the attempt of I. Gilboa, L. Samuelson and D. Schmeidler to put those vexed epistemological questions on a firm decision-theoretic footing. Indeed this book can be seen as a manifesto encouraging economic theorists to boldly go where probability does not apply. For, the authors argue, Bayesian rationality, with its insistence on probability, has many merits, but when it comes to understanding the processes leading to the formation of (more or less) rational beliefs, not only does the Bayesian approach fail to have the last word, but it also relegates the path leading to those beliefs to its ‘black box’. The key message of this volume is that analogical and rule-based reasoning can help us go beyond some well-known limitations of standard probabilistic methods.
The ambitiousness of the goal, and the fact that the volume collates six papers which have appeared in mathematically oriented economic journals, make it rather demanding reading, particularly for the non-specialist. However, for reasons I will explain below, it is an effort well worth making for formal epistemologists and economists with an interest in uncertain reasoning.
1. OUTLINE OF THE CONTENTS
Chapter 2, Inductive Inference: An Axiomatic Approach, sets the stage by introducing a general model of belief formation which represents the line of research envisaged in this volume. Whilst it captures several central features of Bayesian reasoning and classical statistics, it paves the way for a formal characterization of intuitively appealing and yet non-probabilistic forecasts. The idea can be summed up as follows. An agent is imagined as capable of building scenarios capturing the many ways in which the world could turn out to be. Typically, however, some scenarios will appear to the agent more likely than others. True to an established decision-theoretic tradition, the authors identify the conditions under which this qualitative comparison of the likelihood of future scenarios is represented by a ‘prediction rule’, which is constructed by gathering the ‘support’ provided to each scenario by ‘known cases’ (more on this below). The main results identify the conditions under which the prediction rule is probabilistic and those under which it generalizes classical statistical methods.
Chapter 3, Subjectivity in Inductive Inference, provides a methodological defence of the inevitability of (a degree of) subjectivity in rational inductive inference. Its main results show that, under not-so-strong assumptions, ‘purely objective’ inference performs strictly worse at tracking true predictions than a ‘subjective’ method which lets forecasts depend on things other than past history alone.
Subsequent chapters go into some detail about how analogies and rules, respectively, can be captured formally so as to complement Bayesian reasoning in a descriptively adequate and cognitively plausible way. Chapter 4, Dynamics of Inductive Inference in a Unified Framework, focuses on how and why a rational modeller may decide to use either of the modes of reasoning (case-based and rule-based) which are put forward as alternatives to classical statistical inference. Building on the results of this chapter, the following one, Analogies and Theories: The Role of Simplicity and the Emergence of Norms, develops a distinction between ‘exogenous’ (i.e. independent of the agent’s evaluation) and ‘endogenous’ uncertainty. The results of this chapter suggest that rule-based reasoning will be particularly helpful for reasoning about endogenous uncertainty, giving way to case-based reasoning when the exogenous component of uncertainty is prevalent. The authors then point out how this has interesting consequences for the analysis of social norms. For if the majority of agents converge on adopting the same ‘theory’ to guide the formation of their beliefs, this commonly held hypothesis will be adopted as a shared explanation of the data, thereby facilitating coordination in the selection among multiple equilibria.
Finally, Chapter 6, The Predictive Role of Counterfactuals, extends the framework of Chapter 2 by allowing the prediction rule to rank the likelihood of future scenarios not only on the basis of past cases, but also on the basis of how the past could have been. So Chapter 6 explores the extent to which counterfactual reasoning, another mode of inference about which Bayesian rationality does not have much to say, can help the Bayesian reasoner in forecasting. Unsurprisingly, perhaps, the answer is: not much.
The chapters are preceded by an Introduction in which the authors provide an overview of the book’s contents, and signal its potential relevance for a number of neighbouring fields. As noted above, however, the book speaks mostly the language of statistics with clear inflections from decision theory and machine learning. Therefore readers from other backgrounds must be prepared to make a substantial effort.
In the remainder of this review, I will bring up a small selection of broad themes. This, I hope, will encourage readers of this journal to explore the details of Gilboa, Samuelson and Schmeidler’s contributions for themselves. In particular I will point out how this volume constitutes a very good example of the enormous potential for genuine cross-discipline collaboration in the wider field of uncertain reasoning.
2. RELEVANCE TO FORMAL EPISTEMOLOGY
It is useful to recall why, in spite of its many successful applications, probability is not always the best tool to reason about uncertainty, and how this raises a number of serious criticisms against the Bayesian foundations of rationality.
Standard probabilistic and statistical methods work well in applications where the problem leaves the modeller little or no room for subjective choices. Consider, for instance, modelling the uncertainty related to playing European roulette at a casino. In doing this you (the modeller) can assume that you have complete and reliable knowledge of the unique stochastic process generating the uncertainty which is relevant to you. Situations of this kind are usually termed ‘decision problems under risk’ and exist outside casinos too. Experience tells insurance companies that car insurance premiums can be mapped with no substantial loss of information into a suitably defined decision problem under risk. Similarly, but much less straightforwardly, meteorologists can rely on models which are capable of delivering, say, one-day temperature forecasts with a remarkable degree of accuracy. This turns your planning of the evening’s barbecue essentially into a decision under risk (as far as the rain is concerned, at least). In short, standard statistical methods have gained their reputation because they work well in a number of important practical applications, while being supported by philosophically respectable theories which, from the early proposals of de Finetti, Ramsey and Savage to their recent ‘depragmatizations’ (see e.g. Pettigrew 2015), all lead to probability.
The key epistemological problem is that not all decision-relevant quantifications of uncertainty are equally amenable to standard, albeit complex, probabilistic modelling. In economics and elsewhere, modellers rarely find themselves in the privileged position of being able to identify the unique distribution, state space or decision matrix capturing all the contingencies which are relevant to the situation at hand. As the copious literature on ambiguity testifies, sometimes so little is known about the structure of the problem that a state space can hardly be thought of in the first place. And yet, even if probability models appear to be inapplicable, a decision must be made.
Gilboa, Samuelson and Schmeidler join Laplace in arguing that the difficulties related to not being able to submit the relevant predictions to the calculus of probability should not lead to an abdication of rationality. As a recurring example in this book points out, brokers often face problems analogous to assessing whether the average price of oil will go up or down over the subsequent year. This is a typical example of a forecasting problem for which standard probabilistic methods offer no ready-made solution. However, this hardly leaves brokers, and all sorts of experts, in no position to make significant distinctions among possible courses of action. The challenge is then to capture formally how such distinctions can be made in a principled manner. Gilboa, Samuelson and Schmeidler suggest that we must begin by ridding ourselves of the wrong habit of thinking about probability as the unique rational way of quantifying uncertainty:
We take it for granted that when statistical analysis is possible, rational agents will perform such analysis correctly. In contrast, our interest is in the way economists model agents who face problems that do not naturally lend themselves to statistical analysis. Predicting financial crises, economic growth, the outcome of elections, or the eruptions of wars and revolutions, are examples where it is difficult to define [independent and identically distributed] random variables and, more generally, where the assumptions of statistical models do not seem to be good approximations. (p. 88)
The key message here is that distinct problems of decision-relevant quantification of uncertainty may be best tackled by distinct modes of rational inference. As a consequence, Bayesian reasoning should not be thought of as capturing universally normative conditions. This view is close in spirit to the one which led, in the second half of the past century, to the successful emergence of many non-classical and applied logics. By abandoning the idea that there was one true logic waiting to be discovered, non-classical logicians paved the way for unprecedented formal and conceptual advances, which led to many applications outside the traditional mathematical and philosophical domains of logical enquiry. Now, the ideas of Chapter 5 overlap significantly with those underlying the logical investigations of abductive and non-monotonic inference. The authors make only occasional reference to some of the early contributions to the field of non-monotonic reasoning, and no reference at all to its abductive relative. So it is useful to recall that abduction (see e.g. Gabbay and Woods 2005) is essentially a qualitative take on the classical ‘inverse probability’ problem dear to Reverend Bayes: how to infer the probability of causes from observations. Many occurrences of the expressions ‘induction’ and ‘inductive inference’ in this volume could indeed be replaced by ‘abduction’ and ‘abductive inference’ with no alteration in meaning. And when in Chapter 3 the authors define induction as the ‘art of selecting theories based on observation’, they are effectively giving the logicians’ definition of the goal of abduction. Non-monotonic reasoning (see e.g. Makinson 2005), on the other hand, is naturally related to rule-based inference and therefore, as the authors suggest in Chapter 5, better equipped to model rationality under ‘endogenous uncertainty’. In turn, the close connection of non-monotonic logics with theory revision (see e.g. Hansson 1999) suggests that they could provide useful insights and a rigorous logical language for the axiomatization of the principles according to which rational agents should switch among the modes of reasoning investigated in this book. For Chapter 4 points out clearly how unexpected events may lead rational agents to abandon probabilistic models (the most normal worlds of preferential semantics) in favour of, say, case-based reasoning. However, the question of spelling out a formal framework adequate to managing the dynamics of reasoning methods is left unaddressed.
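To make non-monotonicity concrete for readers unfamiliar with this literature, here is a toy sketch of my own (not drawn from the book): a conclusion licensed by a default rule in the ‘most normal’ worlds is retracted once new information reveals an abnormality. The bird, the default and the data are all illustrative assumptions.

```python
# A toy illustration (mine, not the authors') of non-monotonic inference:
# a default conclusion is withdrawn when new information arrives.

def predicted_to_fly(bird: str, known_abnormal: set) -> bool:
    """Default rule: birds normally fly, unless known to be abnormal."""
    return bird not in known_abnormal

abnormal: set = set()
print(predicted_to_fly("tweety", abnormal))  # True: the default applies

abnormal.add("tweety")                       # we learn Tweety is a penguin
print(predicted_to_fly("tweety", abnormal))  # False: the conclusion is retracted
```

Classical (monotonic) consequence can only grow as premises accumulate; here the set of conclusions shrinks, which is the hallmark of the revisable, rule-based reasoning discussed above.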
This informal sketch of overlapping themes could easily go on, but I think that the point is already quite clear. Building conceptual and formal bridges – even translations – between the statistical and logical versions of reasoning with analogies and theories stands out as an exciting research question. It isn’t hard to see that the pluralistic view of rationality advocated in this volume may greatly benefit from the methodology and tools of applied logic, and that the results may have a momentous impact on the wider field of formal epistemology. (See Hosni (2014) for an overview of this from the point of view of subjective probability.)
Pluralism is not, as the authors recall in several chapters, the most prominent feature of the view they take issue with:
The Bayesian approach . . . holds that all prediction problems should be dealt with by a prior subjective probability that is updated in light of new information via Bayes’s rule. This requires that the predictor have a prior probability over a space that is large enough to describe all conceivable new information. (p. 20)
The concept is reinforced when the authors define ‘Bayesian reasoning’ as the
common approach in economic theory according to which all reasoning is Bayesian. Any source of uncertainty is modelled in the state space, and all reasoning about uncertainty takes the form of updating a prior probability via Bayes’s rule. (p. 89)
The main problem identified by the authors is that this view isn’t always cognitively plausible, and as such cannot serve as (the unique) foundation of formal models of rationality. This leads them to ask a question very similar to the one raised by Laplace: What is it rational to do when the problem at hand does not lend itself to probabilistic modelling? What are the desiderata for an admissible solution?
Analogies and Theories suggests that good answers must lead to cognitively plausible models of belief formation. To develop this central point in further detail, let us go back to the prediction rule of Chapter 2. As anticipated above, the key idea is that known cases may lend support to the likelihood of particular events, so that an event x should be considered more likely than an event y just if the support provided by all known cases to x is greater than the support they give to y. More precisely, let M be a set of known cases and let x ∈ X be an eventuality of interest. Suppose that $v(x,c)\in \mathbb {R}$ measures the support provided by c ∈ M to x. The prediction rule (PR) says that an agent should consider eventuality x to be more plausible/likely/etc. than eventuality y, written x⪰y, if and only if the following holds:
$$\sum_{c \in M} v(x,c) \;\ge\; \sum_{c \in M} v(y,c). \qquad \text{(PR)}$$
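To fix ideas, here is a minimal executable sketch of (PR). The database, the eventualities and the particular support function v are my own toy assumptions, not the authors’; I have deliberately chosen a v under which (PR) collapses to comparing empirical frequencies, so as to illustrate how probabilistic prediction emerges as a special case.

```python
# A minimal sketch of the prediction rule (PR); not the authors' own code.
# The database M, the eventualities and the support function v below are
# illustrative assumptions chosen only to make the rule concrete.

def support(eventuality, case):
    """Hypothetical support v(x, c): how strongly past case c backs eventuality x.
    Here a case backs an eventuality exactly when it records the same outcome."""
    return 1.0 if case == eventuality else 0.0

def total_support(eventuality, known_cases):
    """Sum of v(x, c) over all known cases c in the database M."""
    return sum(support(eventuality, c) for c in known_cases)

def at_least_as_plausible(x, y, known_cases):
    """(PR): x is ranked at least as plausible as y iff its total support is no smaller."""
    return total_support(x, known_cases) >= total_support(y, known_cases)

# With this particular v, (PR) reduces to comparing empirical frequencies,
# showing how probabilistic forecasting arises as a special case.
M = ["up", "up", "down", "up"]                 # toy database of past observations
print(at_least_as_plausible("up", "down", M))  # True: support 3.0 vs. 1.0
```

Replacing this v with, say, a measure of the similarity between a past case and the eventuality at hand yields analogical, case-based forecasts within the very same rule.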
This rule is axiomatized, that is to say, a set of purportedly reasonable properties is proved to make its application inevitable for all agents who satisfy them. Note that the inevitability of (PR) holds in the ‘as-if’ sense which is well known from the standard representation theorems in (Bayesian) decision theory, including de Finetti’s derivation of probability from coherence. The key axioms for (PR) ensure that the qualitative relation of interest (i) is an ordering, (ii) satisfies a form of additivity and (iii) leads to a real-valued weighting of the alternatives. Hence, the concepts (and methods of proof) depart very little from those which yield the (subjective) probabilistic representation of rational belief, which is captured as a special case of this framework. The main difference lies in the generality of the result obtained, for the prediction rule does not require the weights to be probabilistic. This ties in nicely with the idea that in situations of severe uncertainty, modellers should get as close as possible to probabilistic reasoning. The fact that this model extends the expressive power of subjective probability whilst retaining its methodology (including the nature of the axioms) clearly lends foundational support to this framework for analogical reasoning. Chapter 2, however, does not address the epistemologically important question of why (PR) should be thought of as more cognitively plausible than its Bayesian counterpart; in particular, evidence that it goes some way towards cracking open the Bayesian black box is lacking. Similar questions arise in later chapters, especially 4 and 5, where (PR) serves as the basis for more formally complex and ambitious models of non-probabilistic reasoning.
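For readers without the book to hand, the following is a schematic paraphrase of axioms of this kind, in the style of Gilboa and Schmeidler’s earlier work on case-based prediction; the book’s exact statements are more careful and differ in detail. Here $\succeq_M$ denotes the plausibility ranking induced by database M:

$$\begin{array}{ll}
\textbf{Order:} & \succeq_M \text{ is complete and transitive for every database } M.\\
\textbf{Combination:} & x \succeq_M y \text{ and } x \succeq_N y \text{ for disjoint } M, N \text{ imply } x \succeq_{M \cup N} y,\\
& \text{strictly so if either premise is strict.}\\
\textbf{Archimedean:} & \text{if } x \succ_M y, \text{ then sufficiently many replicas of } M \text{ outweigh}\\
& \text{any fixed database favouring } y.
\end{array}$$

It is the combination axiom that delivers the additive, sum-over-cases form of (PR).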
This brings us to a point of foundational weakness in the proposal, namely its ambiguous epistemological status. Laplace, Poincaré, Borel and de Finetti, among others, insisted on viewing the calculus of probability as a mathematical way of defining common sense. Gilboa, Samuelson and Schmeidler are more comfortable with giving this bridging role to case-based and rule-based reasoning, which they see in action both in common (parametric and non-parametric) statistical practice and in the way people actually reason. And yet they offer no experimental data, whether from computer simulations or from the laboratory or the field, in support of their claim. Therefore the descriptive adequacy of their proposals is something that readers must convince themselves of. In other words, it is unclear why a modeller dissatisfied with old-fashioned statistics should switch to the proposed alternative models. This perplexity arises vividly in Chapter 4, where a highly abstract generalization of (PR) based on the construction of conjectures is proposed as a unifying framework for the three modes of inductive reasoning, namely Bayesian, case-based and rule-based. Whilst of undoubted formal interest, this model does not suggest itself as being descriptively more accurate than its criticized special case. In formal matters of reasoning, ‘more general’ and ‘more realistic’ often pull in orthogonal directions. And yet the authors make a number of suggestions to the effect that the answer lies precisely in the generality of the approach, which allows the modeller to pick and choose among the three modes of reasoning in such a way as to fit optimally with the problem at hand. So, it is suggested that ‘when completely unexpected outcomes occur, people question their probabilistic models, relying on alternative reasoning techniques until perhaps developing a new probabilistic model’ (p. 88). Hopefully, future work will throw experimental light on the descriptive adequacy of this fascinating picture which, as remarked above, could be captured naturally within the framework of non-monotonic logics.
Let us finally note that an alternative way to tackle the question of the cognitive plausibility of case-based reasoning has indeed been explored in the logic-based uncertain reasoning literature. Paris and Vencovská (1993, 1996) put forward a ‘model of belief’ which resembles quite closely the intuition behind (PR). However, its logical framework makes the belief-formation part much more evident and allows the authors to prove that their model is computationally feasible. This lends support to the claim that the underlying procedure is cognitively plausible.
The normative/descriptive ambiguity of the Analogies and Theories programme can be likened to a host of philosophical problems which appear in many forms across disciplines. Philosophical logicians, for example, will appreciate the similarity to the opposition between ‘psychologistic’ and ‘antipsychologistic’ interpretations of inference, whereas ethicists will certainly feel the risk of the ‘naturalistic fallacy’ lurking.
Much work clearly needs to be done by a number of research communities to put the groundbreaking contributions presented in Analogies and Theories on firmer methodological ground. But I predict the result will constitute a key part of the next mainstream view of rationality.